What is the use of AIXI in AI research?

From Wikipedia:

AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[1], and its central results are proved in Hutter's 2005 book Universal Artificial Intelligence.[2]

Although AIXI itself is incomputable, computable approximations such as AIXItl are possible. Finding approximations to AIXI could be one objective route toward solving AI.
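For concreteness, the action-selection rule in Hutter's formulation is usually written along the following lines (my transcription, so the notation may differ slightly from the book):

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where $U$ is a universal (monotone) Turing machine, $q$ ranges over environment programs, $\ell(q)$ is the length of $q$, and $m$ is the horizon. The inner sum over programs is the Solomonoff-induction part; the alternating max/sum structure is the sequential-decision (expectimax) part.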

Is AIXI really a big deal in artificial general intelligence research? Can it be thought of as a central concept for the field? If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?

Answered by Andrew Jenkins

"Current artificial intelligence research" is a pretty broad field. From where I sit, in a mostly CS realm, people are focused on narrow intelligence that can do economically relevant work on narrow tasks. (That is, predicting when components will fail, predicting which ads a user will click on, and so on.) For those sorts of tools, the generality of a formalism like AIXI is a weakness instead of a strength. You don't need to take an AI that could in theory compute anything, and then slowly train it to focus on what you want, when you could just directly shape a tool that is the mirror of your task. I'm not as familiar with AGI research itself, but my impression is that AIXI is, to some extent, the simplest idea that could work--it takes all the hard parts and pushes it into computation, so it's 'just an engineering challenge.' (This is the bit about 'finding approximations to AIXI.') The question then becomes, is starting at AIXI and trying to approximate down a more or less fruitful research path than starting at something small and functional, and trying to build up? My impression is the latter is much more common, but again, I only see a small corner of this space.
