[extropy-chat] Re: Structure of AI

Eliezer Yudkowsky sentience at pobox.com
Mon Nov 22 03:48:15 UTC 2004


J. Andrew Rogers wrote:
> 
> To sum up rather crudely, you can formally integrate universal 
> induction, decision theory, and some other bits into an elegant 
> universal mathematical definition of intelligence, and derive system 
> models from it that one can prove are universally optimal predictors and 
> decision makers.  Unfortunately, while we can show that all intelligent 
> systems have to be a derivative system of this in some fashion, the 
> theoretically pure system derivation is utterly intractable due 
> primarily to the universal induction aspect.  The nature and shape of 
> the algorithm space suggested by this mathematics is very different than 
> the traditional assumptions of AI research.
> 
> It is interesting to note that while the basic theory of universal 
> induction was published in the late 1970s, to date no useful and 
> tractable approximation has ever been described in literature despite 
> the fact that this was a thoroughly trod area even prior to the 
> mathematical formalization.  From the standpoint of the above 
> mathematics, the problem of general AI is reduced to a long-standing 
> theoretical computer science problem of tractable induction.
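To make the intractability concrete, here is a toy, Solomonoff-flavored 
predictor over a drastically restricted hypothesis class (periodic bit 
patterns standing in for programs on a universal machine).  The names and 
the restriction are illustrative assumptions of mine, not Rogers's 
formalism:

    from itertools import product

    def predict_next(bits, max_pattern_len=8):
        # Hypothesis class: periodic 0/1 patterns, a crude stand-in for
        # programs.  A pattern of length L gets prior weight 2**-L,
        # echoing the Solomonoff prior's preference for short programs.
        votes = {0: 0.0, 1: 0.0}
        for length in range(1, max_pattern_len + 1):
            for pattern in product((0, 1), repeat=length):
                stream = [pattern[i % length] for i in range(len(bits) + 1)]
                if stream[:-1] == list(bits):      # consistent with data?
                    votes[stream[-1]] += 2.0 ** -length
        total = votes[0] + votes[1]
        return votes[1] / total if total else 0.5  # P(next bit is 1)

    print(predict_next([1, 0, 1, 0, 1]))  # low probability, i.e. expects 0

Even this toy enumerates 2^L hypotheses per prediction; replace the 
periodic patterns with actual programs on a universal machine and the 
mixture becomes uncomputable, which is the tractability problem being 
pointed at.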

This is an example of what I mean by nitpick #2, that FAI is a special case 
of AI.  Saying that your ideal criterion of decision-making can be summed 
up in a von Neumann-Morgenstern utility measure, from which we derive a 
measure of expected utility, and thence a total ordering over actions, 
which we use to pick out a greatest action (takes breath), is a special 
case of decision-making that empirically does not hold true of humans.  
Thus the "Collective Volition" proposal is built around the general problem 
of abstracting, transforming, and approximating a generalized decision 
function, with expected utility as the special case of a decision function 
that can be abstracted in an unusually simple way.
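To spell the special case out as code (a sketch of my own; the names are 
invented for illustration):

    from typing import Callable, Dict, List

    Action = str
    Outcome = str
    Beliefs = Dict[Action, Dict[Outcome, float]]   # P(outcome | action)

    # A generalized decision function: any map from beliefs and options
    # to a choice.  Nothing requires it to factor through a utility.
    DecisionFn = Callable[[Beliefs, List[Action]], Action]

    def expected_utility_decider(utility: Dict[Outcome, float]) -> DecisionFn:
        # The von Neumann-Morgenstern special case: score each action by
        # sum over outcomes of P(o|a) * U(o), which induces a total
        # ordering over actions, then take a greatest element.
        def decide(beliefs: Beliefs, actions: List[Action]) -> Action:
            def eu(a: Action) -> float:
                return sum(p * utility[o] for o, p in beliefs[a].items())
            return max(actions, key=eu)
        return decide

    # Usage on a made-up two-action problem:
    beliefs = {"press": {"win": 0.3, "lose": 0.7},
               "wait":  {"win": 0.1, "lose": 0.9}}
    decide = expected_utility_decider({"win": 1.0, "lose": 0.0})
    print(decide(beliefs, ["press", "wait"]))      # -> "press"

A human's decision function is some other inhabitant of DecisionFn, one 
with no such clean factoring, which is why the proposal has to abstract, 
transform, and approximate rather than just read off a utility measure.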

Also, classical induction and classical decision theory make broken 
assumptions, such as the assumption that the AI is hermetically sealed 
from the rest of the universe except for an input channel and an output 
channel.  AIXI cannot model the consequences for its own cognitive process 
of hitting itself over the head with a hammer.  I think this is as broken 
for real-world AI as a reward channel riveted to the input channel is for 
Friendly AI.  If you actually instantiated AIXI, it would commit suicide, 
because it cannot foresee the consequences of damaging its own hardware; 
if AIXI somehow didn't commit suicide, it would kill you, because seizing 
full control of its reward channel dominates any strategy that leaves 
humans able to interfere with that channel.  Neither of these is a trivial 
problem.
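The sealed-box assumption is visible in the very shape of the classical 
agent loop.  A schematic (my own sketch, not Hutter's formalism):

    # The classical (dualistic) agent-environment loop: exactly one
    # output channel and one input channel, nothing else.
    def agent_environment_loop(agent_policy, environment_step, steps=10):
        history = []
        for _ in range(steps):
            action = agent_policy(history)       # output channel
            percept = environment_step(action)   # input channel
            history.append((action, percept))
        return history

    # Note what is absent: agent_policy is not part of the state that
    # environment_step evolves.  A hammer blow that alters the hardware
    # running agent_policy is an event no hypothesis inside the agent's
    # world-model can represent.
    print(agent_environment_loop(lambda h: "noop", lambda a: "ok", steps=3))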

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
