[extropy-chat] Re: Structure of AI

J. Andrew Rogers andrew at ceruleansystems.com
Mon Nov 22 05:26:25 UTC 2004


On Nov 21, 2004, at 7:31 PM, Eliezer Yudkowsky wrote:
> 1)  The point of AI is to come up with an algorithm that will be smart 
> in any of the tiny set of contexts that represent low-entropy 
> universes.  We may make this assumption since a maxentropy universe 
> could not contain an AI.  If we do not make this assumption we run 
> into no-free-lunch theorems.  It may not sound practically important 
> (how many maxentropy universes did we plan to run into, anyway?) but 
> from a theoretical standpoint this is one hell of a huge nitpick:  The 
> real universe is an atypical special case.


Very true, but there is some audience context to consider.  I would 
assume that the above would be obvious to anyone who understood the
math well enough to really consider it, and confusing to those who
didn't.  I could make quite a number of shocking mathematical
assertions, but I do not see that it would serve any purpose.  My 
technical omissions were intentional.
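
For anyone who hasn't run into the no-free-lunch results, here is a
minimal sketch in Python (not from Eliezer's post; the domain sizes,
search strategies, and names are arbitrary, illustrative choices) of
the basic claim: average over *all* objective functions on a small
finite domain, i.e. a maximum-entropy distribution of problems, and no
search strategy does better than any other.

    # Tiny empirical illustration of the no-free-lunch idea:
    # averaged over every function f: X -> Y on a small finite
    # domain, two different non-repeating search strategies find
    # equally good values.  Sizes and strategies are arbitrary.

    from itertools import product

    X = range(4)   # search points
    Y = range(3)   # possible objective values
    K = 2          # number of distinct points each searcher probes

    def fixed_order_search(f, k):
        """Probe points 0, 1, 2, ... in order; return best value seen."""
        return max(f[x] for x in list(X)[:k])

    def reverse_order_search(f, k):
        """Probe points in reverse order; return best value seen."""
        return max(f[x] for x in list(reversed(list(X)))[:k])

    def average_over_all_functions(strategy, k):
        """Average the best-found value over every function f: X -> Y."""
        total, count = 0, 0
        for values in product(Y, repeat=len(X)):  # all |Y|^|X| functions
            f = dict(zip(X, values))
            total += strategy(f, k)
            count += 1
        return total / count

    print(average_over_all_functions(fixed_order_search, K))
    print(average_over_all_functions(reverse_order_search, K))
    # Both print the same number: over an unstructured (maximum-entropy)
    # set of problems, these two strategies come out exactly equal.

The point of the exercise being that search only pays off when the
problem distribution has structure, i.e. in a low-entropy universe.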

There are many, many layers, something you already know.  Hell, I only 
mentioned intelligent systems in the abstract and didn't even mention 
the entire universe of mathematics within that class of system
(e.g. where Friendliness comes in).  There are a lot of sacred cows one
could slay in this space, e.g. the theoretical implications of
algorithmically finite systems for intelligent agents within those
systems, but I was trying to keep it somewhat conversational.

How far down the rabbit hole do we want to go?

j. andrew rogers



