[extropy-chat] SIAI: Donate Today and Tomorrow

J. Andrew Rogers andrew at ceruleansystems.com
Sun Oct 24 05:15:53 UTC 2004


On Oct 23, 2004, at 5:37 PM, Mike Lorrey wrote:
> I had made the same assertion that day in 2001: that intelligence is
> just a matter of a massive lookup table and an engine capable of using
> it effectively. Now the inventor of the PDA is making the same
> assertion. Perhaps people don't help because they don't think you are
> approaching the problem effectively.


Stating that a computational model can be described as a giant lookup 
table is, well, pretty circular and a really long way from profound.  
This is equivalent to saying "intelligence is possible on finite state 
machinery", a basic, necessary, and well-founded assumption in AI 
research.

I've stated in the past that there is precious little computation to 
"intelligence", and with good reason, but there is a little more to it 
than just "big memory" or we would have had mag-reel AI many decades 
ago.  The problem is not having enough memory per se -- we have disk 
arrays with plenty of space today -- but in storing all the *other* 
information beyond the raw data stream itself, which is not really a 
space problem, and in a structure with tractable access.  There are 
very tough theory and design problems at the core of all this that are 
being glossed over as though they don't exist.  A 
study of the relevant mathematics will show quite clearly that you 
really can't beat this problem space with faster hardware and more 
storage, contrary to popular belief.  At least vanilla exponential 
complexity can be brute-forced into interesting spaces with better 
hardware; geometric complexity functions can't be brute-forced in our 
universe as we know it, ever.
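A quick back-of-the-envelope sketch of the scale problem (my numbers 
are illustrative, and the ~10^80 atom count is just the usual 
order-of-magnitude estimate): even a *complete* lookup table over 
modest input histories needs more entries than there are atoms in the 
observable universe, so more disk doesn't save you.

```python
def table_entries(alphabet_size: int, context_length: int) -> int:
    """Number of distinct contexts a complete lookup table must index."""
    return alphabet_size ** context_length

# Commonly quoted order-of-magnitude estimate, used here only for scale.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

for n in (10, 100, 300):
    entries = table_entries(2, n)  # binary alphabet, history of n symbols
    feasible = entries <= ATOMS_IN_OBSERVABLE_UNIVERSE
    print(f"context length {n}: {entries:.3e} entries, "
          f"{'conceivable' if feasible else 'more entries than atoms'}")
```

Even a binary alphabet with a 300-symbol history blows past 10^80; real 
sensory streams have far bigger alphabets and longer histories, which is 
why the problem is structure and access, not raw capacity.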


> In keeping with my article "Unsafe at Any Law", creating a "Friendly
> AI", if one develops one on my and Hawkins' theory, is simply a matter
> of creating a friendly environment of input for an AI kernel to create
> its lookup table from. Raising your AI in a laboratory or a factory
> would therefore not be conducive to creating Friendliness. It needs to
> have a family of some sort to become friendly.


I haven't really read Hawkins' stuff in detail, so I'll assume you are 
misrepresenting his position.

There are gross and obvious errors in theory here, as well as some more 
subtle errors related to what I wrote above.  Your assertion above is 
contradicted by some pretty basic theorems and mathematics relevant to 
the discussion at hand.  Hell, if it were that simple there would be 
nothing to talk about.

Rule of Thumb:  If you come up with a simple and obvious solution to a 
problem space that has been thoroughly combed over for many years by 
people with a great deal of expertise in the field, you are almost 
certainly mistaken about your "solution".  Doubly so if you do not have 
expertise in the theoretical foundations of the field.


j. andrew rogers



