[extropy-chat] SIAI: Donate Today and Tomorrow

J. Andrew Rogers andrew at ceruleansystems.com
Mon Oct 25 03:54:32 UTC 2004


On Oct 23, 2004, at 10:55 PM, Mike Lorrey wrote:
> Theoretical foundations of a field which has produced exactly zero AIs?
> You are saying that I need to first learn about all the ways that
> everybody else has been failing for decades before I can have any
> meaningful contribution? Perhaps you are right, at least so I'd know
> exactly all the ways to NOT create an AI.


In essence, yes: if you knew a lot about theory, you would understand 
WHY so many attempts have failed from a theoretical standpoint.  We 
know a lot more about intelligent systems these days than I think you 
realize.  What do you think the theoretical foundations of 
intelligence are?  What is its mathematical description?

The reason so many AI projects failed is that there were no real 
theoretical foundations.  Most AI research has amounted to a random 
sampling of the computer science phase space.


> Oh, BTW: NO, we don't have enough memory yet today. A brain capable of
> remembering details of events dating back decades likely has a data
> capacity far in excess of anything existing today, maybe even the NSA.


Nonsense.  You are wrong on two counts.

First, memory of the kind useful for intelligence is not like a tape 
recorder.  It is stored extremely efficiently in an 
information-theoretic sense.  That we do not store our computer data 
even remotely as efficiently is not evidence of a lack of capability; 
there are good engineering reasons that we choose not to.

Second, people remember almost nothing except a few bits of metadata 
that allow them to reconstruct virtual memories.  Do you remember 
anything about May 16, 1997 (picked at random)?  What you wore, what 
you ate, where you went, what the weather was like?  Neither do I.  
Everything that you know about that day is reconstructed from scant 
bits of metadata that are shared by many other indistinguishable days 
that might have actually happened but which you don't remember anything 
about.  This does not require extraordinary amounts of memory at all, 
particularly not when you have extremely efficient lossy memory per my 
first point.
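To make the point concrete, here is a toy sketch (my own illustration, not a model anyone has proposed, with made-up dates and details): store only a few bits of distinguishing metadata per memorable day, and reconstruct everything else from defaults shared by all the indistinguishable days.

```python
# Toy sketch of reconstructive memory: keep a handful of bits of
# metadata per distinguishable day and rebuild everything else from
# defaults shared by countless unremarkable days.

DEFAULTS = {  # patterns common to most days, stored exactly once
    "wore": "the usual clothes",
    "ate": "something unremarkable",
    "went": "through the usual routine",
    "weather": "unmemorable",
}

# Sparse episodic store: only days with distinguishing metadata get
# an entry at all.  (The entry below is invented for illustration.)
episodic_metadata = {
    "1997-07-04": {"went": "to see the fireworks"},
}

def recall(date):
    """Reconstruct a 'memory' by overlaying sparse metadata on defaults."""
    memory = dict(DEFAULTS)
    memory.update(episodic_metadata.get(date, {}))
    return memory

# May 16, 1997: nothing was stored, so the "memory" is pure reconstruction.
generic_day = recall("1997-05-16")

# July 4, 1997: one stored detail, everything else reconstructed.
fireworks_day = recall("1997-07-04")
```

The storage cost scales with the handful of distinguishing details, not with the number of days, which is the whole argument: almost all of the apparent richness of recall is regenerated on demand, not recorded.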


> I can tell you exactly when a human intelligence starts to create the
> majority of its basic rules: that precious age when the child asks
> 'why' and/or 'can I' about so many things and is told why, or no or
> yes, or maybe. From this point on, it is all about experiencing and
> associating, and categorizing things in accordance with these rules,
> and their later enhancements.


This has nothing to do with creating basic rules.  Intelligence learns 
the patterns it is exposed to, period.  A parent may intentionally 
provide some additional context for those patterns, but that is hardly 
absolute, as those are patterns like any other.  You act as though 
those patterns are special and immutable, when in fact their evaluation 
will drift and bias with experience like all the others.

Unless you have ironclad control of all experience and can predict how 
the experiences that do happen will modify the overall context, you 
have no guarantees.  And what use is an AI when you can already 
deterministically know every answer it will give you before it gives it 
to you?

j. andrew rogers


More information about the extropy-chat mailing list