[ExI] ai class at stanford

G. Livick glivick at sbcglobal.net
Mon Aug 29 04:45:23 UTC 2011


Spike

On 8/28/2011 7:39 AM, spike wrote:
>
> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org
> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of G. Livick
> Sent: Saturday, August 27, 2011 10:05 PM
> To: ExI chat list
> Subject: Re: [ExI] ai class at stanford
>
>> ...I've followed this thread for a bit, but can't for the life of me see
> how the AI class, the offering of LISP as a preferred means for creating AI
> software, and the use of spreadsheets in lieu of hard coding, all fit
> together...  We won't be developing AI, just learning some of the basic
> probability theory and numerical methods in the current tool-set; extremely
> dull stuff for anyone mainly interested in the Great Oz, and not the man
> behind the curtain.  FutureMan
>
>
> Sure, FutureMan, but the big fiery guy was really a lot more interesting
> than the goof behind the curtain.  He really had the old SILENCE!  thing
> going.  And the whole bursting into flame bit, don't we wish we could do
> that?  It would be great at annual performance review time.

Everyone likes Oz.  It kind of spoils things to look behind the 
curtain and discover that it's all illusion.

>
>
>
> Granted, the whole notion of weak AI -- teaching cars to drive themselves,
> and doing whatever it is Google does to figure out what we want from a few
> words -- may have exactly nothing to do with AGI, and may offer nothing at
> all to help us understand AGI.  But I see it as a worthwhile exercise, in
> that it may help us understand a little better how our own brains work.
As for learning how our brains work from the approaches we use to 
simulate their capabilities: not likely.  We don't actually know how our 
brains work -- and by 'we' I mean 'they', the people who would know if 
anybody did.  That's where the "artificial" in AI comes in.
> I brought
> up the example of the WW2 fighter games, and how the software opponents seem
> to make reasonable and humanlike decisions about what to do in any particular
> case.  I concluded that the way they did it was to build an enormous look-up
> table from watching humans play, which is not intelligence.  But if we look
> at online chatter in general, it is easy to conclude that human activity is
> largely the bio equivalent of an enormous look-up table.

This class will help those who stick it out get a handle on how things 
such as you describe above are implemented in software.  It's not 
look-up tables.  But this knowledge comes at a price: we are looking 
behind the curtain, learning to do what the guy back there does, with 
the attendant loss of technological innocence (or of the ability to 
feign innocence).  Graduates who claim that certain things in the world 
of AI are practical, and their emergence predictable, can expect 
challenges from other graduates as to the possible methods.  Defending 
concepts such as "uploading" will become impossible for people who must 
also explain, in reasonably practical terms, just how it could be 
accomplished.  Unless, of course, we all take a minor in Ruby Slippers.
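
To make the contrast concrete, here is a minimal Python sketch.  Every 
name and number in it (the situations, actions, probabilities, and 
utilities) is invented for illustration, and it assumes nothing about 
how any actual game is coded.  The first policy just replays a canned 
human response from a table; the second picks the action with the 
highest expected utility under a toy probabilistic model, which is 
closer in spirit to what the class teaches:

# A contrast in miniature; everything below is invented for illustration.

# 1) The look-up-table approach described above: map a situation key
#    directly to whatever a human was once observed to do.
LOOKUP_TABLE = {
    ("enemy_behind", "low_altitude"): "climb_and_turn",
    ("enemy_ahead", "high_altitude"): "dive_attack",
}

def lookup_policy(situation):
    """Replay a canned human response; shrug at anything unseen."""
    return LOOKUP_TABLE.get(situation, "no_idea")

# 2) The sort of method the class teaches instead: choose the action
#    with the highest expected utility under a probabilistic model.
#    P(outcome | situation, action) and the utilities are made up.
MODEL = {
    ("enemy_behind", "low_altitude"): {
        "climb_and_turn": {"escape": 0.7, "get_hit": 0.3},
        "dive_attack":    {"escape": 0.2, "get_hit": 0.8},
    },
}
UTILITY = {"escape": 10.0, "get_hit": -100.0}

def expected_utility_policy(situation):
    """Pick the action maximizing sum over outcomes of P(o|s,a) * U(o)."""
    actions = MODEL[situation]
    def eu(action):
        return sum(p * UTILITY[outcome]
                   for outcome, p in actions[action].items())
    return max(actions, key=eu)

if __name__ == "__main__":
    s = ("enemy_behind", "low_altitude")
    print("lookup table says:     ", lookup_policy(s))
    print("expected utility says: ", expected_utility_policy(s))
    # The table has nothing to say about a state it never saw:
    print("lookup on unseen state:", lookup_policy(("enemy_left", "mid_altitude")))

The difference isn't the answer on a memorized case (both say 
climb_and_turn here); it's that the second approach has a principle 
behind the answer, which is exactly the behind-the-curtain knowledge 
that ruins the magic.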
>
> spike
>
All this said, I expect that some portion of our discussions here might 
become richer as those taking the course start to have ideas about how 
certain classes of unsolved problems might be tackled with the new 
tools.  Perhaps the Dyson Shield idea will yield to practical concepts: 
ultra-fast computers running in superconductive environments, drawing 
almost no power, executing algorithms built on 
Artificial_General_Intelligence.h.  I look forward to beginning.

FutureMan



