[ExI] ai class at stanford

G. Livick glivick at sbcglobal.net
Mon Aug 29 07:16:27 UTC 2011

On 8/28/2011 10:54 PM, Adrian Tymes wrote:
> On Sun, Aug 28, 2011 at 9:45 PM, G. Livick<glivick at sbcglobal.net>  wrote:
>>   But this knowledge comes at a price: we are looking behind the
>> curtain, learning to do what the guy there does, with the attendant loss of
>> technological innocence (or the ability to feign innocence).  Any
>> claims from graduates that certain things in the world of AI are practical
>> and their emergence predictable can expect challenges from other graduates
>> as to possible methods.  Defense of such concepts as "uploading" will become
>> impossible for people tasked with also explaining, in reasonably practical
>> terms, just how it could be accomplished.
> Uh huh.  I suppose black powder is just an interesting toy of no military value,
> there's no such thing as atomic decay because we can't conceive of it, and
> heavier than air flight - having never been demonstrated before - is impossible,
> then?
Not saying that at all.
> Seriously.  Just because we do not today know how to do something (and if
> we could explain it in reasonably practical terms, we probably could do it
> today), does not mean we never will, nor that we can not see how to go
> about discovering how.  If you want to make such claims, the onus is upon
> you to prove that it is not, in fact, possible.
Hardly.  The scientific method places the onus on the claimant.  
Otherwise, we'd have to accept everything imaginable as possible, since 
"nothing is impossible."
> There are untold number of projects using AI techniques to simulate different
> parts of what the human brain can do, to different degrees of success.  It
> appears that the main remaining challenges are to improve those pieces, then
> wire them all together.  We know enough about how the human brain works
> that it seems more likely than not that that will work...even if we can not
> describe exactly how the end result will work right now.
One will not find such claims about the workings of the brain, let alone 
the mind, in the scientific literature.  The layman's knowledge of the 
state of the art does not expose him to the researcher's awe at how 
little is actually known, so an unfounded optimism is not unusual among 
interested observers.  As for the idea that the "AI techniques" we 
already have are nearly sufficient, needing only a little more tweaking 
before they can be knit into a grand unified solution that covers all 
the bases: the people taking the Stanford AI course should emerge from 
it able to assess that claim directly.  My own fiddling with this stuff 
over the years has me thinking quite the contrary, though; the field is 
in its infancy as I see it.
> This is basic stuff, man.  This is what it means to develop technology.
Technology develops incrementally from prior technology.  It never 
emerges from whole cloth.
>>   Unless, of course, we all take a
>> minor in Ruby Slippers.
> Or that, if you want to call it that.  But remember, it took the man behind the
> curtain to give Dorothy that power.  Knowing how it worked was part of him.
> The showmanship of Oz was gone before the topic came up.
True enough.  But without the man making it work, there would have been 
no magic.  Dorothy's observation of it did not cause the magic to 
present itself to her.
> We live, every day, among things our ancestors of many generations ago
> would call miraculous.  They don't seem that way to us merely because we
> know how they work.  Outside of abstract philosophical arguments that
> keep getting special-cased around by reality, I am not aware of any serious
> evidence that we can not create human-equivalent AIs - or even emulate
> humans in silico.  For instance, take the famous thought experiment where
> you replace a brain with artificial neurons, one neuron at a time.  We have
> that capability today; it is merely impractically expensive, not impossible,
> to actually conduct that experiment (say, on an animal, or a human who'd
> otherwise be about to die, to avoid ethical problems).  "Impractically
> expensive" is the kind of thing that development tends to take care of.
It's not just impractically expensive to replicate the actions of a 
neuron in software today, it's impossible.  We don't have the whole 
picture yet; I'd guess we have, at best, only 10% of it, of which 9% 
will in time be shown to be inaccurate.  Pull up a medical text on 
basic neurology and see for yourself.  We know even less about how 
neurons function as communicators in networks, how the sixty-some 
neurotransmitters affect things, how the glia support or suppress 
activity, and so on.
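To give a concrete sense of the gap being argued about here: the standard 
textbook abstraction of a neuron is vastly simpler than the biology 
described above.  A leaky integrate-and-fire model, for example, reduces 
the whole cell to a single differential equation and a threshold rule, 
ignoring neurotransmitters, glia, and network dynamics entirely.  The 
sketch below is illustrative only; all parameter values are conventional 
textbook figures, not drawn from this discussion.

```python
def simulate_lif(current, steps, dt=0.001, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065,
                 resistance=1e7):
    """Leaky integrate-and-fire neuron via Euler integration.

    Integrates dV/dt = (-(V - v_rest) + R*I) / tau, emitting a
    spike and resetting whenever V crosses threshold. Returns the
    spike count over the simulated interval.
    """
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Membrane potential decays toward rest, driven by input current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:
            spikes += 1
            v = v_reset  # fire and reset
    return spikes

# A steady 2 nA input drives repeated spiking over one simulated second;
# zero input leaves the cell silent at rest.
print(simulate_lif(current=2e-9, steps=1000))
print(simulate_lif(current=0.0, steps=1000))
```

Even this caricature is what "simulating a neuron in software" usually 
means in practice, which is precisely the point of contention: the model 
captures a firing rate, not the cell.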
Adrian, I see you've signed up for this course, so perhaps as we get 
into it we can explore some of these areas in light of the new knowledge 
thus acquired.  This should be very interesting!
