[ExI] AI Motivation revisited
Kelly Anderson
kellycoinguy at gmail.com
Wed Jun 29 01:40:53 UTC 2011
2011/6/28 Samantha Atkins <sjatkins at mac.com>:
> Are you sure? Given the known average speed and other performance
> characteristics of ordinary PCs (and their OS) and any particular model of
> an AGI it should be quite possible to say pretty definitively whether that
> model can be usefully realized on that hardware. This is an engineering
> task that does not require deep definitive knowledge of what mechanisms are
> capable of producing intelligence.
Ok, Samantha, if you think this is possible, then you go first... :-)
-Define intelligence very strictly.
-What is the minimum capacity required to achieve that?
The computational power required to defeat Kasparov in '96 was many
orders of magnitude greater than the computational power required to
play chess at that same level today. Who's to say that once we
understand intelligence, we couldn't get some form of it to run on a
lowly PC?
We will almost certainly have the first intelligence on
supercomputers. Then we'll study it, then optimize it. After a while,
who's to say we couldn't run some kinds of intelligence on a
contemporary PC? But I don't think we know enough to determine that
now.
If you look at the human brain, much of it processes sensory input,
motor output, and other autonomic activities that don't have much to
do with intelligence per se. Turing's view of intelligence, for
example, involves zero visual processing. So there is far less
computation required to achieve "intelligence" as defined by Turing
than there is in the typical skull. Yet an intelligence that could
pass the Turing test might be completely incapable of distinguishing
a picture of a cat from a picture of a dog.
I wonder if any of the Turing challengers ever sent ASCII pics to the
contestants? :-) That might buy us ten more years before the computers
win!
-Kelly