[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Kelly Anderson kellycoinguy at gmail.com
Mon Jan 31 16:36:24 UTC 2011


2011/1/28 Dave Sill <sparge at gmail.com>:
> 2011/1/28 John Clark <jonkc at bellsouth.net>
>>
> These isolated systems act intelligent, but they're not really intelligent.
> They can't learn and they don't understand. Deep Blue could dominate me on
> the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a
> system that knows nothing about tic tac toe but can learn the rules (via
> audio/video explanation by a human) and play the game, and I'll be
> impressed.

Actually, it is pretty trivial for a computer to learn tic-tac-toe
without any explanation at all. With a 1970s-era neural network, the
only feedback required for it to learn the rules of the game is
whether it won or lost. It even learns to take turns if you define
"lose" as playing with the wrong "color" or taking two turns in a row.
This is not particularly impressive, of course, because tic-tac-toe
has a pretty small game tree.
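To make the claim concrete, here is a toy sketch in Python of learning
tic-tac-toe from nothing but the final win/loss/draw signal. It is my
own illustration, not the 1970s network described above: it uses a
tabular Monte Carlo value estimate rather than a neural net, and it
assumes the list of legal moves is given (it learns only which moves
to prefer). All function names and parameters are invented.

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def key_after(board, move, mark):
    b = board[:]
    b[move] = mark
    return ''.join(b)

def train(episodes=20000, epsilon=0.1, seed=0):
    """Learn state values for X; the ONLY feedback is the game's outcome."""
    rng = random.Random(seed)
    value = defaultdict(float)   # board string -> value from X's perspective
    counts = defaultdict(int)
    for _ in range(episodes):
        board = [' '] * 9
        x_states = []
        player = 'X'
        while True:
            moves = [i for i, c in enumerate(board) if c == ' ']
            if player == 'X' and rng.random() > epsilon:
                # greedy: choose the move leading to the highest-valued state
                move = max(moves, key=lambda m: value[key_after(board, m, 'X')])
            else:
                move = rng.choice(moves)
            board[move] = player
            if player == 'X':
                x_states.append(''.join(board))
            w = winner(board)
            if w or ' ' not in board:
                reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
                break
            player = 'O' if player == 'X' else 'X'
        # Monte Carlo update: credit every X state with the final outcome
        for s in x_states:
            counts[s] += 1
            value[s] += (reward - value[s]) / counts[s]
    return value

value = train()
```

After training, a greedy player using `value` beats a random opponent
most of the time, even though it was never told what a "win" looks
like, only whether it got one.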

Now, if you want a challenge for a computer, try the board game Go. As
far as I know, no computer can yet play it as well as people do. I'm
sure it's coming soon though. :-)
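The contrast in game-tree size is easy to demonstrate: tic-tac-toe can
be solved exhaustively in a fraction of a second, something that is
hopeless for Go. The memoized minimax below is my own sketch; it
confirms the standard result that tic-tac-toe is a draw under perfect
play, visiting only a few thousand distinct positions.

```python
import functools

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@functools.lru_cache(maxsize=None)
def solve(board, player):
    """Minimax value from X's point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    values = []
    for i, c in enumerate(board):
        if c == ' ':
            nxt = board[:i] + player + board[i+1:]
            values.append(solve(nxt, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

result = solve(' ' * 9, 'X')          # 0: perfect play is a draw
states = solve.cache_info().currsize  # all reachable positions, a few thousand
```

A 19x19 Go board, by comparison, has on the order of 10^170 legal
positions, which is why exhaustive search is a non-starter there.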

I think the problem is really one of defining intelligence. Nobody has
really defined it, so the definition seems to fall out as "things
people do that computers don't do yet." So what about "things
computers do that people can't do"? Certainly it is not ALL trivial
stuff. For example, using genetic algorithms, computers have designed
really innovative jet engines that no person ever considered. Is that
artificial intelligence (i.e., the kind people can't do)?
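The genetic-algorithm approach mentioned above can be sketched in
miniature. This toy example is mine, not the jet-engine work (which
used far richer encodings and fitness functions): it evolves random
letter strings toward a target via selection, crossover, and mutation.
The target string and all parameters are invented for illustration.

```python
import random

def evolve(target="extropy", pop_size=100, generations=300,
           mut_rate=0.05, seed=0):
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"

    def fitness(s):
        # number of characters matching the target
        return sum(a == b for a, b in zip(s, target))

    def mutate(s):
        return ''.join(rng.choice(alphabet) if rng.random() < mut_rate else c
                       for c in s)

    def crossover(a, b):
        cut = rng.randrange(len(target))
        return a[:cut] + b[cut:]

    # start from a completely random population
    pop = [''.join(rng.choice(alphabet) for _ in range(len(target)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(target):
            break
        parents = pop[:pop_size // 5]   # selection: keep the fittest 20%
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children        # elitism: parents survive unchanged
    return max(pop, key=fitness)

best = evolve()
```

The point of the example is that nobody tells the algorithm *how* to
spell the target; selection pressure on a blind random process gets
there anyway, which is the same trick at work in evolved engineering
designs.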

-Kelly



More information about the extropy-chat mailing list