[ExI] Oxford scientists edge toward quantum PC with 10b qubits.
Richard Loosemore
rpwl at lightlink.com
Fri Jan 28 20:53:30 UTC 2011
Adrian Tymes wrote:
> 2011/1/28 Dave Sill <sparge at gmail.com>:
>> These isolated systems act intelligent, but they're not really intelligent.
>> They can't learn and they don't understand. Deep Blue could dominate me on
>> the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a
>> system that knows nothing about tic tac toe but can learn the rules (via
>> audio/video explanation by a human) and play the game, and I'll be
>> impressed.
>
> Just to toss out a bit for contemplation:
>
> How do we know that there is not some similar trick, whereby a system could
> do this and still not be what we would consider intelligent?
>
> Or rather, what kinds of tricks might allow for such a thing?
>
> Can't think of any? Neither could those who declared that chess grandmastery
> required true intelligence...but they might not have known of the types of AI
> tricks that were to come.
>
> There may be a good answer. If there is, it would be useful, in this
> discussion, to have it.
There are two answers to your question.
First, I don't think anyone seriously said that chess grandmastery
required true intelligence; they knew quite well that they were
building AI systems that played chess without general intelligence.
What they actually believed was that building a chess AI would *help*
them on the road to building a general AI.
Second, if someone built an AGI that could develop its own general
concepts, and learn new skills by itself, I simply do not believe that
anyone would then come along and say "These are just tricks: real
intelligence is something more than this".
AI folks are fond of an excuse they made up: "As soon as we build a
program that does something intelligent, everyone turns around and
claims that THAT is not intelligence after all." This is, and always
was, a piece of exaggerated nonsense. What actually happened was that AI
programs were able to do some smart things in ways that had no
generality to them, and people outside the field quite rightly pointed
out that unless the system was capable of generalizing its knowledge and
skills, it was not intelligent.
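To make the "no generality" point concrete, here is a toy sketch of my
own (plain Python, and of course nothing like Deep Blue's actual code):
a program that plays tic-tac-toe perfectly by brute-force minimax, yet
every bit of its "knowledge" (the board layout, the winning lines, whose
turn it is) is hard-coded for this one game and transfers to nothing else.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    # Return 'X' or 'O' if that player owns a completed line, else None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Return (score, move) for the side to move:
    # +1 means X can force a win, -1 means O can, 0 means a draw.
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full, no winner: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if (best_score is None
                or (player == 'X' and score > best_score)
                or (player == 'O' and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    print(minimax([' '] * 9, 'X'))   # (0, 0): perfect play is a draw

Deep Blue was the same story at a vastly larger scale: superb inside its
hard-wired game, and incapable of learning the rules of anything else.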
Nobody changed their tune about what "intelligence" really is. Rather,
the AI community was caught selling fake goods, and somebody called them
on it.
Richard Loosemore