[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Kelly Anderson kellycoinguy at gmail.com
Mon Jan 31 18:46:35 UTC 2011


On Mon, Jan 31, 2011 at 10:31 AM, Adrian Tymes <atymes at gmail.com> wrote:
> On Mon, Jan 31, 2011 at 8:57 AM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> So when IBM creates a machine with the specific programming task of
>> "Pass the Turing Test" that won't be intelligence either, because it
>> was programmed to pass the Turing test... right???
>
> There is reason to believe that the Turing Test can not be passed, without
> the kind of generality needed for AI (or, more properly, Artificial General
> Intelligence, which is what people often mean when they mention "true"
> AI), and that Watson, chessmaster computers, and other specific-feat
> programs have yet to display.

Clearly, to pass a stronger Turing test, a computer would have to have
greater capabilities than Watson. That being said, I don't know how
many components would have to be added to Watson to get there. My
sense is that Watson is more than halfway there. An interesting
aspect of passing the Turing test is that a computer must pretend NOT to
know some things. If you ran into a person whose knowledge was TOO
encyclopedic, you might get suspicious. Sometimes I wonder if Ken
Jennings is human... :-)

It might be a bit difficult to tell the difference between Ken and
Watson if your chat were in the form of Jeopardy questions...

> The reason?  Talk about one subject.  Then talk about something else.  A
> human can handle this - even if they are not an expert in all things (which
> no human is, though some try to pretend they are).  These AIs completely
> break down.  Even if they are capable of conversing on one topic using
> limited terms and grammar, they cannot form coherent responses on any
> other topic.

If you can program a convincing dialog within a domain, then you can
go a long way with more memory, more programming, and more processing
power. There is, of course, more to it than that because of cross-domain
issues...

> Which leads to the interesting question: how, exactly, does one distinguish
> the best current conversational AIs from humans?  It is easy for most people
> to do (if they are aware that they might be talking to an AI and have been
> tasked with identifying it), but is the process easy to describe?

I would guess that you could come up with algorithms that MIGHT
confuse most conversational programs, but it would be a kind of arms
race between the writers of such algorithms and the programmers of the
conversational program.

If I were going to try to catch a would-be Turing-passing machine, I
would probably start by telling jokes, then asking if it was funny,
then asking why it was funny. I think that will be a pretty hard
domain for most computer programs for some time to come. My
reason for thinking so is that even people learning foreign languages
have a tremendously difficult time with humor in the new language. Of
course, this might result in false negatives with Indian tech support
staff...

> Among the things I am aware of:
> 1. Lack of memory.  In many cases, the AI won't remember what you said
> two sentences ago, let alone display human-equivalent medium to long
> term memory.

ELIZA (circa 1966) had a very weak memory... going back only one or
maybe two sentences.

The eBay support girl has a memory. You start getting smart with her,
and she stays huffy for a while. There's some emotional momentum with
her. I got into it with her one night, and we had a raging argument. I
tried asking a question at the end, and she was still mad. It was
really quite fun. :-)  She even claims to have a boyfriend... if you
get her in the right mood. That has nothing to do with eBay support,
but she apparently has quite a back story.

> 2. Inability to learn - which is a consequence of 1.  You can not teach one
> of these AIs even a simple game, in the manner you would conversationally
> teach an 8 year old.

I believe Watson, by nearly any definition, has the capacity to learn.
It can only learn some kinds of things, but there is clearly learning
going on there. At least I would call that learning. As I said in
another email, programming a computer to learn arbitrary board games
would take no more work than programming Watson did. That is well
within the capacity of this generation of AIs, IMHO, even if it hasn't
yet been done.

> 3. Lack of initiative.  Most of these AIs are reactive only.  When deprived of
> outside stimuli, such as a human talking to it, they just sit there and do
> nothing, as if unaware of the passage of time.  (In a human, this would be
> called "vegetative state", and is one of the criteria used to legally designate
> a given human body as something to be no longer treated as a full human
> being unless and until it recovers from that condition - which, in most cases,
> is seen as effectively impossible due to the causes of that condition.)

See, now we're going beyond "what is intelligence" to "what is
human"... so the target moves again... :-)

Time-slicing computers are much better at using their spare cycles
than any human being. Programming an AI to have initiative would be
one of the easiest things to do. You would simply have to give it a
set of goals. People don't have initiative either; they just like eating,
breathing, drinking (sometimes to excess), learning, etc. Set up a
computer with goals (solve the problem of world hunger) and you would
probably see more initiative than you would get from a Washington
page. If peace is ever achieved in the Middle East, it will probably
be negotiated by an AI.

Being self-directed is similarly easy. You just give the computer a
lot of goals to choose from and instruct it to pick the best one and
work towards it for a while; if it is not satisfied that it is
getting good results, it goes back and picks another goal. Initiative is
one of the easiest things to program. Those Google spiders have plenty
of initiative... :-)
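In rough Python, that loop might look something like this. All the names
and numbers here (pursue_goals, the toy progress scores, the thresholds)
are made up for illustration, not taken from any real system:

```python
# Sketch of the goal-selection loop: pick the best-rated goal, work
# towards it for a while, and if the results aren't good enough,
# drop it and pick another.

def pursue_goals(goals, evaluate, work_on, steps=3, good_enough=0.8):
    goals = list(goals)
    while goals:
        goal = max(goals, key=evaluate)    # pick the best goal
        for _ in range(steps):             # work towards it for a while
            work_on(goal)
        if evaluate(goal) >= good_enough:  # satisfied with the results?
            return goal
        goals.remove(goal)                 # not satisfied: pick another
    return None                            # ran out of goals

# Toy demo: numeric progress scores stand in for real goal evaluation.
progress = {"world hunger": 0.1, "chess": 0.5}
def evaluate(g): return progress[g]
def work_on(g): progress[g] += 0.15

result = pursue_goals(progress, evaluate, work_on)
```

The point is just that "initiative" here reduces to an evaluation
function plus a loop; all the hard work is hidden inside evaluate and
work_on.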

-Kelly




More information about the extropy-chat mailing list