[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Kelly Anderson kellycoinguy at gmail.com
Mon Jan 31 18:08:58 UTC 2011


2011/1/31 Dave Sill <sparge at gmail.com>:
> On Mon, Jan 31, 2011 at 11:57 AM, Kelly Anderson <kellycoinguy at gmail.com>
> wrote:
>>
>> So when IBM creates a machine with the specific programming task of
>> "Pass the Turing Test" that won't be intelligence either, because it
>> was programmed to pass the Turing test... right???
>
> Wrong, because the Turing test is designed to test general intelligence.
> And there's no "pass the Turing test". It's not like the SATs, there's not
> one single Turing test that, if passed, grants an AI a certificate of
> intelligence. But an AI that accumulates a record in various Turing tests
> against different interviewers equivalent to its human competitors would
> demonstrate human-equivalent intelligence.

This goes to the weak/strong Turing test. Fooling someone who doesn't
know they are administering a Turing test is the weakest form. They
just want an answer, for example, to a technical problem. There is no
doubt that some programs sometimes pass this weakest form of the
Turing test already.

The strongest Turing test is when someone who knows a lot about
natural language processing and its weaknesses cannot, over a long
period of time, distinguish between a number of humans and a number
of independently trained Turing computers.

There are, of course, a number of intermediate forms. So when people
say "pass the Turing test" it is a lot like saying "pass the SAT".
What does that mean? With the SAT, it means scoring well enough to
get admitted to the school of your choice.

So let me suggest a new test. If a computer is smart enough to get
admitted into Brigham Young University, then it has passed the
Anderson Test of artificial intelligence. Is that harder or easier
than the Turing test? How about smart enough to graduate with a BS
from BYU?

Another test... suppose that I subscribed an artificial intelligence
program to this list. How long would it take for you to figure out
that it wasn't human? That's a bit easier, since you don't have to do
the processing in real time as with a chat program.

>> Again, I just don't think anyone has a clue how to define intelligence
>> or consciousness.
>
> Intelligence is pretty straightforward. See Wikipedia. What does
> consciousness have to do with it, though?

I suppose that's just another emergent aspect of the human brain.
There seems to be a supposition by some (not me) that to be
intelligent, consciousness is a prerequisite.

>>
>> > The key is learning and understanding. It doesn't matter if it's a man
>> > or a
>> > machine, or if the machine is using one or more clever tricks. A machine
>> > that plays one game brilliantly but has no ability to learn other games
>> > isn't intelligent.
>>
>> The right question here seems to me to be "Does Watson Learn?"
>> Everything I have read seems to indicate that Watson knows answers to
>> questions because Watson has processed a huge amount of free text from
>> the Internet or perhaps Wikipedia or something. The point is that
>> nobody sat down and programmed Watson to answer specific questions.
>> This seems like "learning" by "reading" to me, and if so, that is a
>> tremendous new capability (at least at this level of utility) for
>> computers.
>
> It's learning in the sense that Google "learns" what's on the web by sucking
> down a copy of it. OK, it's a little more sophisticated than that since it
> has to do some parsing.

That's the difference between taking a picture, and telling you what
is in the picture. HUGE difference... this is not a "little" more
sophisticated.

> But does Watson learn from its mistakes? Does it
> learn from its opponent's successes? I don't know. Does it understand
> anything? I doubt it.

Once again, we run into another definition issue. What does it mean to
"understand"? In my mind, when I understand something, I am
consciously aware that I have mastery of a fact. This presupposes
consciousness. So is there some weaker form of "understanding" that is
acceptable without consciousness? If that weaker form means being able
to use what was read for future computation, say to answer a question,
then by that definition of "understand," yes, Watson understands the
text it has read.

>>
>> If you asked Watson questions about Jeopardy, I'd bet it could answer
>> a lot of them. It isn't that it "knows" anything. I don't have any
>> belief that Watson is conscious or anything like that.
>
> Wait a minute...you just got done saying Watson learned all kinds of stuff
> by reading it. Now you say it doesn't know any of that because it isn't
> conscious?

Ok, my bad. I got sloppy in my wording here. The "knows" is in quotes
because when I "know" something, I am consciously aware of my
knowledge of it. When a computer "knows" something, that is a lesser
form of "knowing". If you say Watson knows that 'Sunflowers' was
painted by 'Van Gogh', then on that level of knowing, Watson does know
things. It just doesn't know that it knows it in the sense of
conscious knowing. Maybe this still doesn't make total sense; this is
hard stuff to define and talk intelligently about.

-Kelly
