[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Dave Sill sparge at gmail.com
Mon Jan 31 19:30:10 UTC 2011


On Mon, Jan 31, 2011 at 1:08 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>
>
> The strongest Turing test is when someone who knows a lot about
> natural language processing and its weaknesses can't distinguish,
> over a long period of time, between a number of humans and a number
> of independently trained Turing computers.
>

No, language processing is only one aspect of intelligence. The strongest
Turing test would also measure the ability to learn, to learn from past
experiences, to plan, to solve problems...all of the things the Wikipedia
definition mentions, and maybe more.

> So perhaps I suggest a new test. If a computer is smart enough to get
> admitted into Brigham Young University, then it has passed the
> Anderson Test of artificial intelligence.


You mean achieve an SAT score sufficient to get into BYU? Or do you mean
that it has to go through school or take a GED, fill out an application to
BYU, etc., like a human would have to do?


> Is that harder or easier than the Turing test?


Depends on the Turing test, I'd say.

> How about smart enough to graduate with a BS from BYU?

How about it? It'd be an impressive achievement.


> Another test... suppose that I subscribed an artificial intelligence
> program to this list. How long would it take for you to figure out
> that it wasn't human? That's a bit easier, since you don't have to do
> the processing in real time as with a chat program.
>

Depends on how active it is, what it writes, and whether anyone is clued in
to the fact that there's a bot on the list. A Watson-like bot that answers
questions occasionally could be pretty convincing. But it'd fall apart if
anyone tried to engage it in a discussion.
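
For what it's worth, the plumbing for such a bot is trivial; the hard
part is the answering. A minimal sketch in Python, where
answer_question() stands in for the hypothetical Watson-like engine
and the mail server details are made up (everything here except the
list address is invented):

    import imaplib, smtplib, email
    from email.mime.text import MIMEText

    LIST_ADDR = "extropy-chat@lists.extropy.org"
    BOT_ADDR = "bot@example.com"   # hypothetical account

    def answer_question(text):
        """Placeholder for the hard part: a Watson-like QA engine."""
        raise NotImplementedError

    def check_and_reply():
        # Poll the bot's inbox for unread list traffic.
        imap = imaplib.IMAP4_SSL("imap.example.com")
        imap.login(BOT_ADDR, "password")
        imap.select("INBOX")
        _, ids = imap.search(None, "UNSEEN")
        for msg_id in ids[0].split():
            _, data = imap.fetch(msg_id, "(RFC822)")
            msg = email.message_from_bytes(data[0][1])
            # Assumes plain-text, non-multipart messages for simplicity.
            body = msg.get_payload(decode=True).decode(errors="replace")
            if "?" not in body:    # only chime in on questions
                continue
            reply = MIMEText(answer_question(body))
            reply["Subject"] = "Re: " + msg.get("Subject", "")
            reply["From"], reply["To"] = BOT_ADDR, LIST_ADDR
            with smtplib.SMTP_SSL("smtp.example.com") as smtp:
                smtp.login(BOT_ADDR, "password")
                smtp.send_message(reply)
        imap.logout()

The giveaway wouldn't be in any of that plumbing; it would be in what
answer_question() produces once someone pushes back on an answer.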

> I suppose that's just another emergent aspect of the human brain.
> There seems to be a supposition by some (not me) that to be
> intelligent, consciousness is a prerequisite.
>

OK, then let's leave it out for now because I don't think it's necessary,
either.

> That's the difference between taking a picture, and telling you what
> is in the picture. HUGE difference... this is not a "little" more
> sophisticated.
>

No, parsing a sentence into parts of speech is not hugely sophisticated;
it's a standard, off-the-shelf operation.
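
To illustrate, a minimal sketch using NLTK (an off-the-shelf NLP
library; the download names assume a stock NLTK install and may vary
by version):

    import nltk

    nltk.download("punkt")                       # tokenizer models
    nltk.download("averaged_perceptron_tagger")  # POS tagger model

    sentence = "Watson answers Jeopardy clues faster than humans."
    tokens = nltk.word_tokenize(sentence)
    print(nltk.pos_tag(tokens))
    # e.g. [('Watson', 'NNP'), ('answers', 'VBZ'), ('Jeopardy', 'NNP'), ...]

A few lines, and no understanding required; that's the point.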


> Once again, we run into another definition issue. What does it mean to
> "understand"?


http://en.wikipedia.org/wiki/Understanding

> In my mind, when I understand something, I am
> consciously aware that I have mastery of a fact. This presupposes
> consciousness. So is there some weaker form of "understanding" that is
> acceptable without consciousness?


It's not necessarily weaker to leave consciousness out of it.


> And if that form is such that I can
> use it for future computation, to say answer a question, then Watson
> does understand it. Yes. So by some definitions of "understand" yes,
> Watson understands the text it has read.
>

Granted, at a trivial level Watson could be said to understand the data
it has incorporated. But it doesn't have human-level understanding of it.

> Ok, my bad. I got sloppy in my wording here. The "knows" is in quotes
> because when I "know" something, I am consciously aware of my
> knowledge of it. When a computer "knows" something, that is a lesser
> form of "knowing". If you say Watson knows that 'Sunflowers' was
> painted by 'Van Gogh', then on that level of knowing, Watson does know
> things. It just doesn't know that it knows it in the sense of
> conscious knowing. Maybe this still doesn't make total sense, this is
> hard stuff to define and talk intelligently about.


Just leave consciousness out of it. It's irrelevant.

-Dave