[ExI] Watson On Jeopardy.

Kelly Anderson kellycoinguy at gmail.com
Tue Feb 22 03:00:44 UTC 2011

On Fri, Feb 18, 2011 at 11:01 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
>> On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore <rpwl at lightlink.com>
>> wrote:
> This is good.  I am happy to try.  Don't interpret the post I just wrote as
> being too annoyed (just a *little* frustrated is all).  ;-)

Don't think I'm too annoyed with you either. It is frustrating to ask
a seemingly straightforward question, and then get an answer to a
different question.

>> If you wrote a paper entitled "Why Watson is an Evolutionary Dead
>> End", and you were convincing to your peers, I think you would get it
>> published and it would be helpful to the AI community.
> Well, can I point out that the numbers are not 99% in favor?

That sentence does not quite parse.

> Ben Goertzel
> just published an essay in H+ magazine saying very much the same things that
> I said here.  Ben is very widely respected in the AGI community, so perhaps
> you would consider comparing and contrasting my remarks with his.

I assume you are talking about this:

Quoting from his article...
"My initial reaction ... was a big fat ho-hum. ... I’m an AI guru so I
know pretty much exactly what kind of specialized trickery they’re
using under the hood.  It's not really a high-level mind, just a
fancy database lookup system."

"But while that cynical view is certainly technically accurate, I have
to admit that when I actually watched Watson play Jeopardy! on TV —
and beat the crap out of its human opponents — I felt some real
excitement … and even some pride for the field of AI.   Sure, Watson
is far from a human-level AI, and doesn’t have much general
intelligence.  But even so, it was pretty bloody cool to see it up
there on stage, defeating humans in a battle of wits created purely by
humans for humans — playing by the human rules and winning."
>End Quote

He seems to be giving Watson and the team more credit than you did, Richard.

"But even so, the technologies underlying Watson are likely to be part
of the story when human-level and superhuman AGI robots finally do ..."
>End Quote

And you have said exactly the opposite of this, on this list.

"Both the Watson strategy and the human strategy  are valid ways of
playing Jeopardy! But, the human strategy involves skills that are
fairly generalizable to many other sorts of learning (for instance,
learning to achieve diverse goals in the physical world), whereas the
Watson strategy involves skills that are only extremely useful for
domains where the answers to one’s questions already lie in knowledge
bases someone else has produced."
>End Quote

I think this comes closest to saying something both reasonable and in
agreement with what you've said. Note, however, the gentler, softer
language he uses. Watson is not denigrated as "trivial" but is
(correctly) described as having solved the problem in an entirely
different manner than a human being would.

The final question is whether the eventual AI that evolves from all of
the current experimentation will be a huge collection of parlor
tricks, or something that reasons more like a real human being. I
would assume you think the latter, and in that you may well be right.

Give Watson the assignment to collect "common sense" from the Internet
for a few years, and he might be able to assemble a very large
collection of common sense. Perhaps large enough to never make obvious
stupid mistakes. Perhaps.

> I don't want to write about Watson, because I have seen so many examples of
> that kind of dead end and I have already analyzed them as a *class* of
> systems.  That is very important.  They cannot be fought individually. I am
> pointing to the pattern.

What is this pattern? I am (as a human being) an expert pattern
recognizer. I am familiar with a number of approaches to AGI. Yet, I
am having trouble recognizing the pattern you seem to think is so
clear. Can you spell it out succinctly? (I realize this is a true

>>> Also, why do you say "self-described scientist"?  I don't understand if
>>> this is supposed to be
>>> me or someone else or scientists in general.
>> Carl Sagan, a real scientist, said frequently, "Extraordinary claims
>> require extraordinary evidence." (even though he may have borrowed the
>> phrase from Marcello Truzzi.) I understand that you are claiming to
>> follow the scientific method, and that you do not think of yourself as
>> a philosopher. If you claim to be a philosopher, stand up and be proud
>> of that. Some of the most interesting people are philosophers, and
>> there is nothing wrong with that.
> :-)  Well, you may be confused by the fact that I wrote ONE philosophy
> paper.

Hehe... which one was that? They all seemed pretty philosophical to my
mind. None of them said... here is an algorithm that might lead to
general artificial intelligence...

> But have a look through the very small set of publications on my website.
>  One experimental archaeology, several experimental and computational
> cognitive science papers.  One cognitive neuroscience paper.....
> I was trained as a physicist and mathematician.

For Odin's sake man!!! Why didn't you say this in the beginning?!?
This explains EVERYTHING!!

> I just finished teaching a
> class in electromagnetic theory this morning.  I have written all those
> cognitive science papers.  I was once on a team that ported CorelDraw from
> the PC to the Mac.

Did you live in Orem? Perhaps we have run into each other before. Your
name sounds familiar.

> I am up to my eyeballs in writing a software tool in OS
> X that is designed to facilitate the construction and experimental
> investigation of a class of AGI systems that have never been built
> before.....    Isn't it a bit of a stretch to ask me to be proud to be a
> philosopher? :-) :-)

I am only going off of the papers I could find. Point me to the one
specific paper that you feel is most scientific and I'll read it
again. Happily.

>>> And why do you assume that I am not doing experiments?!  I am certainly
>>> doing that, and
>>> doing massive numbers of such experiments is at the core of everything I
>>> do.
>> Good to hear. Your papers did not reflect that. Can you point me to
>> some of your experimental results?
> No, but I did not say that they did.  It is too early to ask.

Sigh. This reminds me of a story. A mathematician was asked to build a
fence around a herd of cattle. He built a small corral being careful
not to surround any cows, and then defined the larger area to be
"inside". Problem solved.

> Context.  Physicists back in the 1980s who wanted to work on the frontiers
> of particle physics had to spend decades just building one tool - the large
> hadron collider - to answer their theoretical questions with empirical data.
>  I am in a comparable situation, but with one billionth the funding that
> they had.  Do I get cut a *little* slack? :-(

I will never say "It will never fly, Orville"... (other than perhaps on
some very specific narrow issue); that is the slack I will give you.
When you share the results of your experimentation such that other
scientists can replicate your amazing results, then I will say "well
done." As for the physicists, they built a lot of smaller colliders
along the way. There was one about ten feet across in the basement of
the science building at BYU... I'm sure that the preliminary results
that they achieved with those smaller colliders gave the people
funding the Large Hadron Collider confidence that they weren't throwing
their money down a rat hole.

> More when I can.

Fair enough.

