[ExI] Watson On Jeopardy.

Kelly Anderson kellycoinguy at gmail.com
Wed Feb 23 08:30:11 UTC 2011

On Tue, Feb 22, 2011 at 9:13 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
> I am very much aware that he had nice things to say about Watson, but my
> point was that he expressed many reservations about Watson, so I was
> using his article as a counterfoil to your statement that "99% of
> everyone else thinks it is a great success already".  I just felt that
> it was not accurate to paint me as the lone, 1% voice against, with 99%
> declaring Watson to be a great achievement on the road to real AGI.

Your turn to misunderstand what I said. I did not say that 99% of
people would say that Watson was on the road to AGI, but merely that
it was a substantial achievement that SUCCEEDED at its stated goal of
defeating human Jeopardy champions. Surely, you aren't arguing that
Watson lost... :-)   So by any reasonable examination of their short
term goal, the Watson team succeeded. And 99% of everyone would say
"cool". You are the 1% that says, "so what." This "bah humbug"
attitude is what I find so off-putting. Only that.

>    "Some AI researchers believe that this sort of artificial
>     general intelligence will eventually come out of incremental
>     improvements to 'narrow AI' systems like Deep Blue, Watson
>     and so forth.   Many of us, on the other hand, suspect that
>     Artificial General Intelligence (AGI) is a vastly different
>     animal."

I don't disagree with that. I don't think I've stated that Watson is
definitely on the evolutionary road to AGI, merely that it was
successful and cool. You just won't give them that, and I guess that's
what bugs me the most about your position. I won't definitively say
that Watson isn't on the road to some kind of AGI either. Admittedly,
it probably wouldn't be a very human-like AGI... but it could be
intelligent and general. Maybe.

> My position is more strongly negative than his (and his position, in
> turn, is more negative than Kurzweil's).

Kurzweil gives Watson a little too much credit, IMHO. But in the sense
that Watson can make money, he and I are on the same page.

> Well, a lot of that was explained in the complex systems paper.

As you know, I read that with a fine-tooth comb.

> At the risk of putting a detailed argument in so small a space, it goes
> something like this.
> AI researchers have, over the years, publicized many supposedly great
> advances, or big new systems that were supposed to be harbingers of real AI,
> just around the corner.  People were very excited about SHRDLU.  The
> Japanese went wild over Prolog.  Then there was the "knowledge based
> systems" approach, aka "expert systems".  Earlier on there was a 1960s craze
> for "machine translation".  In the late 1980s there were "neural networks"
> vendors springing up all over the place.  And these were just the paradigms
> or general clusters of ideas ... never mind the specific systems or programs
> themselves.

So the problem is that people have over-promised and under-delivered.
Are you absolutely sure you aren't over-promising?

> Now, the pattern is that all these ideas were good at bringing down some
> low-hanging fruit, and every time the proponents would say "Of course, this
> is just meant to be a demonstration of the potential of this new
> technique/approach/program:  what we want to do next is expand on this
> breakthrough and find ways to apply it to more significant problems". But in
> each case it turned out that extending it beyond the toy cases was
> fiendishly hard, and eventually the effort was abandoned when the next
> bandwagon came along.

Yup, that's a really big problem. You do propose (vaguely) a method
for getting around that, and that is quite exciting. I'll be very
interested when you publish more about that.

>> Hehe... which one was that? They all seemed pretty philosophical to my
>> mind. None of them said... here is an algorithm that might lead to
>>  general artificial intelligence...
> Kelly :-(.  You do not know what a real philosophy paper is, eh?
> The word "philosophy", the way you use it in the above, seems to mean
> "anything that is not an algorithm".

No, a philosophy paper is one that says, "Here is what I think. What
do you think of that?" A scientific paper says, "Here is what I did,
and here is how you can do it too." The critical difference is
reproducible results. Kind of like how a patent has to explain to one
"skilled in the art" how to do something.

I like your paper very much in the sense that it made me really think
hard, and potentially in a productive direction.

I will finish with one last question. When do you anticipate
publishing your paper on your framework generator?
