[ExI] Watson on NOVA

Richard Loosemore rpwl at lightlink.com
Mon Feb 14 13:24:32 UTC 2011


Kelly Anderson wrote:
> On Sun, Feb 13, 2011 at 9:39 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Sadly, this only confirms the deeply skeptical response that I gave earlier.
>>
>> I strongly suspected that it was using some kind of statistical "proximity"
>> algorithms to get the answers.  And in that case, we are talking about zero
>> advancement of AI.
>>
>> Back in 1991 I remember having discussions about that kind of research with
>> someone who thought it was fabulous.  I argued that it was a dead end.
>>
>> If people are still using it to do exactly the same kinds of task they did
>> then, can you see what I mean when I say that this is a complete waste of
>> time?  It is even worse than I suspected.
> 
> For me the question is whether this is useful, not whether it will lead to AGI.
> 
> Is Watson useful? I would say yes, it is very close to being something useful.
> 
> Is it on the path to AGI? That's about as relevant as whether we
> descend directly from gracile australopithecines or robust
> australopithecines. Yes, that's an interesting question, but you
> need the competition to see what works out in the end. The evolution
> of computer algorithms will show that Watson or your stuff or reverse
> engineering the human brain or something else eventually leads to the
> answer. Criticizing IBM because you think they are working down the
> Neanderthal line is irrelevant to the evolutionary and memetic
> processes.
> 
> Honestly Richard, you come across as a mad scientist; that is, an
> angry scientist. All approaches should be equally welcome until one
> actually works. And saying that they should have spent the money
> differently is like saying we shouldn't save the $1 million preemie in
> Boston because that money could have been used to cure blindness in
> 10,000 Africans. Well, that's true, but the insurance company paying
> the bill doesn't have any right to cure blindness in Africa with their
> subscriber's money. IBM has a fiduciary responsibility to the
> shareholders, and Watson will earn them money if they do it right.

:-)

Well, first off, don't get me wrong, because I say all this with a 
smile.  When I went to the AGI-09 conference, there was one guy there 
(Ed Porter) who had spent many hours getting mad at me online, and he 
was eager to find me in person.  He spent the first couple of days 
failing to locate me in a gathering of only 100 people, all of whom were 
wearing name badges, because he was looking for some kind of mad, 
sullen, angry grump.  The fact that I was not old, and was smiling, 
talking and laughing all the time meant that he didn't even bother to 
look at my name badge.  We got along just great for the rest of the 
conference.  ;-)

Anyhow.

Just keep in mind one thing.  I criticize projects like Watson because 
if you look deeply at the history of AI you will notice that it seems to 
be an unending series of cheap tricks, all touted to be the beginning of 
something great.   But so many of these so-called "advances" were then 
followed by a dead end.  After watching this process happen over and 
over again, you can start to recognize the symptoms of yet another one.

The positive spin on Watson that you give, above, is way too optimistic. 
It is not a parallel approach, valid and worth considering in its own 
right.  It will not make IBM any money (Deep Blue didn't).  It has to 
run on a supercomputer.  It is not competition to any real AI project, 
because it just does a narrow-domain task in a way that does not 
generalize to more useful tasks.  It will probably not be useful, 
because it cheats:  it uses massive supercomputing power to crack a nut.

As a knowledge assistant that could help doctors with diagnosis:  fine, 
but it is not really pushing the state of the art at all.  There are 
already systems that do that, and the only difference between them and 
Watson is that you cannot assign one supercomputer to each doctor on 
the planet!

The list goes on and on.  But there is no point laboring it.

Here is my favorite Watson mistake, reported by NPR this morning:

Question:  "What do grasshoppers eat?"

Notice that this question contains very few words, meaning that Watson's 
cluster-analysis algorithm has very little context to work with here: 
all it can do is find contexts in which the words "eat" and 
"grasshopper" are in close proximity.  So what answer did Watson give?

"What is 'kosher'?"

Sigh!   ;-)
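The failure mode described above can be sketched in a few lines.  This 
is a hypothetical toy, not IBM's actual DeepQA pipeline: the corpus, 
candidate set, and window size are all invented for illustration.  It 
just counts how often a candidate answer appears near the question's 
keywords, and a corpus dominated by "are grasshoppers kosher?" pages 
will rank "kosher" above the food answer:

```python
# Toy proximity scorer -- a stand-in for the kind of statistical
# co-occurrence approach being criticized, NOT IBM's real algorithm.
from collections import Counter

# Invented mini-corpus: dietary-law pages outnumber the biology page.
corpus = [
    "locusts and grasshoppers are the only kosher insects",
    "whether a grasshopper is kosher depends on the species",
    "some grasshoppers are kosher under jewish dietary law",
    "grasshoppers eat grasses leaves and cereal crops",
]

def proximity_scores(question_terms, candidates, corpus, window=5):
    """Score each candidate answer by how often it occurs within
    `window` tokens of any question term, summed over the corpus."""
    scores = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok in candidates:
                lo, hi = max(0, i - window), i + window + 1
                context = tokens[lo:i] + tokens[i + 1:hi]
                scores[tok] += sum(t in question_terms for t in context)
    return scores

question = {"grasshoppers", "grasshopper", "eat"}
scores = proximity_scores(question, {"kosher", "grasses"}, corpus)
print(scores.most_common())  # "kosher" outscores "grasses" here
```

With no model of what "eat" actually means, the statistic simply 
rewards whichever word keeps company with "grasshopper" most often in 
the text it was fed.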



Richard Loosemore



