[ExI] Watson On Jeopardy

Richard Loosemore rpwl at lightlink.com
Wed Feb 16 15:20:07 UTC 2011


David Lubkin wrote:
> Richard Loosemore wrote:
> 
>> This is *way* beyond anything that Watson is doing.
>>
>> What it does, essentially, is this:
>         :
>> It is a brick-stupid cluster analysis program.
>>
>> So, does Watson think about what the other contestants might be doing? 
>> Err, that would be "What is 'you have got to be joking'?"
> 
> You don't seem to have read what I wrote. The only question I raised 
> about Watson's current capabilities was whether it had a module to 
> analyze its failures and hone itself. *That* has been possible in 
> software for several decades.
> 
> (I've worked in pertinent technologies since the late 70's.)

Misunderstanding:  I was addressing the general appropriateness of the 
question (my intention was certainly not to challenge your level of 
understanding).

I was trying to point out that Watson is so close to being a statistical 
analysis of text corpora that it hardly makes sense to ask about all 
those "comprehension" issues you talked about.  Not in the same breath.

For example, you brought up the question of self-awareness of your own 
code-writing strategies, and the conscious adjustments you made to 
correct for them (... you noticed your own habit of making large numbers 
of off-by-one errors).

That kind of self-awareness is extremely interesting and is being 
addressed quite deliberately by some AGI researchers (e.g. myself).  But 
to even talk about such stuff in the context of Watson is a bit like 
asking whether next year's software update to (e.g.) Mathematica might 
be able to go to math lectures, listen to the lecturer, ask questions in 
class, send humorous tweets to classmates about what the lecturer is 
wearing, and get a good mark on the exam at the end of the course.

Yes, Watson can hone itself, of course!  As you point out, that kind of 
thing has been done for decades.  No question.  But statistical 
adaptation is far removed from awareness of one's own problem-solving 
strategies.  Kernel methods do not buy you models of cognition!
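
To be clear about what I mean by "statistical adaptation": something of 
roughly this flavor (a hypothetical sketch, not the Watson team's actual 
mechanism), in which per-category tallies of past hits and misses shift 
a buzz-in confidence threshold:

    from collections import defaultdict

    # Hypothetical self-honing loop: per-category accuracy tallies shift a
    # confidence threshold for buzzing in.  Names and constants are illustrative.
    history = defaultdict(lambda: [0, 0])  # category -> [correct, attempts]

    def record(category, was_correct):
        history[category][1] += 1
        if was_correct:
            history[category][0] += 1

    def buzz_threshold(category, base=0.5):
        correct, attempts = history[category]
        if attempts == 0:
            return base
        accuracy = correct / attempts
        # Demand higher confidence in categories where past answers were poor.
        return base + (0.5 - accuracy) * 0.4

    record("US Presidents", True)
    record("Opera", False)
    print(buzz_threshold("Opera"))  # raised bar (0.7) after a miss in that category

That is parameter tuning from feedback.  There is no representation 
anywhere of *why* an answer was wrong, and no model of the system's own 
strategy, which is exactly the gap I am pointing at.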

What is going on here -- what I am trying to point out -- is a fantastic 
degree of confusion.  One moment there is an admission that Watson is 
mostly doing a form of statistical analysis (plus tweaks).  The next 
moment people are making statements that jump from ground level up 
to the stratosphere, suggesting that this is the beginning of 
something like real AGI (the comments of the Watson team 
certainly imply that this is a major milestone in AI, and the press are 
practically announcing it as the second coming).

I am just trying to inject a dose of sanity.

And failing.



Richard Loosemore


