[ExI] Watson on NOVA

Richard Loosemore rpwl at lightlink.com
Sun Feb 13 18:25:08 UTC 2011


spike wrote:
>  On Behalf Of Richard Loosemore
> Subject: Re: [ExI] Watson on NOVA
> 
> Kelly Anderson wrote:
>>> ... While I am clearly jazzed about Watson, and I do know for sure now
> that Watson uses statistical learning algorithms...
> 
>> ...I strongly suspected that it was using some kind of statistical
> "proximity" algorithms to get the answers.  And in that case, we are talking
> about zero advancement of AI... can you see what I mean when I say that this
> is a complete waste of time?...Richard Loosemore
> 
> Richard I see what you mean, but I disagree.  We know Watson isn't AI, and
> this path doesn't lead there directly.  But there is value in collecting a
> bunch of capabilities that are in themselves marketable.  Computers play
> good chess, they play Jeopardy, they do this and that, and eventually they
> make suitable (even if not ideal) companions for impaired humans.  That
> generates money (lots of it, in that case), which brings talent into the
> field and inspires the young to dream that AI can somehow be accomplished.
> It inspires young minds to imagine the potential of software, as opposed
> to wasting their lives and talent on politics or hedge fund management,
> for instance.
> 
> For every AI researcher we lose to fooling around with Watson, we gain ten
> more who are inspired by that non-AI exercise.
> 
> In that sense Watson may indirectly advance AI.

This is exactly what has been happening.

But the only people it has drawn into AI are:

(a) People too poorly informed to understand that Watson represents a 
non-achievement, and who are therefore extremely low-quality talent; or

(b) People who quite brazenly declare that the field called "AI" is not 
really about building intelligent systems, but about futzing around with 
mathematics and various trivial algorithms.

Either way, the field loses.
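
For anyone who wants to see what a "statistical proximity" approach 
means in practice, here is a toy sketch, my own illustration in plain 
Python, not Watson's actual DeepQA pipeline (which layers many more 
components on top of ideas like this): represent the clue and each 
candidate passage as bags of words, and return the passage whose word 
counts are closest to the clue's.

import math
from collections import Counter

def bag_of_words(text):
    """Lowercase the text and count word occurrences."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def best_match(clue, candidates):
    """Return the candidate passage statistically closest to the clue."""
    clue_vec = bag_of_words(clue)
    return max(candidates,
               key=lambda c: cosine_similarity(clue_vec, bag_of_words(c)))

# Hypothetical toy corpus, purely for illustration:
passages = [
    "Isaac Newton formulated the laws of motion and universal gravitation.",
    "Albert Einstein developed the theory of general relativity.",
]
print(best_match("This physicist developed the theory of relativity", passages))
# Prints the Einstein passage, selected on word overlap alone.

Note that this produces a correct answer with zero comprehension of the 
question, which is precisely the point: proximity matching can look like 
understanding without involving any.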

I have been watching this battle go on throughout my career. All I am 
doing is reporting the obvious patterns that emerge if you look at the 
situation from the inside for long enough.

I went to conferences back in the 1980s when people talked about simple 
language understanding algorithms, and I understood exactly what they 
were trying to do and what they had achieved so far.  Then I went to an 
AGI workshop in 2006, and to my utter horror I watched some people 
present their research on a simple language understanding system.  It 
was exactly the same stuff that I had seen 20 years before, and they 
appeared to have no awareness that this had already been done, and that 
the technique had subsequently gone nowhere.

You can discount my opinion if you like, but does it not count for 
anything at all that I have been working in this field since I first got 
interested in it in 1980?  This is not armchair theorizing:  I am 
just doing my best to summarize a lot of experience.



Richard Loosemore



