[ExI] Watson On Jeopardy.
Richard Loosemore
rpwl at lightlink.com
Tue Feb 22 16:13:53 UTC 2011
Kelly Anderson wrote:
> I assume you are talking about this:
> http://hplusmagazine.com/2011/02/17/watson-supercharged-search-engine-or-prototype-robot-overlord/
>
Oh, make no mistake, Ben and I do not agree about a lot of AGI stuff
(though that does not stop us from collaborating on a joint paper, which
we happen to be doing right now).
I am very much aware that he had nice things to say about Watson, but my
point was that he expressed many reservations about Watson, so I was
using his article as a counterfoil to your statement that "99% of
everyone else thinks it is a great success already". I just felt that
it was not accurate to paint me as the lone, 1% voice against, with 99%
declaring Watson to be a great achievement on the road to real AGI.
Now, you did a very good job of picking out the pro-Watson excerpts
from Ben's essay :-) , but I think you would do better to focus on one
of his concluding remarks...
"Some AI researchers believe that this sort of artificial
general intelligence will eventually come out of incremental
improvements to 'narrow AI' systems like Deep Blue, Watson
and so forth. Many of us, on the other hand, suspect that
Artificial General Intelligence (AGI) is a vastly different
animal."
My position is more strongly negative than his (and his position, in
turn, is more negative than Kurzweil's).
>> Richard Loosemore wrote:
>> I don't want to write about Watson,
>> because I have seen so many examples of that kind of dead end and I
>> have already analyzed them as a *class* of systems. That is very
>> important. They cannot be fought individually. I am pointing to the
>> pattern.
>
> What is this pattern? I am (as a human being) an expert pattern
> recognizer. I am familiar with a number of approaches to AGI. Yet, I
> am having trouble recognizing the pattern you seem to think is so
> clear. Can you spell it out succinctly? (I realize this is a true
> challenge)
Well, a lot of that was explained in the complex systems paper.
At the risk of putting a detailed argument in so small a space, it goes
something like this.
AI researchers have, over the years, publicized many supposedly great
advances, or big new systems that were supposed to be harbingers of real
AI, just around the corner. People were very excited about SHRDLU. The
Japanese went wild over Prolog. Then there was the "knowledge based
systems" approach, aka "expert systems". Earlier on there was a 1960s
craze for "machine translation". In the late 1980s there were "neural
networks" vendors springing up all over the place. And these were just
the paradigms or general clusters of ideas ... never mind the specific
systems or programs themselves.
Now, the pattern is that all these ideas were good at bringing down some
low-hanging fruit, and every time the proponents would say "Of course,
this is just meant to be a demonstration of the potential of this new
technique/approach/program: what we want to do next is expand on this
breakthrough and find ways to apply it to more significant problems".
But in each case it turned out that extending it beyond the toy cases
was fiendishly hard, and eventually the effort was abandoned when the
next bandwagon came along.
What I tried to do in my 2007 paper was to ask whether there was an
underlying reason for these failures. The answer was that, yes, there
is indeed a pattern, but the reason is subtle.
So, the last thing I am going to do is analyze Watson for its
limitations, because the limitations are not at the surface level of
Watson's specific architecture, they are in the paradigm from which it
comes.
>> :-) Well, you may be confused by the fact that I wrote ONE
>> philosophy paper.
>
> Hehe... which one was that? They all seemed pretty philosophical to
> my mind. None of them said... here is an algorithm that might lead to
> general artificial intelligence...
Kelly :-(. You do not know what a real philosophy paper is, eh?
The word "philosophy", the way you use it in the above, seems to mean
"anything that is not an algorithm".
>> But have a look through the very small set of publications on my
>> website. One experimental archaeology, several experimental and
>> computational cognitive science papers. One cognitive neuroscience
>> paper.....
>>
>> I was trained as a physicist and mathematician.
>
> For Odin's sake man!!! Why didn't you say this in the beginning?!?
> This explains EVERYTHING!!
This is not helping.
>> I just finished teaching a class in electromagnetic theory this
>> morning. I have written all those cognitive science papers. I
>> was once on a team that ported CorelDraw from the PC to the Mac.
>
> Did you live in Orem? Perhaps we have run into each other before.
> Your name sounds familiar.
Well, I lived in Salt Lake City for a short period. I was working for an
extremely annoying company that claimed to be building FPGA supercomputers.
>> I am up to my eyeballs in writing a software tool in OS X that is
>> designed to facilitate the construction and experimental
>> investigation of a class of AGI systems that have never been built
>> before..... Isn't it a bit of a stretch to ask me to be proud to
>> be a philosopher? :-) :-)
>
> I am only going off of the papers I could find. Point me to the one
> specific paper that you feel is most scientific and I'll read it
> again. Happily.
You can make your mind up without my help. Not my problem.
>>>> And why do you assume that I am not doing experiments?! I am
>>>> certainly doing that, and doing massive numbers of such
>>>> experiments is at the core of everything I do.
>>> Good to hear. Your papers did not reflect that. Can you point me
>>> to some of your experimental results?
>> No, but I did not say that they did. It is too early to ask.
>
> Sigh. This reminds me of a story. A mathematician was asked to build
> a fence around a herd of cattle. He built a small corral being
> careful not to surround any cows, and then defined the larger area to
> be "inside". Problem solved.
Well, now you have trivialized the situation one time too many. You are
on your own.
>> Context. Physicists back in the 1980s who wanted to work on the
>> frontiers of particle physics had to spend decades just building
>> one tool - the Large Hadron Collider - to answer their theoretical
>> questions with empirical data. I am in a comparable situation, but
>> with one billionth the funding that they had. Do I get cut a
>> *little* slack? :-(
>
> I will never say "It will never fly Orville"... (other perhaps than
> in some very specific narrow issue) that is the slack I will give
> you. When you share the results of your experimentation such that
> other scientists can replicate your amazing results, then I will say
> "well done." As for the physicists, they built a lot of smaller
> colliders along the way. There was one about ten feet across in the
> basement of the science building at BYU... I'm sure that the
> preliminary results that they achieved with those smaller colliders
> gave the people funding the hadron collider confidence that they
> weren't throwing their money down a rat hole.
Sigh.
Richard Loosemore