[ExI] Watson On Jeopardy.

Kelly Anderson kellycoinguy at gmail.com
Fri Feb 18 08:39:26 UTC 2011

On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Okay, first:  although I understand your position as an Agilista, and your
> earnest desire to hear about concrete code rather than theory ("I value
> working code over big ideas"), you must surely acknowledge that in some
> areas of scientific research and technological development, it is important
> to work out the theory, or the design, before rushing ahead to the
> code-writing stage.

This is the scientist vs. engineer battle. As an engineering type of
scientist, I prefer to perform experiments along the way to determine
whether my theory is correct. Newton performed experiments to verify
his theories, and their results influenced his next theory. Without
the experiments it would not be the scientific method, but rather
something closer to philosophy.
I'll let "real" scientists figure out how the organelles of the brain
function. I'll pay attention as I can to their findings. I like the
idea of being influenced by the designs of nature. I really like the
wall climbing robots that copy the techniques of the gecko. Really
interesting stuff that. I was reading papers about how the retina of
cats worked in computer vision classes twenty years ago.

I'll let cognitive scientists and doctors try to unravel the brain
using black-box techniques, and I'll pay attention as I can to their
results. These are interesting from the point of view of devising
tests to see whether what you have designed is similar to the human
brain. Phenomena like optical illusions are very interesting for
figuring out how human perception actually works.

As an Agilista with an entrepreneurial bent, I have little patience
for a self-described scientist working on theories that may not have
applications for twenty years. I respect that the mathematics for the
CAT scanner was developed in the 1920s, but the guy who developed
those techniques got very little out of the exercise. Aside from that,
if you can't reduce your theories to practice pretty soon, the
practitioners of "parlor tricks" will beat you to your goal.

> That is not to say that I don't write code (I spent several years as a
> software developer, and I continue to write code), but that I believe the
> problem of building an AGI is, at this point in time, a matter of getting
> the theory right.  We have had over fifty years of AI people rushing into
> programs without seriously and comprehensively addressing the underlying
> issues.  Perhaps you feel that there are really not that many underlying
> issues to be dealt with, but after having worked in this field, on and off,
> for thirty years, it is my position that we need deep understanding above
> all.  Maxwell's equations, remember, were dismissed as useless for anything
> -- just idle theorizing -- for quite a few years after Maxwell came up with
> them.  Not everything that is of value *must* be accompanied by immediate
> code that solves a problem.

I believe that many interesting problems are solved by throwing more
computational cycles at them. Then, once you have something that
works, you can optimize later. Watson is a system that works largely
because of the huge number of computational cycles being thrown at the
problem. As for the claim that AGI research has gone off the rails,
the only way you're going to convince anyone is with some kind of
intermediate result. Even flawed results would be better than nothing.

> Now, with regard to the papers that I have written, I should explain that
> they are driven by the very specific approach described in the complex
> systems paper.  That described a methodological imperative:  if intelligent
> systems are complex (in the "complex systems" sense, which is not the
> "complicated systems", aka space-shuttle-like systems, sense), then we are
> in a peculiar situation that (I claim) has to be confronted in a very
> particular way.  If it is not confronted in that particular way, we will
> likely run around in circles getting nowhere -- and it is alarming that the
> precise way in which this running around in circles would happen bears a
> remarkable resemblance to what has been happening in AI for fifty years.
>  So, if my reasoning in that paper is correct then the only sensible way to
> build an AGI is to do some very serious theoretical and tool-building work
> first.

See, I don't think Watson is "getting nowhere"... It is useful today.

Let me give you an analogy. I can see that when we can create nanotech
robots small enough to get into the human body and work at the
cellular level, then all forms of cancer are reduced to sending in
those nanobots with a simple program. First, detect cancer cells. How
hard can that be? Second, cut a hole in the wall of each cancer cell
you encounter. With enough nanobots, cancer, of all kinds, is cured.
Of course, we don't have nanotech robots today, but that doesn't
matter. I have cured cancer, and I deserve a Nobel Prize in Medicine.
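
The two-step "program" above can be sketched in a few lines. This is
purely illustrative code for the analogy; every name in it is
hypothetical, and the detection step is deliberately a one-line stub,
because that is exactly the part that is actually hard.

```python
# Purely illustrative sketch of the two-step nanobot "program" described
# above; every name here is hypothetical -- no such hardware exists.
from dataclasses import dataclass

@dataclass
class Cell:
    marker_level: float  # hypothetical surface-marker reading
    alive: bool = True

def is_cancerous(cell: Cell, threshold: float = 0.8) -> bool:
    # Step 1: "detect cancer cells" -- reduced to a single threshold
    # test here, which is precisely the hard part being waved away.
    return cell.marker_level > threshold

def nanobot_sweep(tissue: list[Cell]) -> int:
    # Step 2: "cut a hole in the wall of each cancer cell you encounter."
    lysed = 0
    for cell in tissue:
        if cell.alive and is_cancerous(cell):
            cell.alive = False  # stands in for lysing the cell
            lysed += 1
    return lysed

tissue = [Cell(0.1), Cell(0.95), Cell(0.3), Cell(0.9)]
print(nanobot_sweep(tissue))  # -> 2
```

The control flow is trivial; everything difficult is hidden inside
is_cancerous and the hardware that does not yet exist, which is the
point of the analogy.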

On the other hand, there are doctors with living patients today, and
they practice all manner of barbarous medicine in the attempt to kill
cancer cells without killing patients. The techniques are crude and
often unsuccessful, causing their patients a great deal of pain.
Nevertheless, these doctors do occasionally succeed in getting a
patient into remission.

You are the nanotech doctor. I prefer to be the doctor with living
patients needing help today. Watson is the second kind. Sure, the
first kind of cure is more general, more effective, and easier on the
patient, but it is simply not available today, even if you can see it
as an almost inevitable eventuality.

> And part of that theoretical work involves a detailed understanding of
> cognitive psychology AND computer science.  Not just a superficial
> acquaintance with a few psychology ideas, which many people have, but an
> appreciation for the enormous complexity of cog psych, and an understanding
> of how people in that field go about their research (because their protocols
> are very different from those of AI or computer science), and a pretty good
> grasp of the history of psychology (because there have been many different
> schools of thought, and some of them, like Behaviorism, contain extremely
> valuable and subtle lessons).

Ok, so you care about cognitive psychology. That's great. Are you
writing a program that simulates human psychology, even on a
primitive basis? Or is your real work so secretive that you can't
share your ideas? In other words, how SPECIFICALLY does your deep
understanding of cognitive psychology contribute to a working program
(even one that only solves a simple problem)?

> With regard to the specific comments I made below about McClelland and
> Rumelhart, what is going on there is that these guys (and several others)
> got to a point where the theories in cognitive psychology were making no
> sense, and so they started thinking in a new way, to try to solve the
> problem.  I can summarize it as "weak constraint satisfaction" or "neurally
> inspired" but, alas, these things can be interpreted in shallow ways that
> omit the background context ... and it is the background context that is the
> most important part of it.  In a nutshell, a lot of cognitive psychology makes
> a lot more sense if it can be re-cast in "constraint" terms.

Ok, that starts to make some sense. I have always considered context
to be the most important aspect of artificial intelligence, and one of
the more ignored. I think Watson does a lot in the area of addressing
context. Certainly not perfectly, but well enough to be quite useful.
I'd rather have an idiot savant helping me today than a nice theory
that might someday result in something truly elegant.
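
For what it's worth, the "weak constraint satisfaction" idea can be
illustrated with a toy relaxation network. This is my own generic
sketch, not anything taken from Loosemore's papers: units hold
hypotheses, symmetric weighted links encode soft constraints, and
repeated local updates settle into a state that satisfies as many
constraints as possible rather than all of them.

```python
# Toy "weak constraint satisfaction" network: a generic illustration,
# not anyone's published model. Four binary hypothesis units; positive
# weights mean "these hypotheses support each other", negative weights
# mean "they conflict". Constraints are soft: the settled state may
# still violate some of them.

W = {
    (0, 1):  1.0,   # units 0 and 1 support each other
    (0, 2): -1.5,   # units 0 and 2 conflict
    (1, 3):  0.5,
    (2, 3):  1.0,
}

def net_input(state, i):
    # Weighted "vote" of unit i's neighbours under the soft constraints.
    total = 0.0
    for (a, b), w in W.items():
        if a == i:
            total += w * state[b]
        elif b == i:
            total += w * state[a]
    return total

def relax(state, sweeps=10):
    # Asynchronous updates: each unit flips to agree with its net
    # input, Hopfield-style, until the network settles.
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if net_input(state, i) > 0 else 0
    return state

print(relax([1, 0, 1, 1]))  # settles to [0, 1, 1, 1]
```

Note that the settled state still leaves the positive (0, 1) link
unsatisfied: that is the "weak" part. The network honors the
constraints in aggregate, not individually.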

> The problem, though, is that the folks who started the PDP (aka
> connectionist, neural net) revolution in the 1980s could only express this
> new set of ideas in neural terms.  They made some progress, but then just as
> the train appeared to be gathering momentum it ran out of steam. There were
> some problems with their approach that could not be solved in a principled
> way.  They had hoped, at the beginning, that they were building a new
> foundation for cognitive psychology, but something went wrong.

They lacked a proper understanding of the system they were simulating.
They kept making simplifying assumptions/guesses because they didn't
have a full picture of the brain. I agree that neural networks as
practiced in the '80s ran out of steam; whether that was because there
was no hardware to run the algorithms fast enough, or because the
algorithms were flawed at their core, is an interesting question.
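
For concreteness, the kind of 1980s-era model in question can be
sketched in plain Python: a single hidden layer of sigmoid units
trained by backpropagation on XOR, the task single-layer perceptrons
famously cannot learn. This is a generic textbook sketch, not anyone's
actual research code.

```python
# Minimal 1980s-style network: one hidden layer of sigmoid units
# trained by plain stochastic gradient descent (backpropagation) on
# XOR. A generic textbook illustration, written for clarity, not speed.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: not linearly separable, so a hidden layer is required.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # number of hidden units
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H hidden + bias

def forward(x):
    hid = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    out = sigmoid(sum(w_o[i] * hid[i] for i in range(H)) + w_o[H])
    return hid, out

def loss():
    # Total squared error over the four training cases.
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

before = loss()
for _ in range(2000):
    for x, y in DATA:
        hid, out = forward(x)
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        for i in range(H):
            d_hid = d_out * w_o[i] * hid[i] * (1 - hid[i])  # backpropagated delta
            w_h[i][0] -= 0.5 * d_hid * x[0]
            w_h[i][1] -= 0.5 * d_hid * x[1]
            w_h[i][2] -= 0.5 * d_hid
            w_o[i] -= 0.5 * d_out * hid[i]
        w_o[H] -= 0.5 * d_out
print(f"loss before {before:.3f}, after {loss():.3f}")
```

This really is almost all the machinery the early PDP models had,
which is part of why the hardware-versus-algorithms question is a
live one.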

If the brain is simulated accurately enough, then we should be able to
get an AGI machine by that methodology. That will take some time of
course. Your approach apparently will also. Which is the shortest path
to AGI? Time will tell, I suppose.

> What I have done is to think hard about why that collapse occurred, and to
> come to an understanding about how to get around it.  The answer has to do
> with building two distinct classes of constraint systems:  either
> non-complex, or complex (side note:  I will have to refer you to other texts
> to get the gist of what I mean by that... see my 2007 paper on the subject).
>  The whole PDP/connectionist revolution was predicated on a non-complex
> approach.  I have, in essence, diagnosed that as the problem.  Fixing that
> problem is hard, but that is what I am working on.
> Unfortunately for you -- wanting to know what is going on with this project
> -- I have been studiously unprolific about publishing papers. So at this
> stage of the game all I can do is send you to the papers I have written and
> ask you to fill in the gaps from your knowledge of cognitive psychology, AI
> and complex systems.

This kind of sounds like you want me to do your homework for you... :-)

You have published a number of papers. The problem from my point of
view is that your approach in those papers is philosophical, not
scientific. Interesting, but not immediately useful.

> Finally, bear in mind that none of this is relevant to the question of
> whether other systems, like Watson, are a real advance or just a symptom of
> a malaise.  John Clark has been ranting at me (and others) for more than
> five years now, so when he pulls the old bait-and-switch trick ("Well, if
> you think XYZ is flawed, let's see YOUR stinkin' AI then!!") I just smile
> and tell him to go read my papers.  So we only got into this discussion
> because of that:  it has nothing to do with delivering critiques of other
> systems, whether they contain a million lines of code or not.  :-)   Watson
> still is a sleight of hand, IMO, whether my theory sucks or not.  ;-)

The problem from my point of view is that you have not revealed enough
of your theory to tell whether it sucks or not.

I have no personal axe to grind. I'm just curious, because you say, "I
can solve the problems of the world," and when I ask how, you say,
"read my papers." So I go and read the papers. I think I understand,
more or less, what you are saying in those papers, and I still don't
know how to go about creating an AGI using your model. All I know at
this point is that I need to separate the working brain from the
storage brain. Congratulations, you have recast the brain as a von
Neumann architecture... :-)

