[ExI] Complex AGI [WAS Watson On Jeopardy]
Richard Loosemore
rpwl at lightlink.com
Fri Feb 18 17:48:47 UTC 2011
Kelly Anderson wrote:
> On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore
> <rpwl at lightlink.com> wrote:
>> Okay, first: although I understand your position as an Agilista,
>> and your earnest desire to hear about concrete code rather than
>> theory ("I value working code over big ideas"), you must surely
>> acknowledge that in some areas of scientific research and
>> technological development, it is important to work out the theory,
>> or the design, before rushing ahead to the code-writing stage.
>
> This is the scientist vs. engineer battle. As an engineering type of
> scientist, I prefer to perform experiments along the way to determine
> if my theory is correct. Newton performed experiments to verify his
> theories, and this influenced his next theory. Without the
> experiments it would not be the scientific method, but rather closer
> to philosophy.
>
> I'll let "real" scientists figure out how the organelles of the brain
> function. I'll pay attention as I can to their findings. I like the
> idea of being influenced by the designs of nature. I really like the
> wall climbing robots that copy the techniques of the gecko. Really
> interesting stuff that. I was reading papers about how the retina of
> cats worked in computer vision classes twenty years ago.
>
> I'll let cognitive scientists and doctors try and unravel the brain
> using black box techniques, and I'll pay attention as I can to their
> results. These are interesting from the point of view of devising
> tests to see if what you have designed is similar to the human brain.
> Things like optical illusions are very interesting in terms of
> figuring out how we do it.
>
> As an Agilista with an entrepreneurial bent, I have little patience
> for a self-described scientist working on theories that may not have
> applications for twenty years. I respect that the mathematics for the
> CAT scanner were developed in the 1920's, but the guy who developed
> those techniques got very little out of the exercise. Aside from
> that, if you can't reduce your theories to practice pretty soon, the
> practitioners of "parlor tricks" will beat you to your goal.
>
>> That is not to say that I don't write code (I spent several years
>> as a software developer, and I continue to write code), but that I
>> believe the problem of building an AGI is, at this point in time,
>> a matter of getting the theory right. We have had over fifty years
>> of AI people rushing into programs without seriously and
>> comprehensively addressing the underlying issues. Perhaps you feel
>> that there are really not that many underlying issues to be dealt
>> with, but after having worked in this field, on and off, for
>> thirty years, it is my position that we need deep understanding
>> above all. Maxwell's equations, remember, were dismissed as useless
>> for anything -- just idle theorizing -- for quite a few years after
>> Maxwell came up with them. Not everything that is of value *must*
>> be accompanied by immediate code that solves a problem.
>
>
> I believe that many interesting problems are solved by throwing more
> computational cycles at them. Then, once you have something that
> works, you can optimize later. Watson is a system that works largely
> because of the huge number of computational cycles being thrown at
> the problem. As far as AGI research being off the tracks, the only
> way you're going to convince anyone is with some kind of intermediate
> result. Even flawed results would be better than nothing.
>
>> Now, with regard to the papers that I have written, I should
>> explain that they are driven by the very specific approach
>> described in the complex systems paper. That described a
>> methodological imperative: if intelligent systems are complex (in
>> the "complex systems" sense, which is not the "complicated
>> systems", aka space-shuttle-like systems, sense), then we are in a
>> peculiar situation that (I claim) has to be confronted in a very
>> particular way. If it is not confronted in that particular way, we
>> will likely run around in circles getting nowhere -- and it is
>> alarming that the precise way in which this running around in
>> circles would happen bears a remarkable resemblance to what has
>> been happening in AI for fifty years. So, if my reasoning in that
>> paper is correct then the only sensible way to build an AGI is to
>> do some very serious theoretical and tool-building work first.
>
> See, I don't think Watson is "getting nowhere"... It is useful today.
>
> Let me give you an analogy. I can see that when we can create
> nanotech robots small enough to get into the human body and work at
> the cellular level, then all forms of cancer are reduced to sending
> in those nanobots with a simple program. First, detect cancer cells.
> How hard can that be? Second, cut a hole in the wall of each cancer
> cell you encounter. With enough nanobots, cancer, of all kinds, is
> cured. Of course, we don't have nanotech robots today, but that
> doesn't matter. I have cured cancer, and I deserve a Nobel prize in
> medicine!!!
>
> On the other hand, there are doctors with living patients today, and
> they practice all manner of barbarous medicine in the attempt to kill
> cancer cells without killing patients. The techniques are crude and
> often unsuccessful, causing their patients lots of pain. Nevertheless,
> these doctors do occasionally succeed in getting a patient into
> remission.
>
> You are the nanotech doctor. I prefer to be the doctor with living
> patients needing help today. Watson is the second kind. Sure, the
> first cure for cancer is more general, easier, more effective, easier
> on the patient, but is simply not available today, even if you can
> see it as an almost inevitable eventuality.
>
>
>> And part of that theoretical work involves a detailed understanding
>> of cognitive psychology AND computer science. Not just a
>> superficial acquaintance with a few psychology ideas, which many
>> people have, but an appreciation for the enormous complexity of cog
>> psych, and an understanding of how people in that field go about
>> their research (because their protocols are very different from
>> those of AI or computer science), and a pretty good grasp of the
>> history of psychology (because there have been many different
>> schools of thought, and some of them, like Behaviorism, contain
>> extremely valuable and subtle lessons).
>
> Ok, so you care about cognitive psychology. That's great. Are you
> writing a program that simulates a human psychology? Even on a
> primitive basis? Or is your real work so secretive that you can't
> share your ideas? In other words, how SPECIFICALLY does your deep
> understanding of cognitive psychology contribute to a working program
> (even if it only solves a simple problem)?
>
>> With regard to the specific comments I made below about McClelland
>> and Rumelhart, what is going on there is that these guys (and
>> several others) got to a point where the theories in cognitive
>> psychology were making no sense, and so they started thinking in a
>> new way, to try to solve the problem. I can summarize it as "weak
>> constraint satisfaction" or "neurally inspired" but, alas, these
>> things can be interpreted in shallow ways that omit the background
>> context ... and it is the background context that is the most
>> important part of it. In a nutshell, a lot of cognitive psychology
>> makes a lot more sense if it can be re-cast in "constraint" terms.
>
> Ok, that starts to make some sense. I have always considered context
> to be the most important aspect of artificial intelligence, and one
> of the more ignored. I think Watson does a lot in the area of
> addressing context. Certainly not perfectly, but well enough to be
> quite useful. I'd rather have an idiot savant to help me today than a
> nice theory that might some day result in something truly elegant.
>
>> The problem, though, is that the folks who started the PDP (aka
>> connectionist, neural net) revolution in the 1980s could only
>> express this new set of ideas in neural terms. They made some
>> progress, but then just as the train appeared to be gathering
>> momentum it ran out of steam. There were some problems with their
>> approach that could not be solved in a principled way. They had
>> hoped, at the beginning, that they were building a new foundation
>> for cognitive psychology, but something went wrong.
>
> They lacked a proper understanding of the system they were
> simulating. They kept making simplifying assumptions/guesses because
> they didn't have a full picture of the brain. I agree that neural
> networks as practiced in the 80s ran out of steam... whether it was
> because of a lack of hardware to run the algorithms fast enough, or
> whether the algorithms were flawed at their core is an interesting
> argument.
>
> If the brain is simulated accurately enough, then we should be able
> to get an AGI machine by that methodology. That will take some time
> of course. Your approach apparently will also. Which is the shortest
> path to AGI? Time will tell, I suppose.
>
>> What I have done is to think hard about why that collapse occurred,
>> and to come to an understanding about how to get around it. The
>> answer has to do with building two distinct classes of constraint
>> systems: either non-complex, or complex (side note: I will have
>> to refer you to other texts to get the gist of what I mean by
>> that... see my 2007 paper on the subject). The whole
>> PDP/connectionist revolution was predicated on a non-complex
>> approach. I have, in essence, diagnosed that as the problem.
>> Fixing that problem is hard, but that is what I am working on.
>>
>> Unfortunately for you -- wanting to know what is going on with this
>> project -- I have been studiously unprolific about publishing
>> papers. So at this stage of the game all I can do is send you to
>> the papers I have written and ask you to fill in the gaps from your
>> knowledge of cognitive psychology, AI and complex systems.
>
> This kind of sounds like you want me to do your homework for you...
> :-)
>
> You have published a number of papers. The problem from my point of
> view is that the way you approach your papers is philosophical, not
> scientific. Interesting, but not immediately useful.
>
>> Finally, bear in mind that none of this is relevant to the question
>> of whether other systems, like Watson, are a real advance or just
>> a symptom of a malaise. John Clark has been ranting at me (and
>> others) for more than five years now, so when he pulls the old
>> bait-and-switch trick ("Well, if you think XYZ is flawed, let's see
>> YOUR stinkin' AI then!!") I just smile and tell him to go read my
>> papers. So we only got into this discussion because of that: it
>> has nothing to do with delivering critiques of other systems,
>> whether they contain a million lines of code or not. :-) Watson
>> still is a sleight of hand, IMO, whether my theory sucks or not.
>> ;-)
>
> The problem from my point of view is that you have not revealed
> enough of your theory to tell whether it sucks or not.
>
> I have no personal axe to grind. I'm just curious because you say, "I
> can solve the problems of the world", and when I ask what those are,
> you say "read my papers"... I go and read the papers. I think I
> understand what you are saying, more or less in those papers, and I
> still don't know how to go about creating an AGI using your model.
> All I know at this point is that I need to separate the working brain
> from the storage brain. Congratulations, you have recast the brain
> as a Von Neumann architecture... :-)
>
> -Kelly
Kelly,
Well, I am struggling to find positive things to say, because
you're tending to make very sweeping statements (e.g. "this is just
philosophy" and "this is not science") that some people might interpret
as quite insulting. And at the same time, some of the things that other
people (e.g. John Clark) have said are starting to come back as if *I*
was the one who said them! ;-)
We need to be clear, first, that what we are discussing now has
nothing to do with Watson. John Clark made a silly equation between my
work and Watson, and you and I somehow ended up discussing my work. But
I will not discuss the two as if they are connected, if you don't mind,
because they are not. They are orthogonal.
You have also started to imply that certain statements or claims have
come from me .... so I need to be absolutely clear about what I have
said or claimed, and what I have not. I have not said "I can solve the
problems of the world". I am sure you weren't being serious, but even
so... ;-)
Most importantly I have NOT claimed that I have written down a complete
theory of AGI, nor do I claim that I have built a functioning AGI. When
John Clark said to me:
> So I repeat my previous request, please tell us all about the
> wonderful AI program that you have written that does things even more
> intelligently than Watson.
... I assumed that anyone who actually read this patently silly demand
would understand immediately that I was not being serious when I responded:
> Done: read my papers.
>
> Questions? Just ask!
John Clark ALWAYS changes the subject, in every debate in which he
attacks me, by asking that same idiotic, rude question! :-) I have
long ago stopped being bothered by it, and these days I either ignore
him or tell him to read my papers if he wants to know about my work.
I really don't know how anyone could read that exchange and think that I
was quietly agreeing that I really did claim that I had built a
"wonderful AI program ... that does things even more intelligently than
Watson".
So what have I actually claimed? What have I been defending? Well,
what I do say is that IMPLICIT in the papers I have written, there is
indeed an approach to AGI (a framework, and a specific model within that
framework). There is no way that I have described an AGI design
explicitly, in enough detail for it to be evaluated, and I have never
claimed that. Nor have I claimed to have built one yet. But when
pressed by people who want to know more, I do point out that if they
understand cognitive psychology in enough detail they will easily be
able to add up all the pieces and connect all the dots and see where I
am going with the work I am doing.
The problem is that, after saying that you read my papers already, you
were quite prepared to dismiss all of it as "philosophizing" and "not
science". I tried to explain to you that if you understood the
cognitive science and AI and complex systems background from which the
work comes, you would be able to see what I meant by there being a
theory of AGI implicit in it, and I did try to explain in a little more
detail how my work connects to that larger background. I pointed out
the thread that stretches from the cog psych of the 1980s, through
McClelland and Rumelhart, through the complex systems movement, to the
particular (and rather unusual) approach that I have adopted.
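For anyone who wants a slightly more concrete picture of what "weak
constraint satisfaction" means in that McClelland-and-Rumelhart
tradition, here is a cartoonishly simple relaxation network in Python.
It is purely illustrative: the units, weights and numbers are invented
for the example, and it is emphatically NOT my own model, only the
generic textbook idea of units settling into a state that satisfies as
many soft, violable constraints as possible.

    # Toy "weak constraint satisfaction" network, in the spirit of the
    # PDP / interactive-activation models.  All names and numbers are
    # invented for illustration only.
    import numpy as np

    # Symmetric weights between three hypothesis units: positive values
    # mean "these hypotheses support each other", negative values mean
    # "these hypotheses conflict".  Every constraint is soft -- any one
    # of them can be violated if the rest of the evidence outweighs it.
    W = np.array([[ 0.0,  0.6, -0.4],
                  [ 0.6,  0.0, -0.4],
                  [-0.4, -0.4,  0.0]])

    external = np.array([0.3, 0.0, 0.2])   # weak external evidence
    a = np.zeros(3)                         # activations start at rest

    for _ in range(200):
        net = W @ a + external
        # Excitation pushes a unit toward 1, inhibition pushes it
        # toward 0, with a small decay back toward rest.
        a += 0.1 * (np.where(net > 0, (1 - a) * net, a * net) - 0.05 * a)
        a = np.clip(a, 0.0, 1.0)

    print(np.round(a, 2))   # settled state: mutually supporting units win

The only point of the toy is that the answer emerges from many weak,
interacting constraints settling against one another, rather than from
any single rule being applied -- which is the sense in which, as I said
above, a lot of cognitive psychology reads more naturally when re-cast
in constraint terms.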
I even pointed out the very, very important fact that my complex systems
paper was all about the need for a radically different AGI methodology.
Now, I might well be wrong about my statement that we need to do
things in this radically different way, but you could at least realize
that I have declared myself to be following that alternate methodology,
and therefore understand what I have said about the priority of theory,
and of a particular kind of experiment, over hacking out programs. It is
all there, in the complex systems paper.
But even after me pointing out that this stuff has a large context that
you might not be familiar with, instead of acknowledging that fact, you
are still making sweeping condemnations! This is pretty bad.
More generally:
I get two types of responses to my work. One (less common) type of
response is from people who understand what I am trying to say well
enough that they ask specific, focussed questions about things that are
unclear or things they want to challenge. Those people clearly understand
that there is a "there" there .... if the papers I wrote were empty
philosophising, those people would never be ABLE to send coherent
challenges or questions in my direction. Papers that really are just
empty philosophising CANNOT generate that kind of detailed response,
because there is nothing coherent enough in the paper for anyone to get
a handle on.
Then there is the second kind of response. From these people I get
nothing specific, just handwaving or sweeping condemnations. Nothing
that indicates that they really understood what I was trying to say.
They reflect back my arguments in a weird, horribly distorted form
-- so distorted that it has no relationship whatsoever to what I
actually said -- and when I try to clarify their misunderstandings
they just make more and more distorted statements, often wandering
far from the point. And, above all, this type of response usually
involves statements like "Yes, I read it, but you didn't say anything
meaningful, so I dismissed it all as empty philosophising".
I always try to explain and respond. I have put many hours into
responding to people who ask questions, and I try very hard to help
reduce confusions. I waste a lot of time that way. And very often, I
do this even as the person at the other end continues to deliver mildly
derogatory comments like "this isn't science, this is just speculation"
alongside their other questions.
If you want to know why this stuff comes out of cognitive psychology, by
all means read the complex systems paper again, and let me know if you
find the argument presented there for why it HAS to come out of
cognitive psychology. It is there -- it is the crux of the argument. If
you believe it is incorrect, I would be happy to debate the rationale
for it.
But, please, don't read several papers and just say afterward "All I
know at this point is that I need to separate the working brain from the
storage brain. Congratulations, you have recast the brain as a Von
Neumann architecture". It looks more like I should be saying, if I
were less polite, "Congratulations, you just understood the first page
of a 700-page cognitive psychology context that was assumed in those
papers". But I won't ;-).
Richard Loosemore