[ExI] Complex AGI [WAS Watson On Jeopardy]
Kelly Anderson
kellycoinguy at gmail.com
Tue Feb 22 13:30:14 UTC 2011
On Fri, Feb 18, 2011 at 10:48 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
> Well, I am struggling to find positive things to say, because
> you're tending to make very sweeping statements (e.g. "this is just
> philosophy" and "this is not science") that some people might interpret
> as quite insulting.
I don't mean to be insulting. I am trying to draw out something real,
useful and substantial from you. I have some degree of faith that you
have something interesting to say, and I'm trying to get at it.
I don't think I am confused about what you have said vs what John has
said. Twenty years on mailing lists has focused my mind fairly well on
keeping who said what straight. Funny that I've learned to do that
without any conscious effort... the brain really is amazing.
> So what have I actually claimed? What have I been defending?
YES, YES, YES, that is what I want to know!
> Well,
> what I do say is that IMPLICIT in the papers I have written, there is
> indeed an approach to AGI (a framework, and a specific model within that
> framework). There is no way that I have described an AGI design
> explicitly, in enough detail for it to be evaluated, and I have never
> claimed that. Nor have I claimed to have built one yet. But when pressed
> by people who want to know more, I do point out that if they understand
> cognitive psychology in enough detail they will easily be able to add up all
> the pieces and connect all the dots and see where I am going with the work I
> am doing.
Well it's good to know that when you fly the planes into the
established AI buildings, we will be able to say we should have
connected the dots. :-)
> The problem is that, after saying that you read my papers already, you
> were quite prepared to dismiss all of it as "philosophizing" and "not
> science". I tried to explain to you that if you understood the
> cognitive science and AI and complex systems background from which the
> work comes, you would be able to see what I meant by there being a
> theory of AGI implicit in it, and I did try to explain in a little more
> detail how my work connects to that larger background. I pointed out
> the thread that stretches from the cog psych of the 1980s, through
> McClelland and Rumelhart, through the complex systems movement, to the
> particular (and rather unusual) approach that I have adopted.
Referring to a group of other people's work, saying "read this with
this other thing in mind," is a little like me saying that if you read
Wikipedia while thinking about some topic, you'll come up with the result.
Be a little more explicit. A little less vague. That's all I'm asking
for. I didn't get much of a specific nature about your particular
approach from your papers.
> I even pointed out the very, very important fact that my complex systems
> paper was all about the need for a radically different AGI methodology.
> Now, I might well be wrong about my statement that we need to do things in
> this radically different way, but you could at least realize that I have
> declared myself to be following that alternate methodology, and therefore
> understand what I have said about the priority of theory and a particular
> kind of experiment, over hacking out programs. It is all there, in the
> complex systems paper.
What I hear is you railing against the current state of the art, but
without suggesting something different in a specific way. You do
suggest a vague framework generator, which is interesting, but not
useful in a SCIENTIFIC way; i.e., it does not immediately suggest an
experiment that I can reproduce.
> But even after me pointing out that this stuff has a large context that
> you might not be familiar with, instead of acknowledging that fact, you are
> still making sweeping condemnations! This is pretty bad.
I am roughly familiar with most of the context you give. The only
sweeping condemnation I have given is that you sweepingly condemn your
"competition" and that you haven't yet shared any useful results. You
have admitted the second now, so I see that as progress. It is your
callous negation of the work of others that I condemn, not your work.
I don't understand enough about your work to condemn it, and I haven't
condemned your work, just your approach to everyone else.
> More generally:
>
> I get two types of responses to my work. One (less common) type of
> response is from people who understand what I am trying to say well
> enough that they ask specific, focussed questions about things that are
> unclear or things they want to challenge. Those people clearly understand
> that there is a "there" there .... if the papers I wrote were empty
> philosophising, those people would never be ABLE to send coherent
> challenges or questions in my direction. Papers that really are just
> empty philosophising CANNOT generate that kind of detailed response,
> because there is nothing coherent enough in the paper for anyone to get
> a handle on.
OK. I haven't given any of those types of responses at this time, but
I give some of my thoughts later in this (longish) email.
> Then there is the second kind of response. From these people I get
> nothing specific, just handwaving or sweeping condemnations. Nothing
> that indicates that they really understood what I was trying to say.
I think I have stated fairly clearly that I haven't understood the
details of your ideas. You haven't shared enough for me to do that.
Perhaps I have unfairly blamed you for that. Perhaps if I were to
spend months digging into your ideas I would come up with something
solid to refute or agree with.
> They reflect back my arguments in a weird, horribly distorted form
> -- so distorted that it has no relationship whatsoever to what I
> actually said -- and when I try to clarify their misunderstandings
> they just make more and more distorted statements, often wandering
> far from the point. And, above all, this type of response usually
> involves statements like "Yes, I read it, but you didn't say anything
> meaningful, so I dismissed it all as empty philosophising".
Richard, there is nothing *empty* about your philosophy. But as a
computer scientist I don't see anything concrete, reproducible and
useful in your papers so far. It isn't a put-down to call it
philosophy when all it offers is a general idea about where things
should go.
> I always try to explain and respond. I have put many hours into
> responding to people who ask questions, and I try very hard to help
> reduce confusions. I waste a lot of time that way. And very often, I
> do this even as the person at the other end continues to deliver mildly
> derogatory comments like "this isn't science, this is just speculation"
> alongside their other questions.
It must be frustrating. I have a glimpse of where you are going. It
isn't speculation, and some day it may become science. Today, however,
in the paper you had me look at, it is not yet presented in a
scientific manner. That's all I've said, and if you feel that is a
poor description of what you do, all I can say is that is how your
paper reads.
> If you want to know why this stuff comes out of cognitive psychology, by
> all means read the complex systems paper again, and let me know if you
> find the argument presented there, for why it HAS to come out of
> cognitive psychology. It is there -- it is the crux of the argument. If you
> believe it is incorrect, I would be happy to debate the rationale for it.
Here I assume you are referring to your 2007 paper entitled "Complex
Systems, Artificial Intelligence and Theoretical Psychology" (I would
point out that my spending 10 minutes finding what I THINK is the
right paper is indicative of the kind of wild-goose chase I have
to go on to have a conversation with you)...
8:37PM
... Carefully Reading ...
"It is arguable that intelligent systems must involve some amount of
complexity, and so the global behavior of AI systems would therefore
not be expected to have an analytic relation to their constituent
mechanisms."
What other kind of relation would a global result have to the
constituent algorithms? After reading the whole paper, I know what you
are getting at, but this particular sentence is hard to grok in an
abstract.
"the results were both impressive and quick to arrive."
Quick results, I like the sound of that. Of course the paper is now
nearly four years old... have you achieved any quick results that you
can share?
"If the only way to solve the problem is to declare the personal
philosophy and expertise of many AI researchers to be irrelevant at
best, and obstructive at worst, the problem is unlikely to even be
acknowledged by the community, let alone addressed."
In other words, everyone else is stupid. That's a good way to get
people interested in your research. I, for one, am trying to get past
your ego.
"A complex system is one in which the local interactions between the
components of the
system lead to regularities in the overall, global behavior of the
system that appear to be
impossible to derive in a rigorous, analytic way from knowledge of the
local interactions."
To paraphrase, a complex system is non-deterministic, or semi-random,
or at least incomprehensible. I suppose that describes the brain to
some extent, so despite the confrontational definition, I admit that
you might be onto something here. At least it is a clear definition of
what you mean by "complex system." It sounds similar to chaos theory
as well, and perhaps you are thinking in that direction.
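To make the chaos comparison concrete (my own standard illustration,
not something from your paper): the logistic map is completely
deterministic at the local level, yet two trajectories that start a
hair apart diverge until they are effectively unrelated, so there is
no analytic shortcut to the long-run global behavior.

    # Logistic map x -> r*x*(1-x): a deterministic local rule whose
    # long-run behavior at r=4 is chaotic. Two starting points 1e-10
    # apart end up effectively uncorrelated.
    r = 4.0
    a, b = 0.2, 0.2 + 1e-10
    for step in range(60):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
    print(abs(a - b))  # on the order of 1, not 1e-10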
You discuss the "problem space", and then declare that there is only a
small portion of that space that can be dealt with analytically. Fair
enough, AND the human brain can only attack a small part of the
"problem space" which overlaps partially with the part of the space
that analysis can crack. Think of a Venn Diagram. We obviously need
more kinds of intelligence to crack all of the problems out there in
the "problem space". I would point out that Google and Watson are the
first of a series of problem solvers that may create another circle in
the Venn diagram, but it is hard to say if this really is the case at
this point. I suspect that we will eventually see many circles in such
a Venn diagram.
I have spoken to my friends for years about Gestalt emergent
intelligence (as in ant colonies, neural nets, etc.). I believe in
that. I don't think your global-local disconnect is terribly different
from the common sense notion that the "whole is greater than the sum
of its parts," so I accept that.
Wolfram's "computational irreducibility" comments relate to the idea
that you can't know the results of running some programs until you
actually run them. That is, there is no shortcut to the answer. While
you explain this concept well, I don't see how it applies to AI
systems. That may be my limitation. However, the only way to see how
Watson is going to answer a particular question is to ask it. That
seems close to computational irreducibility.
I have seen Wolfram himself describe how computational irreducibility
relates to the Game of Life. He explained it well, and in a manner
consistent with your paper.
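To pin that down with code (my sketch, not anything from your paper or
from Wolfram): the local rule of Life is a few lines, but the only
general way to learn a pattern's fate is to run it. The R-pentomino
below famously churns for 1103 generations before settling down, and
nobody has derived that from the rule analytically.

    from collections import Counter

    def step(cells):
        # cells is a set of live (x, y) coordinates on an infinite grid
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # birth on 3 neighbors; survival on 2 or 3
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}  # R-pentomino
    for generation in range(1103):
        cells = step(cells)
    print(len(cells))  # you only know the answer by running it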
"This seems entirely reasonable—and if true, it would mean that our
search for systems that exhibit intelligence is a search for systems
that (a) have a characteristic that we may never be able to define
precisely, and (b) exhibit a global-local-disconnect between
intelligence and the local mechanisms that cause it."
I agree with this statement. I think I understand this statement in a
deep way. Perhaps even in a similar way as you intend it to be
understood. It is more of a philosophical statement than a scientific
one (I don't mean that in a negative way, just a descriptive way, in
that while you can believe it, it may be impossible to prove). I also
would add that systems like Watson have both of these characteristics.
It surprises its creators every time it plays. In many cases, I think
they are stupefied as to how Watson does it.
"In the very worst case we might be forced to do exactly what he was
obliged to do: large numbers of simulations to discover empirically
what components need to be put together to make a system intelligent."
This sounds like the Kurzweil 'reverse engineer the brain' and then
optimize approach. Thus far, this is one of the more plausible
methodologies I've heard suggested, and there is a lot of great work
going on in this direction. It's a little more directed and
understandable than your search for multiple kinds of intelligences.
"If, as part of this process, we make gradual progress and finally
reach the goal of a full, general purpose AI system that functions in
a completely autonomous way, then the story ends happily."
I think this is what I said about Watson. To be fair, you do quickly
counter this statement.
"This is fairly straightforward. All we need to do is observe that a
symbol system in which (a) the symbols engage in massive numbers of
interactions, with (b) extreme nonlinearity everywhere, and (c) with
symbols being allowed to develop over time, with (d) copious
interaction with an environment, is a system that possesses all the
ingredients of complexity listed earlier. On this count, there are
very strong grounds for suspicion."
You go on to praise the earlier work in back propagation neural
networks. I think those are pretty cool too, and that sort of approach
is inherently more human-like and "complex" than the historical large
LISP symbol processing programs. The problem (IMHO) historically has
been that neural networks haven't been commonly realized in hardware
(this may be changing with FPGAs and such), and that they are
typically implemented as digital systems instead of analog systems.
You encourage your reader to "[remain] as agnostic as possible about
the local mechanisms that might give rise to the global characteristics
we desire" and "we should organize our work so that we can look at the
behavior of large numbers of different approaches in a structured
manner." I would suggest that you take this
more to heart. If one of the local mechanisms is a lexical analyzer of
text, or a gender analysis, or a neural network weighing the
importance of the various lower levels of the complex system, all that
should be good for you. Yet, you dismiss JUST SUCH a system as
"trivial". Watson is a good test subject for your framework analyzer.
Yes, it was produced by hand by nearly 100 people over four years, but
you didn't have to put any effort into it.
"But if the Complex Systems Problem is valid, this reliance on
mathematical tractability would be a mistake, because it restricts the
> scope of the field to a very small part of the space of possible
systems. There is simply no reason why the systems that show
intelligent behavior must necessarily have global behaviors that are
mathematically tractable (and therefore computationally reducible).
Rather than confine ourselves to systems that happen to have provable
global properties, we should take a broad, empirical look at the
properties of large numbers of systems, without regard to their
tractability."
I agree with this statement. That may surprise you. I think that a
neural network that can be mathematically proven to be equivalent to a
Bayesian analysis should be replaced with the Bayesian analysis
(unless the NN can be implemented in hardware, in which case there is
good reason, on efficiency grounds, to use that approach).
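For what it's worth, here is the textbook version of that equivalence
(my sketch, not from your paper): a single sigmoid unit with the right
weights computes exactly the Bayesian posterior for two Gaussian
classes with a shared variance, so in software nothing is gained by
dressing the Bayesian calculation up as a neuron.

    import math

    # Two classes with 1-D Gaussian likelihoods and a shared variance.
    mu0, mu1, sigma2 = -1.0, 1.0, 1.0   # class means, common variance
    p0, p1 = 0.5, 0.5                   # class priors

    def bayes_posterior(x):
        # P(class 1 | x) straight from Bayes' rule
        l0 = p0 * math.exp(-(x - mu0) ** 2 / (2 * sigma2))
        l1 = p1 * math.exp(-(x - mu1) ** 2 / (2 * sigma2))
        return l1 / (l0 + l1)

    # The same posterior as a single sigmoid "neuron": sigmoid(w*x + b)
    w = (mu1 - mu0) / sigma2
    b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma2) + math.log(p1 / p0)

    def sigmoid_unit(x):
        return 1 / (1 + math.exp(-(w * x + b)))

    print(bayes_posterior(0.3), sigmoid_unit(0.3))  # identical values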
"The only way to find this out is to do some real science."
Here is a statement that I can support 100%. No reservations.
You eschew the study of neurons directly in part because "we have
little ability to report subjective events at the millisecond
timescale."
While I grant that was mostly true in 2007 when you wrote the paper,
it is MUCH less true today.
You then discuss a kind of framework-generating system that would use
something analogous to a genetic algorithm to create (complex)
frameworks that could then be evaluated for their ability to exhibit
cognition. The question, in genetic terms, is: what should the fitness
test be? You don't really answer that. Other than that, this is an
interesting idea that might be reducible to a concrete approach if
more details were forthcoming.
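To make my question concrete, here is the skeleton I have in mind. It
is entirely hypothetical: the names are mine, and score_cognition is a
toy stand-in precisely because the real fitness test is the part the
paper leaves open.

    import random

    def score_cognition(genome):
        # Placeholder. The real question is what goes here: how do you
        # score "cognition" in a candidate framework, several per day?
        return -sum((x - 0.5) ** 2 for x in genome)  # toy objective

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    def mutate(g, rate=0.05):
        return [x + random.gauss(0, 0.1) if random.random() < rate else x
                for x in g]

    def evolve(pop_size=50, genome_len=20, generations=100):
        # each genome parameterizes one candidate cognitive framework
        pop = [[random.random() for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=score_cognition, reverse=True)
            parents = ranked[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=score_cognition)

    best = evolve()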
"The way to make it possible is by means of a software development
environment specifically designed to facilitate the building and
testing of large numbers of similar, parameterized cognitive systems.
The author is currently working on such a tool, the details of which
will be covered in a later paper."
I assume we are still waiting for this paper. What I can't begin to
understand is how a researcher would be able to determine if such a
system was good or bad at the rate of several per day. That seems
analogous to taking every infant in the hospital, interacting with
them for two hours, and trying to determine which one would make the
best theoretical physicist. That part seems hard.
I am definitely one of the "scruffs" you describe in your conclusion.
I am not tied to mathematical elegance in any way. I'm more impressed
with what works. Watson works, and therefore I am impressed by that.
> But, please, don't read several papers and just say afterward "All I
> know at this point is that I need to separate the working brain from the
> storage brain. Congratulations, you have recast the brain as a Von
> Neumann architecture". It looks more like I should be saying, if I were
> less polite, "Congratulations, you just understood the first page of a
> 700-page cognitive psychology context that was assumed in those papers".
> But I won't ;-).
It is now 10:15 PM. I have spent nearly two hours reading the paper
you described as being among your best efforts. Much of what is in
your paper is true. Some is conjecture, and you make it pretty clear
when it is. The argument that intelligence requires an irreducible
system is interesting, and possibly mathematically true, even though
you don't necessarily claim that. Knowing that doesn't seem to help
much in designing a system, but that could be a lack of imagination
and/or knowledge on my side. The proposal to develop a framework
generator is interesting too, and like Einstein's thought experiments
(riding light beams and so forth) it may lead in a fruitful direction.
I got enough out of reading this paper that I am interested in reading
the next promised paper (if and when it is ever finished).
All that being said, I stick by what I said earlier. This particular
paper is more a work of philosophy than science. Please don't be
offended by that. I have a GREAT deal of respect for philosophy. This
paper may, in fact, be a great work of philosophy. Remember that the
meaning of philosophy is a love of wisdom. It is clear that a lot of
thought has gone into it. But there is no evidence in the paper other
than an appeal to common sense (albeit a very vertical kind of common
sense) that the assertions made therein are correct. There are no
experiments to be repeated (other than the thought experiments). There
is no program to run to verify your results (partially because you
don't claim any yet). There are no algorithms shared. There are
descriptions of complex systems, but only conjecture that they may be
important.
In addition, there is a contempt for the work of others. Now, being a
big Ayn Rand fan, I can actually admire that kind of individualism,
but you only get the right to impose that level of self-assurance on
others AFTER your work has produced some results. In the meantime,
you will have to live with working in the rock quarry (see Ayn Rand's
The Fountainhead).
Richard, you are a good, and perhaps a great philosopher. You may be a
good or great scientist too, but that is indiscernible from that
paper.
I reviewed this twice for tone... hopefully, it isn't too insulting.
It isn't meant to be.
-Kelly