[extropy-chat] Intermediate steps and AGI

Benjamin Goertzel ben at goertzel.org
Sun Feb 11 22:44:59 UTC 2007


> >I don't have more time to
> >give to this dialogue either, really, I've got an AI to build...
>
> Good.
>
> Have you considered an intermediate step?  An intelligence amplifier that
> could build engineering models and keep track of details would be of
> enormous help.  Not to mention being worth a fortune.

This is a worthwhile topic.  I don't mean engineering modeling
specifically (though yeah, that is worthwhile too, I just don't have
much to say about it); I mean "intermediate steps to AGI" more
generally.

Unfortunately I have concluded that "intermediate steps" are going to
be of fairly limited **scientific** value within the Novamente
context.

The Novamente system was designed as an integrative system intended to
give rise to intelligent behavior via the combination of all the
parts.  It was not designed to become incrementally more intelligent
as more and more of its parts are introduced.  That would be a nice
property, but I don't currently know
how to make an AGI design that possesses it.  Nor does anyone, so far
as I know.

Unfortunately, according to the logic of the design itself, the
preliminary and partial forms of NM are not likely to display any
kind of awesomely powerful general intelligence.  It is just the
nature of the design that
the generality of the system's intelligence is supposed to emerge via
cooperative activity of all the different parts.  This is due to the
"complex, self-organizing systems" aspect of the design that some
probability-and-logic buffs don't adequately appreciate.

However, that doesn't imply that preliminary versions won't be able to
do anything useful.

For example, I think that Novamente-based "reinforcement learning +
memory" for embodied agent learning in a simulation world, if tuned
for efficiency, can be useful -- and should be doable using Novamente
inference and evolutionary learning, combined with pretty simplistic
versions of attention allocation and action selection, and without
concept creation or map formation or other advanced Novamente
dynamics.  We are doing experimental work along these lines right
now, teaching the system to play simple doggish games in the
simulation world.
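
To give a flavor of this, here is a toy sketch of "reinforcement
learning + memory" in Python: plain tabular Q-learning with an
epsilon-greedy policy on a one-dimensional "fetch" game.  To be
clear, this is purely illustrative, not Novamente code; the world,
the actions, the reward, and all the numbers are made up.

# Toy sketch, NOT Novamente code: tabular Q-learning on a 1-D "fetch"
# game.  The agent must walk to the ball, grab it, and carry it back
# to position 0 to get a reward.
import random

ACTIONS = ["left", "right", "grab"]

def step(state, action):
    # state = (agent position, ball position, carrying-flag)
    pos, ball, carrying = state
    if action == "left":
        pos = max(0, pos - 1)
    elif action == "right":
        pos = min(9, pos + 1)
    elif action == "grab" and pos == ball:
        carrying = True
    if carrying:
        ball = pos                    # the ball moves with the agent
    reward = 1.0 if (carrying and pos == 0) else 0.0
    return (pos, ball, carrying), reward, reward > 0

Q = {}                                # (state, action) -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(2000):
    state = (0, random.randint(1, 9), False)
    for _ in range(50):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt
        if done:
            break

In the Novamente version, the dumb lookup table is replaced by the
shared memory plus inference and evolutionary learning, which is what
should let the learned behavior generalize across situations.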

For another example, we have done some experiments running
Novamente-based inference on the output of a rule-based NLP system.
This seems to be useful for biological hypothesis generation when the
input texts are PubMed abstracts, for instance.
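
As a cartoon of the kind of chaining involved, consider this made-up
Python illustration in the spirit of Swanson-style "A-B-C" hypothesis
generation.  The triples, the confidence values, and the crude
multiply-the-confidences rule are all invented for the example; the
actual Novamente inference is probabilistic and far richer.

# Cartoon only, not the Novamente inference engine: chain relation
# triples extracted by a rule-based NLP pass over abstracts.
extracted = [
    # (subject, relation, object, confidence) -- all made up
    ("gene_A",    "inhibits",  "protein_B", 0.9),
    ("protein_B", "activates", "pathway_C", 0.8),
]

def chain(triples):
    # If X inhibits Y and Y activates Z, propose "X may suppress Z",
    # with the premises' confidences crudely multiplied together.
    out = []
    for x, r1, y1, c1 in triples:
        for y2, r2, z, c2 in triples:
            if y1 == y2 and r1 == "inhibits" and r2 == "activates":
                out.append((x, "may_suppress", z, c1 * c2))
    return out

print(chain(extracted))
# -> [('gene_A', 'may_suppress', 'pathway_C', 0.72...)]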

Also, Novamente "inference + evolutionary learning" can be a powerful
datamining tool; though we haven't used it as such yet, we probably
will at some point, as Novamente LLC has some commercial datamining
contracts that we are currently addressing using simpler tools.
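
The evolutionary-learning half of that, reduced to a cartoon, might
look like the following: a tiny genetic algorithm evolving a
conjunctive rule over binary features.  Again, this is a made-up toy
with fabricated data, not Novamente's actual evolutionary learning
component.

# Toy genetic algorithm for rule mining; records and features are
# fabricated for illustration.
import random

records = [
    # (binary features, class label)
    ({"a": 1, "b": 0, "c": 1}, 1),
    ({"a": 1, "b": 1, "c": 1}, 1),
    ({"a": 0, "b": 0, "c": 1}, 0),
    ({"a": 0, "b": 1, "c": 0}, 0),
]
FEATURES = ["a", "b", "c"]

def fitness(mask):
    # mask[i] == 1 means "feature i must be on"; score is the
    # accuracy of the conjunctive rule against the labels.
    hits = 0
    for row, label in records:
        predicted = all(row[f] == 1 for f, m in zip(FEATURES, mask) if m)
        hits += int(predicted == bool(label))
    return hits / len(records)

pop = [[random.randint(0, 1) for _ in FEATURES] for _ in range(20)]
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                          # keep the fitter half
    children = []
    for _ in range(10):
        child = random.choice(parents)[:]
        child[random.randrange(len(FEATURES))] ^= 1   # point mutation
        children.append(child)
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print(pop[0], fitness(pop[0]))   # best rule found and its accuracy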

So, I do think that NM can be **useful** before completion, for at
least three narrow-AI apps:

-- embodied agent control
-- reasoning based on NLP parsing
-- data mining

and probably a lot more.

However, all these are basically narrow-AI apps, at which an
incomplete NM is likely to perform incrementally rather than
mind-blowingly better than the best non-proto-AGI narrow-AI approaches
(which are themselves quite complicated, though not as complicated as
NM).

Alas, seeing the AGI "wake up" in any meaningful sense is not gonna
happen till essentially the whole NM design is implemented and tuned
and taught a bit.  That's just the nature of the beast.  Mind, or at
least the NM variant of mind, is holistic.

-- Ben G


