[ExI] Self improvement
The Avantguardian
avantguardian2020 at yahoo.com
Mon Apr 25 02:47:31 UTC 2011
----- Original Message ----
> From: Richard Loosemore <rpwl at lightlink.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Sent: Sun, April 24, 2011 8:54:59 AM
> Subject: Re: [ExI] Self improvement
>
> Eugen Leitl wrote:
> >
> >> that it is not possible to define "intelligent" in formal terms.
> >
> > Yes; but it's irrelevant to the key assumption of people believing
> > in guardian agents. Intelligence is as intelligence does, but in the
> > case of overcritical friendliness you have zero error margin, at each
> > interaction step of open-ended evolution on the side of both the guardian
> > and guarded agent populations. There is no "oh well, we'll scrap it and
> > redo it again". Because of this you need a brittle
> > definition, which already clashes with overcriticality, which
> > by necessity must be robust in order to succeed at all (in order
> > to overcome the sterility barrier in the absence of enough knowledge
> > about where fertile regions start).
>
> This paragraph makes absolutely no sense to me. Too dense.
I think what Eugen is getting at is that your dilemma is actually eerily similar
to the one faced by God in the book of Genesis. In order to even qualify as an
intelligent agent, the AI must always have a choice as to the actions it
performs at any given time because decision-making is tied up somewhere in the
very definitions of agency, morality, and intelligence. If you don't give it the
ability to make decisions, you are simply programming an "app" or automaton.
You could, as you suggest, write a heuristic module that constrains its moral
choices in very specific ways, but as the AI evolves over time and becomes more
intelligent, it is certain to have choices available to it that no human
programmer could have possibly foreseen and therefore made no allowance for in
its morality module. Because the consequences of error are grave, the agent's
moral code must be very unambiguous and specific, which is the same reason why
human laws are written in verbose and redundant legalese so that they cannot be
*misunderstood*. But since the AI is evolving in an open-ended fashion, the
agent's moral code must also be very general to account for choices we cannot
foresee. This is problematic since a piece of code cannot be both specific and
general at the same time, at least with regard to any reasonable performance
metric. See the "No Free Lunch Theorem" for further details.
So to recap: If you simply tell the AI to be "good", then it is free
to rationalize "good" however it pleases. If you give it a list of a million
things that it is forbidden to do, then it is liable to screw you over with the
million and first thing that comes to its mind.
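To put that recap in toy form (every action name below is invented by me, not
anyone's actual proposal), a finite blacklist is exactly the kind of "specific"
code that an open-ended action space walks right past:

FORBIDDEN = {
    "harm_human",
    "seize_power_grid",
    # ...imagine the other 999,998 entries the programmers thought of...
}

def is_permitted(action: str) -> bool:
    """Permit anything not explicitly on the blacklist."""
    return action not in FORBIDDEN

print(is_permitted("harm_human"))                     # False -- covered
print(is_permitted("outcompete_humans_for_habitat"))  # True  -- the million-and-first thing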
> The actual argument (which I always have to oversimplify for brevity) is that:
>
> 1) In order to understand cognition (not motivation, but thought processes) it
> is necessary to take an approach that involves multiple, simultaneous, dynamic,
> weak constraint satisfaction (a mouthful: basically, a certain type of unusual
> neural computation).
>
> 2) For theoretical reasons, it is best to have the motivation of such a system
> *decoupled* from the relaxation network... meaning that the motivation
> process is not just one part of the activity involved in thinking; its origin
> lies outside the flow of thought.
>
> 3) The mechanisms of motivation end up looking like a set of bias fields or
> gradients on the relaxation of the cognition process: so, there is a "slope"
> inherent in the cognition process, and the motivation mechanisms raise or lower
> that slope. (In fact, it is not one slope: it is a set of them, acting along
> different "dimensions").
>
> 4) This picture of the way that the human mind "ought" to operate appears to
> be consistent with the neuroscience that we know so far. In particular, there
> do indeed appear to be some distinct centers that mediate at least some
> motivations.
I don't think this is a bad model of how the limbic system interacts with higher
brain functions. It seems to reflect how we fit being thirsty, hungry, horny,
sleepy, angry, and afraid into our busy schedules. But most of our "decoupled
motivations" come from a somatic source separate from our brains, such
as our endocrine system. A disembodied brain in a vat of nutrients or a data
matrix is unlikely to be thirsty, hungry, or horny. What do you plan to
substitute for these missing drives? The uncontrollable desire to do
arithmetic?
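For what it's worth, here is a toy numerical sketch of how I picture your
bias-field idea: a little relaxation network where the constraint weights stay
fixed and the "drives" enter only as an external bias that tilts where the
settling ends up. The network, weights, and drive vectors below are all made up
on my end; this is my reading of points 1-3, not your architecture.

import numpy as np

# Toy weak-constraint network: units settle into a low-"tension" pattern, and
# motivation enters only as an external bias field added on top of the fixed
# constraint weights.  Everything here (weights, drives, update rule) is invented.

rng = np.random.default_rng(0)

N = 8                                  # units ("micro-hypotheses" being weighed)
W = rng.normal(0.0, 1.0, (N, N))
W = (W + W.T) / 2                      # symmetric weak constraints between units
np.fill_diagonal(W, 0.0)

def relax(bias, steps=200, temp=1.0):
    """Settle by repeated soft constraint satisfaction.  `bias` is the
    decoupled 'motivation' field: not part of W, it only adds a slope."""
    x = np.full(N, 0.01)               # start near an undecided state
    for _ in range(steps):
        x = np.tanh((W @ x + bias) / temp)
    return np.sign(x)

# Same cognitive machinery (same W), different motivation fields:
curiosity  = np.array([1.0, 1.0, 0, 0, 0, 0, 0, 0])
aggression = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0])

print("settled under 'curiosity' bias:  ", relax(curiosity))
print("settled under 'aggression' bias: ", relax(aggression))
print("with the 'aggression' field omitted entirely:", relax(np.zeros(N)))

The same W does the "thinking" in all three runs; only the externally supplied
slope differs, and leaving a drive out of the design means that slope is simply
never applied.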
> 5) Whether or not the "centers" for the various types of motivation in the
> human brain can be disentangled (that might not be possible, because they could
> be physically close together or overlapping), if the general principle is
> correct, then it allows us to build a corresponding artificial system in which
> the distinct sources can be added or omitted by design. Thus, even if the
> aggression and empathy modules are overlapping in the human brain's physiology,
> they can still be disentangled functionally, and one of them can be left out of
> the AGI design.
But there is so much more to the problem of friendliness than simple empathy or
aggression. If I am hungry, I will eat a critter without any animosity toward
it, all the while feeling a tinge of sorrow at the pain I might have caused it.
What about the AGI's fear module? Scared people are often more dangerous than
angry people. Another problem I foresee stems from the Competitive Exclusion
Principle. The mass extinction we humans are causing is not due to our
aggression against other species but due to our outcompeting them for habitat.
If the AI needs the same resources we do and, by being more efficient at
harvesting them, denies us those resources, we would die
without any aggression on the part of the AI. Indeed, we would likely be the
aggressors in such a scenario.
> Under those circumstances we would be in a position to investigate the
> dynamics of those modules and in the end we could come to be sure that with
> that type of design, and with the aggressive modules missing from the design, a
> thinking creature would experience all of the empathic motivations that we so
> prize in human beings, and not even be aware of the violent or aggressive
> motivations.
Competition often occurs without any aggression. There is no aggression when
oaks deprive maples of sunlight simply by being taller. Friendliness is moot. I
would much rather an AI need me out of its own rational self-interest, the way
bees need flowers, than trust in any friendliness module certified by a
short-sighted primate to guarantee my safety.
Stuart LaForge
"There is nothing wrong with America that faith, love of freedom, intelligence,
and energy of her citizens cannot cure."- Dwight D. Eisenhower