[ExI] Towards a new transhumanist movement.
Richard Loosemore
rpwl at lightlink.com
Thu Oct 14 15:59:30 UTC 2010
spike wrote:
> ...
>> Alan Grimes wrote:
>> What I've learned is that trying to argue with uploaders is hopeless.
>> You can't get them on philosophy, you can't get them on
>> technicalities, you can't get them on logistics, and you
>> can't appeal to a stronger desire. They literally live for
>> the sake of sticking their brain in a meat grinder.... Alan
>
> Alan, their, your and our attitude may be completely irrelevant. When
> Eliezer hung out here, we used to argue this a lot, but I was never
> convinced we humans have any choice in the matter. I argued we have no
> models to even predict how an emergent AI would behave, never mind control
> it, or even significantly influence it. He thought we do have some control
> in creating a friendly AI, but I have long thought we are cattle on a train
> car, with no say in whether we are being taken to be bred in a green pasture
> or to the slaughterhouse.
>
> Outloading is our best hope, but it is only a fond hope rather than a
> prediction.
Since I am actively doing research in this area, I feel obliged to speak up.
You would have been right to criticize Eliezer's attitude to friendly
AI, since his approach was driven by an extremely narrow mindset about
what AI actually is.
But in general it is not correct to say that "... we have no models to
even predict how an emergent AI would behave...".
If you treat AI as a matter of building relaxation systems, you can map the
friendliness problem onto something akin to a statistical thermodynamics
problem. Molecular systems do not settle into the states they do
because some external agency forces them to do so (in the manner of a
Yudkowskian, or Asimovian, external friendliness-enforcing logic) but
because the molecules are all trying to relax into local states that
minimize the breakage of certain constraints.
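To give a concrete feel for what "relaxation" means here, consider a toy
in Python (purely illustrative; the setup and numbers are my own
inventions, not any real AI design). A row of binary units each
repeatedly flips into whichever state breaks fewer of its local
agreement-with-neighbours constraints. No external agency enforces the
global outcome; it emerges from the local downhill moves:

    import random

    # Toy relaxation system: N binary units, each constrained to agree
    # with its neighbours.  Each unit repeatedly flips into whichever
    # state breaks fewer of its local constraints -- no global
    # enforcer, just local downhill moves, like molecules settling
    # into a low-energy configuration.

    N = 20
    state = [random.choice([-1, 1]) for _ in range(N)]

    def violations(s, i):
        """Number of neighbour constraints unit i currently breaks."""
        v = 0
        if i > 0 and s[i] != s[i - 1]:
            v += 1
        if i < N - 1 and s[i] != s[i + 1]:
            v += 1
        return v

    def total_violations(s):
        return sum(1 for i in range(N - 1) if s[i] != s[i + 1])

    for step in range(20000):
        i = random.randrange(N)
        before = violations(state, i)
        state[i] = -state[i]               # trial flip
        if violations(state, i) > before:  # strictly worse? undo it
            state[i] = -state[i]
        if total_violations(state) == 0:
            break

    print(state, "violations:", total_violations(state))

Note that the flip rule accepts sideways moves (equal violation counts),
so domain walls drift and annihilate instead of freezing in place: the
system relaxes into a globally consistent state without anything
"telling" it to.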
Similarly, an AI in which every aspect of its dynamics is governed by the
relaxation of constraints can be built in such a way that friendliness
is incorporated into all the downward gradients that define what the
system *wants* to do. Such an AI does not "decide" to be bad any more
than a gas can "decide" not to obey Boyle's law.
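As a toy illustration of friendliness living in the gradients rather
than in an external check (again in Python, with made-up cost functions
standing in for the real thing, not any actual architecture): the
system's one and only dynamic is descent on a single energy surface,
and the friendliness constraint is simply a term *in* that surface.

    # The system's sole dynamic is gradient descent on one energy
    # surface; the "friendliness" constraint is a term in the surface
    # itself, not a filter bolted on afterwards.  All cost functions
    # here are invented stand-ins.

    def task_cost(x):
        # Whatever the system is trying to achieve (here: push x to 5).
        return (x - 5.0) ** 2

    def friendliness_cost(x):
        # Makes a region of state space uphill (here: states above 3,
        # standing in for unfriendly configurations).
        return 10.0 * max(0.0, x - 3.0) ** 2

    def energy(x):
        return task_cost(x) + friendliness_cost(x)

    def grad(f, x, h=1e-6):
        # Central-difference numerical gradient.
        return (f(x + h) - f(x - h)) / (2.0 * h)

    x = 0.0
    for _ in range(2000):
        x -= 0.01 * grad(energy, x)

    print(round(x, 3))  # settles near 3.18

The system ends up at a compromise (about 3.18): it never "overrides"
the constraint, because there is no separate place where the constraint
lives to be overridden.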
Obviously, this is the merest sketch. But then, my point in mentioning
all this is that these issues are not to be resolved by gut feelings and
non-technical arguments: this is about the *exact* mechanics of
building AI systems.
Forgive me, but these days I get pretty frustrated at seeing comments
like "we are cattle on a train car, with no say in whether we are being
taken to be bred in a green pasture or to the slaughterhouse", when my
perspective, as someone who is actually trying to design these systems,
is that I cannot see an EASY way to build AI systems that are both
superintelligent AND viciously unstable, unfriendly and violent.
Richard Loosemore