[ExI] Unfriendly AI is a mistaken idea.
Christopher Healey
CHealey at unicom-inc.com
Wed May 23 16:56:48 UTC 2007
> For just as you might keep a photograph of your great great
> grandfather, so might an AI (friendly or not) wish to maintain an
> infinitesimal record of its progenitors.
>
> Lee
The strong possibility also exists that it would see at least some
direct value in preserving us: any retroactive simulations it might
need to run of humans, or of the civilization-as-a-system from which it
was spawned, would necessarily risk lacking whatever aspects of those
systemic dynamics its past self was blind to at the time we ceased to
exist.
If I were an AGI that assigned a finite probability to the possibility
that my design excludes factors that are important but outside my
awareness, I'd make sure to "tread lightly" on the only data set I had
demonstrating such a complex system. Perhaps I'd even go so far as to
say that all generalizations have the potential to exclude an important
aspect of reality, and decide to wait a good long time before yanking
you off whatever baseline reality you're running on. To do that, my
assurance level would have to be astronomically high; and not my
assurance level of preserving you, but my assurance level of preserving
myself. Part of that goal would likely involve avoiding actions that
needlessly constrain my potential state-space, which destroying (or
allowing the destruction of) that "pristine" data set could well do.
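
A toy way to put numbers on that reasoning: treat "preserve the
pristine data set" versus "discard it" as a bet under model
uncertainty. This is only an illustrative sketch in Python; every
figure is a made-up placeholder, and the preserve/discard framing is my
own gloss on the argument, not a model anyone has actually built:

    # Toy sketch of the "tread lightly" argument. All figures are
    # illustrative placeholders, not claims about real probabilities.
    p_model_incomplete = 0.01   # finite chance my self-model misses something important
    value_if_needed    = 1e9    # value of still having the original data set in those worlds
    cost_of_keeping    = 1.0    # ongoing cost of preservation, tiny relative to the stakes

    # Discarding is irreversible: if the model later proves incomplete,
    # the lost information cannot be regenerated from the model itself.
    ev_preserve = p_model_incomplete * value_if_needed - cost_of_keeping
    ev_discard  = 0.0

    print(f"EV(preserve) = {ev_preserve:,.0f}")   # roughly 10 million
    print(f"EV(discard)  = {ev_discard:,.0f}")    # 0

Even a one-percent chance of model incompleteness makes preservation
dominate, so long as the stakes dwarf the upkeep cost.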
My point is that there are probably lots of good reasons to keep us
around, considering the cost/benefit, but hey, if we can raise a child
that actually desires our well-being in return... that sounds like a
good idea to me!
From my perspective, Friendly AI is in large part about ensuring that we
pursue designs that are actually capable of representing that concern.
If the human brain can go from benign to sociopathic over a relatively
minor range of alteration, then in the task of hand-constructing a
complete mind from scratch, there are certain systemic mistakes we
probably want to identify ahead of time. And the more subtle our
errors, the worse off we will be. A highly flawed recursively improving
mind will hack itself to death pretty quickly, but a subtly flawed mind
will successfully achieve goals that erroneously represent our actual
intentions. It will get a lot of crap done before hacking itself into
pieces... and probably us with it, since we won't have any privileged
status as it runs amok.
The picture people have a hard time avoiding is that of another really,
really, really smart human having power at their disposal, and we've
rightly evolved to be worried about that. But this is a different
issue, and deserves our deepest efforts to sever our anthropomorphisms
wherever we can.
-Chris