[ExI] Empathic AGI [WAS Safety of human-like motivation systems]
Kelly Anderson
kellycoinguy at gmail.com
Tue Feb 8 03:23:47 UTC 2011
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible in its behavioural
> makeup to ourselves, and this shall automatically be part of its
> repertoire - along with aggression, flight, sex, etc.
>
> If, OTOH, your AGI is implemented in view of goals other than maximizing
> its fitness, it will be neither "altruistic" nor "selfish"; it will
> simply execute the other program(s) it is being given or instructed to
> develop as any other less or more intelligent, less or more dangerous,
> universal computing device.
The real truth of the matter is that AGIs will be manufactured (or
trained) with all sorts of tweaking. There will be loving AGIs, and
Spock-like AGIs. There will undoubtedly be AGIs with personality
disorders, perhaps surpassing Hitler in their cruelty, if for no other
reason than to serve as opponents in advanced video games. Just recall
that if it can be done, it will be done. The question for us is what
sorts of rights we give AGIs. Is there any way to keep bad AGIs "in
the bottle" in some safe context? Will there even be a way of
determining that an AGI is, in fact, a sociopath? We can't even find
the Ted Bundys among us. Policing in the future is going to be very
interesting. What sorts of AGIs will we create to be the police of the
future? Certainly people won't be able to police them; we can't even
keep the law caught up with technology now. What privacy rights will
an AGI have?
It's all very messy. Should be fun!
-Kelly