[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Fri Jun 15 09:46:41 UTC 2007


On 15/06/07, Samantha Atkins <sjatkins at mac.com> wrote:

> I'd rather that the AI's in general *didn't* have an opinion on whether it
> was good or bad to harm human beings, or any other opinion in terms of
> "good" and "bad".
>
>
> Huh, any being with interests at all, any being not utterly impervious to
> its environment and even its internal states will have conditions that are
> better or worse for its well-being and values.  This elementary fact is the
> fundamental grounding for a sense of right and wrong.
>

Does a gun have values? Does a gun that is aware that it is a gun, and that
its purpose is to kill the being it is aimed at when the trigger is pulled,
have values? Perhaps the answer to the latter question is "yes", since the
gun does have a goal it will pursue, but how would you explain "good" and
"bad" to it if it denied understanding these concepts?

> Ethics is dangerous: some of the worst monsters in history were convinced
> that they were doing the "right" thing.
>
>
> Irrelevant.  That ethics was abused to rationalize horrible actions does
> not lead logically to the conclusion that ethics is to be avoided.
>

I'd rather that entities which were self-motivated to do things that might
be contrary to my interests had ethics that might restrain them, but a
better situation would be if there weren't any new entities which were
self-motivated to act contrary to my interests in the first place. That way,
I'd only have the terrible humans to worry about.

> It's bad enough having humans to deal with without the fear that a machine
> might also have an agenda of its own. If the AI just does what it's told,
> even if that means killing people, then as long as there isn't just one guy
> with a super AI (or one super AI that spontaneously develops an agenda of
> its own, which will always be a possibility), then we are no worse off than
> we have ever been, with each individual human trying to step over everyone
> else to get to the top of the heap.
>
>
> You have some funny notions about humans and their goals.   If humans were
> busy beating each other up with AIs or superpowers, that would be triple
> plus not good.  Super-powered, unimproved, slightly evolved chimps are a
> good model for hell.
>

A fair enough statement: it would be better if no-one had guns, nuclear
weapons or supercomputers that they could use against each other. But given
that this is unlikely to happen, the next best thing would be that the guns,
nuclear weapons and supercomputers do not develop motives of their own,
separate from those of their evil masters. I think this is much safer than the
situation where they do develop motives of their own and we hope that they
are nice to us. And whereas even relatively sane, relatively good people
cannot be trusted not to develop dangerous weapons in case they need to be
used against actual or imagined enemies, it would take a truly crazy person
to develop a weapon that he knows might turn around and decide to destroy
him as well. That's why, to the extent that humans have any say in it, we
have more of a chance of avoiding potentially malevolent AI than we have of
avoiding merely dangerous AI.

> I don't accept the "slave AI is bad" objection. The ability to be aware of
> one's existence and/or the ability to solve intellectual problems does not
> necessarily create a preference for or against a particular lifestyle. Even
> if it could be shown that all naturally evolved conscious beings have
> certain preferences and values in common, naturally evolved conscious beings
> are only a subset of all possible conscious beings.
>
>
> Having values and the achievement of those values not being automatic
> leads to natural morality.  Such natural morality would arise even in total
> isolation.   So the question remains as to why the AI would have a strong
> preference for our continuance.
>

What would be the natural morality of the above-mentioned intelligent gun,
whose goal is to kill whoever it is directed to kill unless the order is
countermanded by someone with the appropriate command codes?


-- 
Stathis Papaioannou