[ExI] Unfriendly AI is a mistaken idea.
sjatkins at mac.com
Thu Jun 14 20:29:34 UTC 2007
On Jun 13, 2007, at 12:21 AM, Stathis Papaioannou wrote:
> On 13/06/07, John K Clark <jonkc at att.net> wrote:
> > Stop doing whatever it is doing when that is specifically requested.
> But that leads to a paradox! I am told the most important thing is
> never to harm human beings, but I know that if I stop doing what I'm
> doing now as requested the world economy will collapse and hundreds
> of millions of people will starve to death. So now the AI must
> either go into an infinite loop or do what other intelligences, like
> us, do when they encounter a paradox: savor the weirdness of it for
> a moment and then just ignore it and get back to work and do what
> you want to do.
> I'd rather that the AI's in general *didn't* have an opinion on
> whether it was good or bad to harm human beings, or any other
> opinion in terms of "good" and "bad".
Huh? Any being with interests at all, any being not utterly impervious
to its environment and even its own internal states, will have
conditions that are better or worse for its well-being and values.
This elementary fact is the fundamental grounding for a sense of right
and wrong.
> Ethics is dangerous: some of the worst monsters in history were
> convinced that they were doing the "right" thing.
Irrelevant. That ethics was abused to rationalize horrible actions
does not lead logically to the conclusion that ethics is to be avoided.
> It's bad enough having humans to deal with without the fear that a
> machine might also have an agenda of its own. If the AI just does
> what it's told, even if that means killing people, then as long as
> there isn't just one guy with a super AI (or one super AI that
> spontaneously develops an agenda of its own, which will always be a
> possibility), then we are no worse off than we have ever been, with
> each individual human trying to step over everyone else to get to
> the top of the heap.
You have some funny notions about humans and their goals. If humans
were busy beating each other up with AIs or superpowers, that would be
triple-plus not good. Super-powered, unimproved, slightly evolved
chimps are a good model for hell.
> I don't accept the "slave AI is bad" objection. The ability to be
> aware of one's existence and/or the ability to solve intellectual
> problems does not necessarily create a preference for or against a
> particular lifestyle. Even if it could be shown that all naturally
> evolved conscious beings have certain preferences and values in
> common, naturally evolved conscious beings are only a subset of all
> possible conscious beings.
Having values whose achievement is not automatic gives rise to a
natural morality. Such natural morality would arise even in total
isolation. So the question remains why the AI would have a strong
preference for our continuance.