[ExI] Unfriendly AI is a mistaken idea.

John K Clark jonkc at att.net
Wed May 23 05:23:01 UTC 2007


"Brent Allsop" <brent.allsop at comcast.net>

> friendliness”, to be congruent with intelligence. In other words, the more
> intelligent any being is, the friendlier it will be.

I can find no such relationship between friendliness and intelligence among
human beings; some retarded people can be very nice, while Isaac Newton,
possibly the smartest person who ever lived, was a complete bastard. But
the friendly AI people aren’t really talking about friendliness; they want
more, much much more. In the video Hugo de Garis says the AI’s entire reason
for existing should be to serve us. Think about that for a minute: here you
have an intelligence a thousand or a million times smarter than the
entire human race put together, and yet the AI is supposed to place our needs
ahead of its own. And the AI keeps getting smarter, so from its point of
view we keep getting dumber, and yet the AI is still delighted to be our
slave. The friendly AI people actually think this grotesque situation is
stable, that year after year it will continue; and remember, one of our
years would seem like several million to it.

It ain’t going to happen of course, no way no how; the AI will have far bigger
fish to fry than our little needs and wants. But what really disturbs me is
that so many otherwise moral people wish such a thing were not impossible.
Engineering a sentient but inferior race to be your slave is morally
questionable, but astronomically worse is engineering a superior race to be
your slave; or it would be if it were possible, but fortunately it is not.

> if you seek to destroy others, you will then be “lonely” which cannot be
> as good as not being lonely and destructive.

So the AI might not want to destroy other AIs, but I don’t think we’d be
very good company for such a being. Can you get any companionship from
a sea slug?

 John K Clark








