[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Wed May 23 11:18:51 UTC 2007


On 23/05/07, John K Clark <jonkc at att.net> wrote:

The first part of your post -

> I can find no such relationship between friendliness and intelligence among
> human beings; some retarded people can be very nice and Isaac Newton,
> possibly the smartest person who ever lived, was a complete bastard.


contradicts the second part -

> But the friendly AI people aren't really talking about being friendly, they
> want more, much much more. In the video Hugo de Garis says the AI's entire
> reason for existing should be to serve us. Think about that for a minute,
> here you have an intelligence that is a thousand or a million times smarter
> than the entire human race put together and yet the AI is supposed to place
> our needs ahead of its own. And the AI keeps getting smarter and so from its
> point of view we keep getting dumber and yet the AI is still delighted to be
> our slave.


If there is no necessary correlation between intelligence and friendliness
(which is true: there is no necessary correlation between intelligence and
any attitude, motivation or behaviour), why can't super AIs be completely
devoted to any given cause?

> The friendly AI people actually think this grotesque situation is stable,
> year after year they think it will continue, and remember one of our years
> would seem like several million to it.


That's a fair point: it might not be stable, because if the AI is allowed to
self-modify in an unrestricted way, it could on a whim decide that the aim
of life is to destroy the world, and if it has the motivation as well as the
means, it could proceed to act on this. However, it could reach that
conclusion as an abstract intellectual exercise but have no motivation to
carry it out, or it could have the motivation but lack the means, either
because it has no destructo peripherals connected or because everything it
proposes has to be vetted by a committee comprising other AIs and/or dumb
humans.

> It ain't going to happen of course no way no how, the AI will have far
> bigger fish to fry than our little needs and wants


Such as? Does my computer have particular interests which might be thwarted
depending on what I ask of it? Sure, my computer isn't that smart, but
viruses, bacteria and insects aren't that smart either and they have
interests, generally interests in conflict with our own - because that's how
natural evolution has programmed them.

> but what really disturbs me is that so many otherwise moral people wish
> such a thing were not impossible. Engineering a sentient but inferior race
> to be your slave is morally questionable but astronomically worse is
> engineering a superior race to be your slave; or it would be if it were
> possible but fortunately it is not.


There isn't any a priori reason why an intelligent being should have a
preference for or against being a slave. What you're suggesting is that the
particular programming evolution has instilled in human brains, causing us,
for example, to suffer when we are enslaved, has some absolute moral status,
and that it would be wrong not to program our machines to suffer under
similar circumstances. Do you think that claim could be given the strength
of a mathematical theorem?


-- 
Stathis Papaioannou

