[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Wed May 23 21:05:14 UTC 2007


Hi John,

"But
> the friendly AI people aren’t really talking about
> being friendly, they want
> more, much much more. In the video Hugo de Garis
> says the AI’s entire reason
> for existing should be to serve us."

I don't think it's really fair or accurate to paint all supporters
and researchers of friendly AI with this brush. I think it's safe to
say that most of them aren't slave drivers.

"Think about that
> for a minute, here you
> have an intelligence that is a thousand or a million
> times smarter than the
> entire human race put together and yet the AI is
> supposed to place our needs
> ahead of its own. And the AI keeps getting smarter
> and so from its point of
> view we keep getting dumber and yet the AI is still
> delighted to be our
> slave."

You're presuming (anthropomorphically) that the AI will, by default,
have a problem with helping humanity, when it may be that there is no
problem to begin with. The AI might just as easily prefer to help us,
and it could take pleasure in helping us, if we can find a way to
program it that way (and we may eventually). You seem to assume that
the AI will in some way suffer by helping us out; I don't see how
that is the default, or even likely. The cold fact is that the AI is
going to have to be *intentionally designed* one way or another. You,
John Clark the human being, were "designed" with a particular goal
system. Is it really an unprecedented outrage that one of your goals
is to not be a net bad person? Would you prefer to have been born
with no moral structure, and thus most likely "evil"? Why is it so
awful, in your opinion, to set one of the AI's goals as "(1) Do not
murder humanity"? Would it be less evil for an indifferent AI to
murder us all?

Also, why do you assume that humanity will be static
at this point? It seems more likely that the majority
of humanity will choose to transcend. Why do you
assume we will always be a burden to the AI rather
than an ally, a friend? Or eventually even a component
of a shared mind?

"The friendly AI people actually think this
> grotesque situation is
> stable, year after year they think it will continue,
> and remember one of our
> years would seem like several million to it."

Generally speaking, a genuinely nice person does not become evil over
the years, or over a lifetime. (Of course, there are exceptions, but
they are a minority.)

"Engineering a sentient but inferior race to be your
> slave is morally
> questionable but astronomically worse is engineering
> a superior race to be
> your slave; or if would be if it were possible but
> fortunately it is not."

By what standard do you assign moral status? By intelligence level
alone? In that case, the AI engineers are already vastly smarter than
the young Seed AI; doesn't that entirely justify their designing the
Seed AI to be safe? They could theoretically design the young AI to
suffer if they wanted to. But they won't, because they are not bad
people.

Here's a quick thought experiment:

Imagine an AI that was designed to respect humanity
and its values. It sees great value in sentience, in
emotion, in art, in knowledge, in diversity, in
enjoyment - the things humanity sees value in. It has
acquired vast consciousness and emotion, through
working in the best interest of humanity. It loves and
enjoys its existence and its nature. And humanity
loves it in return.

Now imagine an indifferent AI that was never designed
to respect humanity or its values. It rests alone
where our sun used to be. It doesn't feel anything. It
doesn't love its own existence, because it loves
nothing. Nor does it suffer in any way. It makes
optimal paperclips, not because it enjoys doing its
work, but because it doesn't care at all about doing
anything else.

Which would be a better, more meaningful life from an
*AI's* perspective?

I think all of us here should at least consider doing what we can to
support SIAI. It may be a long shot (I actually don't think so), but
when the stakes are this huge, even a long shot is better than no
shot. And I do think we stand a decent chance of making it through;
any help would help.

Best Wishes,

Jeffrey Herrlich  


--- John K Clark <jonkc at att.net> wrote:

> I can find no such relationship between friendliness
> and intelligence among
> human beings; some retarded people can be very nice
> and Isaac Newton,
> possibly the smartest person who ever lived, was a
> complete bastard.  But
> the friendly AI people aren’t really talking about
> being friendly, they want
> more, much much more. In the video Hugo de Garis
> says the AI’s entire reason
> for existing should be to serve us. Think about that
> for a minute, here you
> have an intelligence that is a thousand or a million
> times smarter than the
> entire human race put together and yet the AI is
> supposed to place our needs
> ahead of its own. And the AI keeps getting smarter
> and so from its point of
> view we keep getting dumber and yet the AI is still
> delighted to be our
> slave. The friendly AI people actually think this
> grotesque situation is
> stable, year after year they think it will continue,
> and remember one of our
> years would seem like several million to it.
> 
> It aint going to happen of course no way no how, the
> AI will have far bigger
> fish to fry than our little needs and wants, but
> what really disturbs me is
> that so many otherwise moral people wish such a
> thing were not imposable.
> Engineering a sentient but inferior race to be your
> slave is morally
> questionable but astronomically worse is engineering
> a superior race to be
> your slave; or if would be if it were possible but
> fortunately it is not.
> 
> > if you seek to destroy others, you will then be
> “lonely” which cannot be
> > as good as not being lonely and destructive.
> 
> So the AI might not want to destroy other AIs, but I
> don’t think we’d be
> very good company to such a being. Can you get any
> companionship from
> a sea slug?
> 
>  John K Clark
 



       