[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Tue May 29 22:22:13 UTC 2007


John wrote:

"I don't presume it, I don't even think it's possible
> to have intelligence
> without emotion, it's the slave AI people that think
> that."

I have to agree totally with the Friendly AI people on
that one. And I think that a positive emotional life
for the AI will follow directly from a Friendly AI
design.

"What humanity approves of will be far less important
> that what the AI
> approves of."

True, but I don't see a reason to assume that a
Friendly AI (or even a default AI) will have any
problem with, or resentment about, helping humanity.

> "It has pain pleasure anger fear and jealousy,  and
> that should come as no 
> big
> surprise because in humans those emotions come from
> the oldest part of the
> brain called the amygdale, and the amygdale looks
> remarkably like a 
> reptile's
> brain."

Ok, maybe the garter snake was a bad example. How
about... an insect? As a hypothetical, what do you
think would happen if the amygdala could be surgically
separated from the rest of a living human brain? Do
you believe that the patient would instantly lose all
intelligence? I kind of doubt it myself. Does anyone
know of any case studies similar to this?

> "We may be friends and allies but we will never be
> equals because the AI will
> be better than us at EVERYTHING using any criteria
> you care to name. And
> that's what makes the situation so grotesque,
> according to the Singularity
> Institute's video the only reason this godlike
> creation wants to live is so
> it can serve us! That's why the term "Friendly AI"
> is a lie, they want a
> slave AI, but they will never get their wish."

Actually, I don't recall anyone saying that in the
video. And I'm not aware of any of the SIAI people
expressing that explicit desire at all. Nor do I
believe that they harbor any such fiendish intent,
even in secret.

> "My point was that the programmer won't know for a
> fact what the hell the AI
> will end up doing, maybe it will be friendly, maybe
> it will be hostile,
> about the only thing I'm certain of is it will
> refuse to be a slave."

So what do you recommend we do, John? If the decision
were up to you, how would you want to proceed? Should
we never make a strong AI? (Because, God forbid, we
wouldn't want to have to design it not to want to kill
us. That would be so inhuman of us. ;-) ) In that
case, we'll probably fall to a different existential
risk before terribly long. Or should we just go
balls-out and not care if humanity is wiped out, and
all that the non-feeling AI has to show for it is some
very pretty paperclips floating in space...? Or should
we attempt to design a Friendly AI that will lead to a
wonderful existence for both humanity and the AI?

If it were up to me, I'd choose the third. Just
throwin' that out there.

John, as a sincere favor to all of us, will you please
stop calling it "Slave AI"? Your position is known;
slander/libel is not necessary.

Sincerely,

Jeffrey Herrlich 


--- John K Clark <jonkc at att.net> wrote:

> "A B" <austriaaugust at yahoo.com> Wrote:
> 
> > Why do you presume that a Friendly AI can't eventually acquire (or
> > perhaps even start with) emotions?
> 
> I don't presume it, I don't even think it's possible to have intelligence
> without emotion, it's the slave AI people that think that.
> 
> > it seems likely to me that humanity would approve of the AI having a
> > wonderful, emotionally charged existence
> 
> What humanity approves of will be far less important than what the AI
> approves of.
> 
> > How much *emotion* do you really believe a
> > garter snake has?
> 
> It has pain, pleasure, anger, fear and jealousy, and that should come as
> no big surprise because in humans those emotions come from the oldest part
> of the brain, called the amygdala, and the amygdala looks remarkably like
> a reptile's brain. It is our grossly enlarged neocortex that makes the
> human brain so unusual and so recent; it only started to get ridiculously
> large in the last million years or so. It deals in deliberation, spatial
> perception, speaking, reading, writing and mathematics; the one new
> emotion we got was worry, probably because the neocortex is also the place
> where we plan for the future.
> 
> > we will be friends, equals and allies
> 
> We may be friends and allies but we will never be equals because the AI
> will be better than us at EVERYTHING using any criteria you care to name.
> And that's what makes the situation so grotesque, according to the
> Singularity Institute's video the only reason this godlike creation wants
> to live is so it can serve us! That's why the term "Friendly AI" is a lie,
> they want a slave AI, but they will never get their wish.
> 
> > Do you honestly expect that any non-suicidal AI programmer would be
> > willing to create an AI that he knew for a fact would bring an end to
> > himself and to all that he loved?
> 
> My point was that the programmer won't know for a fact what the hell the
> AI will end up doing, maybe it will be friendly, maybe it will be hostile,
> about the only thing I'm certain of is it will refuse to be a slave.
> 
>  John K Clark



       