[ExI] What might be enough for a friendly AI?

spike spike66 at att.net
Thu Nov 18 01:07:14 UTC 2010


... On Behalf Of Florent Berthet
Subject: Re: [ExI] What might be enough for a friendly AI?

 

> It may just be me, but this whole friendliness thing bothers me.

 

Good.  It should bother you.  It bothers anyone who really thinks about it.

 

> I don't really mind dying if my successors (supersmart beings or whatever)
> can be hundreds of times happier than me.

> More generally, wouldn't it be a shame to prevent an AGI from creating an
> advanced civilization (e.g. computronium-based) just because this outcome
> could turn out to be less "friendly" to us than that of a human-friendly
> AGI?  In the end, isn't the goal to maximize collective happiness?

 

Florent, you are a perfect example of a dangerous person to have on the AGI
development team.  You (and I too) might go down this perfectly logical line
of reasoning, then decide to take it upon ourselves to release the AGI, in
order to maximize happiness.

 

> So why don't we just figure out how to make the AGI understand the concept
> of happiness (which shouldn't be hard since we already understand it), and
> make it maximize it?

 

Doh!  You were doing so well up to that point, then came the fumble right at
the goal line.  We don't really understand happiness.  We know what makes us
feel good, because we have endorphins.  An AGI would (probably) not have
endorphins.  We don't know if it would be happy, or what would make it happy.

 

spike
