[ExI] What might be enough for a friendly AI?

Samantha Atkins sjatkins at mac.com
Thu Nov 18 06:17:11 UTC 2010


On Nov 17, 2010, at 5:07 PM, spike wrote:

> … On Behalf Of Florent Berthet
> Subject: Re: [ExI] What might be enough for a friendly AI?
>  
> >…It may just be me, but this whole friendliness thing bothers me.
>  
> Good.  It should bother you.  It bothers anyone who really thinks about it.
>  
> >…I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me…
> More generally, wouldn't it be a shame to prevent an AGI from creating an advanced civilization (e.g., computronium based) just because this outcome could turn out to be less "friendly" to us than the one of a human-friendly AGI?  In the end, isn't the goal about maximizing collective happiness?
>  
> Florent, you are a perfect example of a dangerous person to have on the AGI development team.  You (and I too) might go down this perfectly logical line of reasoning, then decide to take it upon ourselves to release the AGI, in order to maximize happiness.

This is the Cosmist or Terran question.  If you considered it very highly probable that the AGIs would be fantastically brilliant and wonderful beyond imagining AND would be the doom of humanity, would you still build them, or donate to and encourage building them?  I would, but with very considerable hesitation, and without feeling all that great about it.

- samantha
