[ExI] Unfriendly AI is a mistaken idea

Samantha Atkins sjatkins at mac.com
Sun May 27 07:59:00 UTC 2007


On May 26, 2007, at 2:30 PM, Brent Allsop wrote:
> My original statement had this:
>
> Name: Such concern is mistaken
> One Line: Concern over unfriendly AI is a big mistake.
>
> If you agree, Russell, then I propose we use these for the super  
> camp that will contain both of our camps and have the first version  
> of the text be something simple like:
>
> Text: We believe the notion of Friendly or Unfriendly AI to be  
> silly for different reasons described in subordinate camps.
>

Subgroup:  No effective control.

Stripped of anthropomorphizing the AI, the issue of whether it is friendly or not
is really the issue of whether the AI is strongly harmful to our [true]
interests or strongly beneficial.  That is a very real and reasonable
concern.  However, it is exceedingly unlikely that we can exert much
leverage at all over the future decisions of a very advanced non-human
intelligence, or over the effects of those decisions on us.  Hence the
issue, while obviously of interest to us, cannot really be resolved.
Concern itself, however, is not a "big mistake".  The mistake is
believing we can actually guarantee a "friendly" outcome to any
appreciable degree.

- samantha


