[ExI] Unfriendly AI is a mistaken idea

Samantha Atkins sjatkins at mac.com
Sun May 27 09:05:13 UTC 2007


On May 27, 2007, at 12:59 AM, Samantha Atkins wrote:

>
> On May 26, 2007, at 2:30 PM, Brent Allsop wrote:
>> My original statement had this:
>>
>> Name: Such concern is mistaken
>> One Line: Concern over unfriendly AI is a big mistake.
>>
>> If you agree, Russell, then I propose we use these for the super  
>> camp that will contain both of our camps, and have the first  
>> version of the text be something simple like:
>>
>> Text: We believe the notion of Friendly or Unfriendly AI to be  
>> silly for different reasons described in subordinate camps.
>>
>
> Subgroup:  No effective control.
>
> The question of whether an AI is friendly or not is, once  
> stripped of anthropomorphism, a question of whether the AI is  
> strongly harmful to our [true] interests or strongly beneficial.  
> That is a very real and reasonable concern.  However, it is  
> exceedingly unlikely that we can exert much leverage at all over  
> a very advanced non-human intelligence's future decisions, or  
> over the effects of those decisions on us.  Hence the issue,  
> while obviously of interest to us, cannot really be resolved.  
> Concern itself, however, is not a "big mistake".  The mistake is  
> believing we can appreciably guarantee a "friendly" outcome.
>

There is also some danger that over-concern with ensuring  
"Friendly" AI, when we in fact cannot do so, slows down or  
prevents our achieving strong AI at all.  If a great influx of  
substantially greater-than-human intelligence is required for our  
survival, then postponing AI could itself be a real existential  
risk, or a failure to avert other existential risks.

- s
