[ExI] Unfriendly AI is a mistaken idea

Brent Allsop brent.allsop at comcast.net
Sun May 27 19:58:22 UTC 2007


Russell,

Thanks for adding your camp and support, it looks great!  Your 1 hour 
share of Canonizer LLC has been recorded.  Notice that this anti 
"Friendly AI" topic has now moved up from the 10th most supported topic 
to #4.

Here is a link to the topic as it is currently proposed:

http://test.canonizer.com/change_as_of.asp?destination=http://test.canonizer.com/topic.asp?topic_num=16&as_of=review/



Samantha,

Thanks for proposing yet another good position statement; it looks to 
me like another good point.  Since you disagree with a notion in the 
super statement - that "Concern Is Mistaken" - we should probably 
restructure things to better accommodate your camp.  You don't want to 
support a sub camp of a camp containing a notion you disagree with, and 
thereby imply support of the (in your POV) mistaken notion.

Could we say that one of the points of contention has to do with the 
motivation of future AI?  Russell, you believe that for the foreseeable 
future tools will just do what we program them to do, and they will not 
be motivated, right?  Whereas Samantha and I apparently believe tools 
will be increasingly motivated, and this is where our beliefs differ 
from yours?

Samantha's and my differences have to do with the morality (or 
friendliness) of such motivations, right?  I believe that, like 
everything else, the morality of this motivation will keep improving, so 
it need be of no real concern.  Samantha differs, believing the morality 
may not improve along with everything else, so it indeed could be a real 
concern - right, Samantha?  Yet, since Samantha believes that if such 
were the case there would be nothing we could do about it, it isn't 
worth any effort, so she is in agreement with Russell and me about the 
lack of worth in any effort towards creating a friendly AI at this time?

So does anyone think a restructuring like the following would not be a 
big improvement?  Or can anyone propose a better structure?

   1. No benefit for effort on Friendly AI
         1. AI will be motivated
               1. Everything, including moral motivation, will improve.
                  (Brent)
               2. We can't do anything about it, so don't waste effort
                  on it. (Samantha)
         2. AI will not be motivated (Russell)


Any other POV out there we're still missing before we make a big 
structure change like this?

Thanks for your effort, folks!  This is really helping me realize and 
understand the important issues in a much more productive way.

Brent Allsop





Samantha Atkins wrote:
>
> On May 27, 2007, at 12:59 AM, Samantha Atkins wrote:
>
>>
>> On May 26, 2007, at 2:30 PM, Brent Allsop wrote:
>>> My original statement had this:
>>>
>>> Name: *Such concern is mistaken*
>>> One Line: *Concern over unfriendly AI is a big mistake.*
>>>
>>> If you agree, Russell, then I propose we use these for the super 
>>> camp that will contain both of our camps and have the first version 
>>> of the text be something simple like:
>>>
>>> Text: *We believe the notion of Friendly or Unfriendly AI to be 
>>> silly for different reasons described in subordinate camps.*
>>>
>>
>> Subgroup:  No effective control.
>>
>> The issue of whether an AI is friendly or not is, when removed from 
>> anthropomorphism of the AI, an issue of whether the AI is strongly 
>> harmful to our [true] interests or strongly beneficial.  It is a very 
>> real and reasonable concern.   However, it is exceedingly unlikely, 
>> for a very advanced non-human intelligence, that we can exert much 
>> leverage at all over its future decisions or the effects of those 
>> decisions on us.  Hence the issue, while obviously of interest to us, 
>> cannot really be resolved.   Concern itself, however, is not a "big 
>> mistake".   The mistake is believing we can actually appreciably 
>> guarantee a "friendly" outcome.
>>
>
> There is also some danger that over-concern with ensuring "Friendly" 
> AI, when we in fact cannot do so, slows down or prevents us from 
> achieving strong AI at all.   If the great influx of substantially 
> >human intelligence is required for our survival, then postponing AI 
> could itself be a real existential risk or a failure to avert other 
> existential risks.
>
> - s
>
