[extropy-chat] AI design

Dan Clemmensen dgc at cox.net
Fri Jun 4 18:02:06 UTC 2004


Acy James Stapp wrote:

> Zero Powers wrote:
>
>> following its prime directive?  And if it should happen to determine
>> (as you seem to think it must) that being nice to humans is an
>> unnecessary waste of its resources, will it not be able to find a
>> workaround to the prime directive?
>>
>> Zero
>
> The general idea is to delay this occurrence as much as possible until
> the mass of humanity is capable of defending itself against it.

This is a bad idea. There are potential costs and potential benefits of
a superintelligence, and potential costs and potential benefits of
deferring a superintelligence. If you decide to work to defer the SI,
you are implicitly making assumptions about both sets of costs and benefits.

This thread has focused almost exclusively on the worst-case outcomes of
creating an SI. I think extropians also have a fairly good idea of the
magnitude of the potential best-case outcomes. However, we've been
neglecting the more mundane cost/benefit analysis of deferral. The
worst-case outcomes of deferral are pretty horrific. It is quite easy to
envision plausible scenarios in which humanity destroys civilization,
humanity itself, the ecosystem, or the Earth, without any SI involvement.
There are also several classes of cosmic catastrophe that could destroy
humanity. A "good" SI could prevent these disasters. So we need to
analyze the relative risks.
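
To make that concrete, here is a toy expected-value comparison of
"build now" versus "defer." Every probability and utility in it is a
made-up placeholder, included only to show the shape of the
calculation, not to estimate anything:

# Toy expected-value comparison of creating vs. deferring an SI.
# All probabilities and utilities below are hypothetical placeholders.

p_si_catastrophe = 0.10   # placeholder: chance a created SI destroys us
p_si_best_case   = 0.50   # placeholder: chance of the best-case outcome
v_catastrophe    = -1000  # placeholder utility of an extinction-level outcome
v_best_case      = +1000  # placeholder utility of the best-case outcome
v_status_quo     = 0      # baseline

p_nat_catastrophe = 0.05  # placeholder: chance of a non-SI catastrophe while we defer

ev_build = (p_si_catastrophe * v_catastrophe
            + p_si_best_case * v_best_case
            + (1 - p_si_catastrophe - p_si_best_case) * v_status_quo)

ev_defer = (p_nat_catastrophe * v_catastrophe
            + (1 - p_nat_catastrophe) * v_status_quo)

print("EV(build now):", ev_build)
print("EV(defer):   ", ev_defer)

The point is not the numbers but the structure: deciding to defer is
itself a bet, and it only looks safe if you plug in assumptions that
favor it.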

Moving back from the worst cases, we pay a huge everyday price by 
deferring the SI. If the SI bootstraps a hard-takeoff singularity, or 
even if it "just" massively increases productivity, millions of lives 
will be saved. Deferring the SI effectively kills those people.
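
For a rough sense of scale (the global death rate of roughly 55 million
per year is approximate, and the preventable fraction and deferral
period are pure assumptions of mine, not estimates):

# Back-of-envelope cost of deferral. The death rate is approximate;
# the preventable fraction and deferral period are hypothetical.
deaths_per_year = 55000000       # approximate global deaths per year
preventable_fraction = 0.10      # hypothetical share an SI-driven boost could prevent
years_deferred = 10              # hypothetical deferral period
print(deaths_per_year * preventable_fraction * years_deferred)
# -> 55000000.0, i.e. tens of millions of lives over a decade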


