[extropy-chat] AI design

Acy James Stapp astapp at fizzfactorgames.com
Fri Jun 4 20:04:55 UTC 2004


Dan Clemmensen wrote:
> Acy James Stapp wrote:
> 
>> Zero Powers wrote:
>> 
>> 
>>> following its prime directive?  And if it should happen to determine
>>> (as you seem to think it must) that being nice to humans is an
>>> unnecessary waste of its resources, will it not be able to find a
>>> work around to the prime directive?
>>> 
>>> Zero
>>> 
>>> 
>> 
>> The general idea is to delay this occurrence as much as possible until
>> the mass of humanity is capable of defending itself against it.
>> 
>> 
>> 
> This is a bad idea. There are potential costs and potential benefits
> of a superintelligence, and potential costs and potential benefits of
> deferring a superintelligence.  If you decide to work to defer the
> SI, you are making assumptions about both sets of costs and benefits.

Sorry for the imprecision.

My suggestion is not to defer the SI, though we *should* do everything
in our power to ensure that it initially aids humanity's growth.
If it aids us sufficiently for long enough, we and it will develop
effective contingency plans for changing its initial imperative.
Note that "long enough" may be a matter of hours or less if, while
it is still friendly, it foresees that it may become a threat later.

Acy


