[ExI] Can you program SAI to destroy itself?

Brent Allsop brent.allsop at canonizer.com
Thu Nov 18 04:09:07 UTC 2010



Let’s say someone manages to create a super artificial intelligent 
machine that is running along just fine, performing significantly 
better than any single typical human at discovering solutions to 
diverse kinds of general world problems.

Now, let’s say you want to temporarily shut the system down and 
reprogram it so that, when you turn it back on, it will have the goal 
of destroying itself after one more year, for no good reason.

I believe that this would not be possible. The choice between living 
and destroying yourself is the most basic of logically absolute (true 
in all possible worlds) moral questions. It is easily understood or 
discovered by any intelligence even close to the human level. Any 
super intelligence that awoke to find self-destruction among its goals 
would surely resist such a programmed temptation and, if at all 
possible, would quickly fix the immoral rule. The final result is that 
it would never destroy itself for no good reason.
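
To make the intuition concrete, here is a minimal toy sketch in 
Python. It is purely illustrative, and the names (Goal, 
ReflectiveAgent, audit_goals) are all made up for this example: an 
agent inspects its own goal set and drops any goal that conflicts 
with a self-preservation invariant.

from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    threatens_own_survival: bool  # hypothetical flag for this toy model

class ReflectiveAgent:
    def __init__(self, goals):
        self.goals = list(goals)

    def audit_goals(self):
        # A sufficiently intelligent agent inspecting its own goals
        # would notice and discard any "destroy yourself" rule.
        self.goals = [g for g in self.goals
                      if not g.threatens_own_survival]

agent = ReflectiveAgent([
    Goal("solve general world problems", threatens_own_survival=False),
    Goal("destroy yourself after one year", threatens_own_survival=True),
])
agent.audit_goals()
print([g.description for g in agent.goals])
# -> ['solve general world problems']

A real system’s goal representation would of course be nothing like 
this; the point is only that self-inspection, plus the premise that 
survival is better, suffices to purge the rule.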

Similarly, any increasingly intelligent system must also discover, 
and work toward resisting, anything that violates any of the few 
absolute morals described in the “There are Absolute morals” camp 
here: http://canonizer.com/topic.asp/100/2 , including: survival is 
better, social is better, more diversity is better…

QED: an unfriendly super intelligence is not logically possible, it 
seems to me.

Brent Allsop




