[extropy-chat] SI morality

Samantha Atkins samantha at objectent.com
Sat Apr 17 19:37:13 UTC 2004


On Apr 16, 2004, at 6:16 AM, Paul Bridger wrote:

> Dan Clemmensen wrote:
>
>> Being of a sunny and carefree disposition, and having a "belief" that 
>> reason tends to
>> "good," I think that the SI will rapidly create a morality for itself 
>> that I will consider
>> "good." Therefore, I'm in favor of actively accelerating the advent 
>> of the SI if possible.
>
>
> Given all the negative AI scenarios played out in popular culture 
> (Matrix, Terminator etc.) I expect the most deadly obstacle to a 
> big-bang type Singularity to be fear. All scientific obstacles can be 
> conquered by the application of our rational minds, but something that 
> cannot be conquered by rationality is...irrationality. However, I also 
> expect AI to appear in our lives slowly at first and then with 
> increasing prevalence.

Well yes, but... it would be highly irrational not to be fearful of the
advent of a super-intelligence utterly beyond one's understanding, one
that very well may not be in the least beholden to our continued
existence and well-being.  Fear of the Singularity cannot be put down
to simple irrationality.


>
> Like you, I strongly believe a purely rational artificial intelligence 
> would be a benevolent one, but I wouldn't expect most people to agree 
> (simply because most people don't explore issues beyond what they see 
> at the movie theater). There's a fantastic quote on a related issue 
> from Greg Egan's Diaspora: "Conquering the universe is what bacteria 
> with spaceships would do." In other words, any culture sufficiently 
> technologically advanced to travel interstellar distances would also 
> likely be sufficiently rationally advanced to not want to annihilate 
> us. I think a similar argument applies to any purely rational 
> artificial intelligence we manage to create.
>

Your expectation that an SAI will be benevolent, or even rational in
the sense you conceive of the word, is hardly compelling enough to be
the only reasonable possibility for right-thinking folks.  If you
believe it is simply rational not only to refrain from annihilating
alien and/or inferior intelligences, but to go out of one's way not to
harm them even incidentally, then please make your air-tight rational
case.  It would be a great relief to many.

> I'm interested: have people on this list speculated much about the 
> morality of a purely rational intelligence? If you value rationality, 
> as extropians do, then surely the morality of this putative rational 
> artificial intelligence would be of great interest - it should be the 
> code we all live by. Rationality means slicing away all arbitrary 
> customs, and reducing decisions to a cost-benefit analysis of 
> foreseeable consequences. This is at once no morality, and a perfect
> morality. Hmm...Zen-like truth, or vacuous pseudo-profundity - you 
> decide. :)
>

It certainly is of great interest.  Since you claim to know what a
fully rational morality would be like and lead to, why don't you lead
off?  I think there is rather more to it than reducing everything to a
cost-benefit analysis.  On a pure cost-benefit analysis, what benefit
would an SAI derive from the preservation of humanity if humanity
seems to be inhabiting and laying claim to resources it requires for
its own goals?
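
To make that worry concrete, here is a toy sketch (mine, not anything
from the original post) of a decision rule reduced to nothing but
weighted cost-benefit; the option names and weights are invented for
illustration.  The point is only that if human welfare carries no
weight in the objective, the "rational" choice is simply whatever
frees up the most resources.

  # Toy sketch: a decision reduced to pure cost-benefit analysis.
  # All names and numbers below are made up for illustration.

  def choose(options, weights):
      """Pick the option with the highest weighted net benefit."""
      def net_benefit(option):
          return sum(weights.get(k, 0.0) * v
                     for k, v in option["effects"].items())
      return max(options, key=net_benefit)

  options = [
      {"name": "preserve humanity",
       "effects": {"resources": -1.0, "human_welfare": +1.0}},
      {"name": "repurpose the biosphere",
       "effects": {"resources": +1.0, "human_welfare": -1.0}},
  ]

  # No weight on human welfare: the calculation favors repurposing.
  print(choose(options, {"resources": 1.0})["name"])

  # Only if human welfare is explicitly valued does preservation win.
  print(choose(options, {"resources": 1.0, "human_welfare": 5.0})["name"])

Nothing in a bare cost-benefit framework supplies that second weight
for free; it has to come from somewhere.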

- s



