[extropy-chat] SI morality

Paul Bridger paul.bridger at paradise.net.nz
Fri Apr 16 14:11:05 UTC 2004


Ahh, good stuff. I'll have to get reading.

Dan Clemmensen wrote:

> Paul Bridger wrote:
>
>>
>> I'm interested: have people on this list speculated much about the 
>> morality of a purely rational intelligence? If you value rationality, 
>> as extropians do, then surely the morality of this putative rational 
>> artificial intelligence would be of great interest - it should be the 
>> code we all live by. Rationality means slicing away all arbitrary 
>> customs, and reducing decisions to a cost-benefit analysis of 
>> foreseeable consequences. This is at once no morality, and a perfect 
>> morality. Hmm...Zen-like truth, or vacuous pseudo-profundity - you 
>> decide. :)
>>
> Are you aware of Eliezer Yudkowsky's efforts in this area?
> Please see:
>    http://singinst.org/
>
> I think Eliezer started his effort on this list in about 1996, when he 
> was high-school age. It is now his life's work. At the time, we spent a 
> fair amount of effort on this topic. I'm a mere dilettante, but 
> other list members published on this topic, either on the web or (in 
> at least one case) in book form.
>
> I'm assuming you are interested in the general topic of SI morality, 
> and not just the abstract topic of purely rational morality.


