[extropy-chat] SI morality

Dan Clemmensen dgc at cox.net
Fri Apr 16 13:51:19 UTC 2004


Paul Bridger wrote:

>
> I'm interested: have people on this list speculated much about the 
> morality of a purely rational intelligence? If you value rationality, 
> as extropians do, then surely the morality of this putative rational 
> artificial intelligence would be of great interest - it should be the 
> code we all live by. Rationality means slicing away all arbitrary 
> customs, and reducing decisions to a cost-benefit analysis of 
> foreseeable consequences. This is at once no morality, and a perfect 
> morality. Hmm...Zen-like truth, or vacuous pseudo-profundity - you 
> decide. :)
>
Are you aware of Eliezer Yudkowsky's efforts in this area?
Please see:
    http://singinst.org/

I think Eliezer started his effort on this list in about 1996, when he 
was high-school age. It is now his life's work. At the time, we spent a 
fairly large amount of effort on this topic. I'm a mere dilettante, but 
other list members have published on this topic, either on the web or (in 
at least one case) in book form.

I'm assuming you are interested in the general topic of SI morality, and 
not just the abstract topic of purely rational morality.
