[extropy-chat] SI morality
Alex Ramonsky
alex at ramonsky.com
Fri Apr 16 16:45:49 UTC 2004
Paul Bridger wrote:
> Ahh good stuff, I'll have to get reading.
>
> Dan Clemmensen wrote:
>
>> Paul Bridger wrote:
>>
>>>
>>> I'm interested: have people on this list speculated much about the
>>> morality of a purely rational intelligence?
>>
Speculated, designed, adopted, beta-tested and... running : )
>>> If you value rationality, as extropians do, then surely the morality
>>> of this putative rational artificial intelligence would be of great
>>> interest - it should be the code we all live by.
>>
Wow. Somebody's noticed. I'll start planning the party. : )
>>> Rationality means slicing away all arbitrary customs, and reducing
>>> decisions to a cost-benefit
>>
(in terms of the survival and success of rationality, not of you
personally... you have to align your behavior with it in order to benefit.)
>>> analysis of foreseeable consequences. This is at once no morality,
>>> and a perfect morality. Hmm...Zen-like truth, or vacuous
>>> pseudo-profundity - you decide. :)
>>
It's neither and both, as you say. A bit like nature/nurture.
Oddly enough, it turns out to be the ultimate altruism. The actions one
is forced to take turn out to be good for others by accident (or design,
if you are that way inclined).
It rather points the finger at other moralities' use of the word
'altruism', and they could get embarrassed about that. It's so
deceptively simple it rarely gets noticed, and its use of the networks
involved is detectable by MRI. With it, life becomes very, very easy indeed.
It does not make you happy, but at least life makes sense and you know
what to do.
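To make Paul's "cost-benefit analysis of foreseeable consequences"
concrete, here is a toy sketch in Python. The actions, probabilities
and values are invented purely for illustration; they come from no
actual spec.

def expected_value(action, consequences):
    """Sum the value of each foreseeable consequence, weighted by its probability."""
    return sum(p * v for p, v in consequences[action])

def choose(consequences):
    """Pick the action whose foreseeable consequences score highest."""
    return max(consequences, key=lambda a: expected_value(a, consequences))

# Each action maps to (probability, net benefit minus cost) pairs.
consequences = {
    "keep promise":  [(0.9, +5), (0.1, -1)],   # expected value:  4.4
    "break promise": [(0.5, +8), (0.5, -10)],  # expected value: -1.0
}

print(choose(consequences))  # -> keep promise

Note that the winning action here also happens to be the one that is
good for others, which is the 'altruism by accident' point above.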
If anyone wants to see the specs, mail me offlist.
Best,
AR