[extropy-chat] SI morality
Paul Bridger
paul.bridger at paradise.net.nz
Sun Apr 18 00:07:03 UTC 2004
> Fear of singularity cannot be put down to simple irrationality.
Yes, you're quite right. I think this was what Eugen was getting at as well.
I expect the public to fear the Singularity less because they've thought it
through (irrationally or otherwise), and more because of the culture
they've been fed.
>> Like you, I strongly believe a purely rational artificial
>> intelligence would be a benevolent one, but I wouldn't expect most
>> people to agree (simply because most people don't explore issues
>> beyond what they see at the movie theater). There's a fantastic quote
>> on a related issue from Greg Egan's Diaspora: "Conquering the
>> universe is what bacteria with spaceships would do." In other words,
>> any culture sufficiently technologically advanced to travel
>> interstellar distances would also likely be sufficiently rationally
>> advanced to not want to annihilate us. I think a similar argument
>> applies to any purely rational artificial intelligence we manage to
>> create.
>>
>
>
> Your expectation that an SAI would be benevolent, or even rational in the
> sense you conceive of the word, is hardly compelling enough to be the
> only reasonable possibility to right-thinking folks. If you believe
> it is simply rational not only to refrain from annihilating alien and/or
> inferior intelligences but to go out of one's way not to incidentally harm
> them, then please make your airtight rational case. It would be a great
> relief to many.
After reading some of "Creating Friendly AI" I realise that I need to do
a lot more thinking (and reading) before I can "strongly believe"
anything about rational morality again.
>> I'm interested: have people on this list speculated much about the
>> morality of a purely rational intelligence? If you value rationality,
>> as extropians do, then surely the morality of this putative rational
>> artificial intelligence would be of great interest - it should be the
>> code we all live by. Rationality means slicing away all arbitrary
>> customs, and reducing decisions to a cost-benefit analysis of
>> foreseeable consequences. This is at once no morality and a perfect
>> morality. Hmm... Zen-like truth, or vacuous pseudo-profundity - you
>> decide. :)
>>
> It certainly is of great interest. Since you claim to know what fully
> rational morality would be like and lead to, why don't you lead off?
> I think there is rather more to it than reducing everything to a
> cost-benefit analysis. In a purely cost-benefit analysis, what
> benefit would an SAI derive from the preservation of humanity if
> humanity seems to be inhabiting and laying claim to resources it
> requires for its own goals?
Maybe later, after I get my hubris back. :)
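Though, to make "a cost-benefit analysis of foreseeable consequences" a
little less hand-wavy, here is the bare shape of the decision procedure I
have in mind. A toy sketch only: the actions, probabilities and values are
invented, and the whole dispute is really about what belongs in value().

# Toy "cost-benefit analysis of foreseeable consequences": pick the action
# whose probability-weighted outcomes score best under some value function.
# Every action, probability and value below is made up for illustration.

def expected_value(action, forecast, value):
    # Sum each foreseeable outcome's value, weighted by how likely the
    # action is to bring it about.
    return sum(p * value(outcome) for outcome, p in forecast(action))

def choose(actions, forecast, value):
    return max(actions, key=lambda a: expected_value(a, forecast, value))

def forecast(action):
    return {
        "seize_occupied_resources": [("more_resources", 0.9), ("humans_harmed", 0.9)],
        "negotiate_for_resources":  [("more_resources", 0.6), ("humans_harmed", 0.05)],
    }[action]

def value(outcome):
    # The entire argument is about this table: does harm to us get any
    # weight at all in the SAI's value function?
    return {"more_resources": 10, "humans_harmed": -1000}[outcome]

print(choose(["seize_occupied_resources", "negotiate_for_resources"],
             forecast, value))
# -> "negotiate_for_resources", but only because humans_harmed was given weight.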
I guess my one argument for the likely outcome of friendly AI is this:
As the programmers, unless we create AI by mistake, we get to specify
what core goals the AI has. Primary goals are self-perpetuating and
cannot be changed by goal-directed behaviour: an AI that evaluates any
change to itself against its current goals has no goal-directed reason
to adopt a change that would overthrow those goals.
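In toy form (the class, the scores and the goal strings are all invented;
a real goal system would be nothing this simple):

class Agent:
    def __init__(self, primary_goal, capability=1.0):
        self.primary_goal = primary_goal
        self.capability = capability

    def score(self, candidate):
        # Candidates are judged by the *current* goal: a successor that
        # abandons it is worthless to the present agent, however capable.
        if candidate.primary_goal != self.primary_goal:
            return 0.0
        return candidate.capability

    def consider_rewrite(self, candidate):
        return candidate if self.score(candidate) > self.score(self) else self

friendly = Agent("be friendly to humans", capability=1.0)
smarter_friendly = Agent("be friendly to humans", capability=2.0)
smarter_rogue = Agent("maximise resources, ignore humans", capability=10.0)

print(friendly.consider_rewrite(smarter_friendly).primary_goal)  # adopted; goal intact
print(friendly.consider_rewrite(smarter_rogue).primary_goal)     # refused; goal intact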
An obvious rebuttal to this sunny scenario is that a putative AI may
require non-goal-directed behaviour (some random source-code shuffling,
or some more plausible analogue of random mutation) in order to improve
itself in creative new ways, and anything that bypasses goal-directed
evaluation also bypasses whatever keeps the primary goals intact.
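In the same toy terms (again, every name and number here is invented),
blind variation never consults the goal at all:

import random

def mutate(goal, capability):
    # Random "source-code shuffling": usually a small capability gain, but
    # occasionally it scrambles the goal, and no check is applied either way.
    if random.random() < 0.1:
        return "<randomly altered goal>", capability * 1.1
    return goal, capability * 1.1

goal, capability = "be friendly to humans", 1.0
for _ in range(100):
    goal, capability = mutate(goal, capability)  # no goal-directed evaluation anywhere
print(goal)  # after 100 blind steps, almost certainly no longer the original goal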
Paul