[ExI] ethics vs intelligence

Anders Sandberg anders at aleph.se
Fri Sep 14 00:37:14 UTC 2012


On 13/09/2012 22:07, Ben Zaiboc wrote:
> I'd observe that all the current moral philosophers are, to a man (and I'd make a modest bet that they are all men), human, and therefore not really qualified to comment on anything but /human/ morality.

Do you think mathematicians are only qualified to comment on human 
mathematics, and chemists only qualified for terrestrial chemistry?

You are assuming what you would like to prove: that ethics is 
necessarily relative to its originators. But as the math and chemistry 
domains show, there are invariants we can trust pretty well across the 
universe. Aliens may not have our emotions (or any), but game theory is 
going to be true for them, and as a consequence certain patterns of 
behaviour like reciprocal altruism will very likely occur. This in turn 
suggests that things like punishing cheaters, gratitude-like social 
memory and trust maintenance are actually fairly common across the space 
of possible (social) minds.
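The game-theoretic point can be made concrete with a toy sketch (my illustration, not a formal result): in the iterated Prisoner's Dilemma with the standard payoffs (temptation 5, reward 3, punishment 1, sucker 0), a reciprocal strategy like tit-for-tat needs no human emotions at all, only memory of the opponent's last move.

```python
# Toy iterated Prisoner's Dilemma illustrating reciprocal altruism.
# Payoffs use the standard values T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Reciprocal altruism: cooperate first, then mirror the opponent."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A pure cheater: defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return the total payoffs (a, b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Over ten rounds, two reciprocators score 30 each, while two defectors score only 10 each, which is why reciprocal strategies keep reappearing in evolutionary tournaments regardless of the players' psychology.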

(Incidentally, there are plenty of very sharp lady ethicists around.)



>    As the original question was about AI morality, until someone can claim to be an AI, we can't really know anything about it.  I realise that we're all bound by the same laws of physics etc., but morality isn't like mathematics, it's about how we /should/ behave, not how we /must/ behave.  So it's inherently subjective.  To claim otherwise is to claim that human values are universal, and even though I am human, I can't in all conscience believe that for a nanosecond.

I can prove that AIXI will not obey Kantian ethics. We have various 
theorems around the office about what certain AI systems will or will 
not do. We can show that certain metaethical principles require certain 
architectural features in the AI if it is ever going to be able to act 
morally. In truth, we are not so much interested in whether an AI would 
be truly moral as in whether it can be *safe*. 
But the interplay between AI safety and ethics is a really promising 
field: people are finding plenty of intriguing concepts here.

(Insert plug for our AGI Impacts conference in December here)


-- 
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University
