[ExI] Morality function, self-correcting moral systems?
Anders Sandberg
anders at aleph.se
Wed Dec 14 23:23:15 UTC 2011
Tomasz Rola wrote:
> The problem: we discuss morality and ethics, and try to improve the ways
> humans deal with each other. Without knowing what is to be optimized,
> trying to optimize it is, to put it mildly, optimistic...
>
Yup. Hence axiology. But sadly, we do not have much consensus on what
value is. And even given a value theory it is often hard to find moral
systems that achieve the value (in theory or practice).
> Using a neural net for this is interesting. Unfortunately, I think NNs can
> behave slightly differently every time they are built and trained from
> scratch. There might also be some small but meaningful problems when
> porting a trained NN from one computer to another (different float
> representations, different computation precisions and so on). I am more
> into finding a formula that is not affected by such effects.
>
> (oh please, please, hopefully I used these words in the right way and
> won't cause the whole thread to slip into linguistic mumbo-jumbo-boxing).
>
No problem. But the fact that NNs give slightly different responses
depending on training should not be a problem if the training set is good
enough or the problem is well posed - if you get radical differences, you
are using the wrong approach. Similarly for floats: any moral system that
is too noise-sensitive is likely a bad moral system. Even if you ever find
a formula, it will still have to be implemented in fallible neurons or
noisy electronics.
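
For what it is worth, here is a minimal toy sketch of what I mean (my own
invented example, nothing from your setup): two nets trained from scratch
on the same data, differing only in their random seed, should agree on the
answers even though the raw numbers differ slightly.

import numpy as np

def train_tiny_net(X, y, seed, hidden=8, lr=1.0, epochs=5000):
    # One-hidden-layer net (tanh hidden, sigmoid output) trained by
    # plain batch gradient descent on cross-entropy loss.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2)))
        grad_out = (p - y) / len(X)            # d loss / d pre-sigmoid output
        grad_h = (grad_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ grad_out
        W1 -= lr * X.T @ grad_h
    return W1, W2

def predict(W1, W2, X):
    return 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))

# Toy stand-in for a "moral judgement" dataset: two binary features plus a
# constant bias input, with XOR-like labels.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

pa = predict(*train_tiny_net(X, y, seed=1), X)
pb = predict(*train_tiny_net(X, y, seed=2), X)
print("run A:", pa.ravel().round(3))
print("run B:", pb.ravel().round(3))
print("max disagreement:", float(np.abs(pa - pb).max()))
# Both runs should land on the same side of 0.5 for every case; the
# probabilities themselves differ a little, which is harmless.

If the two runs disagreed radically even on something this simple, that
would say more about the approach (or the training data) than about the
fallibility of floats.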
> Interesting. Here I can see where the language of ethicists and the
> language of mathematics part ways :-).
>
There are some ethicists who go all the way to formal logic, but they
are rare. It is rather hard to bridge the gap to reality - as the Wikipedia
entry on formal ethics mentions, unusually for ethics, people don't even
quibble about the axioms, which is a strong sign that it might not have
much actual content.
https://en.wikipedia.org/wiki/Formal_ethics
> In Ola's case, does he mean short-term
> maximization or long-term maximization?
>
I don't remember. I think he is for the long-term one, insofar as he
wants to maximize the integral of his pleasure.
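(My gloss, not necessarily Ola's own formulation: choose actions so as to
maximize something like V = integral from now until death of p(t) dt, where
p(t) is pleasure at time t, rather than maximizing p(t) at each instant
separately.)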
> Thanks for the pointers. It will take me some time to grok.
>
Yup. Ethics is fun, but besides the facepalm-inducing parts (how can
anybody believe *that*?!) there are some really hard problems. People
who say otherwise have not grokked it.
--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University