[ExI] Morality function, self-correcting moral systems?

Anders Sandberg anders at aleph.se
Tue Dec 13 21:43:37 UTC 2011


Tomasz Rola wrote:
> However, I wonder if there was any attempt to devise a "morality 
> function"
Yes. However, most professional ethicists would not think it is a 
workable approach if you asked them.

I recently found this paper: "An artificial neural network approach for 
creating an ethical artificial agent" by Honarvar et al. 
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5423190&tag=1

Basically they take examples whose features are things like "the voluntariness 
of an agent", "the duration of non-human patients' pleasure", "the 
number of displeased human patients", etc., encoded as levels, train a 
neural network to correctly classify them as ethical or not based on 
labeled examples, and then apply it to new cases. In effect the method 
finds a decision boundary in the space of features. Since an ANN is a 
general function approximator this is a bit like your M, although it 
just produces a thumbs up or thumbs down answer.
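
To make that concrete, here is a rough sketch of the same idea (the 
feature names, numbers and labels are made up for illustration, and I am 
using scikit-learn's MLPClassifier as a stand-in for their network):

# Toy sketch, not the paper's actual data or network: each case is
# encoded as numeric "levels" of features and labeled 1 (judged ethical)
# or 0 (judged unethical); a small net then learns a decision boundary.
import numpy as np
from sklearn.neural_network import MLPClassifier

# columns: [voluntariness, pleasure_duration, displeased_patients] (made up)
X = np.array([
    [0.9, 0.8, 0],   # labeled ethical
    [0.8, 0.6, 1],   # labeled ethical
    [0.1, 0.2, 5],   # labeled unethical
    [0.3, 0.1, 4],   # labeled unethical
])
y = np.array([1, 1, 0, 0])

# a small feed-forward network fits a decision boundary in feature space
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

new_case = np.array([[0.7, 0.3, 2]])   # a previously unseen case
print(net.predict(new_case))           # thumbs up (1) or down (0)

Swap in their feature encoding and a pile of honestly labeled cases and 
you have their setup, more or less.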

There might be interesting things to learn here if you have plenty of 
examples or real-world data (e.g. what kinds of decision boundaries does 
real-world morality have?), but there are many weaknesses. First, you 
need to assume the examples are correctly judged (what is the correct 
judgement about somebody stealing medicine for their sick wife?), and 
second, that all the relevant features are encoded (this is a biggie - 
their model misses enormous chunks of ethical possibilities because it 
doesn't include them). And then there is the not-so-small matter that 
many ethicists think that getting to a correct answer on the morality of 
actions is not the whole of morality (Kantians think that understanding 
and intending the good is the important part, while virtue ethicists 
think the big thing is to repeatedly act in a good way).



> Also, I have a hypothesis that all human based organisations (societies) I 
> know of are immoral.

I think this is a pretty safe assumption that few ethicists would deny. 
OK, the social contractarians might actually think an immoral society is 
impossible, but they are crazy ;-)


> This hints towards conclusion, that it is impossible 
> to build moral structure out of humans.

There are many ethicists who would say that if your morality is 
impossible to follow, then it cannot be the true morality. It is the 
contrapositive of "ought implies can". However, there might be moralities 
that are possible to follow but so demanding that practically nobody can 
live up to them (this is a common accusation against Peter Singer's 
utilitarianism, to which he usually cheerfully responds by asking why 
anybody thinks the correct moral system has to be easy to do).


My own take on it is that in a sense we are reinforcement learning 
machines, trying to find an optimal policy function (mapping each action 
possible in a situation onto a probability of doing it). Becoming better 
at morality means we develop a better policy function. The problem is 
figuring out what "better" means here (my friend the egoistic hedonist 
ethicist Ola has a simple answer - what gives *him* pleasure - but he is 
fairly rare). But it is clear that learning and optimizing ought to be 
regarded as important parts of moral behavior.
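
As a toy sketch of what I mean by a policy function (made-up actions, 
and a made-up reward signal standing in for whatever "better" turns out 
to mean, nothing more):

# A policy maps each action available in a situation onto a probability
# of taking it; reinforcement nudges those probabilities over time.
import numpy as np

actions = ["help", "ignore", "deceive"]        # invented action set
prefs = np.zeros(len(actions))                 # learned preferences

def policy(prefs):
    """Softmax: turn preferences into action probabilities."""
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def update(prefs, action_idx, reward, lr=0.1):
    """Make rewarded actions more probable (REINFORCE-style update)."""
    probs = policy(prefs)
    grad = -probs
    grad[action_idx] += 1.0                    # gradient of log pi(a)
    return prefs + lr * reward * grad

rng = np.random.default_rng(0)
for _ in range(200):
    probs = policy(prefs)
    a = rng.choice(len(actions), p=probs)
    reward = 1.0 if actions[a] == "help" else -0.5   # toy moral feedback
    prefs = update(prefs, a, reward)

print(dict(zip(actions, np.round(policy(prefs), 2))))

The hard philosophical work is of course all hidden in that reward line; 
the learning machinery itself is the easy part.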

-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University 



