[ExI] Morality function, self-correcting moral systems?

Tomasz Rola rtomek at ceti.pl
Wed Dec 14 19:53:44 UTC 2011


On Tue, 13 Dec 2011, Anders Sandberg wrote:

> Tomasz Rola wrote:
> > However, I wonder if there was any attempt to devise a "morality 
> > function"
> Yes. However, most professional ethicists would not think it is a 
> workable approach if you asked them.

The problem: we discuss morality and ethics, and try to improve the ways 
humans deal with each other. But without knowing what is to be optimized, 
trying to optimize it is, to put it mildly, optimistic...
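
To make it a bit more concrete, here is roughly the kind of object I 
have in mind - a toy sketch in Python, where the two dimensions (harm, 
fairness) and the scoring are entirely made up by me; the open problem 
is precisely what the real dimensions and weights should be:

# Toy sketch of a "morality function" M: it maps a described action
# onto a vector of scores. Dimensions and scoring are placeholders.
def M(action):
    harm = action["people_hurt"] * action["severity"]
    fairness = 1.0 if action["consent"] else 0.0
    return (harm, fairness)          # an n-dimensional evaluation

print(M({"people_hurt": 2, "severity": 0.5, "consent": False}))
# -> (1.0, 0.0)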

> I recently found this paper: "An artificial neural network approach for 
> creating an ethical artificial agent" by Honarvar et al. 
> http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5423190&tag=1

Using a neural net for this is interesting. Unfortunately, I think NNs 
can behave slightly differently every time they are raised and trained 
from scratch. There might also be some small but meaningful problems 
when porting a trained NN from one computer to another (different float 
representations, different computation precisions and so on). I am more 
interested in finding a formula that is not affected by such effects.

(oh please, please, hopefully I used these words in the right way and 
won't cause the whole thread to slip into linguistic mumbo-jumbo-boxing).
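
A toy illustration of the first effect (Python with numpy; a tiny 
two-layer net trained twice on the same data, differing only in the 
random initialization - the example and all numbers are my own):

import numpy as np

def train(seed, steps=2000):
    rng = np.random.default_rng(seed)
    # a tiny 2-2-1 net learning XOR, weights initialized at random
    W1 = rng.normal(size=(2, 2))
    W2 = rng.normal(size=(2, 1))
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    for _ in range(steps):
        h = np.tanh(X @ W1)                            # hidden layer
        err = h @ W2 - y                               # output error
        grad_W2 = h.T @ err                            # backprop...
        grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2))
        W1 -= 0.1 * grad_W1                            # ...and descend
        W2 -= 0.1 * grad_W2
    return W1

print(train(seed=1))
print(train(seed=2))   # same task, same code - different weights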

> > Also, I have a hypothesis that all human based organisations (societies) I 
> > know of are immoral.
> 
> I think this is a pretty safe assumption that few ethicists would deny. 
> OK, the social contractarians might actually think it is impossible, but 
> they are crazy ;-)

Yes. However, my aim is not to agree with guys who say more or less what 
I do, but to agree with a proof :-). From what you wrote, they don't 
provide the proof because they cannot. They have formulated some theses, 
but it doesn't seem like they have proved them - if that were the case, 
the "contractarians" would have to agree, even if it went against their 
ideas.

My understanding of the word "proof" is that it is not just something 
one might or might not consider true at one's whimsical will. One can 
refuse to accept a proof - of course, only if one really wants to be 
hurt by the consequences. One can also stick one's finger into a fire, 
not accepting that the fire will sooner or later fry the finger.

> > This hints towards conclusion, that it is impossible 
> > to build moral structure out of humans.
> 
> There are many ethicists who would say that if your morality is 
> impossible to follow, then it cannot be the true morality. It is the 
> converse of "ought implies can". However, there might be moralities that 
> are possible to do but so demanding that practically nobody can do them 
> (this is a common accusation against Peter Singer's utilitarianism, to 
> which he usually cheerfully responds by asking why anybody thinks the 
> correct moral system has to be easy to do).

Interesting. Here I can see where the language of ethicists and the 
language of mathematics part ways :-).

If the M-function gives an n-dimensional point as a result, there are 
many different outcomes, none of them any more "true" than another. 
Those are just numbers, an evaluation of M's arguments in a form that 
is easier to analyse with mathematical tools.
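
To show what I mean (toy Python, my own made-up numbers): with 
n-dimensional results, some pairs of outcomes are simply incomparable - 
neither dominates the other, so neither is any "truer":

def dominates(a, b):
    """True if a is at least as good as b on every dimension and
    strictly better on at least one (Pareto dominance)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

a = (3.0, 1.0)   # M-values of two candidate actions (made up)
b = (1.0, 3.0)
print(dominates(a, b), dominates(b, a))   # False False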

> My own take on it is that in a sense we are reinforcement learning 
> machines, trying to find an optimal policy function (mapping each action 
> possible in a situation onto a probability of doing it). Becoming better 
> at morality means we develop a better policy function. The problem is 
> figuring out what "better" means here (my friend the egoistic hedonist 
> ethicist Ola has a simple answer - what gives *him* pleasure - but he is 
> fairly rare). But it is clear that learning and optimizing ought to be 
> regarded as important parts of moral behavior.

You are probably right. In Ola's case, does he mean short-term 
maximization or long-term?
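
(If I translate that question into the reinforcement-learning vocabulary 
you used, it becomes the choice of a discount factor - a toy sketch in 
Python, numbers made up by me:)

def discounted_return(rewards, gamma):
    """Value of a stream of pleasures: gamma near 0 counts only the
    immediate one, gamma near 1 weighs the far future almost equally."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

rewards = [10, 0, 0, -100]   # fun now, hangover later
print(discounted_return(rewards, gamma=0.1))   # 9.9   - short-termist: yes
print(discounted_return(rewards, gamma=0.9))   # -62.9 - long-termist: no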

Thanks for the pointers. It will take me some time to grok them.

Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:tomasz_rola at bigfoot.com             **


