[ExI] ethics vs intelligence
Anders Sandberg
anders at aleph.se
Wed Sep 12 11:32:07 UTC 2012
On 11/09/2012 22:22, Will Steinberg wrote:
>
> There are no ethics, the proof being Godel's: in any ethical
> framework, there exists a situation whose ethicity cannot be
> determined. Thus there is no correct ethical system. It's all up to
> you: decide what you believe and then do or don't institute it in your
> reality.
>
That is obviously false. Here is a consistent and complete moral system:
"everything is permitted".
It is worth distinguishing ethics and morality. A morality is a system
of actions (or ways of figuring them out) that are considered to be
right. Ethics is the study of moral systems, whether in the form of you
thinking about what you think is right or wrong, or the academic pursuit
where thick books get written. A lot of professional ethics is
meta-ethics, thinking about ethics itself (what the heck is it? what can
and can't it achieve? how can we find out?), although practical
ethicists do have their place.
Now, I think Will is right in general: for typical moral systems there
are situations that are undecidable as "right" or "wrong" (or have
uncomputable values, if you like a more consequentialist approach). They
don't even need to be tricky Gödel- or Turing-type situations, since
finite minds with finite resources often find that they cannot analyse
the full ramifications. Some systems are worse: Kant famously forces you
to analyse *and understand* the full moral consequences of everybody
adopting your action as a maxim, while rule utilitarianism just wants
you to adopt the rules that, given current evidence, will maximize utility
(and to revise them as more evidence arrives or your brain becomes better).
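To make the rule utilitarian picture concrete, here is a toy sketch (in
Python; the rules, situations and utility numbers are all invented for
illustration, not part of any real theory): adopt whichever rule
maximizes expected utility under current evidence, and re-run the choice
when the evidence changes.

    def expected_utility(rule, evidence):
        # Average utility of following `rule` over the weighted
        # situations we currently know about.
        total = sum(weight for _, weight in evidence)
        return sum(rule(situation) * weight
                   for situation, weight in evidence) / total

    # Toy rules: each maps a situation (here just a number) to a utility.
    rules = {
        "always_cooperate": lambda s: 1.0,
        "cooperate_if_safe": lambda s: 1.5 if s > 0 else -0.5,
    }

    # Toy evidence: (situation, probability weight) pairs.
    # Revise this list, and re-run the selection, as evidence arrives.
    evidence = [(1, 0.7), (-1, 0.3)]

    best = max(rules, key=lambda name: expected_utility(rules[name], evidence))
    print(best)   # the rule adopted given current evidence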
But this doesn't mean such systems are pointless. Unless you are a
catatonic nihilist you will think that some things are better than
others, and adopting a policy of action that produces more of the good
is rational. This is already a moral system! (at least in some ethical
theories) A lot of our world consists of other agents with similar (but
possibly not identical) concerns. Coordinating policies often produces
even better outcomes, so we have reasons to express policies succinctly
to each other so we can try to coordinate (and compressed formulations
of policies often make them easier to apply individually too: cached
behaviors are much quicker than arduously calculating the right action
for every situation).
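The caching point can be shown with a minimal sketch (Python again; the
slow "moral deliberation" is faked with a sleep, purely to illustrate):
memoize the costly calculation of the right action, and repeated
situations get answered instantly.

    from functools import lru_cache
    import time

    def evaluate_consequences(situation):
        # Stand-in for a long chain of moral reasoning.
        time.sleep(0.1)
        return "cooperate" if situation % 2 == 0 else "abstain"

    @lru_cache(maxsize=None)
    def cached_policy(situation):
        return evaluate_consequences(situation)

    cached_policy(4)   # slow the first time: the full calculation runs
    cached_policy(4)   # fast thereafter: the cached behavior is reused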
[ The computational complexity of moral systems is an interesting topic
that I would love to pursue. There are also cool links to statistical
learning theory - what moral systems can be learned from examples, and
do ethical and meta-ethical principles provide useful boundary
conditions or other constraints on the models? ]
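As a toy illustration of that learning-from-examples framing, here is a
sketch that fits the simplest threshold rule to situations labelled
right/wrong (the features, labels and rule class are all made up for
the example):

    # Labelled toy examples: (feature of the situation, moral verdict).
    examples = [(0.1, "wrong"), (0.3, "wrong"), (0.6, "right"), (0.9, "right")]

    def learn_threshold(examples):
        # Return the threshold that misclassifies the fewest examples
        # when we judge "right" iff the feature is at or above it.
        def errors(t):
            return sum(("right" if x >= t else "wrong") != label
                       for x, label in examples)
        return min((x for x, _ in examples), key=errors)

    threshold = learn_threshold(examples)
    print("judge 'right' when the feature is at least", threshold)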
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University