[ExI] moral reasoning
William Flynn Wallace
foozler83 at gmail.com
Sat May 28 14:29:50 UTC 2016
The old view of perfect rationality and perfect introspection is pretty
clearly not a good model for how people actually act and think morally.
Exactly so. If that idea is not dead yet, it is high time it was.
In some moral experiments a hypothetical dilemma is presented and the
person picks the action he would take. Similar dilemmas are presented later
to get some idea of the reliability of the answers.
Reliability is never found to be perfect. In fact, if you change the
dilemma a bit you can get entirely different answers. This challenges the
idea that there is some fixed moral system in a person. Going even
further, change the externals: put the same dilemma in a less personal or
more personal situation and you can find that the situation changes the
behavior. This is an old problem: the causes of behavior stem from both
external and internal variables.
Thus this is a possibility: a person who has committed murder might not
have done so earlier or later in the day, or if they had not been angry,
drunk, or fresh from another frustrating experience. Is this person a
'murderer', implying a personality that would do the same thing even if
circumstances were different? Or is this a person who would commit murder
only in the circumstances he faced?
We have to look outward as well as inward to understand morality. If we
get unreliable answers to moral questions, it could be that the
measurements are unreliable and we need to improve them. It could also be
that the thing being measured is not, in fact, unchangeable, and that the
unreliability of the measurements reflects the unreliability of the thing
itself. Indeed, it could be so unreliable that it is a mistake to call it
a fixed moral system at all.
On Sat, May 28, 2016 at 3:08 AM, Anders Sandberg <anders at aleph.se> wrote:
> I think he is stretching the theory too far. That our ability to
> introspect doesn't cover 100% of our minds, nor is 100% perfect doesn't
> mean there are no detectable thoughts or beliefs. These exceptions do not
> have to be minor "limited exceptions". Similarly, that we might use a
> mindreading system inwards does not mean it is limited by sensory input:
> contents of working memory clearly seem accessible to it.
> The "new" view that the mind is fairly opaque, embodied, and has a lot of
> biases *is* a challenge to a lot of moral philosophy. My colleagues are
> happily scanning brains and arguing how integrated mental subsystems have
> to be before we can properly say that a person is responsible (or that
> there is a person there, as in minimally conscious states). The old view of
> perfect rationality and perfect introspection is pretty clearly not a good
> model for how people actually act and think morally.
> The next question is how to enhance it. That a moral system might not be
> implementable in a current human brain does not mean it is not morally
> better than the implementable ones, and if we could update ourselves to
> be able to follow it, we should.
> On 2016-05-27 21:37, William Flynn Wallace wrote:
> excerpt from link below:
> If our thoughts and decisions are all unconscious, as the ISA theory
> implies, then moral philosophers have a lot of work to do. For we tend to
> think that people can’t be held responsible for their unconscious
> attitudes. Accepting the ISA theory might not mean giving up on
> responsibility, but it will mean radically rethinking it.
> bill w
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University