[ExI] moral reasoning
anders at aleph.se
Sat May 28 08:08:40 UTC 2016
I think he is stretching the theory too far. That our ability to
introspect doesn't cover 100% of our minds, and isn't 100% reliable,
doesn't mean there are no detectable thoughts or beliefs. Nor do these
exceptions have to be minor "limited exceptions". Similarly, the fact
that we might use a mindreading system inwards does not mean it is
limited to sensory input: the contents of working memory clearly seem
accessible to it.
The "new" view that the mind is fairly opaque, embodied, and has a lot
of biases *is* a challenge to a lot of moral philosophy. My colleagues
are happily scanning brains and arguing how integrated mental subsystems
have to be before we can properly say that a person is responsible (or
that there is a person there, as in minimally conscious states). The old
view of perfect rationality and perfect introspection is pretty clearly
not a good model for how people actually act and think morally.
The next question is how to enhance it. That a moral system might not
be implementable in a current human brain does not mean it is not
morally better than the implementable ones; and if we could update
ourselves to be able to follow it, we should.
On 2016-05-27 21:37, William Flynn Wallace wrote:
> excerpt from link below:
> If our thoughts and decisions are all unconscious, as the ISA theory
> implies, then moral philosophers have a lot of work to do. For we tend
> to think that people can’t be held responsible for their unconscious
> attitudes. Accepting the ISA theory might not mean giving up on
> responsibility, but it will mean radically rethinking it.
> bill w
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School