[extropy-chat] In defense of moral relativism
Bryan Moss
bryan.moss at dsl.pipex.com
Fri Apr 29 20:48:45 UTC 2005
Giu1i0 Pri5c0 wrote:
>The universe, as we presently know and understand it, does not seem to
>care about old ladies.
>
Sure it does. In so far as we can say there are old ladies in the
universe, we can say there's also caring-about-old-ladies in the
universe. Things like "old ladies" and "caring-about-old-ladies" are
about equally problematic when it comes to the connection between
representation and world.
>Others may say, well the old lady is past reproductive age and probably has nothing left to contribute to the biological or memetic evolution of the human race. She is consuming or holding resources that should be given to younger generations. Her flat should be given to a younger person who can still have children or develop breakthrough ideas. The money of her pension should be put to a productive use. Ergo, the moral thing to do is NOT helping her to cross the street: the sooner she is killed by a car, the better.
>
>
I think we can give a perfectly naturalistic assessment of this
situation: the statement is an intentionally contrarian example,
constructed for the sake of argument rather than a moral anyone actually
holds, and on those grounds we can dismiss it.
>I think this is bullshit. Can I prove that it is bullshit in terms of
>any absolute, objective, or whatever morality? No.
>
I just gave what I consider a pretty good hypothesis as to why the
statement is bullshit; you can probably corroborate it for me. You may
think I'm being disingenuous, but that's my point: that just *is* a
naturalistic account of morality. A moral is, loosely, a representation
that becomes stable within some population and alters moral behaviour. A
naturalistic account of morality asks how this happens. Philosophy books
are filled with pages of statements that, on such a naturalistic account
of morality, we can probably safely ignore, simply because, as moral
representations, they don't have legs. (They might illuminate other
aspects of moral psychology, however.)
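To make "having legs" concrete, here's a minimal toy sketch, with
fit-with-intuition weights invented purely for illustration, of
representations competing for hosts in a population:

# Toy sketch (illustrative only): a "moral" as a representation
# competing for hosts. Each agent holds one candidate norm; at each
# step an agent imitates a random peer with probability equal to the
# fit of the peer's norm with innate intuition. Fit values are invented.
import random
from collections import Counter

random.seed(0)

FIT = {                                          # hypothetical fit in [0, 1]
    "help the old lady across": 0.9,
    "ignore the old lady": 0.4,
    "the sooner she is killed, the better": 0.05,  # contrarian: no legs
}

population = [random.choice(list(FIT)) for _ in range(300)]

for _ in range(5000):
    i = random.randrange(len(population))
    j = random.randrange(len(population))
    if random.random() < FIT[population[j]]:     # imitate peer j?
        population[i] = population[j]

print(Counter(population))
# Typical run: the contrarian norm dwindles toward extinction while
# "help the old lady across" comes to dominate.

Nothing hangs on the particular numbers; the point is that stability in
the population tracks fit with intuition, so a representation that fits
poorly never stood a chance.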
What sort of moral things do people do in practice? How do they differ
between groups? As far as I know, this is an area where differences
aren't extreme. The sort of extreme moral situations we do see tend to
concern inter-group dynamics: either in-group fundamentalism in reaction
to an outside challenge to group integrity, or the perceived necessity,
for one group, of destroying another. In less extreme circumstances, we
can usefully ask what leads to situations where people routinely break
moral norms. Since I doubt the average criminal's career choice was made
through a sense of moral autonomy, I don't think this is likely to
present any problems of the sort we're discussing.
Where morals do differ, we might be tempted to lapse back into
relativism. But if there is no fact as to which moral you should accept,
then we can happily eliminate moral conflicts by arbitrarily changing
prevalent morality. Basically, we can operate under a form of
utilitarianism where utility is defined as the satisfaction of innate
moral intuitions. Whereas on most accounts of utilitarianism it isn't
obvious whether an increase in utility is a genuine moral good, on this
account the question dissolves: more moral satisfaction just is more of
the good being measured. Thus, what is ultimately moral is what can become
stable within a population and morally satisfies the most people.
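As a toy formalisation of that selection rule (mine, nothing rigorous,
with satisfaction scores invented for illustration): filter the candidate
norms down to those that could remain stable, then take the one that
yields the most total moral satisfaction:

# Toy sketch of the selection rule above: among candidate norms, keep
# those that could remain stable in the population, then pick the one
# with the greatest total moral satisfaction. Scores are invented.

SATISFACTION = {                      # per-person satisfaction in [0, 1]
    "norm A": [0.9, 0.8, 0.7, 0.9],
    "norm B": [1.0, 1.0, 0.1, 0.1],   # delights some, alienates others
    "norm C": [0.6, 0.6, 0.6, 0.6],
}

# Stand-in stability test: assume a norm is unstable if anyone is so
# dissatisfied that they would defect from it.
def stable(scores, defection_threshold=0.3):
    return min(scores) > defection_threshold

candidates = {norm: s for norm, s in SATISFACTION.items() if stable(s)}
best = max(candidates, key=lambda norm: sum(candidates[norm]))
print(best)   # -> norm A: stable, and the most total satisfaction

Here norm B is ruled out despite delighting some people, because it
can't remain stable; among the stable norms, norm A satisfies the most.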
Unlike a criminal, a sociopath might make a convincing argument for
moral autonomy, which leads to a more difficult question. Is there a
standpoint from which we can judge whether moral intuition is
functioning properly? Pragmatically, the answer is probably yes; on a
deeper level, I'm not sure. That presents a problem for transhumanists,
who might want to alter their moral intuitions. Then we can ask: are
there moral intuitions that are necessary given X, Y, Z, where X, Y, Z
are general axioms of the space of autonomous intelligent beings? Is it
possible to answer this question? I don't know. Regardless, for our
immediate purposes we don't need an answer; we can dissolve most
problems in a straightforward way with a simple naturalistic approach.
There are other objections here. For example, it might turn out that
our intuitive moral expectations and our moral intuitions are always in
conflict. Perhaps we routinely expect more than we're willing to give.
Perhaps we expect types of things we cannot give at all. Resolving such
issues would be problematic, but they would at least be finite.
BM