[ExI] Tolerance

Emlyn emlynoregan at gmail.com
Tue Dec 8 01:13:09 UTC 2009


2009/12/8 Brent Neal <brentn at freeshell.org>:
> The utilitarian argument is much more compelling. If the thing produces good
> results, then the thing has merit. If it does not, then it is meritless.

This is where many of us would disagree, I think. For me, the
consequentialist approach is not useful, because it can only ever be
evaluated after the fact. You can't use it to predict the future or
guide future action, because it's only a case-by-case description of
the past: this thing turned out well, that thing turned out poorly. So,
for instance, if you were to look at religion this way, you'd come up
with a catalogue of good outcomes and bad outcomes, but how could you
use that to choose future action? I see it as an approach useful only
in assigning blame, which is sometimes important, but largely an empty
endeavour.

If you were to use this catalogue to guide future action (let's assume
an approach like "do the thing that turned out best most often in the
past"), then you'd be assuming that the past is the best guide to the
future. Without extracting principles from your catalogue, this leaves
you with a very narrow band of behaviours: you can use only approaches
that have been tried before. As a guide to 21st-century behaviour,
that's pretty moribund.

If you extract principles from the past, then you can start doing
useful things. However, this is no longer the kind of act
utilitarianism you have described; it's rule utilitarianism. Here you
are looking for principles of behaviour: some set of self-consistent
rules which lead to good outcomes more often than anything else you
can come up with.

And here you are solidly in theoretical-reasoning territory. You are
trying to predict the future, so you need a theoretical model of the
universe, of utility, of how people work, and so on.

That's exactly where you need a priori arguments about religion. We
can say that we shouldn't use religion because, in the general case,
we think it will lead to negative utility, based not only on evidence
but on logical extrapolation of its definitional features (e.g. its
focus on faith above truth).

But I'm even suspicious of utilitarianism here. Utilitarianism appears
to me to have a real weakness regarding the relative power of actors.
When you talk of maximizing utility, you're talking about something
very fuzzy as if it were strongly defined. Are there utility points
which each person has, which you can sum under various scenarios to
find the greatest total? No. Instead, we guess at what the overall
utility is, based on intuitions about what is good for other people,
and invariably coloured by the lens we look through, which is our own
point of view.

Inescapably, people's interests clash; if they didn't, we wouldn't
need systems for sorting this kind of thing out in the first place. So
any important decision about how to live, how to proceed into the
future, is one where competing interests are being "balanced" (i.e.
some winners and losers are being chosen). But who decides how to do
this balancing? Those with power in a given situation. Is their
assessment of the best outcome the same as that of the losers? Almost
certainly not; these are groups in conflict. I think the more you rely
on case-by-case assessment of utility, the more prone you are to a
(sometimes unconscious) bias toward the powerful, for the simple
reason that it is their utility functions being used in the
calculations.
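
To make that concrete, here's a toy sketch in Python, with entirely
made-up numbers and groups, purely for illustration: if utility really
were well-defined points you could sum, "maximize utility" would reduce
to picking the scenario with the largest sum, and the answer flips
depending on whose estimates you plug in.

# Toy illustration only: hypothetical "utility points" per person,
# per scenario, as scored through two different lenses.
scenarios = ["A", "B"]

# scores assigned by the powerful group (their view of everyone's utility)
powerful_view = {"A": [9, 2, 2], "B": [5, 4, 3]}
# scores assigned by the group that loses out under scenario A
losers_view = {"A": [6, 1, 1], "B": [4, 5, 5]}

def best_scenario(view):
    # naive act-utilitarian rule: pick the scenario with the largest sum
    return max(scenarios, key=lambda s: sum(view[s]))

print(best_scenario(powerful_view))  # "A" (13 vs 12)
print(best_scenario(losers_view))    # "B" (8 vs 14)

Same rule, same people, different lens; the "optimal" outcome is
whatever the group doing the arithmetic says it is.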

So I find myself more and more in favour of general, unalterable
principles. The most important I can think of is the pre-eminence of
truth: truth is more important than anything else. That's why I like
Richard Dawkins; I think we share that as a value.

-- 
Emlyn

http://emlyntech.wordpress.com - coding related
http://point7.wordpress.com - ranting
http://emlynoregan.com - main site


