[extropy-chat] Pleasure as ultimate measure of morality [Was: Pleasing Oneself]

Stathis Papaioannou stathisp at gmail.com
Wed Feb 28 04:17:58 UTC 2007


On 2/28/07, TheMan <mabranu at yahoo.com> wrote:

Jef writes:
> >So if I'm to understand why you think pleasure is
> >the ultimate measure of morality, I need to
> >understand what you think pleasure is.  Of course,
> >if in your thinking there are many kinds of
> >pleasure, then we'll need to understand what they
> >all have in common before we can say that there is
> >something worth calling fundamental or
> >ultimate.  I don't need to do this because I
> >consider "pleasure" in all its manifestations to be
> >only indications from a subjective system reporting
> >that things are going well (whether those outside
> >the system would agree or not.)  To me, pleasure is
> >only the vector of the feedback loop, but says
> >nothing directly about the goodness of the output of
> >the system.


If I could interject, I think what is commonly understood by "pleasure" is
too simplistic in this context. Shall I eat the cake or shall I abstain?
Eating the cake will be pleasurable; on the other hand, eating the cake may
cause me to put on weight. If the anticipated pleasure of eating the cake
outweighs the anxiety about putting on weight, I will eat it; if the other
way around, I won't. Every factor is added to the mix when making a
decision, including more complex emotions such as a sense of responsibility
and ethical and aesthetic considerations. At each point, the path taken is
the path of greater total pleasure.
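
To make the bookkeeping explicit, here is a toy sketch in Python
(entirely my own construction; the factor names and numbers are
invented for illustration), scoring every consideration on one
hedonic scale and taking the option with the greater anticipated
total:

    # Toy model of the decision rule above: every factor, from raw
    # appetite to ethical qualms, gets a score on a single hedonic
    # scale, and the option with the greater anticipated total wins.
    def anticipated_pleasure(factors):
        """Sum the anticipated hedonic value of every factor."""
        return sum(factors.values())

    eat_cake = {"taste": +5.0, "weight_anxiety": -3.0}
    abstain = {"self_denial": -1.0, "pride_in_restraint": +2.0}

    options = {"eat the cake": eat_cake, "abstain": abstain}
    choice = max(options, key=lambda o: anticipated_pleasure(options[o]))
    print(choice)  # the path of greater total pleasure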

> >This leads me to ask you where your moral theory
> >leads you in the case of someone in extreme pain
> >from a terminal disease.  Would it be morally better
> >for them to die in order to increase net pleasure in
> >the world, or do you see them as contributing some
> >small absolute amount of pleasure (despite their
> >pain) which would be lost if they died?
>
> I think pleasure and suffering can be weighed against
> each other. If and only if there is more pleasure than
> suffering in a person's life, that life is
> intrinsically worth living (all other things equal).
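
To put the claim in symbols (a rough formalization of my own, not
TheMan's): writing p(t) for momentary pleasure and s(t) for momentary
suffering over a life running from t_0 to t_1, the condition is

    \text{life worth living} \iff \int_{t_0}^{t_1} \bigl( p(t) - s(t) \bigr) \, dt > 0

and the "greatest pleasure for the greatest number for the greatest
time" criterion discussed further down amounts to maximizing the same
integral summed over all sentient beings.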
>
> But even if most of the human beings in the world
> today would accept and - by changing the law - start
> applying an ethics that says that all human beings
> who are expected to experience more suffering than
> pleasure for the rest of their lives should be
> killed, that ethics might still upset so many people
> that such a change in society might worsen the total
> balance of pleasure/suffering in the world more than
> it would improve it. Even most of the people who
> suffer more than they experience pleasure would
> suffer from the knowledge that they might be killed
> any day. They are driven by their instinctive wish to
> "survive no matter what". That wish is their enemy;
> they just don't know it, just as a person may not
> know it is bad for him to hit himself in the head with
> a hammer all the time.


By that sort of argument you could as easily discount the physical pain as
the fear of death.  If my brain were different I might not have this
irrational fear of death; but if my brain were different I might not suffer
so much with the pain either.

> It may be that what many people prematurely believe to
> be utilitarian (for example killing people who are
> incurably ill, suffer constantly and tremendously from
> their illness, and exhaust society's resources by
> staying alive) might be utilitarianistically right
> only if a sufficient number of people are okay with
> it. How many would be a sufficient number, I don't
> know, but I think it's far more than 50% of the
> people. And since we are not there yet, your intuition
> is totally right when it tells you not to accept
> an ethics that would shorten some people's lives
> for the misguided sake of (what is, given the
> circumstances today, wrongly perceived as) improving
> the balance of pleasure/suffering in the universe,
> even if it might at first glance improve that
> balance.
>
> So your objection is not an objection against a
> "classical-hedonist-utilitarianistically
> recommendable" practice, but an objection against what
> you misguidedly perceive as being a
> classical-hedonistic-utilitarianistically
> recommendable practice. To start killing miserable
> people today, just because they'd probably otherwise
> be miserable for the rest of their lives, probably
> wouldn't improve the balance of pleasure/suffering
> in the world, even though that might seem to be the
> case at first thought. One must
> consider the side effects.
>
> Jef writes:
> >>> I understand you are claiming that morality is
> >>> measured with respect to pleasure integrated over
> >>> all sentient beings, right? Do you also integrate
> >>> over all time? So that which provides the
> >>> greatest
> >>> pleasure for the greatest number for the greatest
> >>> time is the most moral?
> >>
> >> Fundamentally, yes. However, this does not
> >> necessarily
> >> imply that one must inexorably commit immoral acts
> >> against other sentients in order to achieve this
> >> goal.
> >
> >I understand that you claim that pleasure is the
> >ultimate measure of morality, but your statement
> >above seems to say that you think that there may be
> >other measures of morality (possibly higher) that
> >might come into conflict with increasing pleasure.
> >Doesn't your statement above seem to contradict your
> >thesis?
>
> This can be explained by what I wrote above.
> Utilitarianism may, today, require the practical
> application of some "ethics" that may not seem purely
> utilitarian at first sight. Today, people value their
> free will so much that it would be
> utilitarianistically wrong to even try to take their
> free will away from them. It's much more feasible to
> increase the pleasure within the limits that people's
> demand for free will sets. Utilitarianism has to take
> feasibility into account too, among many other things.
> And I also believe people's demand for free will has
> a positive instrumental utilitarian value in that it
> guarantees great plurality in the thinking of mankind,
> something that has proven to be good for mankind's
> survival chances throughout history. And mankind's
> survival is certainly a utilitarianistically good
> thing. One thing that can combine plurality in
> mankind's thinking with increasing the pleasure and
> decreasing the suffering in the world, though, is
> voluntary giving to the needy. It spreads love, and
> love is utilitarianistically good, whereas "forced
> charity" probably has utilitarianistically more bad
> than good consequences. This way of reasoning doesn't
> require anything other than utilitarianism. It doesn't
> require "respecting people's free will" to have any
> _intrinsic_ value, although free will certainly has
> critical _instrumental_ value.
>
> >It almost seems as if you are saying that the freedom to
> >choose is a greater moral good than actual pleasure
> >(which of course I would agree with).
>
> See above. Its value is greater, but not
> intrinsically, only instrumentally, and only for the
> time being. That may change.


Of course: come the Singularity, we can all experience as much pleasure
constantly as computer memory allows.

Stathis Papaioannou