[extropy-chat] Pleasure as ultimate measure of morality [Was: Pleasing Oneself]

Jef Allbright jef at jefallbright.net
Thu Feb 1 23:46:32 UTC 2007


Jeffrey -

Would you please send to this list in plain-text format?  Otherwise, it takes some time to reformat and properly quote your text.

- Jef





A B wrote:

> Some time ago Jef Allbright wrote:
>> "Seeking pleasure is amoral, but tends to correlate with activity that we would assess as "good"."
> 
> I would go so far as to say that achieving pleasure
> for oneself is, per se, beyond just being amoral.
> I would argue that achieving pleasure for oneself
> is positively moral. It is only incidental that the
> entity that appears to sense and enjoy the pleasure
> is the one who is commonly considered to be "oneself".
> IOW, pleasing oneself is a special case of a more general
> condition (what Jef might consider "greater scope") of
> the whole of sentient beings experiencing pleasure.
> That's not to say that all sentients have a desire
> to please any sentient apart from themselves - as
> many examples will testify. However, I think that
> most humans would want other sentients to experience
> pleasure, all else being equal. IMHO, pleasing oneself
> only becomes arguably immoral (or amoral) when the
> cost of the self-pleasure is a loss of pleasure in
> other sentients or a gain in suffering of other sentients.

---------------------------------------------------------------


> To clear up any possible confusion, I did
> not intend this post or thread name to
> exclusively imply sexual self-satisfaction, 
> but rather any activity that provides oneself 
> with pleasure; from watching TV to decoding 
> the Universe. Although in principle, I don't 
> see anything morally wrong with sexual 
> self-satisfaction. Perhaps I should have been 
> more careful in choosing a subject title. Oh 
> well, maybe it will generate more thread 
> attention ;-)  Anyway, moving on ...

 
> Jef writes:
>> So I understand that you believe pleasure is the ultimate measure of morality.
 
> *Subjective* pleasure {as well as capacity for 
> subjective pleasure} (in its myriad forms to 
> include those which some people would consider 
> distasteful) over the "largest scope" - meaning 
> the highest number of sentient beings regardless 
> of their station of existence, yes.

What do you mean by "subjective" pleasure?  Is there some kind of pleasure that is not subjective?

I think pleasure/happiness/eudaimonia has many variations.  Of course you realize that I think these sensations and assessments are only indirect indicators and not fundamental measures of "good".  But I wonder whether you understand the statement of mine that you felt strongly enough about to disagree with.

So if I'm to understand why you think pleasure is the ultimate measure of morality, I need to understand what you think pleasure is.  Of course, if in your thinking there are many kinds of pleasure, then we'll need to understand what they all have in common before we can say that there is something that is worthy of calling fundamental or ultimate.  I don't need to do this because I consider "pleasure" in all its manifestations to be only indications from a subjective system reporting that things are going well (whether those outside the system would agree or not.)  To me, pleasure is only the vector of the feedback loop, but says nothing directly about the goodness of the output of the system.

Do you see pleasure as being measured on a unipolar or a bipolar axis?  As you probably know, many people consider pleasure and pain to be polar opposites on the same scale; do you agree with this?  Or do you see pleasure as ranging from about zero (little or no pleasure) to some high value corresponding to extreme ecstasy?

And just to add another calibration point: would you consider ecstasy to be a higher moral good than, for example, the calm satisfaction of completing a hard day's work, or the joy of a mother seeing her newborn after hours of painful labor, or...you get the idea.

This leads me to ask you where your moral theory leads you in the case of someone in extreme pain from a terminal disease.  Would it be morally better for them to die in order to increase net pleasure in the world, or do you see them as contributing some small absolute amount of pleasure (despite their pain) which would be lost if they died?

I'm glad to see we seem to be using "scope" in the same way.
 
> Jef writes:
>> I understand you are claiming that morality is
>> measured with respect to pleasure integrated over 
>> all sentient beings, right? Do you also integrate 
>> over all time? So that which provides the greatest 
>> pleasure for the greatest number for the greatest 
>> time is the most moral?
 
> Fundamentally, yes. However, this does not necessarily
> imply that one must inexorably commit immoral acts
> against other sentients in order to achieve this goal.

I understand that you claim that pleasure is the ultimate measure of morality, but your statement above seems to say that there may be other measures of morality (possibly higher ones) that might come into conflict with increasing pleasure.  Doesn't your statement above contradict your thesis?
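For concreteness, let me try to write down the quantity I take you to be proposing (this formalization is my guess at your meaning, not your words):

```latex
M \;=\; \int_{0}^{\infty} \Big( \sum_{i \,\in\, S(t)} p_i(t) \Big)\, dt
```

where S(t) is the set of sentient beings existing at time t and p_i(t) is the pleasure experienced by being i at time t, so that the most moral course of action is the one that maximizes M.  If that's not the quantity you have in mind, please correct me.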

 
> Jef writes:
>> I assume you acknowledge the necessity of some short
>> term sacrifice of pleasure in order to achieve the
>> greater pleasure. How do you see that working in
>> principle?
 
> Well, we are dealing in abstract ideals and not in the
> grit of reality. However, achieving the "greater pleasure"
> does not inevitably require imposing suffering, or a loss
> of pleasure, or a loss of capacity for pleasure on any
> *other* sentient beings. For example, if 99% of hard
> working Americans chose to donate $30.00 to the advancement
> of altruistic AGI, well... good things would probably happen. 
 
> Now donating that $30.00 may necessitate that a donor remain
> at his crappy job that he hates, but he always retains the
> choice not to donate and not to work, if he so chooses. It's
> a willful sacrifice.

It almost seems as if you are saying that the freedom to choose is a greater moral good than actual pleasure (which of course I would agree with).

So if you believe that the level of the world's pleasure over extended time is the ultimate measure of morality, would you therefore consider it a moral improvement if all currently living humans were to sacrifice their current standard of living and invest their time and resources solely to increase the pleasure of future generations?  This form of leverage should certainly result in greater pleasure for greater numbers over greater time, but it would violate many of my moral values.

If you feel somehow that extrapolating to the future in such a way is not valid, then how about this scenario:  Would your belief in pleasure as the ultimate measure of morality compel us to adopt a form of willing slavery, where some people (say, selected by lottery) would enjoy the labors of people who would otherwise be unemployed and unproductive members of society, as long as the "slaves" are given a constant supply of pleasure-inducing drugs?  It seems all parties could have more pleasure in such a system, although at a loss of what I consider greater moral values.

Yes, these examples are extreme, but it's at the edges where we find out whether a concept really holds up to its promise.
 
> Jef writes:
>> Based on your reasoning, if 50 percent of the population
>> are feeling less than average pleasure, would it be a moral
>> good to eliminate them from the population in order to raise
>> the overall level of pleasure?
 
> No, definitely not. A more moral action would be to lift the
> lower 50% out of their unhappiness. Perhaps I could make my
> original statement more applicable by saying:
> Pleasing oneself only becomes arguably immoral or amoral when
> the cost of the self-pleasure is a loss of pleasure in other
> sentients, a gain in suffering of other sentients, or a loss
> of capacity for pleasure in other sentients. 

So, based on your statement above, would you reason that it would be a moral good to increase the population of sentients as much as possible, despite the unavoidable economic difficulties, so that these poor people could exist like many other poor people in the world, each of them adding an additional increment of pleasure?

If, as I'm guessing, you're not comfortable with decreasing *or* increasing the number of sentients purely in order to increase the level of pleasure, then wouldn't this seem to tell us that increasing pleasure is not truly your fundamental and ultimate measure of morality?
 
> In this case, if you were to kill the lower 50% you'd be
> bringing their capacity for subjective pleasure down to zero,
> in addition to eliminating whatever low level of pleasure that
> they did manage to experience to begin with.
 
Jeffrey, I've tried to show you some of the obvious inconsistencies in a system of morality based on pleasure.  You're in good company, as many philosophers have held the same idea.  You can find many arguments pro and con if you search on "utilitarian ethics"; in my opinion, though, the position is ultimately incoherent.

I could give you a more coherent view of morality, but you haven't asked why I made the statement that you find disagreeable.

- Jef




