<br><br><div><span class="gmail_quote">On 2/28/07, <b class="gmail_sendername">TheMan</b> <<a href="mailto:mabranu@yahoo.com">mabranu@yahoo.com</a>> wrote:<br><br></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Jef writes:<br>>So if I'm to understand why you think pleasure is<br>>the ultimate measure of morality, I need to<br>>understand what you think pleasure is. Of course,<br>>if in your thinking there are many kinds of
<br>>pleasure, then we'll need to understand what they<br>>all have in common before we can say that there is<br>>something that is worthy of calling fundamental or<br>>ultimate. I don't need to do this because I
<br>>consider "pleasure" in all its manifestations to be<br>>only indications from a subjective system reporting<br>>that things are going well (whether those outside<br>>the system would agree or not.) To me, pleasure is
<br>>only the vector of the feedback loop, but says<br>>nothing directly about the goodness of the output of<br>>the system.</blockquote><div><br>If I could interject, I think what is commonly understood by "pleasure" is too simplistic in this context. Shall I eat the cake or shall I abstain? Eating the cake will be pleasurable; on the other hand, eating the cake may cause me to put on weight. If the anticipated pleasure of eating the cake outweighs the anxiety about putting on weight, I will eat it; if the other way around, I won't. Every factor is added to the mix when making a decision, including more complex emotions such as a sense of responsibility and ethical and aesthetic considerations. At each point, the path taken is the path of greater total pleasure.
<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">>This leads me to ask you where your moral theory<br>>leads you in the case of someone in extreme pain
<br>>from a terminal disease. Would it be morally better<br>>for them to die in order to increase net pleasure in<br>>the world, or do you see them as contributing some<br>>small absolute amount of pleasure (despite their
<br>>pain) which would be lost if they died?<br><br>I think pleasure and suffering can cancel each other<br>out. If and only if there is more pleasure than<br>suffering in a person's life, that life is<br>intrinsically worth living (all other things equal).
<br><br>But even if most of the human beings in the world<br>today would accept and - by changing the law - start<br>applying an ethics that says that all human beings<br>who are expected to experience more suffering than<br>pleasure for the rest of their lives should be<br>killed, that ethics might still upset so many people<br>that such a change in society might worsen the total<br>balance of pleasure/suffering in the world more than<br>
it would improve it. Even most of the people who<br>suffer more than they experience pleasure would<br>suffer from the knowledge that they might get killed<br>any day. They are driven by their instinctive wish to<br>"survive no matter what". That wish is their enemy,
<br>they just don't know it; just like a retard may not<br>know it is bad for him to hit himself in the head with<br>a hammer all the time.</blockquote><div><br>By that sort of argument you could as easily discount the physical pain as the fear of death. If my brain were different I might not have this irrational fear of death; but if my brain were different I might not suffer so much with the pain either.
<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">It may be that what many people prematurely believe to<br>be utilitarian (for example killing people who are
<br>incurably ill, constantly suffer tremendously from<br>their illness and exhaust society's resources by<br>staying alive) might be utilitarianistically right<br>only if a sufficient number of people are okay with<br>
it. How many would be a sufficient number, I don't<br>know, but I think it's far more than 50% of the<br>people. And since we are not there yet, your intuition<br>is totally right when it says to you not to accept
<br>such an ethics that would shorten some people's lives<br>for the misguided sake of (what is, given the<br>circumstances today, wrongly perceived as) improving<br>the balance of pleasure/suffering in the universe,<br>
even if it might at first glance improve that<br>balance.<br><br>So your objection is not an objection against a<br>"classical-hedonist-utilitarianistically<br>recommendable" practice, but an objection against what
<br>you misguidedly perceive as being a<br>classical-hedonistic-utilitarianistically<br>recommendable practice. To start killing miserable<br>people today, just because they'd probably otherwise<br>be miserable for the rest of their lives, simply
<br>probably wouldn't improve the balance of<br>pleasure/suffering in the world, even though that<br>might seem to be the case at first thought. One must<br>consider the side effects.<br><br>Jef writes:<br>>>> I understand you are claiming that morality is
<br>>>> measured with respect to pleasure integrated over<br>>>> all sentient beings, right? Do you also integrate<br>>>> over all time? So that which provides the<br>>>> greatest<br>>>> pleasure for the greatest number for the greatest
<br>>>> time is the most moral?<br>>><br>>> Fundamentally, yes. However, this does not<br>>> necessarily<br>>> imply that one must inexorably commit immoral acts<br>>> against other sentients in order to achieve this
<br>>> goal.<br>><br>>I understand that you claim that pleasure is the<br>>ultimate measure of morality, but your statement<br>>above seems to say that you think that there may be<br>>other measures of morality (possibly higher) that
<br>>might come into conflict with increasing pleasure.<br>>Doesn't your statement above seem to contradict your<br>>thesis?<br><br>This can be explained by what I wrote above.<br>Utilitarianism may, today, require the practical
<br>application of some "ethics" that may not seem purely<br>utilitarian at first sight. Today, people value their<br>free will so much that it would be<br>utilitarianistically wrong to even try to take their<br>
free will away from them. It's much more feasible to<br>increase the pleasure within the limits that people's<br>demand for free will sets. Utilitarianism has to take<br>feasibility into account too, among many other things.
<br>And I also believe people's demand for free will has<br>a positive instrumental utilitarian value in that it<br>guarantees great plurality in the thinking of mankind,<br>something that has proven to be good for mankind's
<br>survival chances throughout history. And mankind's<br>survival is certainly a utilitarianistically good<br>thing. One thing that can combine plurality in<br>mankind's thinking with increasing the pleasure and<br>
decreasing the suffering in the world, though, is<br>voluntary giving to the needy. It spreads love, and<br>love is utilitarianistically good, whereas "forced<br>charity" probably has utilitarianistically more bad
<br>than good consequences. This way of reasoning doesn't<br>require anything other than utilitarianism. It doesn't<br>require "respecting people's free will" to have any<br>_intrinsic_ value, although free will certainly has
<br>critical _instrumental_ value.<br><br>>It almost seems as if you are saying that the freedom to<br>>choose is a greater moral good than actual pleasure<br>>(which of course I would agree with).<br><br>See above. Its value is greater, but not
<br>intrinsically, only instrumentally, and only for the<br>time being. That may change.</blockquote><div><br>Of course: come the Singularity, we can all experience as much pleasure constantly as computer memory allows.<br>
<br>Stathis Papaioannou <br></div><br></div><br>