[extropy-chat] Desirability of Happiness, Etc. (was Are ancestor simulations immoral?)

Jef Allbright jef at jefallbright.net
Sun May 28 20:05:07 UTC 2006


Lee -

You're typically careful with your choice of words, but after two cycles of
this thread it seems you're aggressively debating what you thought I said
rather than looking to understand my points and contribute constructively to
the discussion.

Misconception 1:  Intrinsic good
I'm referring to the well-known understanding that *intrinsic* good isn't a
coherent concept.  When we talk about "good" it must be relative to the
value system of some agent.  Any perceived goodness is not intrinsic.  As I
said earlier, we can agree on much of what is good because we share much of
our fundamental values due to common evolutionary heritage.  But it's not
intrinsic and it's not absolute.  I think you know this perfectly well, but
you keep reading "intrinsic" as if I were saying "obvious" or "objective."

Misconception 2:  Happiness directly corresponding to good
I've tried to make clear that happiness is not necessarily an accurate or
direct indicator of what is good.  I think you could easily agree with this
and then we could move on to more interesting things.  I've tried to point
out that applying these terms as absolutes leads to contradiction, and I've
tried to point out that fundamentally life involves dealing with gradients.
It seems that you have not grasped the point I was trying to make, but
rather took my comments as some kind of attack calling for your defense.
(This has been a common pattern between us, by the way, and I'll accept
whatever responsibility is due to me.)

I was seeking understanding (possibly agreement) on these two points with
the hope that the discussion could move on to more interesting issues:

- mitigating the effects of those evolved systems which impair performance;

- using "positive" reward gradients as motivation rather than the absolute
bipolar happiness/suffering scheme you referred to;

- possibly some comparison between the use of positive gradients and the
idea of positive-sum social decision-making contrasted with the idea of
politics as competition over scarcity;

- and possibly some discussion of the implications and consequences of
acting on your recent statement that you'd like to be thousands of miles
away with only people you trust.

BTW, breaking up a person's statements in order to call bullshit on a phrase
out of context is dirty pool, regardless of whether you apologize for it in
your next sentence.

- Jef




On 5/28/06, Lee Corbin <lcorbin at tsoft.com> wrote:
>
> Jef writes
>
> > As discussed extensively on this list and elsewhere, if we
> > were living in a simulated universe and it were being
> > switched on or off, at whatever duty-cycle, there would be...
> > no reason to care -- from within the system.
>
> > So on to happiness and suffering.
>
> > (2) Happiness functions as an indicator of progress toward
> > goals,
>
> Only in evolutionarily derived natural circumstances. Drugs
> can short-circuit this, which is often very, very good.
>
> >  and for that reason it tends to correspond with what is
> > considered good (what is seen to work over increasing scope.)
> > But to confuse an indicator of progress with progress itself
> > is like confusing a map with the territory and the eventual
> > results are not good (they don't work very well.)
>
> That's not necessarily true at all.  As Pearce explains fully,
> happiness even when artificially induced can often enhance
> progress. Many times we're made unhappy (at the instigation
> of our genes) because things aren't going well, and they
> [the genes] figuratively figure that strength should be
> saved for sunnier days. But soon sunniness can come from a
> bottle, and artificial enthusiasm and joy will bring about
> greater individual and collective progress and achievement.
>
> >  Similarly, we can subvert the process and create a feeling
> > of happiness directly by technical means, but this too is
> > not an intrinsic good,
>
> I disagree.  All other things being equal, I approve of
> happiness, artificially induced or otherwise. In your
> language, then, I consider the happiness itself as an
> intrinsic good. (Don't you consider pain in and of itself
> an intrinsic evil?)
>
> > When the Buddha said that all life is suffering,
> > he was stating a more fundamental truth,
>
> This has to be the greatest one line of bullshit I've
> ever seen you endorse! The exact degree that all life
> involves suffering is the degree to which our technology
> hasn't yet fixed something quite obvious.
>
> > that all life involves gradients that must be continuously
> > overcome.
>
> (yes I know, I didn't let you finish the sentence. But still!)
>
> > It would be a misunderstanding to think one could eliminate
> > the gradients of life, but it is a great understanding to
> > acknowledge and accept this and thus eliminate subjective
> > suffering from the internal model while continuing to
> > function in the world.
>
> Well yes!  But you don't need to beat around the bush.
>
> > I think we can agree that being blissfully incapacitated is
> > not morally superior to striving
>
> I don't see them as at all connected, i.e., I don't see a
> necessary trade-off between them at all.
>
> > (and therefore tolerating some suffering) to promote one's values.
>
> at the present time  :-)
>
> Lee
>