[extropy-chat] A future fit to live in?

Jef Allbright jef at jefallbright.net
Mon Jan 15 18:39:53 UTC 2007

Heartland wrote:

> Jef, it sounds like you're a card-carrying functionalist. :-)

Okay, but of the never-know-the-whole-context kind.

> Jef:
> I never said that promoting one's values is more important than promoting one's
> survival.
> I'm sorry Jef, but that is exactly what you've been saying (just look at your
> previous sentence). If you hadn't been saying that I wouldn't have been compelled
> to reply to your posts. I'm sure you don't think you are saying that, but you do.
> (read below)
> Jef:
> What I said is that fundamentally what we do is try to affect our environment in
> such a way that we promote our values into the future (values for survival
> included.).
> And some values are more important than trying to stay alive, right?

Yes, some values may be more important than staying alive.  But please understand that fundamental does not mean important.

> So, it seems to you that this stipulation of "[promoting survival] without
> requiring an explicit goal of 'you must survive!' makes all this right and
> logically consistent. Let's then focus on this. How can you claim that promoting
> survival is more important than promoting (survival-unrelated) values and still
> insist that explicit goal of staying alive is less important than promoting values?
> In other words, how can trying to survive be more important than staying alive?
> Staying alive *is the whole point* of trying to survive. It's as if someone
> campaigned hard for candidate X while not caring about whether or not X wins the
> office.

Cosmides and Tooby: "Humans are adaptation executors rather than fitness maximizers."

If you understand the above statement, you will understand my point.  You can google that phrase, as well as "framing problem", for more.

> Jef:
> Seeking pleasure is amoral, but tends to correlate with activity that we would
> assess as "good".
> ----

> I would say that's a very strong correlation. :-) Perhaps strong enough to define
> "good" as "pleasure?"

So the pleasure experienced by a rapist means rape is good? 
So the high of heroin means drug-induced bliss is good?
So the satisfaction of gaining from another's loss is good?

All of these "goods" fail very quickly as the context is extended.

So the aches and pains of a hard day's work are bad?
So the loss of 100 lives to defend our homeland is bad?
So paying one's electrical bill is bad?

All of these "bads" become good as the context is extended.

Before you begin justifying via "these activities are in anticipation of the ultimately pleasurable outcome", just take a higher-level look at the nature of your argument.  If we were to plot a chart of pleasure versus goodness, we would see a very strange curve, implying that while there is correlation, it's not a direct relationship.  Try it in your mind.  A huge degree of pleasure for the heroin addict and the rapist, with very little good.  A small degree of pleasure for paying your electric bill, but a very substantial good.  Now, you could argue that the good is integrated over extended time.  You could argue that the heroin addict suffers much more pain over a long period of time, but please consider that decisions and expected rewards are made in the present, and the heroin addict might slide from bliss to coma, with no pain to balance your equation.
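A minimal sketch of that mental chart, assuming some invented scores: the numbers below are not data from anywhere, just hypothetical magnitudes (on a -10..10 scale) for the examples already named in this thread, chosen only to show that a rough correlation can coexist with a curve that is nowhere near monotonic.

```python
# Purely hypothetical (pleasure, goodness) scores for the examples in
# this thread -- invented for illustration, not measured from anything.
cases = [
    ("a hard day's work",        -2,   5),
    ("paying the electric bill", -1,   3),
    ("chocolate",                 6,   2),
    ("rapist's pleasure",         8, -10),
    ("heroin high",               9,  -6),
]

# If "good" simply meant "pleasure", goodness would rise monotonically
# as pleasure rises.  Sort the cases by pleasure and check.
by_pleasure = sorted(cases, key=lambda c: c[1])
goodness_in_order = [g for _, _, g in by_pleasure]
monotone = all(a <= b
               for a, b in zip(goodness_in_order, goodness_in_order[1:]))
print(monotone)  # False: goodness does not track pleasure
```

Even with scores deliberately tilted toward the "pleasure = good" thesis, the ordering breaks, which is the "very strange curve" in miniature.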

Now, if you'd like a simple, coherent, monotonic, extensible definition of "good", consider:

Actions are assessed as good to the extent that they are expected to promote our present values into the future.

Taking that one step further, actions are considered "right" or "moral" to the extent that they are assessed as "good" over increasing scope of consequences.
None of this can be derived from "pleasure" because pleasure is merely a higher level adaptation, a subjective indication, a side-effect, of what tends to work. It's not fundamental.

Hmmm.  It seems that you snipped my challenge to you about the "pleasure of ants and amoebas"...

> Heartland:
> If you or someone else can show why wireheading is wrong without
> resorting to the obvious "yuck factor" I would like to read it.)
> Jef:
> Wireheading can be useful to the extent it improves functioning by compensating for
> performance impairments due to detrimental side-effects of  our evolved
> configuration.  For example, physical pain is a useful adaptation in that it forces
> a prompt (and usually appropriate) response by the organism away from danger.
> However, a detrimental side-effect of this useful adaptation is the possibility of
> lingering or disabling pain.  Similarly for mental and emotional conditions that
> might be compensated beneficially to improve the functioning of the (human)
> organism.
> ----
> Yes.
> Jef:
> However, wireheading can be very bad to the extent that it subverts the "pleasure
> sense" by changing values toward environment, short-circuiting the healthy mode of
> allowing the agent to change environment toward values.  Such short-circuiting of
> the growth process is morally neutral with regard to the individual, but
> detrimental and morally wrong from the point of view of others in the community.
> ----

> On the face of it, this is a strong argument, but consider this. Pleasure is not
> only chocolate and sex (as I cautioned against this knee-jerk thought before) but
> also seeing a healthy environment and happy community. If wireheading has a natural
> tendency to change values toward environment, that powerful force will be
> automatically balanced by an opposing force that says, "it would be wrong to change
> values as this change could negatively affect my environment which would certainly
> diminish my potential for experiencing pleasure in the future." This simple and
> "cold" benefit vs. cost analysis would keep a rational agent on the path of
> "pleasure growth" resulting in equal benefits to the individual *and* society.
> Obviously, this "agent" doesn't necessarily refer to a present-day human. This
> could only work for rational agents. The point is that it's not that wireheading
> itself is broken, but that humans still are.

And do you really think there can ever be a completely rational agent?

> Jef:
> If we're going to attempt to continue this discussion, I think it would be very
> effective if we tried reversing roles.   If agreeable, you can reply by clearly
> summarizing my position (my point of view) as coherently as possible, then state
> why anyone might have a problem with it.  I'll do the same for your position.  This
> should greatly minimize the tendency to talk past one another.
> ----
> Ah, it's that trap again. Fool me once... . :-)

Yes, it's a wonderfully effective technique for exposing the extent of the other party's understanding.

- Jef
