[extropy-chat] Altruism (was What Human Minds Will Eventually Do)

Lee Corbin lcorbin at tsoft.com
Sat Jul 1 18:55:07 UTC 2006


Jef Allbright writes

> Sent: Friday, June 30, 2006 9:20 AM
> On 6/29/06, Lee Corbin <lcorbin at tsoft.com> wrote:
> 
> > But it's so *hard* to extrapolate from non-human viewpoints :-)
> > At least for me (I'm less sure about you!)
> 
> That's an interesting statement.  On the surface it may seem obvious
> and common-sensical but it seems to carry a hidden assumption.
> Consider this alternate and ask whether anything substantial is
> missing:  "But it's so hard to extrapolate non-human behavior."

All right.  I'm sorry.  I meant super-human, >H, okay?  I thought
it was clear from context.

> for the sake of
> clarity in this sort of discussion we might do well to abandon the
> term "altruistic" as it is deeply tied to the irrational behavior of
> an agent putting the good of others over its own (within a given
> context.)

Omigod.  Jef, *you* can't be serious.  Do you believe (after persuading
me to read "The Moral Animal") that altruism is "irrational"???

(Okay, please forgive me for mounting the soapbox here---and
overreacting to one word---but I do want to lay down certain
claims here, for the sake of clarity.)

Rationality has nothing to do with it. It all depends on one's values.
It is *not* one of my values to have the maximum number of children
I can, nor is it one of my values to attain as much publicity as I
can.  I know that you don't consider *this* to be irrational... so
why is it necessarily irrational for me to sacrifice myself for
my friends, or for the Socialist Workers movement, or whatever?
Surely you just used the wrong word.

> Altruism certainly does exist, in the form of evolved programming that
> causes individuals to act to their local detriment for the good of
> their larger group (or some proxy), but in our discussions on the
> Extropy list we are more often interested in "enlightened self-interest",
> dynamics of cooperation/synergy over increasing scope, or superrationality.

Yes, we are indeed more often interested in the effects on "number one",
but not always. When you speak of "local detriment", I do understand
what you mean. That is so.  But one may simply embrace a value system
that subordinates one's own good to other things. We sometimes
discuss those things too.

> > (It bears repeating that humans engage in violence far, far
> > less per observed hour than does any other primate.)
> 
> And it may bear repeating that this trend is not based on increasing
> niceness or goodness, but rather on increasing awareness of
> positive-sum behaviors that work over increasing scope.  We're moving
> away from focusing on ends (that person/tribe is our enemy) and toward
> effective principles of growth (that person/tribe may eventually
> become a McDonald's franchise.)

Quite right. As Keith said in another thread, ev psych is the best
theory going.  You are right to emphasize that most of our behavior
that appears altruistic really isn't: there is usually an element of
enlightened self-interest. But that is *not* always the case, as
you know.

Lee
