[extropy-chat] Re: (Ethics/Socio) Arrow of Morality

Jef Allbright jef at jefallbright.net
Thu Apr 14 18:36:21 UTC 2005

It seems we have arrived together at a more encompassing understanding 
and are poised to take a few steps toward possible practical 
applications of these ideas.

>I ask for clarification on what you mean by "what works". For empirical
>propositions, I will concede that "what works" means that an empirical
>prediction predicts the outcome it claims to predict, and is more elegant (makes
>fewer assumptions) than the available alternate explanations of equal predictive
>power.
Yes, I mean "what works" in the sense you described, but also in a more 
encompassing sense:  that "what works" means a structure that will tend 
to survive and grow, regardless of whether it is fully comprehended by 
any observer system.

>A:  I hope we can agree that the Naturalistic Fallacy of attempting to derive
>"ought from is" is in error because value judgments are necessarily subjective.
>W: Agreed. 
>I speculate that, in theory, if someone could make the argument somehow that a
>judgment (statement of value) does not depend on the judgment (conclusion) of a
>judge (observer), but is and must be the same for all possible judges
>(observers), such an argument could support the idea of an objective judgment.
This statement is not well-formed; it contains a self-referential 
contradiction.  Look at the way the various forms of "judge" operate 
here.  I agree, however, with the intent of the statement: IF all 
observers agree in all cases, then an issue may be considered, for all 
practical purposes, objective. 

However, my point in the Arrow of Morality is that there is practical 
wisdom in recognizing that we can increasingly approach, but never 
achieve, complete and final objectivity.  Recognizing this, we are 
better equipped to devise good practices (policies and procedures that 
work over increasing context, implying inherent growth).  Alternatively, 
the assumption of absolute truth(s) leads to an eventual breakdown, like 
a short circuit, in the growth process.

As the Red Queen said in _Through the Looking-Glass_, "it takes all the 
running you can do, to keep in the same place."  As Van Valen (1973) 
pointed out, a system must continuously develop merely to /maintain/ its 
fitness relative to the systems it co-evolves with.

>However, since judgment is based in the understanding (which differs from man to
>man) and not on the reason (which is the same for all men), I think this
>involves a paradox, so I doubt such an argument could ever be successful. 

All paradox results from insufficient context.  In the bigger picture, 
all the pieces fit.

>W: Oddly enough, I was just today reading GK Chesterton's ORTHODOXY, where he
>makes the argument that the fundamental difference between Eastern and Western
>philosophy, between Buddhism and Christianity, is the Eastern identification of
>self with the unity of the universe, versus the Western identification of the
>self separate from (in Christian terms, fallen from) unity with the creator of
>the universe. There are things greater than oneself, for which the hero, the
>saint and the philosopher lays down his life. One could adopt an Eastern
>terminology and say that a lesser "self" was being sacrificed to serve a greater
>"self"; or one could adopt a Western terminology and say that the "self" was
>being sacrificed to the other, an ideal to whom one owes service. The former
>describes sacrifice as enlightened self-interest, and praises enlightenment; the
>latter describes sacrifice as selflessness, and praises love. 

Yes, very apropos. The dichotomy inherent in the popular western view 
causes problems because it doesn't scale well to a range of nested 
contexts -- at some point it impairs growth.

>My question here is twofold: first, do these two descriptions map onto each
>other? Second, if not, does one describe the nature of self-sacrifice better
>than the other?

Self-sacrifice, interpreted narrowly, is immoral.  It's anti-growth, and 
it's logically inconsistent with choice being the result of an agent 
acting according to its own perceived interests.

Self-sacrifice, interpreted within a larger context, means acting 
according to one's identification with a greater self.  It's a simpler, 
scalable concept, but counter-intuitive to thinkers raised in the 
western tradition.

Which is better?  I think the more elegant, scalable model has greater 
long-term prospects for success.

>I got it now. I think. It is alien to my approach to things, which may explain
>my incomprehension. 
>It sounds like a principle that has some of the elegance of Utilitarianism
>without the unpleasing tyranny-of-the-majority implications of Utilitarianism. 
Yes, as I understand it, Kant updated Utilitarianism, and I think this 
is an update to Kant as mentioned a while back.

>It sounds also like an algebraic approach to morality. If X is greater than and
>encompasses Y, then we can know that X is better than Y, even without knowing
>the specific values of X and Y. 
Yes, it assures us that we can in fact discover and develop principles 
leading to increasingly moral behavior (what we will increasingly agree 
is "good" because it works) but it does not provide moral absolutes.  
The arrow provides a sense of direction, outward, with increasing 
awareness and thus more effective decision-making, rather than inward 
with increasing blind assuredness, and rather than no direction at all 
("all directions equal").
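The "algebraic" reading above amounts to a partial order: when one scope 
strictly encompasses another, a comparison is possible without knowing 
any absolute values; when neither encompasses the other, no verdict 
follows.  A minimal sketch of that idea (the set-based model and all 
names here are my own hypothetical illustration, not something from this 
thread):

```python
# Hypothetical sketch: "X encompasses Y" treated as a strict partial order,
# with each scope modeled as the set of interests/contexts it accounts for.

def encompasses(x: frozenset, y: frozenset) -> bool:
    """X encompasses Y if X accounts for everything Y does, and more."""
    return x > y  # strict superset

def compare(x: frozenset, y: frozenset) -> str:
    """Compare two scopes without assigning either an absolute value."""
    if encompasses(x, y):
        return "X is better (more encompassing)"
    if encompasses(y, x):
        return "Y is better (more encompassing)"
    return "incomparable -- no verdict without more context"

# Illustrative scopes (labels are arbitrary placeholders):
self_interest = frozenset({"self"})
family = frozenset({"self", "kin"})
tribe = frozenset({"self", "kin", "neighbors"})
rival = frozenset({"self", "rivals"})

print(compare(family, self_interest))  # family encompasses bare self-interest
print(compare(tribe, family))          # tribe encompasses family
print(compare(family, rival))          # overlapping but neither encompasses
```

Note that the third comparison returns no verdict: the arrow gives a 
direction only where one scope genuinely contains the other, which is 
exactly what distinguishes it from a totalizing calculus of values.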

>My only suspicion toward this way of talking about morality is the same caution
>you expressed towards moral absolutes: concepts like "growth" and "the greater
>self" can be misused. Any concept can be misused, I admit, but some are more
>prone to misuse in one direction than others. The danger of  misuse centers
>around misreading the needs of the growth of the greater self to be mere
>selfishness; a moral maxim that emphasized love for others as its foundational
>principle may be more resistant to misuse than one based on enlightened
>self-interest of the greater self: but, at the moment, I only voice a suspicion,
>and I am not submitting an argument that this is necessarily the case. 
Primitive examples of moral behavior are observed in the animal world, 
with reciprocity, reward and punishment evident to some extent. We 
humans instinctively feel disgust, repulsion, anger, etc., providing a 
moral compass indicating "right" and "wrong" below the level of 
conscious thought.  We still have these built-in indicators because they 
worked well for our ancestors, but in our more complex world they 
sometimes lead us astray. 

A key example is our instinctive fear of outsiders, abstractable to fear 
of the unknown.  In the past, Outsiders were most often a threat, 
competing for limited resources and reproductive opportunities. In our 
present society, an Outsider is likely to be a potential trading partner 
or source of valuable new information.  In the past, there was great 
survival value in avoiding something new -- a strange plant that may be 
poisonous, or a path through unknown and potentially hostile territory.  
In the present, instinctive (and cultural) avoidance of what is new 
leads to missed opportunities in trade, medical care, development of 
more efficient food sources, and so on.

More recently, with expanding awareness, moral rules (principles of what 
works) were codified:  The Golden Rule and its many variations; The Ten 
Commandments known to Christianity, Judaism and Islam; The Four Noble 
Truths and the Eightfold Path of Buddhism; and others.  Many of these 
served to temper the rather harsh instinctive morality of the past, but 
were stated in strict absolute terms, as suited the consciousness of the 
time.

We now find ourselves at the cusp of a qualitatively new level of 
awareness of our selves and our environment.  It's an awareness 
manifested at the higher context level of the group, rather than the 
individual, because our environment has become too complex for any 
individual to comprehend well enough to make effective large-scale 
decisions.  The time is right for a science of right and wrong (what 
works) incorporating principles of effective interaction of complex 
systems.

I'll take one small step further, and leave more for a future discussion.

Self and Other

We've discussed the importance of Self as the (necessarily subjective) 
agent of all moral choice.  Any action that can be considered in moral 
terms must be an interaction between Self and Other (that which is not 
identified as Self).  I like Stuart Kauffman's term "the adjacent 
possible" to describe Other in a more functional way, though as far as I 
know he does not apply it in any moral context.  Key point:  All 
interaction is between Self and Other.  Interaction between Self and 
Other is most effective when Other is not diminished with respect to Self.

I could go a bit further, suggesting principles of effective 
(synergetic, positive-sum) interaction with the goal of optimizing 
growth of Self, but I suspect I've already provided enough fodder for 
controversy and further discussion.

- Jef
