[extropy-chat] Re: (Ethics/Epistemology) Arrow of Morality

Jef Allbright jef at jefallbright.net
Thu May 5 00:37:01 UTC 2005

john-c-wright at sff.net wrote:

>Now, to the matter:
>To the best of my limited understanding, your conception of an arrow of morality
>has three shortcomings: first, it is useless to any who do not accept mere
>survival as the ground of morality

I've repeatedly said that growth (of Self), not mere survival, is 
inferred to be the fundamental moral good. That is why I referred to the 
Red Queen Principle in my last two posts to you on this point.

>; second, it is mute to determine what objects
>should be included or excluded from the moral order, some of which are already
>universal in any case;

As I've said earlier, this metaethical theory is not prescriptive, but 
its value is (1) in assuring us that moral progress can be achieved in 
practical terms (thus contradicting any strong version of moral 
relativism) and (2) in providing a framework for discovering and 
developing principles of effective interaction that will lead to 
increasingly moral behavior.

> third, morality by its nature must be treated as if it
>were an absolute by its partisans, or else it has no ability to act as a moral

I disagree with this assertion. In fact, belief in absolutes, applied 
without regard to context, eventually leads to a dangerous blindness: a 
limited awareness of the full context of a situation, and thus to less 
effective solutions. A person can make good moral choices based on 
increasingly sound and more generally applicable principles, rather than 
depending on a set of static moral laws established during a simpler time.

>1. I asked for a description of what is meant by your idea that morality is
>“what works”, especially where a given system of moral rules or reciprocities
>works better for a larger group than for a smaller included group. Identifying
>“what works” in an empirical science is easy: when the results as witnessed by
>our eyes are as predicted, the theory giving rise to the prediction is said to
>have “worked.” If the results are other than predicted, no matter what good the
>theory may be on other grounds, it does not work as an empirical predictive theory.

Yes, I generally agree with this.

>I then asked how to apply this to a normative science, such as ethics, where we
>are not dealing with theories predicting what will happen, but, rather, with
>maxims of what men ought or ought not to do.

Maxims are becoming obsolete. They worked quite well for some time 
because they codified "what worked" generally for our ancestors in their 
qualitatively simpler environment of limited interactions between 
individuals and between tribes. Static laws that worked well then do not 
work as well in an increasingly complex world.

>You say: "what works" means a structure that will tend to survive and grow,
>regardless of whether it is fully comprehended by any observer system.
>I submit, however, that there is no ground to say one thing “works” and another
>“does not work”, without a normative axiom beforehand to define what works. 
>Mere survival is insufficient for this end: if the human race, for example, were
>promised mankind would enjoy a population level on average of two hundred
>millions, guaranteed to survive at least two hundred thousand years, if only we
>were absorbed into a Borg cube, and lost our souls; or if we were, given the
>alternative, offered a population only of one hundred millions and a span of one
>hundred thousand years if we are members of the United Federation of Planets, I
>would select the Federation over the Borg for myself and my children. 

I don't know how to make it any clearer that I am not arguing that mere 
survival is a moral good. I have tried to say clearly that a moral agent 
must make choices that are in its own (extended) interests. I don't 
understand why you would erect such a straw man at this point in this 
reiterative discussion.

>My point is that only if I accepted the normative axiom that mere survival at
>any price were the supreme governing moral principle, would I be obligated to
>accept the offer of the Borg to be assimilated.  They offer twice as many
>survivors to last twice as long. But this is not an axiom I accept: there are
>times when it is better for the nation to perish than that one innocent man
>should die. Thank goodness, those times are rare, but the mere existence of
>normative values no lower than mere survival makes me chary of accepting your
>formulation without some additional argument to support it.

I am afraid that I have failed repeatedly to make clear that I'm not 
talking about mere survival, but Growth of Self. Self must choose that 
which will further the growth of its interests, not mere survival. I 
can't see how being assimilated by the Borg would be in anyone's 
rational interests, and I'm surprised that this isn't clear from much of 
what I've said over the preceding weeks of this discussion.

>2. The second basic problem with the “arrow of morality” formulation, is that it
>cannot be used to tell in what direction the arrow of morality should grow, and,
>hence, cannot tell a man how he should act. This is a specific application of a
>general philosophical error when dealing with evolving or changing standards: a
>standard, by definition, if it changes, cannot be used as a standard. 

I use the "Arrow" analogy to convey that there is a universal "ratchet 
effect": what works tends to survive and grow. In other words, there is 
progress, rather than an isotropic all-things-relative flatness or static 
absolutes. The analogy is to the thermodynamic arrow of time: there are 
principles describing the direction of the process, but they will not 
tell you the time at any given moment.

>A moral code is a specific formulation of the universal morality; the main point
>of difference from culture to culture or age to age is the scale of the moral
>code: whether the moral order protects and commands one’s neighbors only, one’s
>tribe or nation, or all mankind. This seems to be the arrow of morality of
>which you speak, the motion from a parochial to a cosmopolitan moral code. 

No, I am not referring to increasing geopolitical scale, but to an 
increasing context of awareness applied to moral reasoning. This 
increasing awareness is a result of growth in the number of interacting 
agents, in the types of interactions, and in the sheer number of 
interactions. All of these tend to increase with time, hence the analogy 
of an arrow.

>The Stoic, the Christian, the Buddhist and the Mohammedan each embrace a code,
>which is universal and cosmopolitan. Pagan codes of honor (with all due respect
>to my pagan ancestors) are parochial; pagan gods were meant to be the tribal
>gods of a given tribe, and their rules were never claimed to protect or to bind
>strangers from the antipodes. But the monotheistic religions made the assertion
>of universality. These systems claim to apply to every living soul. The
>parochialism of the previous tribal gods was rejected by the Roman and absorbed
>by the Hindu: antislavery societies, believe it or not, existed among the
>Imperial Romans and during the Christian Dark Ages. Likewise, the followers of
>the Prophet were forbidden to enslave any of their fellows who submitted to the
>Will of Allah the Compassionate, the Merciful. The size of the group to be
>considered covered by the moral order increased from the local tribe to all
>mankind with these cosmopolitan religions.  

Again, I am not talking about the increasing size of the geopolitical 
group, although there can be a rough indirect correspondence. With 
regard to moral awareness and the resulting behavior, the effective 
interaction of moral agents matters far more than physical size.

>Communism and Nazism, of course, reverse this. These aberrations spring from a
>higher culture and reject it, restricting the moral order no longer to mankind,
>or even to Christendom, but only to members of a favored race (in the case of
>the Nazi, the so-called Aryan)  or to members of a favored economic class (in
>the case of the Communist, the so-called Proletarian). Their savagery exceeded
>that of the ancient pagans, perhaps in part because the knowledge that they were
>betraying a conception of something finer and higher than their own tormented them.

In both these cases, I think we can agree that significant power was 
concentrated among relatively few moral agents and thus the Self making 
these abhorrent moral choices was effectively collapsed, similar to poor 
Raskolnikov as we discussed earlier.

>Oddly enough, in the modern era, two factions among us are attempting to
>increase the scope of morality in two opposite directions. Some would insist
>that the moral order protect animals; others would insist that the moral order
>protect unborn babies. The first would outlaw carnivores as cannibals, the
>second would outlaw abortionists as infanticides. 
>Now, is there any way to predict or prefer which way the arrow of morality will
>go in the future? If we grant human rights to beasts, they might increase in
>survival and growth (unless animal population numbers drop once they are no
>longer domesticated for food); if we grant human rights to fetuses, they
>personally will increase in survival, and families who otherwise would go
>childless will grow. So which way is the arrow of morality supposed to grow?
>Your formulation of “what works” seems as ambiguous as a Delphic oracle. 

Are you saying that you expect there to be simple answers to complex 
moral issues? The very fact that such questions are being asked these 
days is an indication that our moral and ethical sensitivity is increasing.

As moral issues become more complex, we will increasingly apply 
principles of effective interaction, rather than strict laws, to these 
issues, and increasingly effective solutions will tend to be worked out. 
I have ventured already to suggest some tentative principles that can be 
inferred from the fundamental theory, and more work is yet to be done. 
That is my purpose in promoting this thinking.

>3. No matter what the viewpoint from the objective observer as to the actuarial
>benefit of adopting or rejecting specific innovations to a given moral code,
>from the viewpoint within a moral code, the moral code itself will contain
>reasons to explain and support itself.

I haven't been able to make sense of the preceding paragraph.

>The observer outside the moral code talking to the partisan within the moral
>code may say anything he likes about the “growth and survival” benefits of a
>particular innovation; but, unless he speaks to the specific reasons why the
>partisan adheres to a particular moral code, the information is of no value to
>the partisan. 
>An example might make this clear. Suppose we have two men, both of whom agree on
>the basics of a moral system. Let us say one is a Franciscan Monk, the other is
>a member of the Military Order of the Knights of the Temple of Jerusalem. Both
>are Christians, but one has taken a vow of pacifism, the other, a vow to recover
>Jerusalem from the Paynims by forces of arms. 
>An objective observer shows them Game Theory. He explains to our Monk and our
>Knight a simple game, called the Prisoner’s Dilemma, where if two players each
>cooperate with the other, they break even; if one cooperates and one betrays,
>the betrayer wins; if they both betray, both lose. Our objective observer
>convinces them that one and only one strategy is favorable over the long term:
>the strategy of simple reciprocity. Namely, a player who is willing to cooperate
>with other players until betrayed, to betray once each time he is betrayed, but
>to forgive and cooperate again next trial.  Our observer might urge, for
>example, that the pacifist retaliate upon certain occasions in order to deter
>further attacks; or he might urge the Crusader to fight only defensively, and to
>cooperate with the Turk whenever possible.  
>No matter what these calculations are, Christianity is fundamentally
>otherworldly, so that their survival rate on Earth could not have been the prime
>concern to our Monk and our Knight when they took their vows. 
>Even if the Monk and the Knight are carefully convinced by our observer, his
>arguments can have no effect on them, because they do not share the normative
>axiom that survival and growth are paramount concerns. 

Neither could one hope to convince Raskolnikov, or other small Selves 
acting in relative isolation; but as the context of interactions 
expands, actions that work will tend to supersede those of more limited 
effectiveness.
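
As an aside, the "simple reciprocity" strategy Wright describes is the tit-for-tat strategy from Game Theory, and it can be sketched in a few lines of Python. Note that the payoff values below (3 for mutual cooperation, 5 for a successful betrayal, 0 for being betrayed, 1 for mutual betrayal) are the conventional textbook choices, not figures from Wright's description, which loosely says cooperators "break even":

```python
# Minimal sketch of the "simple reciprocity" (tit-for-tat) strategy.
# Payoff values are the conventional textbook ones, not from the text above.

PAYOFFS = {  # (my_move, their_move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff for being betrayed
    ("D", "C"): 5,  # temptation payoff for betraying
    ("D", "D"): 1,  # punishment for mutual betrayal
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Betray on every move, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A reacts to B's history
        move_b = strategy_b(hist_a)  # B reacts to A's history
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # betrayed once, then matching: (9, 14)
```

The sketch shows the property Wright's observer appeals to: tit-for-tat never wins a single pairing, but it cooperates when cooperation is offered and punishes betrayal only once per betrayal, which is what makes it robust over many interactions.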

>In other words, a basic problem with the “arrow of morality” approach is that it
>is in fact not objective, merely one philosophy like any other, apparently a
>form of utilitarianism. It will not convince anyone who does not already share
>the axioms of utilitarianism (albeit, it might be useful as a predictor of which
>philosophy will be the most popular or longest-lasting.) 

If it is in fact a predictor of which moral philosophy will be most 
popular or longest-lasting, then as a metaethical theory it is successful.

>As a related thought, let me submit that humanity has no need of changing
>standards in morality, nor any ability to change those standards even if it
>wanted to. Morality, in human experience, is one unified structure, and always
>has been: a set of maxims or imperatives commanding human action. 

And I maintain that as our world and its issues become more complex, we 
will find that effective interaction within the larger culture will 
become increasingly important to our individual interests, and that more 
complex issues will require more complex reasoning from principles 
rather than from any fixed set of laws established in a simpler time.

>I propose that the emphasis on certain maxims or imperatives, their rank or
>priority, might differ from one man to the next or from one school of thought to
>the next, but the general moral order of the universe is known to all men through
>their natural reason. What is amazing about the various ages and races of man,
>is not that we see, here and there, customs of particular cruelty or degeneracy,
>such as temple prostitution or human sacrifice, but that we see nearly universal
>agreement on the basics: the Eightfold Path of the Buddha and the Ten
>Commandments of Moses cover the same points, as do the utterings of Lapland
>witches and the staves of Norse prophecy. Even vicious beasts like the
>Communists can only justify their shocking inhumanity, their brutal
>mass-murders, mass-lies, mass-robberies, and so on, by reference to a moral
>maxim (charity to the poor) which has at least the same pedigree as the moral
>maxims condemning murder, lying, and robbery. 

Yes, because we share an extensive evolutionary heritage and a common 
environment to which we had [note past tense] adapted over thousands 
(culturally) and millions (biologically) of years. We have made it thus 
far because we have inherited what worked.

>Hence, the universality of moral maxims suggests that moral systems cannot
>differ in their fundamentals. They differ in the arguments used to support the
>maxims, and they differ in the different weight given the moral maxims compared
>one with another. (The Chinaman and the Jew, for example, will both acknowledge
>the moral maxim of respecting one’s parents: but Chinese tradition has a much
>more elaborate and demanding system of ranks within each family than the Jewish.) 
>We can look at morality only one of two ways: from the inside, or from the
>outside. From the inside, we, as moral beings, can weigh in our consciences the
>wisdom of changing the emphasis or scope of a moral rule to which we all defer
>as authoritative. The arrow of morality can grow only in the direction already
>implied, but not yet come to flower, in the maxims we already accept. From the
>outside, we, as purely rational beings, can look at some aspect of morality or a
>particular moral code in non-moralistic terms, such as, for example, looking at
>the incentives which cause certain formulations of local moral codes to flourish
>or gain partisans, while others diminish. The difficulty with the arrow of
>morality formulation (as I understand it) is that it cannot bridge this gap
>between inside and outside. 
>The man inside it does not need it: Christendom already preaches and practices
>toleration of dissent. The man outside cannot use it: knowing a certain code
>will reach more people or “works better” than another code provides no
>particular motive to amend a moral code. It might or it might not, depending on
>what he thinks the moral stature of “working better” is.  
Mr. Wright, it has been a pleasure carrying on this discussion with you, 
and if I did not already mention it, I enjoyed very much reading your 
three books of the Golden Age. I am afraid that much of this discussion 
has become repetition of our previous points, and I think that to some 
extent we are talking past one another. I would be pleased to continue a 
dialogue via personal email, but I hesitate to post another iteration to 
this public forum.

I do look forward to your further comments.

- Jef
