[extropy-chat] consequentialism/deontologism discussion

Jef Allbright jef at jefallbright.net
Sat Apr 28 01:05:13 UTC 2007


On 4/27/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
> > > By "increasing context of shared values" do you mean something like a
> lowest
> > > common denominator, or an averaging out of values?
> >
> > No.  I use the phrase "fine-grained values" to mean just the opposite.
> > Our shared values can be approximated as an extremely complex
> > hierarchy with "reality" (the ultimate view of what works) at the root
> > and increasingly subjective branches supporting ever more subjective
> > sub-branches until we reach each individual's values.  The key here is
> > that even though each of us has effective access only to our own
> > subjective values at the tips of the outermost branches, we  have an
> > increasingly shared interest in the increasingly probable branches
> > (supporting us) leading back to the root.  With increasing awareness
> > of this tree structure, we would increasingly agree on which branches
> > best support, not our present values, but growth in the direction
> > indicated by our shared values that work.
> >
> >
> > > What if there is just an irreducible conflict in values, such as between
> > > those who think women should "dress modestly" and those who think women
> > > should dress however they please (this issue is often assumed to be
> > > based on religious or anti-egalitarian considerations, but consider
> > > the prudishness of the Russian and Chinese communists)?
> >
> > See above, and let me know if that does not address your question.
>
> In the example I give, both parties would agree that their dress code for
> women was part of some more general principle. The problem is, they might
> see different branches, a different trunk, different roots, or claim the
> same roots for their trunk's exclusive use. There might be perfectly stable,
> progressive societies ("what works"?) possible based on either ideology.

What an interesting choice of words:  "perfectly stable, progressive societies."

Within an evolutionary framework such as the one being discussed, it
sounds like an oxymoron.  It's jarring to see "stable" treated as if
it were somehow a good thing in the Red Queen's Race.  Even in the
sense of an Evolutionarily Stable Strategy or a Nash equilibrium in
game theory, such stability can be good only if the environment is
also stable.
Even in the iterated Prisoner's Dilemma, the now-venerable Tit-for-Tat
strategy is known to be less than optimal once players occasionally
defect by mistake, which is to be expected in the real world.
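
To make that concrete, here is a toy Python simulation (my own
illustrative sketch, with invented strategy names and an arbitrary 5%
error rate, not Axelrod's tournament code).  Two Tit-for-Tat players
locked together under a little noise drift into long retaliation
cycles, while a more forgiving variant recovers:

import random

# Prisoner's Dilemma payoffs: (my move, their move) -> my score.
# 'C' = cooperate, 'D' = defect.  Standard Axelrod-style values.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's last move."""
    return their_history[-1] if their_history else 'C'

def generous_tft(my_history, their_history, forgiveness=0.1):
    """Like Tit-for-Tat, but forgive a defection some of the time."""
    if not their_history or their_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgiveness else 'D'

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    """Iterated game in which each intended move is flipped with
    probability `noise`, standing in for real-world mistakes."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print("TFT  vs TFT  (noisy):", play(tit_for_tat, tit_for_tat))
    print("GTFT vs GTFT (noisy):", play(generous_tft, generous_tft))

Run it a few times and the forgiving pair typically ends up with the
higher joint score; the point is only that a strategy "known to be
optimal" in a clean model does not survive contact with a messy
environment.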

And within this evolutionary context, what might be meant by
"progressive"?  Of course I realize that "progressive" has been
appropriated by some to mean social progress *toward* some more ideal
condition (implicitly defined contra the existing "repressive"
structures of power), whereas I see social progress as progressively
moving *away* from what doesn't work, scientific progress as
progressively ruling out what doesn't work, and so on, generating
increasingly probable principles of what does work, supporting an
increasing variety of possibilities that might work.

That said, I assume that "perfectly stable, progressive societies" is
intended to mean free from civil unrest and proceeding toward
increasingly free exercise of individual rights, or something similar.
Sounds nice, but I don't know of any functional model that supports
it.  As Churchill said, "democracy is the worst form of government
except all the others that have been tried."  I think some people are
moving in a good direction with "deliberative democracy", because I
think they're slowly moving toward the kind of framework I propose.

Your statement also refers to competing ideologies, whereas I was
talking about competing values, my ideology being that increasing
awareness (of values, methods, etc.) leads to increasingly
moral choices.

As I mentioned earlier, and as evidenced by the "talking past each
other" going on here, the seeds of thought I've been planting don't go
nearly far enough to properly frame what is really a very simple but
alien idea.  Call me Michael Valentine?  No, please don't.


> Each side will in the end be reduced to yelling at the other, "My values are
> better than your values!". This is the case for any argument where the
> premises cannot be agreed upon.

I think the key point here is that you and I agree that values are
subjective, and there is absolutely no basis for proving to an
individual that their values are "wrong".  But we share a great
deal of that tree, diverging only at the relatively outermost
branches.  To the extent that we identify with more of our branch of
that tree, we will find increasing agreement on principles that
promote our shared values (that work) over increasing scope.

If that is still too abstract, consider the Romulans and the Klingons.
They share a common humanoid heritage but have diverged into quite
separate cultures.  The Klingons have taken the way of the warrior to
an extreme, while the Romulans have grown in the direction of stealth
and deception.  Caricatures, sure, but they illustrate the point,
which is that they hold deeper values in common.  They must care for
their children, they value the pursuit of happiness (however they
define happiness), they value the right to defend themselves, they
value cooperation (to the extent that it promotes shared values), ...
and of course I could go on and on.

We could even apply this thinking to robotic machine intelligence
vis-à-vis humans.  The intersecting branches would be a little further
down, closer to the roots, but to the extent that these hypothetical
robots had to interact within our physical world, under somewhat
similar constraints, there would be some basis for empathy and
cooperation, effectively moral agreement.
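
If the tree metaphor itself is still too abstract, here is an equally
toy sketch (my own construction, with made-up labels, not a serious
model of anyone's values): treat each agent as a path from the shared
root out to its own subjective tips, and take the length of the common
prefix as a crude stand-in for the scope over which two agents can
expect to agree on principles.

def shared_scope(path_a, path_b):
    """Count how many branches two value-paths share, root outward."""
    depth = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        depth += 1
    return depth

klingon = ["physical reality", "biology", "care for offspring",
           "humanoid culture", "honor through combat"]
romulan = ["physical reality", "biology", "care for offspring",
           "humanoid culture", "advantage through stealth"]
robot   = ["physical reality", "shared environment",
           "goal preservation"]

print(shared_scope(klingon, romulan))  # 4 -- they diverge only at the tips
print(shared_scope(klingon, robot))    # 1 -- the intersection sits near the root

The robot's path intersects ours much nearer the root, which is
exactly the point: a thinner, but still real, basis for cooperation.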

Your thoughts?

- Jef



