[extropy-chat] consequentialism/deontologism discussion

Jef Allbright jef at jefallbright.net
Thu Apr 26 17:37:00 UTC 2007


On 4/26/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
> On 4/26/07, Jef Allbright <jef at jefallbright.net> wrote:
>
> > That said, I believe we are in fact on the cusp of extending our moral
> > decision-making beyond the blind evolutionary preferences of our
> > biology and culture, and on the verge of applying an intentional
> > process of collaborative decision-making, promoting an increasing
> > context of shared values into the future we create.  This framework,
> > representing (1) awareness of our fine-grained values and (2)
> > awareness of methods of effective interaction, will effectively
> > amplify "wisdom" based on evolving human values beyond the moral
> > capacity of any human individual of today.
>
> Are the fine-grained values the same as those determined by biology and
> culture?

Yes.  I would rather say that they are *expressed* through our biology
and culture, and determined or encoded via an evolutionary process,
but I think we're close enough on this point.  A more subtle but
important point is that our values necessarily evolve.  The purpose of
this framework for increasing awareness is to facilitate our taking an
increasingly intentional role in guiding the direction of our evolving
values.


> By "increasing context of shared values" do you mean something like a lowest
> common denominator, or an averaging out of values?

No.  I use the phrase "fine-grained values" to mean just the opposite.
Our shared values can be approximated as an extremely complex
hierarchy with "reality" (the ultimate view of what works) at the root
and increasingly subjective branches supporting ever more subjective
sub-branches until we reach each individual's values.  The key here is
that even though each of us has effective access only to our own
subjective values at the tips of the outermost branches, we have an
increasingly shared interest in the increasingly probable branches
(supporting us) leading back to the root.  With increasing awareness
of this tree structure, we would increasingly agree on which branches
best support, not our present values, but growth in the direction
indicated by our shared values that work.


> What if there is just an irreducible conflict in values, such as between
> those who think women should "dress modestly" and those who think women
> should dress however they please (this issue is often assumed to be based on
> religious or anti-egalitarian considerations, but consider the prudishness
> of the Russian and Chinese communists)?

See above, and let me know if that does not address your question.
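
To make that a bit more concrete, here is a rough sketch of the tree
analogy in Python.  Everything in it is my own illustration rather
than anything formal: the class and function names, the toy hierarchy,
and the particular values chosen are all assumptions.  It only shows
how two apparently conflicting leaf values (the dress question above)
can still share branches nearer the root.

    # Illustrative sketch only; the names and toy hierarchy are assumptions.
    class ValueNode:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent

        def path_to_root(self):
            """The chain of increasingly general values, leaf to root."""
            node, path = self, []
            while node is not None:
                path.append(node.name)
                node = node.parent
            return path

    def shared_branch(a, b):
        """Values held in common: branches nearer the root, where
        agreement becomes increasingly probable."""
        b_path = set(b.path_to_root())
        return [v for v in a.path_to_root() if v in b_path]

    # Toy hierarchy: "reality" at the root, individual preferences at the tips.
    reality   = ValueNode("what works (reality)")
    wellbeing = ValueNode("well-being of persons", reality)
    autonomy  = ValueNode("personal autonomy", wellbeing)
    norms     = ValueNode("community norms of dress", wellbeing)
    alice     = ValueNode("dress however one pleases", autonomy)
    bob       = ValueNode("dress modestly", norms)

    print(shared_branch(alice, bob))
    # -> ['well-being of persons', 'what works (reality)']

The point is not the code but the shape: disagreement lives at the
tips, while the branches the tips depend on are increasingly shared.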


> (These are basic questions, I realise, so feel free to refer me to the list
> archive if you have already answered them).


While I raise this thinking often, I try to do it in five paragraphs
or fewer, planting seeds rather than attempting to transplant a forest.
I started to create an outline for a book, but found that addressing
the wide range of cultural and philosophical starting points would
require much more than I personally could hope to accomplish.

So feel free to comment and ask questions, and I'll be happy to
co-refine this thinking on- or off-list.


> > In contrast to your assumption of hedonistic pleasure as the ultimate
> > "good", I see Growth, in terms of our shared values that work, as a
> > more fundamental good, and would point out that such Growth provides
> > the robust infrastructure for ongoing pleasure.
>
> > With regard to Utilitarian views of morality and ethics, I would point
> > out the unavoidability of unintended and unanticipated consequences
> > and suggest that in the bigger picture we can promote our values
> > more effectively by implementing principles of best known methods
> > rather than by directly seeking to maximize utility as currently
> > conceived.
>
> It would be a very concrete and short-sighted utilitarian who regards
> immediate sensual pleasure as the only criterion for ethical behaviour.
> Pleasure can be deferred, and it can take the form of, e.g., joy in altruistic
> service. You just need to expand the scope of the utility to include the
> bigger picture.

Yes!  And what do we get as we look for methods of maximizing utility
over various (expanding) scopes?  We get principles.
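
A toy sketch of that point, purely my own illustration with standard
prisoner's dilemma payoffs assumed: maximizing utility one move at a
time says "always defect", while evaluating utility over the whole
repeated interaction favors a simple principle of reciprocity.

    # Toy illustration (my own, not from the discussion above): widening the
    # scope over which utility is evaluated turns a per-move calculation
    # into a principle.  Standard prisoner's dilemma payoffs for the row player:
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def total(strategy, opponent, rounds=100):
        """Accumulated utility for `strategy` playing against `opponent`."""
        score, my_hist, opp_hist = 0, [], []
        for _ in range(rounds):
            mine, theirs = strategy(opp_hist), opponent(my_hist)
            score += PAYOFF[(mine, theirs)]
            my_hist.append(mine)
            opp_hist.append(theirs)
        return score

    def defect(history):       # maximizes utility one round at a time
        return "D"

    def tit_for_tat(history):  # a simple principle: reciprocate
        return history[-1] if history else "C"

    # Against a reciprocating partner, the principle wins over the long run:
    print(total(defect, tit_for_tat))       # 5 + 99*1 = 104
    print(total(tit_for_tat, tit_for_tat))  # 100*3    = 300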

- Jef


