[ExI] [Ethics] Consequential, deontological, virtue-based, preference-based..., ...

Jef Allbright jef at jefallbright.net
Thu May 22 15:58:25 UTC 2008


On Wed, May 21, 2008 at 8:44 PM, Max More <max at maxmore.com> wrote:

>>Each of these metaethical theories, when extended, arrives at
>>inconsistency.  Each assumes a rational ideal, which, unrealistically,
>>entails a rational homunculus at the core.
>
> I would agree with your comment here, as applied to deontological and
> consequentialist frameworks. I'm not sure that it's true of a virtue
> ethics. Do you think it does, or were you thinking of the other two?
> (Virtue ethics does have other serious limitations.)

Max, thanks for your interest.  I'm happy to explore this with you,
but the limitations of this medium may let us down.

Put (too) briefly, I see virtue ethics as the closest of these to my
concept, since it comes closest to action on the basis of best-known
principles (principles >> preferences >> expected consequences),
tending toward greater effectiveness to the extent that future states
are underspecified.

But yes, I would group these together, as each depends on a core
"rational evaluator" who applies the [virtues|preferences|expected
consequences] to the decision-making process leading to the actions of
the agent.

I'm not saying that these predominant branches of ethical reasoning
are wrong, but that they are incomplete and ultimately inconsistent,
each revolving around an illusory singularity of self (thus my earlier
reference to free will in disguise, and to "ought from is" from
without versus from within.)

... [This space represents a heap of potential discussion] ...

Leading to my oft-repeated formulation:

The perceived morality of an action increases with the extent to which
the action is assessed as promoting, in principle, an increasingly
coherent model of an increasing context of evolving values over
increasing scope of consequences.  Wash, rinse, repeat.
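
If it helps to make the recursion concrete, here is a toy sketch in
Python.  Every specific below (the names, the numbers, the
multiplicative combination) is an arbitrary choice of mine; the
formulation itself commits only to morality increasing with each
factor, iterated:

    # Toy illustration of the formulation above.  The product is just
    # one function monotonically increasing in all three factors; any
    # such function would fit the description equally well.
    from dataclasses import dataclass

    @dataclass
    class Assessment:
        coherence: float  # coherence of the model of values (0..1)
        context: float    # breadth of the context of values (0..1)
        scope: float      # scope of consequences considered (0..1)

    def perceived_morality(a: Assessment) -> float:
        return a.coherence * a.context * a.scope

    # "Wash, rinse, repeat": each round of reflection may widen the
    # context, extend the scope, and improve coherence, so the same
    # action can be assessed as increasingly moral over iterations.
    a = Assessment(coherence=0.5, context=0.3, scope=0.4)
    for step in range(3):
        print(step, round(perceived_morality(a), 3))
        a = Assessment(min(1.0, a.coherence + 0.1),
                       min(1.0, a.context + 0.1),
                       min(1.0, a.scope + 0.1))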

From this description we can derive our prescription: not for what IS
moral but, given any starting point, how to proceed in the direction
(in the sense that "outward" is a direction) of increasing morality.
There are two elements:

(1) subjective context of values
All else equal, acting on behalf of an increasing context of values
will be assessed as acting increasingly morally.  Boundary conditions
are, on the one hand, the case of an isolated agent (minimum context
of values), where there is no difference between "good" and "right"
[cf. my discussion of Raskolnikov with John C. Wright a few years ago
on this list], and, on the other hand, the case of a "god's-eye" view,
from which context there is no "right" but simply what is.  Within the
range of human affairs, all else equal, we would agree that the
actions of a person to promote values coherent over a context
increasing (roughly) from individual to family or small group to
nation to world would be seen (from within that context) as
increasingly moral.  Likewise, the actions of a person based on a
coherent model of an increasing context of personal values would be
seen as increasingly moral.

At this point in the discussion, what is commonly non-intuitive and
central to your question to me is that there is no essential
difference between the example of the agent seen to be acting on
behalf of an increasing context of values including other persons, and
the example of an agent acting on behalf of an increasing context of
values not involving other persons.  These cases are identical in that
they each represent an agency acting to promote a context of values.
Further to the point of your question to me, this agency, acting on
behalf of values over whatever context, will act exactly in accordance
with its nature as defined by its values (of course within the
constraints of its environment.)  Any consideration of morality (in
the deep extensible sense which is my aim) is in the loop, not in the
agent. Otherwise, we are left with the infinite regress of the agent
deciding that it is "good" to act in accordance with principles of
virtue that are "right."


(2) objective scope of interaction with the world
I don't think I need say much here.  It's the simple point that, all
else equal, any action assessed as moral in terms of the values it
promotes will, with increasing effectiveness, be seen as increasingly
moral.  Essentially, it's about our increasingly effective
(instrumental) scientific knowledge.


Notes:
1.  While (1) and (2) above are orthogonal elements, any agent has
effectively a single model of its world, and at any moment it
expresses its nature by acting on its local environment "simply" to
minimize the difference between the environment and its (the agent's)
values.  In the process, its model is inevitably updated.  (A toy
sketch of this loop follows these notes.)

2.  This formulation applies only to the extent that agents have
values in common.  You can picture this as each agent being like a
leaf of a tree growing in the direction of increasing possibility,
increasingly connected via branches of increasing probability leading
back to root "reality."  Therefore, all agents have increasingly
fundamental values increasingly in common.

3.  Time permitting, I would provide examples of how our instinctive
sense of morality is a special case of this formulation.  I've done
this partially in past posts.

4.  Time permitting, I would provide examples of how our cultural
moral codes are a special case of this formulation.  I've done this
partially in past posts.

5.  In case it wasn't clear above, this descriptive theory can (and
should!) be interpreted as prescribing intentional development of
systems facilitating elements (1) and (2) above.
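
As promised in note 1, a toy sketch of that act-and-update loop.  All
the specifics here (scalar state, the 0.5 gains, five steps) are
arbitrary choices of mine; the note commits only to the agent acting
to shrink the gap between its environment and its values, with its
model updated in the process:

    # Toy illustration of note 1.  Environment, values and model are
    # collapsed to single numbers purely for illustration.
    environment = 0.0  # state of the agent's local environment
    values = 1.0       # the state the agent's values favor
    model = 0.5        # the agent's (imperfect) model of the environment

    for step in range(5):
        # Act on the environment "simply" to shrink the value gap,
        # guided by the model rather than by the environment directly.
        environment += 0.5 * (values - model)
        # In the process, the model is inevitably updated.
        model += 0.5 * (environment - model)
        print(step, round(environment, 3), round(model, 3))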

This is rough [I need to get busy on other things], but I do think it
would be good to pause here for any comments...


- Jef


