[ExI] [Ethics] Consequential, deontological, virtue-based, preference-based..., ...

Jef Allbright jef at jefallbright.net
Thu Jun 12 00:14:27 UTC 2008


On Sat, Jun 7, 2008 at 6:21 AM, Vladimir Nesov <robotact at gmail.com> wrote:
> On Thu, May 29, 2008 at 8:11 PM, Jef Allbright <jef at jefallbright.net> wrote:
>> On Thu, May 29, 2008 at 7:09 AM, Vladimir Nesov <robotact at gmail.com> wrote:
>>>
>>> Does the main reason you need to include subjectivity in your system
>>> come from the requirement to distinguish between different motives for
>>> the same action based on the agent's state of mind? I don't think it
>>> needs to be done at all (as I'll describe below). Or do you simply
>>> mean that the outcome depends on the relation of agent to its
>>> environment (not "good", but "good for a particular agent in
>>> particular environment")?
>>
>> It appears we have something of a disconnect here in our use of the
>> term "subjective", perhaps reflecting a larger difference in our
>> epistemology.  First, I'll mention that the term "subjective" often
>> evokes a knee-jerk reaction from those who highly value rationality,
>> science and objective measures of truth and tend to righteously
>> associate any use of the term "subjective" with vague, soft, mystical,
>> wishy-washy, feel-good, post-modernist, deconstructionist,
>> non-realist, relativist, etc. thinking.  For the record (for the nth
>> time) I deplore such modes, and strive for as much precision and
>> accuracy as practical (but not more so.)  When I refer to something as
>> "subjective" I'm referring to the necessarily subjective model through
>> which any agent perceives its umwelt within (I must assume) a coherent
>> and consistent reality.
>
> But in many contexts, at least in theoretical discourse, it can be
> assumed that the reality is specified down to the quarks, which makes
> a subjective look at the facts that follow from such a description
> and, as you see it, an overzealous objective view too close to each
> other to warrant any distinction. Am I misunderstanding your point
> again? Even if you insist on "subjective probabilities", there are
> things which are too certain for their potential falsity to influence
> decision-making, including epistemology.


Vladimir, I assert that any effective model of morality has an
intrinsically subjective element.

You point out that some contexts are sufficiently well-specified that
they are virtually objective.

Fine, but so what?  Can you provide even a single example of an issue
that is both so well-specified as to be considered mutually objective
AND which is located on the moral spectrum?  Note that "morality"
cannot apply to the decision-making of an isolated agent whose choices
are just as "right" as they are "good" within the sole context of the
agent's own model.  Neither does "morality" apply to the hypothetical
case of an agent with a god's-eye view of his universe, such that
"good" or "right" is indistinguishable from "is".

As I have said so many times before, we are like individual leaves on
a tree of increasing possibility, whose subjective points of view are
supported by branches of increasing probability leading back to
"ultimate reality."  With increasing context of awareness, we would
necessarily find increasing agreement on our increasingly fundamental
supporting values (branches) in common.


>>> Reductionism isn't supposed to answer such questions, it only suggests
>>> that the answer is to be sought for in the causal structure of
>>> reality. If you choose to take guidance from your own state of mind,
>>> that's one way (but not reliable and impossible to perfect).
>>
>> Statements such as the above are jarring to me (but overwhelmingly
>> common on this discussion list.)  Who is the "you" who exists apart
>> such that "you" can take guidance from your own state of mind?
>>
>
> You are not singular; you consist of moving parts that interact with each other.

I refer to the "you" of agency.

You point out that any agent consists of parts.

Fine, so what?  Did you even ask yourself what my point might have been?


>> Yes, I speak repeatedly about the increasing good of increasing
>> coherence over increasing context of a model for decision-making
>> evolving with interaction with reality.  Note that this model is
>> necessarily entirely subjective -- the map is never the territory.
>
> The map is in the territory, so the territory that implements the map
> can also be mapped.

Only if from a greater context.

> That is how the map influences the territory, by being part of it.

No.  Incoherent.  In systems-theoretic terms, i.e. in terms that can
be practically modeled, any intentional agent necessarily acts from an
internal model to effect change on its local environment.  This is in
no sense a denial of the agent existing and acting within its
environment, but if you're going to talk about "influence", you need
to model A influencing B, as A influencing A is incoherent.
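
To put that in something runnable rather than in words -- a toy sketch
only; the class names and numbers below are my own invention, not a
claim about any real system -- the point is simply that the agent
decides from its map and acts on the territory:

    # Toy illustration: an agent (A) acts on its environment (B) through
    # an internal model; the model informs the action, but the thing
    # being influenced is always the environment, never the model itself.

    class Environment:
        def __init__(self, temperature=30.0):
            self.temperature = temperature        # the territory

        def apply(self, action):
            self.temperature += action            # A influencing B

        def observe(self):
            return self.temperature + 0.5         # imperfect sensing

    class Agent:
        def __init__(self, target=20.0):
            self.target = target                  # the agent's "values"
            self.belief = None                    # the map, never the territory

        def perceive(self, observation):
            self.belief = observation             # model built from perception

        def act(self):
            # the decision is computed from the internal model alone
            return -1.0 if self.belief > self.target else 1.0

    env = Environment()
    agent = Agent()
    for _ in range(5):
        agent.perceive(env.observe())
        env.apply(agent.act())                    # influence runs A -> B

Nothing in the sketch denies that the agent sits inside the larger
territory; it just keeps the A-influences-B relation explicit.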


> A fallacy lies in assuming that the map is the territory being mapped
> by it, even if it's not so. But it might be so, sometimes.

Probability mass must sum to unity, and while there may be cases where
a map might for all practical purposes represent all **presumably**
salient aspects of the territory, can't we remain calmly parsimonious
here and simply agree that the map is **never** exactly the
territory?


>> You may have some difficulty with my statement above, and I'm happy to
>> entertain any thoughtful objections, but you may have even more
>> difficulty with what I will say next:
>>
>> The term "increasing context" applies equally well to the accumulating
>> experience of a single person or to the accumulating experience of a
>> group -- with scope of agency corresponding to the scope of the model.
>>  In other words, agency entails a self, but in no sense is that agency
>> necessarily constrained within the bounds of a single organism.
>
> No, I actually have no problem with it. My current working hypothesis
> for Friendliness is to construct something that is to human
> civilization as a brain is to inborn lower-level drives.

I'm talking about an open-ended intentional framework for increasingly
effective search for positive-sum solutions promoting an increasing
context of increasingly coherent hierarchical fine-grained subjective
and evolving values, via methods implementing principles of increasing
objective efficacy over increasing scope of consequences.

If you would call my envisioned framework a "brain" for the humans it
serves, then we might be close to agreement there.

But to me it would seem silly to call one's brain "friendly" to its
body unless one considered the brain to have independent agency.  And
to the extent that any such asymmetry of intelligence existed, I think
it would be naive to expect that such an agent could possibly be
"Friendly", as its values would necessarily be widely divergent and
thus inherently in conflict with those of the other agents.

Consider the relationship of human parents to their offspring, with
their widely divergent values.  Protective, caring?  Yes, but hardly
"friendly."


>> So, one might object: "So Jef, you are saying that Hitler's campaign
>> to exterminate the Jews should be seen as moral within the subjective
>> model that supported such action?"  To which I would reply "Yes, but
>> only to the extent such model was seen (necessarily from within) to
>> increase in coherence AS IT INCREASED IN CONTEXT."  Eliminating any
>> who object certainly increases coherence, but it does NOT increase in
>> context, and thus it tends not to be evolutionarily successful.  Cults
>> are another fine example of increasing coherence with DECREASING
>> context and most certainly not the other way around.
>
> I'm sure nobody is assuming objective morality. Morality is determined
> by properties of an agent, but properties of an agent are determined
> by its physical makeup, which you can analyze from outside. How does
> your higher cognition know what is good for "you"? Parts of the brain
> implementing it observe the inborn drives, or environmental influence.

Therein lies a fundamental and widespread misconception.  Evolved
organisms are not fitness maximizers, but rather, adaptation
executors.  The organism exactly expresses its nature within the
constraints of its environment.  There is no objective good.


>>> I didn't imply a guarantee, I deliberately said they only need to be
>>> good in "general enough" sense. Do you have a particular model of what
>>> constitutes intelligent behavior?
>>
>> Yes and no.  As I see it, the essential difficulty is that the
>> "intelligence" of any action is dependent on context, and (within the
>> domain of questions we would consider to be of moral interest) we can
>> never know the full context within which we act.
>
> Hence, ability to generalize as an essential feature of intelligence,
> if not the only one.

Somewhat oxymoronic, when one considers that "to generalize" here
entails learning and physically expressing an increasingly complex
transform encoding effective interaction with a complex and uncertain
environment.  "To generalize" here carries virtually no practical
information content relative to the actual process, while it claims to
be "the only one."   Like enthusing that Solomonoff Induction is the
key to AI.


>> I think it is worthwhile to observe that "intelligent" behavior is
>> seen as maximizing intended consequences (while minimizing the
>> unintended (and unforeseen)).  It is the capacity for effective
>> complex prediction within a complex environment.
>>
>> I think this problem is inherently open-ended and to the extent that
>> future states are under-specified, decision-making must depend
>> increasingly not on expected utility but on a model representing
>> best-known hierarchical principles of effective action (i.e.
>> instrumental scientific knowledge) promoting a present model of
>> (subjective, evolving) values into the future.
>>
>> As I said to Max several days ago (Max?), morality does not inhere
>> in the agent (who will at any time exactly express its nature) but in
>> the outward-pointing arrow toward the space of actions implementing
>> principles effective over increasing scope, promoting an increasing
>> context of increasingly coherent values.
>
> You keep repeating these words, and I'm sure you have some intuitive
> picture in your mind that is described by them, but it doesn't help in
> communicating that picture.

I don't know any more practical way to effectively expand on this
formulation within the constraints of this forum and significant
differences in personal background.  I've tried.


>> You appear to assume the possibility of some Archimedean point outside
>> the system (where if you could but stand (given a sufficient lever)
>> you could move the world.)  My point is that we are always only
>> within the system itself, and thus we can never have a truly objective
>> basis for navigating into the future.
>
> We are always outside the environment, interacting with it, or
> half-of-you is always outside the other-half-of-you. I don't see a
> problem is assuming an outside view, it is a useful approximation for
> reasoning.

Therein lies the infinite regress, the singularity of self, at the
heart of these many ***interminable*** topics of discussion.


>>> I don't like the notion of values very much; it looks like an
>>> unnatural way of describing things to me.
>>
>> Yes, "values" are seen as soft, squishy mushy things, the very
>> antithesis of hard objective rationality. ;-)
>>
>> My usage of "values" is not entirely synonymous with "preferences"
>> however.  I'm using "values" in the broader sense implying the actual
>> hard physical nature of the agent, inherently deeper than the
>> "preferences" which the agent might be aware of and express.  In the
>> same sense, I encompass the "values" of a simple thermostat,
>> fundamentally no different than the "values" of a person, although
>> differing a great deal in scale of complexity.
>>
>
> Values = physical makeup of an agent? At this granularity, I'd say
> that environment is also very important in determining the values, at
> which point the explanation loses meaning, if no specific concept is
> presented.

Really?  Imagine system A and system B operating within effectively
the same environment.  You can consider A and B to be two persons --
let's make them a human male and female and make one young and the
other elderly.  [It'd be more fun to use individuals from somewhat
more distant branches of the tree of subjective reality I mentioned
earlier, perhaps a dolphin and a machine intelligence, but a male and
female human may seem easier.]  Now, I'm saying that these individual
leaves -- er, persons -- within that common environment will have
somewhat divergent values entirely as a function of their present
physical structure.  And what was your point?
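
If it helps, here is the same point as a toy sketch (Python again; the
setpoint numbers and names are illustrative only, chosen by me): two
systems embedded in one and the same environment, identical in
machinery but differing in internal structure, and for that reason
alone expressing divergent "values" in their behavior -- "values" in
the thermostat sense I used above.

    # Two systems in a shared environment.  Their "values" are nothing
    # over and above their internal makeup (here, a stored setpoint).

    class System:
        def __init__(self, setpoint):
            self.setpoint = setpoint              # internal structure = values

        def respond(self, ambient):
            # identical machinery, divergent behavior
            return "act" if ambient < self.setpoint else "rest"

    shared_ambient = 21.0                         # the common environment
    a = System(setpoint=18.0)
    b = System(setpoint=24.0)

    print(a.respond(shared_ambient))              # -> rest
    print(b.respond(shared_ambient))              # -> act

Same input, different outputs, with the difference traceable entirely
to what each system is.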

- Jef


