[ExI] [Ethics] Consequential, deontological, virtue-based, preference-based..., ...

Jef Allbright jef at jefallbright.net
Thu May 29 16:11:29 UTC 2008


On Thu, May 29, 2008 at 7:09 AM, Vladimir Nesov <robotact at gmail.com> wrote:
> On Thu, May 22, 2008 at 7:58 PM, Jef Allbright <jef at jefallbright.net> wrote:
>>
>> (1) subjective context of values
>> All things equal, acting on behalf of an increasing context of values
>> will be assessed as acting increasingly morally.  Boundary conditions
>> are, on one hand the case of an isolated agent (minimum context of
>> values) where there is no difference between "good" and "right" [cf my
>> discussion of Raskolnikov with John C. Wright a few years ago on
>> this list], and on the other hand the case of a "god's-eye" view, from
>> which context there is no "right", but simply what is.  Within the
>> range of human affairs, all else equal, we would agree that the
>> actions of a person to promote values coherent over a context
>> increasing (roughly) from individual to family or small group to
>> nation to world would be seen (from within that context) as
>> increasingly moral.  Likewise, the actions of a person based on a
>> coherent model of an increasing context of personal values would be
>> seen as increasingly moral.
>>
>> At this point in the discussion, what is commonly non-intuitive and
>> central to your question to me, is that there is no essential
>> difference between the example of the agent seen to be acting on
>> behalf of an increasing context of values including other persons, and
>> the example of an agent acting on behalf of an increasing context of
>> values not involving other persons.  These cases are identical in that
>> they each represent an agency acting to promote a context of values.
>> Further to the point of your question to me, this agency, acting on
>> behalf of values over whatever context, will act exactly in accordance
>> with its nature defined by its values (of course within the
>> constraints of its environment.)  Any consideration of morality (in
>> the deep extensible sense which is my aim) is in the loop, not in the
>> agent. Otherwise, we are left with the infinite regress of the agent
>> deciding that it is "good" to act in accordance with principles of
>> virtue that are "right."
>>
>
> Does the main reason you need to include subjectivity in your system
> come from the requirement to distinguish between different motives for
> the same action based on the agent's states of mind? I don't think it
> needs to be done at all (as I'll describe below). Or do you simply
> mean that the outcome depends on the relation of the agent to its
> environment (not "good", but "good for a particular agent in a
> particular environment")?

It appears we have something of a disconnect here in our use of the
term "subjective", perhaps reflecting a larger difference in our
epistemology.  First, I'll mention that the term "subjective" often
evokes a knee-jerk reaction from those who highly value rationality,
science, and objective measures of truth, and who tend to righteously
associate any use of the term "subjective" with vague, soft, mystical,
wishy-washy, feel-good, post-modernist, deconstructionist,
non-realist, relativist, etc. thinking.  For the record (for the nth
time) I deplore such modes, and strive for as much precision and
accuracy as practical (but not more so).  When I refer to something as
"subjective" I'm referring to the necessarily subjective model through
which any agent perceives its umwelt within (I must assume) a coherent
and consistent reality.

Further, subjectivity is essential to any coherent formulation of
agency, value, or "what it means" rather than "what it looks like."

These points are well understood and accepted within western
philosophy, so there's no need for me to argue, for example, in
support of Hume's point that "ought" cannot be derived from "is."  In
my opinion, he's absolutely correct.

But all paradox is a matter of insufficient context.  In the bigger
picture, all the pieces must fit.

And in the bigger picture with which we are concerned, agency, value
and meaning are of very real importance, and an effective accounting
of subjectivity is required to complete that picture.


> On Thu, May 22, 2008 at 8:49 PM, Jef Allbright <jef at jefallbright.net> wrote:
>> On Thu, May 22, 2008 at 5:20 AM, Vladimir Nesov <robotact at gmail.com> wrote:
>>>
>>> Jef, I heard you mention this coherence over context thing several
>>> times, as some kind of objective measure for improvement.
>>
>> I've mentioned it, I'm afraid, ad nauseam on this list and a few others,
>> attempting to plant merely a seed of thought with the hope that it
>> might take root in a few fertile minds.  I would object that it is far
>> from objective, and I used to overuse the word "subjective" as badly
>> as I do "increasingly" and "context" these days.  It's important to
>> recognize that reductionism, with all its strength, has nothing to say
>> about value.  An effective theory of morality must meaningfully relate
>> subjective value with what objectively "works" and without committing
>> the fallacy of "ought from is."
>>
>> It may be useful here to point out the entirely subjective basis of
>> Bayesian inference (within the assumption of a coherent and consistent
>> reality.)

The above statement is intended to highlight the ineluctable element
of subjectivity inherent in the prior.  I am NOT suggesting that
Bayes' Law is subjective any more than I would say that Newton's Laws
are subjective.
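
To make that element concrete, here is a toy sketch (my own
illustration, in Python, with numbers chosen only for the example):
two observers apply the same Bayes' Law to the same evidence but
begin from different priors, and so end with different posteriors.

    # Two observers, same likelihoods, same evidence, different priors.
    # Bayes' Law itself is objective; the prior each observer brings is not.

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        # P(H | E) via Bayes' Law for a binary hypothesis H.
        numerator = prior_h * p_e_given_h
        evidence = numerator + (1.0 - prior_h) * p_e_given_not_h
        return numerator / evidence

    # Same evidence: an observation that occurs with P = 0.9 if H is true
    # and P = 0.2 if H is false.
    for prior in (0.5, 0.01):
        print(prior, posterior(prior, 0.9, 0.2))
    # prior 0.5  -> posterior ~0.82
    # prior 0.01 -> posterior ~0.04

The updating rule is identical for both observers; only the starting
point, which must be supplied from within each observer's own model,
differs.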


>>
>> It may be useful here to point out that fundamentally, decision-making
>> never depends on absolutes, but only inequalities.
>>
>
> Reductionism isn't supposed to answer such questions; it only suggests
> that the answer is to be sought in the causal structure of
> reality. If you choose to take guidance from your own state of mind,
> that's one way (though it is neither reliable nor possible to perfect).

Statements such as the above are jarring to me (but overwhelmingly
common on this discussion list).  Who is the "you" who exists apart,
such that "you" can take guidance from your own state of mind?

> If you
> instead choose to crack open the thoughts in the head of a test
> subject when he chooses an action, then you are guided by his state of
> mind, which can be as good as yours for the job (but allows you to
> obtain knowledge about a larger number of people). I'd like instead to
> construct an algorithm that will be able to do the job better than any
> human mind, grounding its decisions in reality and targeting the
> benefit for the required scope (a given human, human civilization,
> lobsters, etc.).

Yes, I speak repeatedly about the increasing good of increasing
coherence over increasing context of a model for decision-making
evolving through interaction with reality.  Note that this model is
necessarily entirely subjective -- the map is never the territory.

You may have some difficulty with my statement above, and I'm happy to
entertain any thoughtful objections, but you may have even more
difficulty with what I will say next:

The term "increasing context" applies equally well to the accumulating
experience of a single person or to the accumulating experience of a
group -- with scope of agency corresponding to the scope of the model.
In other words, agency entails a self, but in no sense is that agency
necessarily constrained within the bounds of a single organism.

So, one might object: "So Jef, you are saying that Hitler's campaign
to exterminate the Jews should be seen as moral within the subjective
model that supported such action?"  To which I would reply "Yes, but
only to the extent that such a model was seen (necessarily from within)
to increase in coherence AS IT INCREASED IN CONTEXT."  Eliminating any
who object certainly increases coherence, but it does NOT increase
context, and thus it tends not to be evolutionarily successful.  Cults
are another fine example of increasing coherence with DECREASING
context, and most certainly not the other way around.


> I'm not sure what you mean by saying that Bayesian inference is
> subjective.

I hope I have made that clear by now.


> It's often said that probability is state of mind,

Yes, a statement of subjective (un)certainty.

> but it
> is more a characterization of the mind, with probability describing
> the state of incomplete information.

Are you not here mistakenly assuming some (unrealistic) objective
place from which to characterize a state of mind?

Are you comfortable with the distinction between probability and likelihood?
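
In case it's useful, here is that distinction as I'd illustrate it (a
toy sketch of my own, using a binomial model chosen only for
illustration):

    from math import comb

    def binom_pmf(k, n, p):
        # P(k successes in n trials | success probability p)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Probability: hold the parameter fixed, vary the outcome.
    # These sum to 1 over all possible outcomes k.
    probs = [binom_pmf(k, 10, 0.5) for k in range(11)]

    # Likelihood: hold the observed data fixed (k = 7 of 10), vary the
    # parameter.  These need not sum to anything in particular.
    likelihoods = [binom_pmf(7, 10, p) for p in (0.3, 0.5, 0.7)]

It is the same function read in two directions: over outcomes it is an
objective distribution; over parameters, with the data held fixed, it
is a measure of relative support, assessed from wherever the observer
happens to stand.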


> Bayesian inference is math, an
> algorithm, which is as objective as they go.

Yes.


>>> I'm very much
>>> interested, since I'm myself struggling to develop the notion of
>>> "benevolent" modification: when given a system, what changes can you
>>> make in it that are, in a general enough sense, benevolent from the point
>>> of view of the system, when no explicit
>>> utility-over-all-possible-changes is given.
>>
>> I think there's no more important question than that, and that we
>> should be satisfied (and even happy) that there can be no such
>> guarantee within a system of growth.  That said, while there can be no
>> absolutely perfect solution, we can certainly become increasingly good
>> at applying an increasingly intelligent probabilistic hierarchical
>> model of what does appear to work.  This is my main focus of
>> theoretical interest but I'm by no means qualified to speak with any
>> authority on this.
>>
>
> I didn't imply a guarantee; I deliberately said they only need to be
> good in a "general enough" sense. Do you have a particular model of what
> constitutes intelligent behavior?

Yes and no.  As I see it, the essential difficulty is that the
"intelligence" of any action is dependent on context, and (within the
domain of questions we would consider to be of moral interest) we can
never know the full context within which we act.

I think it is worthwhile to observe that "intelligent" behavior is
seen as maximizing intended consequences (while minimizing the
unintended and unforeseen ones).  It is the capacity for effective
complex prediction within a complex environment.
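
As I noted above, decision-making depends only on inequalities, never
on absolutes.  A toy sketch (my own, with made-up numbers) of what I
mean:

    # The agent never needs an absolute measure of "goodness", only the
    # inequality between the alternatives it is actually comparing.

    def expected_value(outcomes):
        # outcomes: list of (probability, value) pairs for one action
        return sum(p * v for p, v in outcomes)

    action_a = [(0.7, 10.0), (0.3, -5.0)]   # hypothetical consequence models
    action_b = [(0.9,  4.0), (0.1,  0.0)]

    # Shifting every value by a constant, or rescaling all of them by a
    # positive factor, leaves the comparison -- and so the choice -- intact.
    choice = "A" if expected_value(action_a) > expected_value(action_b) else "B"

The sketch assumes the consequence model is well specified; the point
of the next paragraph is what to do as that assumption breaks down.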

I think this problem is inherently open-ended, and to the extent that
future states are under-specified, decision-making must depend
increasingly not on expected utility but on a model representing the
best-known hierarchical principles of effective action (i.e.,
instrumental scientific knowledge) promoting a present model of
(subjective, evolving) values into the future.

As I said to Max several days ago (Max?), morality does not inhere in
the agent (which will at any time exactly express its nature) but in
the outwardly pointing arrow toward the space of actions implementing
principles effective over increasing scope, promoting an increasing
context of increasingly coherent values.


> I'm asking because I became
> interested in the notion of benevolent modification primarily as
> another way to put the question "what action is intelligent?". I
> sketched some of the intuitions behind this analogy in the post on SL4
> ( http://www.sl4.org/archive/0804/18464.html ).
>
> Please correct me if I understood your point incorrectly (I'm trying
> to read between the lines here, since you didn't express this
> particular perspective explicitly). An agent is the standard that
> makes generalizations; that is, you can extract preliminary values from
> the agent, based on its state of mind (or more precisely, the agent
> itself does that), and then directly apply them to slightly broader
> contexts, but at some point you'll need to let the agent change the
> values for new contexts, which creates a loop problem: on one hand you'd
> like to arrange the world in accordance with the current estimate of
> the agent's values, and on the other you'd like the agent to be able to
> stumble on something not previously specified, so that it'll be able
> to refine or change the values.

You appear to assume the possibility of some Archimedean point outside
the system (where, if you could but stand (given a sufficient lever),
you could move the world).  My point is that we are always only
within the system itself, and thus we can never have a truly objective
basis for navigating into the future.  Indeed, it's incoherent even to
refer to "the future," which is effectively unspecified and
unspecifiable, not only due to the intractability of the combinatorial
explosion, but more fundamentally due to the Gödelian impossibility of
specifying that which we would hope to extrapolate.


> I don't like the notion of values very much; it looks like an unnatural
> way of describing things to me.

Yes, "values" are seen as soft, squishy mushy things, the very
antithesis of hard objective rationality.

My usage of "values" is not entirely synonymous with "preferences"
however.  I'm using "values" in the broader sense implying the actual
hard physical nature of the agent, inherently deeper than the
"preferences" which the agent might be aware of and express.  In the
same sense, I encompass the "values" of a simple thermostat,
fundamentally no different than the "values" of a person, although
differing a great deal in scale of complexity.
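
To be concrete about what I mean by a thermostat's "values" (a minimal
sketch of my own, not any particular device):

    class Thermostat:
        # The setpoint and hysteresis ARE the device's "values": they are
        # built into its configuration, not preferences it could report.
        def __init__(self, setpoint_c=20.0, hysteresis_c=0.5):
            self.setpoint = setpoint_c
            self.hysteresis = hysteresis_c
            self.heating = False

        def step(self, measured_c):
            # The device acts exactly in accordance with its nature,
            # within the constraints of its environment.
            if measured_c < self.setpoint - self.hysteresis:
                self.heating = True
            elif measured_c > self.setpoint + self.hysteresis:
                self.heating = False
            return self.heating

A person's values differ from the setpoint only in the scale,
complexity and plasticity of the configuration that embodies them,
which is all I intend by the word.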


I'm going to pause here.  I've read and appreciate your subsequent
comments, and I once grappled with patterns of reasoning very similar
to yours.  Perhaps the seeds I've planted above will grow for you too.

- Jef



> All the way down to the actual
> decision, during the most complex processes that precede it, in the
> outside world and then during perception, it's natural to describe the
> parts by their causal relationships. Then suddenly at the point where
> you need to make a 'real world' decision, you have utilities and
> probabilities, and not earlier when you made billions of decisions
> during perception. And finally, your decision is carried forth, again
> regarded as causal flow. Only at the ethereal point when your decision
> supposedly impacts your action will the 'morality' problem appear, and
> at all the other points it's just about being correct (that is,
> arranging it so that reality has a way of making the right mark on
> your perception: science) or knowing the way to achieve the goal (that
> is, making reality come into agreement with your decision:
> engineering). Current decision theories try to patch the gap in
> between by postulating normative rules that a decision "must" follow, or,
> failing to find their predictions in agreement with the facts, they
> dream up "values" and use them to roughly sketch the decision
> procedure. The problem is that it's more realistic to see decisions
> adding up from the "bias" in perception, along the whole way, starting
> from obtaining incorrect information, continuing through imperfections
> of low-level perception and evolutionarily programmed predispositions,
> and ending in preprogrammed responses and limitations of possible
> physical actions. It's instructive to see a person as operating in a
> deterministic environment; then it's just a fixed process, nothing to
> be changed. But suppose you could reach out into that deterministic
> world and change one thing at a time: what would you do so as to make
> something good for this messy causal process? This protocol is my way
> of breaking the loop, where each decision consists not so much
> in choosing an action as in changing the way in which actions are
> chosen, by adding a memory, an elementary skill, a bias towards
> certain decisions, even during low-level perception. These "external"
> changes I call intelligence, not the operation of the deterministic
> machine itself. Of course, in the real world you'll need to somehow choose
> which parts are "inside the matrix" and which are "outside", and more
> realistically to blend them together, but I find it a useful intuitive
> way of looking at intelligent improvement.
>
> Let's trace a single causal pathway that passes through an agent: it
> starts at event (state) A in the environment and results in perception
> B, then this perception is processed by the agent to yield the action
> Y, which leads to outcome (event in the environment) Z. There is a web
> of events happening in the world, and the presence of the agent
> influences it, by applying changes through its B->Y pathway. The agent
> tries to make sure that some events A lead to events Z. The mapping from
> various A to various Z that happen, including those that result from the
> agent's actions, I call the agent's morality. It tries to arrange its own
> B->Y operation so as to achieve particular A->Z transitions. What you can
> observe from a small number of experiences tells the story of only a
> partial mapping, but the mapping can be extrapolated.
>
> This extrapolation, generalization, can start from the continuous nature
> of responses: the agent doesn't care about small changes in A, B, Y or Z
> by themselves; such changes only matter if they influence some other
> events significantly. These simultaneous restrictions specify the
> design space for changes that are not considered invasive, whereas a
> particular A->Z mapping shows which transitions are desirable, and so
> the benevolent modification process can try to modify the scene so that on
> one hand it's not invasive, and on the other hand it improves the
> scene. It is not specified whether A or B are in the mind or in the
> environment, and which transition is more fundamental for morality,
> A->Z or B->Z. The answer is in the way generalization will work out:
> if it finds that "perception" B is more important, it will mainly
> generalize A to be things that are perceived as B, or if "real event"
> A is more important, it will mainly generalize B to perceive events in
> A.
>
> This generalization of concepts in the mind and environment under
> mutual influences that restrict allowed modifications is what
> corresponds to "increasing coherence" in my model, as far as I can
> see.
>
> For example, if A is "a car is approaching you at high speed",
> and B is "I know that", A->B will only work if you see the car. If you
> do see the car, you'll apply B->Y->Z pathway that makes you step
> aside. In this case the right thing to do is to generalize B (or Y)
> based on A, so that you'll be able to step aside even if you don't see
> the car (e.g. by hearing it). This generalization is less invasive,
> since changing B (state of mind) will influence less than changing A
> (making you step aside only when you do see a car, and not otherwise).
> If you change A, you keep the pathway B->Z in your morality, and if
> you change B you keep the pathway A->Z.
>
> --
> Vladimir Nesov
> robotact at gmail.com
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>


