[ExI] [Ethics] Consequential, deontological, virtue-based, preference-based..., ...

Vladimir Nesov robotact at gmail.com
Thu May 29 14:09:34 UTC 2008


On Thu, May 22, 2008 at 7:58 PM, Jef Allbright <jef at jefallbright.net> wrote:
>
> (1) subjective context of values
> All things equal, acting on behalf of an increasing context of values
> will be assessed as acting increasingly morally.  Boundary conditions
> are, on one hand the case of an isolated agent (minimum context of
> values) where there is no difference between "good" and "right" [cf my
> discussion of Raskolnikov with John C. Wright a few years ago on
> this list], and on the other hand the case of a "god's-eye" view, from
> which context there is no "right", but simply what is.  Within the
> range of human affairs, all else equal, we would agree that the
> actions of a person to promote values coherent over a context
> increasing (roughly) from individual to family or small group to
> nation to world would be seen (from within that context) as
> increasingly moral.  Likewise, the actions of a person based on a
> coherent model of an increasing context of personal values would be
> seen as increasingly moral.
>
> At this point in the discussion, what is commonly non-intuitive and
> central to your question to me, is that there is no essential
> difference between the example of the agent seen to be acting on
> behalf of an increasing context of values including other persons, and
> the example of an agent acting on behalf of an increasing context of
> values not involving other persons.  These cases are identical in that
> they each represent an agency acting to promote a context of values.
> Further to the point of your question to me, this agency, acting on
> behalf of values over whatever context, will act exactly in accordance
> with its nature defined by its values (of course within the
> constraints of its environment.)  Any consideration of morality (in
> the deep extensible sense which is my aim) is in the loop, not in the
> agent. Otherwise, we are left with the infinite regress of the agent
> deciding that it is "good" to act in accordance with principles of
> virtue that are "right."
>

Does the main reason you need to include subjectivity in your system
come from the requirement to distinguish between different motives for
the same action based on the agent's state of mind? I don't think that
needs to be done at all (as I'll describe below). Or do you simply
mean that the outcome depends on the relation of the agent to its
environment (not "good", but "good for a particular agent in a
particular environment")?


On Thu, May 22, 2008 at 8:49 PM, Jef Allbright <jef at jefallbright.net> wrote:
> On Thu, May 22, 2008 at 5:20 AM, Vladimir Nesov <robotact at gmail.com> wrote:
>>
>> Jef, I heard you mention this coherence over context thing several
>> times, as some kind of objective measure for improvement.
>
> I've mentioned it, I'm afraid, ad nauseam on this list and a few others,
> attempting to plant merely a seed of thought with the hope that it
> might take root in a few fertile minds.  I would object that it is far
> from objective, and I used to overuse the word "subjective" as badly
> as I overuse "increasingly" and "context" these days.  It's important to
> recognize that reductionism, with all its strength, has nothing to say
> about value.  An effective theory of morality must meaningfully relate
> subjective value with what objectively "works", without committing
> the fallacy of "ought from is."
>
> It may be useful here to point out the entirely subjective basis of
> Bayesian inference (within the assumption of a coherent and consistent
> reality.)
>
> It may be useful here to point out that fundamentally, decision-making
> never depends on absolutes, but only inequalities.
>

Reductionism isn't supposed to answer such questions; it only
suggests that the answer is to be sought in the causal structure of
reality. If you choose to take guidance from your own state of mind,
that's one way (but not reliable, and impossible to perfect). If you
instead choose to crack open the thoughts in the head of a test
subject when he chooses an action, then you are guided by his state of
mind, which can be as good as yours for the job (but lets you obtain
knowledge about a larger number of people). I'd like instead to
construct an algorithm that will be able to do the job better than any
human mind, grounding its decisions in reality and targeting the
benefit for the required scope (a given human, human civilization,
lobsters, etc.).

I'm not sure what you mean by saying that Bayesian inference is
subjective. It's often said that probability is a state of mind, but
it is more a characterization of the mind, with probability describing
a state of incomplete information. Bayesian inference is math, an
algorithm, which is as objective as they come.
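
To make that concrete, here is a minimal worked example (the numbers
are arbitrary, picked only for illustration): two reasoners starting
from the same prior and the same evidence must end up with the same
posterior, because the update is nothing but arithmetic.

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).  Illustrative numbers only.
    prior_h = 0.01               # P(H)
    p_e_given_h = 0.9            # P(E|H)
    p_e_given_not_h = 0.05       # P(E|not-H)

    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    posterior_h = p_e_given_h * prior_h / p_e
    print(posterior_h)           # ~0.154, whoever runs the calculation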

>>
>> I'm very much
>> interested, since I myself am struggling to develop the notion of
>> "benevolent" modification: given a system, what changes can you
>> make in it that are in a general enough sense benevolent from the point
>> of view of the system, when no explicit
>> utility-over-all-possible-changes is given.
>
> I think there's no more important question than that, and that we
> should be satisfied (and even happy) that there can be no such
> guarantee within a system of growth.  That said, while there can be no
> absolutely perfect solution, we can certainly become increasingly good
> at applying an increasingly intelligent probabilistic hierarchical
> model of what does appear to work.  This is my main focus of
> theoretical interest but I'm by no means qualified to speak with any
> authority on this.
>

I didn't imply a guarantee; I deliberately said the changes only need
to be good in a "general enough" sense. Do you have a particular model
of what constitutes intelligent behavior? I'm asking because I became
interested in the notion of benevolent modification primarily as
another way to put the question "what action is intelligent?". I
sketched some of the intuitions behind this analogy in a post on SL4
( http://www.sl4.org/archive/0804/18464.html ).

Please correct me if I've understood your point incorrectly (I'm
trying to read between the lines here, since you didn't express this
particular perspective explicitly). The agent is the standard that
makes generalizations: you can extract preliminary values from the
agent based on its state of mind (or, more precisely, the agent itself
does that), and then apply them directly to slightly broader contexts.
But at some point you'll need to let the agent change the values for
new contexts, which creates a loop problem: on one hand you'd like to
arrange the world in accordance with the current estimate of the
agent's values, and on the other you'd like the agent to be able to
stumble on something not previously specified, so that it can refine
or change those values.

I don't like the notion of values very much; it looks like an
unnatural way of describing things to me. All the way down to the
actual decision, through the most complex processes that precede it,
in the outside world and then during perception, it's natural to
describe the parts by their causal relationships. Then suddenly, at
the point where you need to make a 'real world' decision, you have
utilities and probabilities, and not earlier, when you made billions
of decisions during perception. And finally your decision is carried
forth, again regarded as causal flow. Only at the ethereal point where
your decision supposedly impacts your action does the 'morality'
problem appear; at all the other points it's just about being correct
(that is, arranging things so that reality has a way of making the
right mark on your perception: science) or knowing the way to achieve
the goal (that is, making reality come into agreement with your
decision: engineering). Current decision theories try to patch the gap
in between by postulating normative rules that the agent "must"
follow, or, failing to find their predictions in agreement with the
facts, they dream up "values" and use them to roughly sketch the
decision procedure. The problem is that it's more realistic to see
decisions as adding up from "bias" in perception along the whole way:
starting from obtaining incorrect information, continuing through
imperfections of low-level perception and evolutionarily programmed
predispositions, and ending in preprogrammed responses and the
limitations of possible physical actions.

It's instructive to see a person as operating in a deterministic
environment; then it's just a fixed process, nothing to be changed.
But suppose you could reach out into that deterministic world and
change one thing at a time: what would you do so as to make something
good for this messy causal process? This protocol is my way of
breaking the loop, where each decision consists not so much in
choosing an action as in changing the way actions are chosen: by
adding a memory, an elementary skill, a bias towards certain
decisions, even during low-level perception. These "external" changes
are what I call intelligence, not the operation of the deterministic
machine itself. Of course, in the real world you'll need to somehow
choose which parts are "inside the matrix" and which are "outside",
and more realistically to blend them together, but I find it a useful
intuitive way of looking at intelligent improvement.
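
To make the intuition a bit more tangible, here is a toy sketch in
Python (all the names and percepts are hypothetical illustrations, not
a proposed design): the agent is just a fixed percept-to-action
mapping, and the interesting step is the external edit that changes
how its actions get chosen.

    # The agent itself: a fixed, deterministic percept -> action mapping.
    def make_agent(memories=frozenset(), biases=()):
        def agent(percept):
            for bias in biases:          # installed predispositions
                action = bias(percept)
                if action is not None:
                    return action
            if percept in memories:      # installed memories / elementary skills
                return "respond"
            return "ignore"
        return agent

    agent_v1 = make_agent()              # a fixed process, nothing to be changed
    # One "external" change: adding a memory alters how actions are chosen.
    agent_v2 = make_agent(memories=frozenset({"pattern_42"}))
    print(agent_v1("pattern_42"), agent_v2("pattern_42"))   # ignore respond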

Let's trace a single causal pathway that passes through an agent: it
starts at an event (state) A in the environment and results in a
perception B; this perception is then processed by the agent to yield
an action Y, which leads to an outcome (an event in the environment)
Z. There is a web of events happening in the world, and the presence
of the agent influences it by applying changes through its B->Y
pathway. The agent tries to make sure that some events A lead to
events Z. The mapping from the various A to the various Z that happen,
including those that result from the agent's actions, is what I call
the agent's morality. The agent tries to arrange its own B->Y
operation so as to achieve particular A->Z transitions. What you can
observe from a small amount of experience tells the story of only a
partial mapping, but the mapping can be extrapolated.
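
As a minimal sketch of this setup (the functions below are
placeholders standing in for the real environment and the agent's
fixed processing, nothing specific):

    from collections import defaultdict

    def perceive(a):            # A -> B: how the event marks the agent's senses
        return "percept_of_" + a

    def decide(b):              # B -> Y: the agent's fixed processing
        return "action_for_" + b

    def unfold(a, y):           # (A, Y) -> Z: how the world responds
        return "outcome_of_" + a + "_" + y

    observed = defaultdict(set)  # the partial A -> Z mapping seen so far
    for a in ("a1", "a2", "a3"):
        b = perceive(a)
        y = decide(b)
        z = unfold(a, y)
        observed[a].add(z)

    # Extrapolation would mean predicting Z for events A never yet observed.
    print(dict(observed))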

This extrapolation, this generalization, can start from the
continuous nature of responses: the agent doesn't care about small
changes in A, B, Y or Z by themselves; such changes matter only if
they influence some other events significantly. These simultaneous
restrictions specify the design space of changes that are not
considered invasive, whereas a particular A->Z mapping shows which
transitions are desirable, and so a benevolent modification process
can try to modify the scene so that on one hand it's not invasive, and
on the other hand it improves the scene. It is not specified whether A
or B is in the mind or in the environment, or which transition is more
fundamental for morality, A->Z or B->Z. The answer is in the way the
generalization works out: if it finds that the "perception" B is more
important, it will mainly generalize A to be things that are perceived
as B; if the "real event" A is more important, it will mainly
generalize B to perceive events in A.
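
One way to picture that trade-off (the scoring below is my own
illustrative assumption, not part of the model): treat a candidate
modification as something that rewrites the A->Z mapping, penalize it
for every existing transition it disturbs, and reward it for every
desirable transition it brings about.

    def invasiveness(modify, current):
        """Count existing A->Z transitions the modification disturbs."""
        new = modify(current)
        return sum(1 for a in current if new.get(a) != current[a])

    def improvement(modify, current, desired):
        """Count desired A->Z transitions that hold after the modification."""
        new = modify(current)
        return sum(1 for a, z in desired.items() if new.get(a) == z)

    def pick(candidates, current, desired):
        return max(candidates,
                   key=lambda m: improvement(m, current, desired)
                                 - invasiveness(m, current))

    current = {"a1": "z1", "a2": "z2"}           # what already happens
    desired = {"a3": "z3"}                       # what we'd also like to happen
    gentle = lambda m: {**m, "a3": "z3"}         # adds the transition, disturbs nothing
    clumsy = lambda m: {"a1": "z9", "a3": "z3"}  # adds it but wrecks the rest

    print(pick([gentle, clumsy], current, desired) is gentle)   # True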

This generalization of concepts in the mind and the environment,
under mutual influences that restrict the allowed modifications, is
what corresponds to "increasing coherence" in my model, as far as I
can see.

For example, if A is "a car is approaching you at high speed" and B
is "I know that", A->B will only work if you see the car. If you do
see the car, you'll apply the B->Y->Z pathway that makes you step
aside. In this case the right thing to do is to generalize B (or Y)
based on A, so that you'll be able to step aside even if you don't see
the car (e.g. by hearing it). This generalization is less invasive,
since changing B (a state of mind) influences less than changing A
(which would make you step aside only when you do see a car, and not
otherwise). If you change A, you keep the pathway B->Z in your
morality, and if you change B you keep the pathway A->Z.
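
In toy form (the signal names are invented for this example),
generalizing B means letting more manifestations of the real event A
trigger the same unchanged pathway to Z:

    def knows_car_approaching_v1(signals):       # B before generalization
        return "see_car" in signals

    def knows_car_approaching_v2(signals):       # B generalized based on A
        return "see_car" in signals or "hear_car" in signals

    def react(knows_car):                        # the unchanged B -> Y step
        return "step_aside" if knows_car else "keep_walking"

    for signals in ({"see_car"}, {"hear_car"}):
        print(react(knows_car_approaching_v1(signals)),
              react(knows_car_approaching_v2(signals)))
    # v1 steps aside only on sight; v2 preserves the same A->Z transition
    # (car approaching -> stepped aside) across more perceptions.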

-- 
Vladimir Nesov
robotact at gmail.com


