[ExI] [Ethics] Consequential, deontological, virtue-based, preference-based..., ...
Vladimir Nesov
robotact at gmail.com
Sat Jun 7 13:21:19 UTC 2008
On Thu, May 29, 2008 at 8:11 PM, Jef Allbright <jef at jefallbright.net> wrote:
> On Thu, May 29, 2008 at 7:09 AM, Vladimir Nesov <robotact at gmail.com> wrote:
>>
>> Does the main reason you need to include subjectivity in your system
>> come from the requirement to distinguish between different motives for
>> the same action based on the agent's state of mind? I don't think that
>> needs to be done at all (as I'll describe below). Or do you simply
>> mean that the outcome depends on the relation of the agent to its
>> environment (not "good", but "good for a particular agent in a
>> particular environment")?
>
> It appears we have something of a disconnect here in our use of the
> term "subjective", perhaps reflecting a larger difference in our
> epistemology. First, I'll mention that the term "subjective" often
> evokes a knee-jerk reaction from those who highly value rationality,
> science and objective measures of truth and tend to righteously
> associate any use of the term "subjective" with vague, soft, mystical,
> wishy-washy, feel-good, post-modernist, deconstructionist,
> non-realist, relativist, etc. thinking. For the record (for the nth
> time) I deplore such modes, and strive for as much precision and
> accuracy as practical (but not more so.) When I refer to something as
> "subjective" I'm referring to the necessarily subjective model through
> which any agent perceives its umwelt within (I must assume) a coherent
> and consistent reality.
But in many contexts, at least in theoretical discourse, it can be
assumed that reality is specified down to the quarks, which makes a
subjective look at the facts that follow from such a description and
what you see as an overzealous objective view too close to each other
to warrant any distinction. Am I misunderstanding your point again?
Even if you insist on "subjective probabilities", there are things
that are too certain for their potential falsity to influence
decision-making, and that includes epistemology.
>>
>> Reductionism isn't supposed to answer such questions, it only suggests
>> that the answer is to be sought in the causal structure of
>> reality. If you choose to take guidance from your own state of mind,
>> that's one way (though neither reliable nor possible to perfect).
>
> Statements such as the above are jarring to me (but overwhelmingly
> common on this discussion list.) Who is the "you" who exists apart
> such that "you" can take guidance from your own state of mind?
>
You are not singular; you consist of moving parts that interact with each other.
>
> Yes, I speak repeatedly about the increasing good of increasing
> coherence over increasing context of a model for decision-making
> evolving with interaction with reality. Note that this model is
> necessarily entirely subjective -- the map is never the territory.
The map is in the territory, so the territory that implements the map
can also be mapped. That is how the map influences the territory: by
being part of it. The fallacy is in assuming that the map is the
territory it maps even when it isn't. But sometimes it might be.
> You may have some difficulty with my statement above, and I'm happy to
> entertain any thoughtful objections, but you may have even more
> difficulty with what I will say next:
>
> The term "increasing context" applies equally well to the accumulating
> experience of a single person or to the accumulating experience of a
> group -- with scope of agency corresponding to the scope of the model.
> In other words, agency entails a self, but in no sense is that agency
> necessarily constrained within the bounds of a single organism.
No, I actually have no problem with it. My current working hypothesis
for Friendliness is to construct something that is to human
civilization what the brain is to inborn lower-level drives.
> So, one might object: "So Jef, you are saying that Hitler's campaign
> to exterminate the Jews should be seen as moral within the subjective
> model that supported such action?" To which I would reply "Yes, but
> only to the extent such model was seen (necessarily from within) to
> increase in coherence AS IT INCREASED IN CONTEXT." Eliminating any
> who object certainly increases coherence, but it does NOT increase in
> context, and thus it tends not to be evolutionarily successful. Cults
> are another fine example of increasing coherence with DECREASING
> context and most certainly not the other way around.
I'm sure nobody is assuming objective morality. Morality is determined
by the properties of an agent, but the properties of an agent are
determined by its physical makeup, which you can analyze from outside.
How does your higher cognition know what is good for "you"? The parts
of the brain implementing it observe the inborn drives or environmental
influences.
>> I didn't imply a guarantee; I deliberately said they only need to be
>> good in a "general enough" sense. Do you have a particular model of what
>> constitutes intelligent behavior?
>
> Yes and no. As I see it, the essential difficulty is that the
> "intelligence" of any action is dependent on context, and (within the
> domain of questions we would consider to be of moral interest) we can
> never know the full context within which we act.
Hence the ability to generalize as an essential feature of intelligence,
if not the only one.
> I think it is worthwhile to observe that "intelligent" behavior is
> seen as maximizing intended consequences (while minimizing the
> unintended (and unforeseen)). It is the capacity for effective
> complex prediction within a complex environment.
>
> I think this problem is inherently open-ended and to the extent that
> future states are under-specified, decision-making must depend
> increasingly not on expected utility but on a model representing
> best-known hierarchical principles of effective action (i.e.
> instrumental scientific knowledge) promoting a present model of
> (subjective, evolving) values into the future.
>
> As I said to Max several days ago (Max?) morality does not inhere in
> the agent (who will at any time exactly express its nature) but in the
> outwardly pointing arrow pointing in the direction of the space of
> actions implementing principles effective over increasing scope
> promoting an increasing context of increasingly coherent values.
You keep repeating these words, and I'm sure you have some intuitive
picture in your mind that is described by them, but they don't help in
communicating that picture.
> You appear to assume the possibility of some Archimedean point outside
> the system (where if you could but stand (given a sufficient lever)
> you could move the world.) My point is that we are always only
> within the system itself, and thus we can never have a truly objective
> basis for navigating into the future.
We are always outside the environment, interacting with it, or one
half of you is always outside the other half of you. I don't see a
problem in assuming an outside view; it is a useful approximation for
reasoning.
>> I don't like the notion of values very much; it looks like an unnatural
>> way of describing things to me.
>
> Yes, "values" are seen as soft, squishy mushy things, the very
> antithesis of hard objective rationality.
>
> My usage of "values" is not entirely synonymous with "preferences"
> however. I'm using "values" in the broader sense implying the actual
> hard physical nature of the agent, inherently deeper than the
> "preferences" which the agent might be aware of and express. In the
> same sense, I encompass the "values" of a simple thermostat,
> fundamentally no different than the "values" of a person, although
> differing a great deal in scale of complexity.
>
Values = the physical makeup of an agent? At this granularity, I'd say
that the environment is also very important in determining the values,
at which point the explanation loses meaning if no specific concept is
presented.
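
To make the thermostat case concrete, here is a minimal sketch (my own
illustration, not your formulation; the class name and setpoint are just
placeholders) of "values" as physical makeup, and of why the environment
matters just as much:

# A thermostat's only "value" is a setpoint baked into its configuration,
# i.e. part of its physical makeup (illustrative code, not anyone's method).
class Thermostat:
    def __init__(self, setpoint_c):
        self.setpoint_c = setpoint_c  # the "value" is just this piece of makeup

    def act(self, room_temp_c):
        # Behavior expresses the "value" only relative to an environment reading.
        if room_temp_c < self.setpoint_c:
            return "heat"
        if room_temp_c > self.setpoint_c:
            return "cool"
        return "idle"

device = Thermostat(setpoint_c=21.0)
print(device.act(18.0))  # "heat" -- same makeup, different behavior...
print(device.act(25.0))  # "cool" -- ...depending entirely on the environment

The makeup fixes the setpoint, but what that "value" amounts to in behavior
is fixed only jointly with the environment, which is the granularity problem
I mean.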
--
Vladimir Nesov
robotact at gmail.com