[extropy-chat] The Good

Jef Allbright jef at jefallbright.net
Sun Jun 4 02:09:33 UTC 2006


On 6/3/06, Lee Corbin <lcorbin at tsoft.com> wrote:
> Jef writes

> > I would venture to assert that the concept of "intrinsic good" is well
> > known to anyone who has thought deeply about ethics. An intrinsic good
> > is something which is considered good in and of itself. It's
> > interesting to me that there are so many mutually exclusive beliefs
> > as to what goods are actually worthy of that description.  Hedonists
> > claim that pleasure is the only intrinsic good.  Kantians claim that
> > good will is the only intrinsic good.  Aristotle claimed that
> > truth is the only intrinsic good.  Some people claim that love is the
> > only intrinsic good in the universe.
>
> Okay, thanks for the explanation. You've swerved into a description/
> definition that I can understand!
>
> > Jef [I] claims that, just as with each of the world's religions claiming
> > to possess or have access to the only true way, seekers or believers
> > of intrinsic good are asking the wrong question and therefore getting
> > the wrong answer.
>
> Well, it's not too surprising. Arguing about whether something is
> "intrinsically" good, I would think, would be as non-productive as
> arguing whether certain essences inhere in one thing or another.
> Why don't people quit using fancy abstractions and speak in simple
> everyday terms?  It's *OBVIOUS* that you can examine any substance
> in the universe, and be unable to show that it has either the
> essence of "intrinsic goodness", or that is has the property
> of "intrinsic goodness".

In the preceding text I switched from the abstract to discrete
examples, citing the opposing views of hedonism, Kant, Aristotle, and
"white lighters" as to what constitutes the ultimate good.  You said
that doing this helped you get the idea, but it seems to me we're only
part way there, because none of those examples related to substance
[which you refer to], but rather to more abstract concepts.

The concept I am trying to convey is even more abstract [but with a
very practical application!], so I can't provide discrete examples.
Instead I try to build up to it by highlighting incoherence in
lower-level thinking, and thereby the need for a more encompassing
understanding of what it really means to say that something is good.

For reference, the abstract point I'm trying to make is the following:

Increasing awareness of principles of what works (increasingly
objective scientific/instrumental knowledge), applied to increasing
awareness of our (increasingly intersubjective) values that work over
increasing scope, leads to what is seen as increasingly moral
decision-making.

The practical application of that understanding is that it becomes
clear that (1) we *can* increasingly agree on certain choices being
better than other choices, and (2) we *should* facilitate this process
of increasingly moral decision-making by intentionally building a
technological framework to increase our awareness of our values and
our instrumental knowledge, and to apply them to social decision-making.

I wish I knew how to factor out (to abstract) all those
"increasing's", but it's all about evolutionary growth and the Red
Queen would agree that standing still is never an option.


> Like I say, the moment a word or phrase starts posing communications
> difficulties---or even appears at all suspect---ditch it in favor
> of (if need be) longer or more circuitous descriptions. The only
> point is to get the ideas across.

Yes, my objective here is to get a certain idea across, and while it
appears we may be making some progress, we're certainly not there yet.
I do intend to write a more thorough exposition of this "Arrow of
Morality" thinking, and I value your interaction as contributing to
making my message clearer.


> > There is no intrinsic good because good is inherently subjective.
> > What appears good within any given context can always be shown to
> > be not good from a different context.
>
> That does not sound correct to me! You cannot necessarily *show*
> anything to anyone. The other entity is, in the final analysis,
> a physical device. It may simply not be wired to incorporate into
> its concepts whatever it is that you wish to *show*. Try, for
> example, showing to Hitler that Jews are as acceptable as other
> people.

Strangely, you seem to be supporting my point while claiming to
argue against it.  We now appear to agree that there can be no
absolute agreement on what is good, because each agent functions as a
physical device with differing inputs and differing transfer
functions.


> > I think it's important that futurists get this crucial point,
> > because we are poised for dramatic expansion of the context
> > of our lives and we need to understand what we mean by good
> > so we can make more effective decisions.
>
> Do we really have to use that concept?  Why is it essential
> (in the "required" meaning of the word)?  Ultimately, you can
> always say the much more accurate "X approves of Y", or "humans
> generally favor Y".  Usage of terms like "good" indicates---
> sorry---an adherence to Aristotelian definitions.

Right!  When we understand what is really meant by "good" we will more
clearly and effectively come to cooperate on actions which appear to
be *better* at promoting our shared values, rather than competing over
conflicting beliefs about what is "good."


> > > > The greatest assurance of good in human culture is the fact that
> > > > we share a common evolutionary heritage... and thus we hold deeply
> > > > and widely shared values.
> > >
> > > Yes, that's true, we do. But many other animals are solitary
> > > by nature.
> >
> > Not sure what point you're making here.
>
> Just want to make sure that you're restricting your descriptions
> to human evolved entities.

I'm not restricting this thinking to just humans. But I recognize that
I tend to explain things in an abstract general sense while you may be
focused on the specific fact that there is currently no example of a
non-human moral agent.  Just as you pointed out that "in the final
analysis, the other entity is a physical device", there is no reason
to accord privileged status to humans over other entities.

As you know, on the various transhumanist lists there are perennial
discussions of the moral rights of AIs or dolphins or apes, etc. The
concept of "rights" is just as problematic as the concept of "good",
and for similar reasons. Some insist that inalienable rights exist,
rather than being the result of a "social contract" of sorts. Some
insist that intrinsic good exists (as we've already discussed).  Some
adhere to moral relativism and seem to ignore our understanding that
some ideas really do work better than others.  These positions often
seem driven more by a strong feeling of "unfairness" in the world
(another similarly problematic concept) than by an understanding of
social and physical dynamics.

In my metaethical thinking, there is no reason to distinguish between
humans and other agents.  Each agent pursues its own goals, and ethics
is concerned with how we know what actions are better than other
actions.  To the extent that non-human agents can express their
values, and to the extent that those values are seen to work, then
quite naturally those agents should be accorded moral status.


> > > > Increasing awareness of these increasingly shared values with
> > > > [will] lead to increasingly effective social decision-making
> > > > that will be increasingly seen as good.
> > >
> > > I believe that this indeed is the way we've progressed the last
> > > 10,000 years or so, but I don't think that you've put your finger
> > > on the actual mechanism.
> >
> > Our preferences are the result of an evolutionary process that has
> > operated over cosmic time, almost all of that without conscious
> > awareness, let alone intention.   At a low level, we have instinctive
> > feelings of good and bad built into us by that process.
>
> Well, I'd say we have instinctive preferences. And you'll have
> to admit that in ev psych books, you'll not find many references
> to "good" and "bad". But you'll find plenty of references to
> preferences.

Yes, you're making a finer distinction with which I agree.

> > Just recently we have arrived at an even higher level of organization
> > where we can use information technology to increase our awareness of
> > our values,
>
> Yes!
>
> > apply our increasing awareness of what works, and thereby
> > implement increasingly effective decision-making, intentionally
> > promoting our values into the future, which is the very essence of
> > morality.
>
> Okay, though I'm not sure what the "essence of morality" is  :-)
> But you're dead right (pardon the expression) to speak of us
> perpetuating our values into the future.
>
> > > For, were it just a matter of "increasing awareness", then why
> > > just the last 10,000 years?  We had at least 80,000 years before
> > > that to become aware of our "shared values", but nothing really
> > > happened.
> >
> > It has always been about "what works" in the sense of natural
> > selection.  Only recently are we becoming aware of our subjective
> > values and our increasingly objective understanding of what works,
> > and thus able to play an intentional role in our further development.
>
> I really have to reject your pragmatic "what works". Just because
> something works does not mean that we as "enlightened" people are
> going to approve of it.

Lee, please note that I consistently say that moral actions are
*always* described by two factors:  (1) knowledge of what works,
applied to (2) our subjective values.

>  What if the U.S. were to conclude that its
> interests were best served by holding the rest of the world in
> nuclear terror?  That may actually "work" just fine---given history
> as a guide---but most of us would strenuously object.

That's why I keep using the phrase "over increasing scope".  For a man
alone on an island, the concept of morality doesn't even apply.  As
the intersubjective circle widens, what works over increasing
scope (of interactees, types of interactions, and duration of time) is
seen as increasingly moral.

- Jef
