[extropy-chat] Are ancestor simulations immoral?

Jef Allbright jef at jefallbright.net
Fri Jun 2 15:46:48 UTC 2006

On 6/1/06, Lee Corbin <lcorbin at tsoft.com> wrote:
> Jef Albright (not Jeffrey H.) writes
> > For those who have bought into Kant's Categorical Imperative, then
> > that argument will seem to make sense.  "Without a doubt I would not
> > want  *my* simulation shut down, given my belief that life is better
> > than no life at all, therefore I am morally bound to say that runtime
> > of any simulation of sentience is good."
> >
> > Sounds attractive, and it's good as far as it goes, but it is
> > ultimately incoherent.
> >
> > With apologies to Lee, I'll use that word again, because it is
> > essential:  There is no intrinsic good.  "Good" is always necessarily
> > from the point of view of some subjective agent.
> #!?%#&*$!  No word is *essential*.  To believe that some particular
> word *is* essential, I fear, uncovers a bug in your thinking. As I've
> said before, all of us here have perfectly good vocabularies,

Lee, I was using "essential" in its primary sense of capturing the
essence, rather than its secondary meaning of indispensable.  I
suppose I could have made this clearer given your demonstrated

> Worse, Jef persists not only in using a phrase I don't understand
> at all,  "intrinsic good", but denies that it even exists!  Now,

Lee, having never met you in person, I have to wonder whether your
histrionics are real or just part of your email game.  It seems pretty
silly that you would argue that I can't refer to a concept that
represents something that doesn't actually exist.  On this list we use
precise language and argue about perceptual, cognitive and cultural
illusions and fallacies quite often.

I would venture to assert that the concept of "intrinsic good" is well
known to anyone who has thought deeply about ethics. An intrinsic good
is something which is considered good in and of itself.  What's
interesting to me is that there are so many mutually exclusive beliefs
as to what goods are actually worthy of that description.  Hedonists
claim that pleasure is the only intrinsic good.  Kantians claim that
good will is the only intrinsic good.  Aristotle claimed that
truth is the only intrinsic good.  Some people claim that love is the
only intrinsic good in the universe.

Jef claims that, just as each of the world's religions claims to
possess or have access to the only true way, seekers or believers in
intrinsic good are asking the wrong question and therefore getting
the wrong answer.

There is no intrinsic good because good is inherently subjective.
What appears good within any given context can always be shown to be
not good from a different context.  I think it's important that
futurists get this crucial point, because we are poised for dramatic
expansion of the context of our lives and we need to understand what
we mean by good so we can make more effective decisions.

> I guess that's easier on me than someone claiming "XYZWX" exists,
> which could have serious implications, but to claim that "PMXTW"
> does *not* exist leaves a scary hole, maybe, somewhere in my
> ontology!

Both silly and false.  Believing in something that does not exist can
be just as detrimental as not believing in something that does exist.
Either way it's an inaccuracy in your model, leading to less effective
decision-making.
> Well, how about this:  since it doesn't exist, would you mind
> dropping it from your discourse?

Again, I really can't tell when you're being silly and when you're
being serious Lee.

> > While its own growth is always preferable to no growth from the point
> > of view of any evolved agent [ref:  Meaning of Life], from another
> > agent's point of view, the Other may or may not be a good thing.
> > On the good side, the Other may provide a source of increasing
> > diversity, complexity and growth, increasing opportunities for
> > interaction with Self.  On the bad side, the Other may deplete
> > resources and quite reasonably compete with and destroy Self.
> I should have another cow about "good" here, since I have no clear
> idea of what you mean by it!  But I will *try* to get in the spirit
> of the thing, whatever it is.

Since there is no "good in itself", we can only talk about what is
considered good in some pragmatic sense.

> This last statement seems to boil down to Darwinian evolution.

Yes, there is a strong evolutionary aspect to what we consider good,
both because such goods tend to be those which have survived a
competitive environment, and because our own values are shaped by the
evolutionary process.

> > The greatest assurance of good in human culture is the fact that
> > we share a common evolutionary heritage... and thus we hold deeply
> > and widely shared values.
> Yes, that's true, we do. But many other animals are solitary
> by nature.

Not sure what point you're making here.

> > Increasing awareness of these increasingly shared values will
> > [will] lead to increasingly effective social decision-making
> > that will be increasingly seen as good.
> I believe that this indeed is the way we've progressed the last
> 10,000 years or so, but I don't think that you've put your finger
> on the actual mechanism.

Our preferences are the result of an evolutionary process that has
operated over cosmic time, almost all of that without conscious
awareness, let alone intention.   At a low level, we have instinctive
feelings of good and bad built into us by that process.  At a higher
level, we have culture (including religion) strongly influencing our
decision-making about what is good and what is bad (because these
cultural traits were beneficial adaptations).

Just recently we have arrived at an even higher level of organization
where we can use information technology to increase our awareness of
our values, apply our increasing awareness of what works, and thereby
implement increasingly effective decision-making, intentionally
promoting our values into the future, which is the very essence of
increasing morality.
> For, were it just a matter of "increasing awareness", then why
> just the last 10,000 years?  We had at least 80,000 years before
> that to become aware of our "shared values", but nothing really
> happened.

It has always been about "what works" in the sense of natural
selection.  Only recently are we becoming aware of our subjective
values and our increasingly objective understanding of what works, and
thus able to play an intentional role in our further development.

> I think that "increasing awareness" of our shared values is a
> luxury that we can now afford, due to increased mastery of
> nature (technological advances). At this time it's easier to
> sit back in a comfortable job and make money than it is to go
> seize it from the neighboring tribe; but this has been really
> true only the last couple of hundred years!

Yes, except I would say that our increasing awareness of what works is
making our actions more effective rather than "easier", given that in
the bigger picture we continue to interact within a competitive
co-evolving environment.

> > The reason this is important and why I keep bringing it up, is that
> > as we are faced with increasingly diverse challenges brought by
> > accelerating technological change, the old premises and heuristics
> > that we may take as unquestioned or obvious truth are going to let us
> > down.
> >
> > - Jef
> > Increasing awareness for increasing morality
> Yes, the old premises and heuristics may indeed let us down.
> We have to stay on our toes; all conjectures are tentative.
> Lee
