[extropy-chat] Are ancestor simulations immoral?
Jef Allbright
jef at jefallbright.net
Wed May 31 16:29:45 UTC 2006
On 5/31/06, Russell Wallace <russell.wallace at gmail.com> wrote:
> On 5/31/06, Lee Corbin <lcorbin at tsoft.com> wrote:
>
> > Please!! To those of you in the far future who are running this
> > simulation! JEFFREY IS OUT OF HIS MIND, AND IS NOT SPEAKING FOR
> > THE REST OF US! This is a very *fine* simulation, thank you!
> > It's just swell! We are so grateful!
>
>
> Lee has a good point here. Suppose this is a simulation. Would you rather
> the simulators had just left the machines running a Flying Windows screen
> saver? Would you rather not have lived at all? Me, I think on the whole life
> as it is has positive value, so I prefer it to not having lived. (Now I
> think there are ways it could have more strongly positive value; but the
> solution to that is to work on improving it, not to proclaim simulations
> immoral.)
>
For those who have bought into Kant's Categorical Imperative, that
argument will seem to make sense: "Without a doubt I would not
want *my* simulation shut down, given my belief that life is better
than no life at all; therefore I am morally bound to say that runtime
of any simulation of sentience is good."
Sounds attractive, and it's good as far as it goes, but it is
ultimately incoherent.
With apologies to Lee, I'll use that word again, because it is
essential: There is no intrinsic good. "Good" is always necessarily
from the point of view of some subjective agent.
While its own growth is always preferable to no growth from the point
of view of any evolved agent [ref: Meaning of Life], from another
agent's point of view, the Other may or may not be a good thing.
On the good side, the Other may provide a source of increasing
diversity, complexity and growth, increasing opportunities for
interaction with Self. On the bad side, the Other may deplete
resources and quite reasonably compete with and destroy Self.
What is "moral" is ultimately about what is considered "good". What
is considered increasingly moral is what is seen to work over
increasing scope from *OUR* increasingly broad inter-subjective point
of view.
As cold as it may seem (it actually isn't) to those brought up to
believe that all humans (and by extension, all self-aware life forms)
must be considered equally important (sacred? and "judge not, lest ye
be judged"), that belief doesn't hold in the bigger picture.
Again, in case anyone reading this thinks I'm promoting moral
relativism, nihilism, or anarchy, I am most assuredly not. The greatest
assurance of good in human culture is the fact that we share a common
evolutionary heritage (shared also to a great extent with other
members of the animal kingdom) and thus we hold deeply and widely
shared values. Increasing awareness of these increasingly shared
values will lead to increasingly effective social decision-making that
will be increasingly seen as good.
The reason this is important, and why I keep bringing it up, is that
as we are faced with increasingly diverse challenges brought by
accelerating technological change, the old premises and heuristics
that we may take as unquestioned or obvious truth are going to let us
down.
- Jef
Increasing awareness for increasing morality