[extropy-chat] Are ancestor simulations immoral? (An attempted survey)

Lee Corbin lcorbin at tsoft.com
Wed May 24 20:27:14 UTC 2006


Samantha writes

> > Oh, I can easily see why folks might say it's immoral. Consider
> > even the case where mere rumination of an extremely advanced
> > being---as you wrote before---amounts to horrible torture of
> > a human-equivalent sophont. It's *real* easy to say that such
> > thoughts on the part of such an advanced entity are not moral.
> > In fact, I concur, unless it can be shown that this vast
> > creature gets such overwhelming benefit from the activity
> > that it becomes worth it, globally.
> 
> What is really real in such a case?  Is the question meaningful?

Well, it was your idea!  :-)   But yes, why not?  Let's drop out
of virtual reality and uploaded beings for a moment. If a piece
of matter undergoes calculations that are isomorphic to me having
a headache, then that piece of matter is emulating me having a
headache, and I object to this activity!

Now it is *conceivable*, though by no means necessary, that an
advanced being might think in such magnificent detail about some
physical process (my aforesaid headache, for example) that the
thinking is indeed tantamount, as you suggested, to me having a
headache.

For suppose that we identify, within the computation occurring in
its brain, certain subsets of data manipulation that are
*isomorphic* to another calculation; then we must say that that
calculation is happening again. In other words, it does seem
possible that within a larger computation---the being ruminating
on 21st-century people and their headaches, say---we find
isomorphic equivalents of smaller ones.
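
To make this concrete, here is a toy sketch of my own (every rule,
name, and number in it is made up purely for illustration): a small
state machine, and a larger computation that, step for step,
contains an isomorphic copy of it under a simple encoding.

    def small_step(x):
        # The "headache" calculation: one step of a tiny state machine.
        return (3 * x + 1) % 17

    def encode(x):
        # The mapping under which the large trace mirrors the small one.
        return 1000 + x

    def decode(y):
        return y - 1000

    def large_step(state):
        # The ruminating brain: several variables updated at once.
        # The first slot evolves by a rule isomorphic (via encode and
        # decode) to small_step; the rest is unrelated bookkeeping.
        y, b, c = state
        y_next = encode((3 * decode(y) + 1) % 17)
        return (y_next, b + y, (c * 7) % 101)

    x = 5
    state = (encode(5), 0, 1)
    for _ in range(20):
        x = small_step(x)
        state = large_step(state)
        # Step for step, decoding the large trace recovers the small one:
        assert decode(state[0]) == x

If the small calculation is me having a headache, then on my view
the large one contains me having that headache, whether the big
brain "intended" it or not.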

> I think up a hypothetical situation in my Jupiter Brain self involving  
> several million thought up intelligent beings and various situations  
> and possibilities.  In many of them my (from some perspective)  
> dreamed up beings have within the dream dream-suffering.  Is this  
> really real and immoral to even dream in depth or not?

It is important to distinguish between *portrayals* and *emulations*.
Hollywood, for example, portrays thousands of warriors attacking a
castle, but these days there don't need to be real actors. (Yet in
one scene in Lord of the Rings, I think, hundreds of actors did have
to stand for hours in the hot sun, and so in that case the
discomfort of the characters was emulated, not merely portrayed,
unfortunately.)

The Jupiter brain you have just described is probably running mere
*portrayals* of "several million thought-up beings". The J-brain
is probably not *emulating* them.
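
The same point in toy form (again, the names and rules here are
invented solely for illustration): a portrayal produces the
*appearance* of suffering without computing any state that suffers,
while an emulation actually runs the state transitions.

    def portray_warrior(t):
        # Portrayal: a canned animation curve; nothing here is "felt".
        return {"pose": "grimace" if t % 2 else "stagger"}

    def emulate_warrior(state):
        # Emulation: the (toy) mind's dynamics really occur,
        # pain signal and all.
        pain = state["pain"] + state["wounds"] * 0.1
        return {"wounds": state["wounds"] + 1,
                "pain": pain,
                "decision": "retreat" if pain > 1.0 else "fight"}

    frame = portray_warrior(3)          # a mere picture
    mind = {"wounds": 0, "pain": 0.0}
    for _ in range(5):
        mind = emulate_warrior(mind)    # the dynamics actually run

Only the second loop could matter morally, and then only if its
"pain" variable were genuinely isomorphic to the real thing; in
this toy it obviously is not.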

> Effectively no autonomous intelligent beings could be created or  
> simulated or evolved by such an intelligence without their freedom of  
> choice in a consistent environment being able and likely to lead to  
> suffering.

That's right. Such historical or ahistorical simulations---emulations,
really---do create bona fide emotions (and pain and pleasure) in
those who are created.

> Having no intelligent beings or no autonomous ones or  
> the Ultimate Nanny State to make sure they never for one moment  
> suffer seems rather stifling.

Yes; I think that the principles of Private Property and the Rule
of Law---from which all our progress and prosperity derive---should
be adhered to in the future as well. Then even if a beneficent AI
does control the solar system, it ought to allow you or me the
freedom to do with our own private property what we will.

> As long as the sim/creation has a way for the  
> beings to actually hit a Singularity and/or address and ultimately  
> eliminate a lot of their own suffering (perhaps with reincarnation- 
> like inclusion of all who died in the reality before) then I don't  
> think the creator being was immoral at all.

Yes, but that's not the hard question. What if someone just wants
to run the battle of Gettysburg a few hundred times to settle 
certain "what if" questions? And he has no intention of allowing
the emulated soldiers to advance to transcendence. Say you or I
wish to do that with our own private resources, and say that we
must (for some reason) emulate all the suffering therein in order
to gain verisimilitude. My bet is that it is still best for the
future for my AI master to let me do this; compared to the vast
number of other things that any normal person (e.g. me) will be
doing, this really won't amount to much.

An analogy is laws against mistreatment of animals. Because so
few people would actually go to the pound and get a dog for the
purpose of tormenting him, it is better that people be allowed
to do on their private property what they would like, without
the nosy government trying to keep tabs on and control what
they are permitted to do. But I can *live* with laws against
cruelty to animals---they don't make my life intolerable even
if in a small way they diminish my freedom.

> The level of control necessary to absolutely forbid the creation of  
> any intelligent being that would suffer in the created environment  
> plus the attendant stifling of abilities of other intelligence by the  
> "benevolent" AI would be intolerable.

I know that you and Russell feel this way, but the snooping by your
AI master could be well-nigh invisible. Besides, there would be so
many "approved of" things to do that it just wouldn't bother me.
But then, I'm strange: I don't really care if Bush taps my phone;
maybe he'll learn something philosophically profound. Distasteful?
Yes. But "intolerable"?  No!

What *is* intolerable is "suffering and dying countless times in a
posthuman hell".  In fact, given all the ways that the singularity
could go wrong, I'll be downright grateful for all the runtime
I get that *is* tolerable.

Lee




