[extropy-chat] Are ancestor simulations immoral? (An attempted survey)

Lee Corbin lcorbin at tsoft.com
Wed May 24 20:48:23 UTC 2006


Robert writes

> > > or everyone in the upper levels of reality has thrown away the
> > > "morality" paradigm.

> > Sorry---you've lost me. Do you mean higher levels of embedded simulation?

> I was thinking along the lines that because there is pain & suffering & death
> in *this* reality, the overlords running *this* sim must be running with
> robust_morality_constraints = FALSE;

Yes, that they must.

> Though if you think about it, once you grant the sims enough computing
> capacity (or time) -- such as that which we are approaching -- one could have
> nested sims, where each sub-level could have increasingly less restrictive
> morality constraints.  I believe this parallels something Dante has already written about. 

Yes, but I don't really understand why you suppose that the deeper
the sub-level, the less restrictive the morality constraints. I
would expect exactly the opposite. A Friendly AI, as I understand
it, would forbid me, the sims I create, the sims they create, and
so on to all levels, from being cruel.  On the other hand, I may
*add* constraints, so that (say, because my own peculiar morality
cannot bear people experiencing disappointment) my sims and all
deeper ones must never even be frustrated in their projects.
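
To make my point concrete, here is a minimal sketch of the rule I have
in mind (purely illustrative; the class and constraint names are all
invented, and nobody is proposing to actually build this): each nested
level inherits every constraint from the level above it and may only
add its own, never remove one.

class SimLevel:
    def __init__(self, name, parent=None, extra_constraints=()):
        self.name = name
        self.parent = parent
        # A level inherits every constraint from the level above it...
        inherited = set(parent.constraints) if parent else set()
        # ...and may only *add* its own; there is no way to remove one.
        self.constraints = frozenset(inherited | set(extra_constraints))

    def spawn(self, name, extra_constraints=()):
        return SimLevel(name, parent=self, extra_constraints=extra_constraints)

# The top-level FAI forbids cruelty everywhere below it.
top = SimLevel("FAI", extra_constraints={"no cruelty"})
# My level adds its own peculiar rule on top of the inherited one.
lee = top.spawn("Lee's sim", extra_constraints={"no frustrated projects"})
deeper = lee.spawn("Lee's sim's sim")

assert "no cruelty" in deeper.constraints               # inherited from the FAI
assert "no frustrated projects" in deeper.constraints   # inherited from me

Under that rule a deeper sub-level can only ever be *more* restricted
than its parent, which is the opposite of the "increasingly less
restrictive" nesting you describe.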

> > > Furthermore, once we have the ability to run simulations
> > > ourselves (remember *worlds* of billions of RBs, LCs, JHs, 
> > > etc. -- i.e. 10^10 copies worth of each isn't even
> > > stretching things) there doesn't seem to be much one can
> > > do to prevent it.

> > We know people who intend for their AI to do exactly this.

> That is my impression of that perspective.  Which is why it may
> be necessary to take a fast rocket ship out of this solar system
> at the earliest possible date.

Yes, but don't forget to stay behind too. People are always forgetting
that in the future we'll most likely be able to both do and not do
some action: one copy does it while another doesn't. The copy of you
that stays behind will still have a blast talking to his old friends
(e.g. Lee Corbin) and whatnot. Actually, I am a bad example: you'll
surely take a copy of me along on your wild ride, won't you? Please?

> There must be the potential for gobs of post-human stories about
> interstellar/intergalactic "battles" being fought between the
> "though must be good" and "though ought to be able to do as one
> damn well pleases" philosophies.

Yes, I just re-read an old one by Keith Laumer.

> > I believe that they would prefer for you to simply be frustrated
> > in your designs. Much as people who today would run cock-fights 
> > are discouraged by contrary laws (and we are talking about
> > entities who could enforce such with much greater efficiency
> > than today).

> I'm with Samantha on this (if I understand her comments).  It
> doesn't matter much whether the prison is physical or virtual.
> If my freedom is artificially limited by *anyone* or *anything*
> I have problems with it. 

Oh, you guys!  Relax.  Things could be so much worse!  As I have
loudly asserted for years, "ANYONE OUT THERE WHO WANTS TO RUN
A COPY OF LEE CORBIN HAS MY PERMISSION!  I WON'T BE PICKY ABOUT
THE CIRCUMSTANCES YOU PROVIDE, JUST SO LONG AS IT'S A LIFE WORTH
LIVING!"

You see, with my strategy, a lot of SysOPs and AIs may take me
up on that, and I'll get plenty of runtime.  :-)

> Now what would be interesting is whether or not the overlords
> would attempt to prevent me from constructing a world where
> millions of RBs might suffer if they were created using self-
> copies after I have signed a legally binding document (e.g. a
> contract) stating that I and all future self-copies have
> agreed to participate in an experiment which involves the
> potential for experiencing pain and/or suffering and/or death.

Yeah, the busybodies---your overlords---would probably object. Sigh.

> In that case we have worlds populated by millions of copies
> of people who knowingly committed to participating in that
> experiment.  So I can't cause pain & suffering to mice or
> rats but I can do it with people who agree to play the game. 

Well, hmm, perhaps you're making a persuasive case. But most
people object to slavery, say, even if it's all done legally
and I were to lose my liberty in a poker game.

I'm with you: if I don't own my life, who does?  The damn state?
I should be able to gamble it away if I wish.

> Now, the difference between an enlightened person and an
> unenlightened person in *this* reality is that many enlightened
> people already know they signed that contract.  So the
> interesting thing about the FAI approach (as I understand
> it) is that it potentially bars people from engaging in the
> pursuit of or attaining enlightenment. 

It bars them from pursuing *some* kinds of enlightenment, e.g.,
running historical simulations where actual creatures are 
emulated to the point that they experience negative emotions.

Now I'll actually be *very* happy if any AI whatsoever that takes
over the solar system leaves most of us with lives worth living
at all.  But even better for me, I might get to send
copies along with all the live wires who are heading for parts
unknown just ahead of the FAI.  If y'all will be so kind as to
let me tag along  :-)

Lee



