[extropy-chat] Are ancestor simulations immoral? (An attempted survey)

Robert Bradbury robert.bradbury at gmail.com
Wed May 24 14:37:50 UTC 2006


On 5/23/06, Lee Corbin <lcorbin at tsoft.com>, commenting on my comments,
wrote:

> oh, what difference could it make what level it's on?  so long
> as we are talking about verisimilar emulations?


But they don't have to be verisimilar.  If you look at cultures on Earth
today they are all "realities" with *very* different levels of acceptable
suffering.  Spending a day or two watching the History Channel (documenting
societies & wars of antiquity, the middle ages & modern times) makes it
clear that the levels of "morality" acceptable to essentially
identical "modern" humans (i.e. the hardware is the same, only the software
is different) span quite a wide range.

> > or everyone in the upper levels of reality have thrown away the
> > "morality" paradigm.
>
> Sorry---you've lost me. Do you mean higher levels of embedded simulation?


I was thinking along the lines that because there is pain & suffering &
death in *this* reality, the overlords running *this* sim must be running
with "robust_morality_constraints = FALSE;"
Though if you think about it, if you grant the sims enough computing capacity
(or time), such as that which we are approaching, then one could have
nested sims, where each sub-level has increasingly less restrictive
morality constraints.  I believe this parallels something Dante has already
written about.
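
Half in jest, here is a minimal C sketch of what such a nesting might look
like; every struct, field, and number below is my own invention, purely
illustrative, not anyone's actual sim API:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical knobs for one level of simulated reality. */
struct sim_level {
    int depth;                        /* 0 = the overlords' base reality  */
    bool robust_morality_constraints; /* FALSE, apparently, in *this* sim */
    double max_acceptable_suffering;  /* grows as constraints are relaxed */
};

/* Each nested sub-sim inherits its parent's settings, but with the
 * morality constraints loosened a notch -- the Dante-like descent. */
struct sim_level spawn_sub_sim(struct sim_level parent)
{
    struct sim_level child = parent;
    child.depth += 1;
    child.robust_morality_constraints = false;
    child.max_acceptable_suffering = parent.max_acceptable_suffering * 2.0;
    return child;
}

int main(void)
{
    struct sim_level level = { 0, true, 1.0 };
    for (int i = 0; i < 9; i++) {     /* nine circles, naturally */
        level = spawn_sub_sim(level);
        printf("depth %d: constraints=%s, suffering cap=%.0f\n",
               level.depth,
               level.robust_morality_constraints ? "TRUE" : "FALSE",
               level.max_acceptable_suffering);
    }
    return 0;
}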

> > Furthermore, once we have the ability to run simulations
> > ourselves (remember *worlds* of billions of RBs, LCs, JHs,
> > etc. -- i.e. 10^10 copies worth of each isn't even
> > stretching things) there doesn't seem to be much one can
> > do to prevent it.

> We know people who intend for their AI to do exactly this.


That is my impression of that perspective, which is why it may be necessary
to take a fast rocket ship out of this solar system at the earliest possible
date.  There must be the potential for gobs of post-human stories about
interstellar/intergalactic "battles" fought between the "thou must
be good" and "thou ought to be able to do as one damn well pleases"
philosophies.  The various Stargate series seem to be evolving in this
direction, but I don't think they have fully grasped it (in part because
they seem to be constrained to dealing with largely "human" entities).

> I believe that they would prefer for you to simply be frustrated
> in your designs. Much as people who today would run cock-fights
> are discouraged by contrary laws (and we are talking about
> entities who could enforce such with much greater efficiency
> than today).


I'm with Samantha on this (if I understand her comments).  It doesn't matter
much whether the prison is physical or virtual.  If my freedom is
artificially limited by *anyone* or *anything*, I have problems with it.

Now, what would be interesting is whether the overlords would attempt to
prevent me from constructing a world in which millions of RBs might suffer,
provided those RBs were created as self-copies after I had signed a legally
binding document (e.g. a contract) committing myself and all future
self-copies to participate in an experiment that involves the potential for
experiencing pain and/or suffering and/or death.  In that case we would have
worlds populated by millions of copies of people who knowingly committed to
participating in that experiment.  So I can't cause pain & suffering to mice
or rats, but I can do it with people who agree to play the game.

Now, the difference between an enlightened person and an unenlightened
person in *this* reality is that many enlightened people already know they
signed that contract.  So the interesting thing about the FAI approach (as I
understand it) is that it potentially bars people from pursuing or
attaining enlightenment.

Robert