[extropy-chat] Are ancestor simulations immoral? (An attempted survey)
Samantha Atkins
sjatkins at mac.com
Wed May 24 08:51:35 UTC 2006
On May 23, 2006, at 9:08 PM, Lee Corbin wrote:
> Samantha writes
>
>> Also, I have a suspicion that any sufficiently powerful intelligence
>> creates de facto simulations just in the act of mulling over
>> scenarios due to the greater level of detail and scope involved. If
>> so then simulations naturally arise from the very thoughts and
>> daydreams of sufficiently advanced minds.
>
> A good point that should be kept in mind.
>
>> I doubt this world or our current small selves are sufficiently
>> interesting for such astronomical numbers of simulated copies to be
>> at all likely.
>
> I wish that those who believe that there is a good chance that *we*
> are living in a simulation would remember this!
>
>> [Robert wrote]
>>> to run billions and billions of RBs (some of whom are bound to
>>> suffer horribly and die) then some holier than I people are going
>>> to be bound and determined to prevent me from doing that. The only
>>> way that seems possible is to either (a) imprison me on the Earth
>>> (or even in a very high security prison) or (b) take the nanobots
>>> to my mind and force the removal of such "immoral" thoughts (i.e.
>>> state or equivalent "mind control").
>>
>> Who says it is immoral? Silly maybe, but immoral?
>
> Oh, I can easily see why folks might say it's immoral. Consider
> even the case where mere rumination of an extremely advanced
> being---as you wrote before---amounts to horrible torture of
> a human-equivalent sophont. It's *real* easy to say that such
> thoughts on the part of such an advanced entity are not moral.
> In fact, I concur, unless it can be shown that this vast
> creature gets such overwhelming benefit from the activity
> that it becomes worth it, globally.

What is really real in such a case? Is the question even meaningful?
Suppose my Jupiter Brain self thinks up a hypothetical situation
involving several million imagined intelligent beings in various
situations and possibilities. In many of these my (from some
perspective) dreamed-up beings suffer within the dream. Is that
suffering really real, and is it immoral even to dream in such depth?

Effectively, no autonomous intelligent beings could be created,
simulated, or evolved by such an intelligence without their freedom
of choice in a consistent environment being able, and indeed likely,
to lead to suffering. The alternatives, having no intelligent beings
at all, having no autonomous ones, or imposing an Ultimate Nanny
State to make sure they never suffer for one moment, all seem rather
stifling.

Perhaps we focus too much on the suffering, and too readily assume
that the mere fact of suffering plus an intelligent simulator/creator
means that creator is immoral. As long as the sim/creation gives its
beings a way to actually reach a Singularity and/or to address and
ultimately eliminate much of their own suffering (perhaps with
reincarnation-like inclusion of all who died in that reality before),
then I don't think the creator being was immoral at all.
>
>> I would rather risk suffering and dying, even countless times, in
>> some lunatic posthuman hell than allow some sophonts such unlimited
>> power to forbid absolutely anything and everything that might be used
>> somehow, sometime in a way they don't like. That would be worse
>> than hell for me.
>
> Sure that you are not exaggerating? Provided that I fell into
> the clutches of a rather benevolent AI, life could still be
> very full of almost everything I value. True, I couldn't engage
> in "too accurate" historical simulations and a few other banned
> activities, but I'd greatly prefer it to your "suffering and
> dying countless times in a posthuman hell".
>
The level of control necessary to absolutely forbid the creation of
any intelligent being that might suffer in the created environment,
plus the attendant stifling of the abilities of other intelligences
by the "benevolent" AI, would be intolerable.
- samantha