[ExI] Are Dyson swarms a good idea?

Jason Resch jasonresch at gmail.com
Wed Jan 28 00:40:45 UTC 2026


On Tue, Jan 27, 2026, 3:32 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, Jan 27, 2026 at 2:01 PM Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > On Tue, Jan 27, 2026, 1:33 PM Adrian Tymes via extropy-chat <
> > extropy-chat at lists.extropy.org> wrote:
> >> Let us assume the multi-world hypothesis from quantum mechanics, for
> >> ease of framing.  Literally everything you do dooms our universe to
> >> not be any of the other universes that branch off from that decision
> >> point - and is therefore, by your logic, evil in equal measure to the
> >> nigh-infinite (or maybe literally infinite) combined potential in
> >> those other universes that ours will never experience.
> >
> > But that same multiverse theory says all possibilities are realized
> > (though with different measure). This is why we can justify putting on
> > our seatbelts, even in a multiverse.
>
> If the existence of things in other multiverses is sufficient, then
> nothing you do matters according to your definition: all outcomes will
> exist in some universes, regardless of what you do.
>

But not with the same measure (
https://en.wikipedia.org/wiki/Measure_(mathematics) ). This was the point I
was making about putting your seatbelt on. You aren't equally likely to die
or survive an accident just because both possibilities happen.

Is there food in your refrigerator right now? Are the odds 50:50, or does
it depend on you doing certain things to ensure the probability remains
high?
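
To make the measure point concrete, here is a toy calculation in Python.
The numbers are invented for illustration; the real crash statistics don't
matter for the argument:

    # Assumed branch measures for surviving a serious crash -- the
    # numbers are illustrative, not real accident statistics.
    p_survive = {"seatbelt": 0.99, "no seatbelt": 0.50}

    for action, p in p_survive.items():
        # Under many-worlds both branches occur, but with different measure.
        print(f"{action}: survive-branch measure {p:.2f}, "
              f"die-branch measure {1 - p:.2f}")

Buckling up doesn't delete the branch where you die; it shrinks its measure.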



> If not, then anything you do or don't do is infinitely evil according
> to your definition, for all the outcomes are excluded from our
> universe.
>

I am not sure whose definition you are using, but it isn't mine.

I would define evil as follows:

There are outcomes we favor (good ones). Our goal as agents in this
universe is to increase the probability of favorable outcomes. What makes
an action evil or wrong is working to decrease the probability of favorable
outcomes.

Now you do bring up an interesting consideration, which is that the
selection of one outcome is always to the exclusion of some other outcome.
This consideration suggests the most moral thing to do, when given multiple
options, is to select the action that is expected to bring about the most
favorable outcomes.
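
In other words, something like expected-utility maximization. Here is a
minimal sketch; the actions, probabilities, and utilities are all made up
for illustration:

    # Each action leads to a list of (probability, utility) outcome pairs.
    # All numbers here are invented for illustration.
    actions = {
        "action A": [(0.7, 10), (0.3, -5)],
        "action B": [(0.4, 20), (0.6, -2)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    # Select the action expected to bring about the most favorable outcomes.
    best = max(actions, key=lambda a: expected_utility(actions[a]))
    for a, o in actions.items():
        print(a, "->", expected_utility(o))   # A -> 5.5, B -> 6.8
    print("choose:", best)                    # choose: action B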



> >> If everything you could do, including nothing, is infinitely evil,
> >> then that measure of evil is rendered meaningless.  There would be
> >> nothing that is more evil or less evil: they'd all be infinitely evil.
> >
> > Only if you think a living world is no better than a dead one. But then,
> > why get up in the morning?
>
> There are infinitely many varieties of living world.  All but one are
> excluded from our universe.  Are you saying that their being in other
> universes is okay (which means it doesn't matter how our universe
> unfolds) or that it is not okay (which means excluding all those other
> universes is not okay)?
>

Neither, I think. There are conceivable universes that are hell worlds,
where everyone in them knows only suffering, and there are conceivable
universes that are paradises. Most conceivable universes are like ours,
falling somewhere in between.

I wouldn't say that it is okay that hell worlds exist, and I think given
the opportunity, we should act to decrease the probability of our world
becoming a hell world. Likewise, when given the opportunity, we should work
towards increasing the probability that our world becomes a paradise.


> This is why most people do not give things that might potentially
> exist (given certain choices) equal moral weight to things that
> already actually exist in our universe.


It's convenient for us to ignore or dismiss the concerns of future people
and generations, but I would contend that such dismissal is irrational.


> Claiming equivalency makes
> choice of action meaningless,


I don't see how that follows.

Surely you agree it would be immoral to build a device that will release a
deadly virus 200 years from now (a time when no one presently alive will be
around).

How do you justify this act being immoral when it affects only potential
(and presently non-existent) people?

> and the purpose of moral weight is to
> guide choice of action,


I agree.

> so any definition of moral weight that makes
> it useless for its purpose is rejected.
>

Well, you will be happy to know that I do not subscribe to such a theory.

Jason