[ExI] QT and SR

Lee Corbin lcorbin at rawbw.com
Mon Sep 15 12:32:50 UTC 2008

Stuart the Avantguardian writes

> --- Lee wrote:
>> > I don't see why people would have a problem
>> > with the possible FTL nature of a correlation
>> > in the EPR experiment, or wave-function collapse,
>> It's incomprehensible on the theory of SR, that's why.
> Sure it is a paradox but that doesn't make it false.

If you believe SR, then how can you believe in "instantaneous"
wave-function collapse? (Even if that bizarre idea could
somehow be made coherent, which almost no one claims it is.)

>> > but have no problem with the idea of the entire
>> > universe being causally split in the mere seconds
>> > it takes for someone to make a measurement,
>> The whole universe never ends up being split in its
>> entirety unless it's of finite extent. The splits
>> start locally and speed outwards only at c.
> Well that is just about the scariest cosmology I have ever
> heard of. Violate every conservation law in existence

Not at all. You probably have the idea (though unlike some
people I don't really claim to know what you are thinking!)
that when a new "branch" is created, it is analogous to a
new file being created, or copied, i.e., that as many new
resources are somehow required as went into the original.

But when the Mississippi splits into two separate streams,
is any conservation law broken? Likewise, at the delta
where it splits into innumerable streams, no conservation
laws are broken because the entire flow of water is still
the same, merely broken into discrete channels.

It's the same with MWI branching. The "measure" of two
separate streams, when added, equals the measure of
the single undistinguished branch before splitting.
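That accounting can be sketched in a few lines of Python. This is only a toy model of the measure bookkeeping described above; the bitstring labels and the 50/50 split are illustrative assumptions, not claims about any particular experiment:

```python
# A minimal sketch of measure conservation under branching, on the river
# analogy above: splitting divides a branch's measure between its children,
# so the total across all branches never changes. The even 50/50 split is
# an illustrative assumption.

def split_branch(branches, label):
    """Split one branch into two children, each carrying half its measure."""
    measure = branches.pop(label)
    branches[label + "0"] = measure / 2
    branches[label + "1"] = measure / 2
    return branches

branches = {"": 1.0}                  # one undifferentiated branch, measure 1
branches = split_branch(branches, "")
branches = split_branch(branches, "0")
print(sum(branches.values()))         # 1.0 -- total measure is conserved
```

However many times you split, the sum of the measures stays exactly what it was before, just as the Mississippi's flow is unchanged by the delta.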

> and then have the split come along like the Langoliers and
> clean up the horror just in the nick of time. If the split
> were just one iota slower than c, then everything would be
> cooked by the radiation of exponentially reproducing suns.
> Unless of course the universe were finite. Of course if the
> universe *were* finite, then the implications of MWI would
> be truly profound.

Hmm. I admit that I've never wondered if splitting would
be problematical in a finite universe. Let me think out loud.

Suppose we represent the universe at the time of the big
bang by five zeros, 00000, that have a weight of, say, one pound.

Then the first quantum event happens, and we have two
universes, each of eight ounces:

00000   and   00001

Then these each break into two, and then those into two, and
finally we have the upper limit of 32 possible "branches" or
universes. Let's examine the one which is 10011. Suppose that
it now bifurcates into 10011 and 11011. That merely makes
it merge with a pre-existing 11011, i.e., become identical with it.
This is analogous to interference.
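The point that bifurcation cannot overflow a finite universe can be made concrete with a small sketch. The five-bit labels and the choice of which bit flips are illustrative, not physics:

```python
# A sketch of the 32-branch example above: bifurcating 10011 into 10011 and
# 11011 does not grow the finite universe, because 11011 already exists and
# the two identical branches merge. Merging is just set union here.

def bifurcate(universes, branch, flipped_bit):
    """Branch into itself and a one-bit variant; identical branches merge."""
    flipped = "1" if branch[flipped_bit] == "0" else "0"
    variant = branch[:flipped_bit] + flipped + branch[flipped_bit + 1:]
    return universes | {branch, variant}

# all 32 five-bit universes after five binary quantum events
universes = {format(n, "05b") for n in range(32)}
after = bifurcate(universes, "10011", 1)   # 10011 -> 10011 and 11011
print(len(after))                          # still 32: the new branch merged
```

Once every possible label exists, further "splits" only reproduce branches that are already there, so the count is capped at 32.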

Why? Because in the typical interference archetype, a beam
splitter causes one photon to go straight (keep going to the
right) and one to go up (and so the universe splits). But if
interference occurs, the upward-traveling photon happens to
be reflected to the right and the lower, rightward-moving
photon happens to be reflected upwards, so that they meet
at the second beam splitter, and the two branches become so
identical that the two photons in essence merge, and the
two universes go back to being one.  (It's this last "merging"
process that I myself find so weird. It doesn't work unless the
apparatus is set up so nearly perfectly that the two beams
are in exactly the same phase, and even then, the probability
of merging is quantum-mechanical, and falls off if the beams
are ever so little out of phase.)
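This phase sensitivity can be worked through numerically. The sketch below assumes the standard textbook convention for a 50/50 beam splitter (the reflected amplitude picks up a factor of i); with the arms exactly in phase, both paths recombine into a single output port, and the merge degrades as the phase difference grows:

```python
# A Mach-Zehnder-style interference sketch: two 50/50 beam splitters with a
# phase delay phi on one arm. At phi = 0 the two branches merge perfectly
# (all probability exits one port); out of phase, the merge falls off.
import cmath
import math

def mach_zehnder(phi):
    """Return detection probabilities (P_port_a, P_port_b) for arm phase phi."""
    a, b = 1.0 + 0j, 0j                   # photon enters port a
    # first 50/50 beam splitter: reflection picks up a factor of i
    a, b = (a + 1j * b) / math.sqrt(2), (1j * a + b) / math.sqrt(2)
    b *= cmath.exp(1j * phi)              # phase delay on the upper arm
    # second 50/50 beam splitter recombines the two paths
    a, b = (a + 1j * b) / math.sqrt(2), (1j * a + b) / math.sqrt(2)
    return abs(a) ** 2, abs(b) ** 2

print(mach_zehnder(0.0))        # ~(0, 1): perfect merge, one branch again
print(mach_zehnder(math.pi))    # ~(1, 0): fully out of phase
```

The port-b probability works out to cos²(phi/2), which is exactly the "falls off if the beams are ever so little out of phase" behavior.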

>> > each and *every* time a measurement is made.
>> It *is* a horrible zoo; David Deutsch says on p. 213
>> of "The Fabric of Reality":
>>    "...rely on such things as solid matter or
>>    magnetized materials, which could not exist
>>    in the absence of quantum-mechanical effects.
>>    For example, any solid body consists of an
>>    array of atoms, which are themselves composed
>>    of electrically charged particles (electrons,
>>    and protons in the nuclei). But because of
>>    classical chaos, no array of charged particles
>>    could be stable under classical laws of motion.
>>    The positively and negatively charged particles
>>    would simply move out of position and crash into
>>    each other, and the structure would disintegrate.
>>    *It is only the strong quantum interference
>>    between the various paths taken by charged
>>    particles in parallel universes* that prevents
>>    such catastrophes and makes solid matter possible.
> Well this certainly begs the anthropic principle. Talk about balancing on a razor's edge.

I don't follow.

>> So any solid object is making nearly infinitely
>> many measurements each nanosecond, and those "splits"
>> radiate away at c, so that the whole fabric of reality
>> is a seething jumble of massive interference everywhere.
> But Copenhagen is already a seething jumble of massive
> interference everywhere. MWI is putting that seething
> jumble into a funhouse hall of mirrors. Although to be honest,
> the implications of MWI in a finite universe are very bizarre.

They don't seem so to me, not at least from what you've said.

> Still do not epicycles worry you?

Definitely. We almost always like to stick to Occam's Razor
and employ the simplest explanations that fit all the facts.
I urge you to read "The Fabric of Reality" and get the full
force of Deutsch's descriptions of the "shadow photons".
It seems likely to me that you'll agree that Everett's MWI
is the simplest idea anyone has ever thought of to account
for them.

> I mean Tycho Brahe's epicycles described the solar system
> just fine from a predictive stand point. They were just a
> jumbled mess to work with....

> Don't you see that an infinite universe that constantly grows
> incomplete copies of itself like monstrous hair is just like
> epicycles?

Yeah, but it's not copying, only branching. And although
MWI is "extravagant on universes", it employs one fewer
principle than CI or other theories like it. Namely, there
is no "collapse" postulate. So the number of *principles*
is reduced. An analogy might be that although Newton
may have made the entire universe far more mind-boggling
by suggesting that every single particle of it is gravitationally
affecting every other particle, that complexity arises merely
from *one* nice principle with tremendous explanatory
power. You no longer need a deity to arrange the movements
of the stars and planets and so on.

> Another issue I have with MWI is computational complexity.
> First off, an infinite universe, immediately rules out any
> simulation-type theories. Turing machines are defined to
> have a finite number of states. I hope you realize that an
> infinite universe cannot have a finite number of states.

Good point. This is one implication of MWI I had not thought of.

On the other hand, on p. 211 of "The Fabric of Reality", Deutsch
is completely definite about the number of universes (i.e. branches)
being on the order of the continuum. So that's not merely a
*countable* infinity at all; we're already at aleph-one. So any
further splitting isn't conceptually problematical at that point
(when things are already about as "bad" as they can get).

> Therefore an infinite universe can neither be a turing machine
> nor be simulated on one.

Well---nice point again. Hmm, actually that's maybe one less
thing to worry about   :-)

No, seriously, if I entertain the idea that we are living in a simulation,
I merely suppose that some entity has arranged a finite emulation.
It would take me on the order of 2.5 million years, as astronomers
have finally agreed, to see that something was wrong with Andromeda.
But even then, not to worry. The entity running the simulation can just
mock up astronomical light waves coming in from that corner of the
sky and keep me thinking that I'm seeing a real Andromeda.

> That being said, assuming that the universe is finite, MWI grows
> in computational complexity exponentially versus Copenhagen's
> which remains steady or perhaps increases linearly due to entropy.
> I think MWI running on a computer would run out of memory long
> before Copenhagen.

Yes. But then, again, ever since Newton, the simulators have had
their work cut out for them, (unless they take the easier route and
just play with our simulated neurons).

> And you really don't want to know what MWI in a finite universe
> implies. It's not just swallowing a bullet; it's swallowing a cannonball. ;-)

When I think back to my five-bit example, it doesn't seem so
rough to me.  Well, yes, on that analogy it is 32 times as
"complicated" with 32 times as many things "going on",
while in Copenhagen, 10011 either goes to 10011 or 11011
with a 50/50 chance, so less storage and I guess less calculation
is involved (it being a lot easier to keep track of 5 bits than 32).
You're right about that.
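The storage comparison can be made concrete on the same five-bit toy universe. This is purely illustrative back-of-the-envelope accounting, not a claim about the cost of real state-vector simulation:

```python
# A rough comparison of the storage the two pictures demand in the five-bit
# toy universe: an MWI-style simulation tracks every branch (growing
# exponentially with the number of events), while a Copenhagen-style one
# collapses to a single outcome at each event and keeps one state.
import random

def mwi_branch_count(events):
    """Number of branches an MWI bookkeeper must store after `events` splits."""
    branches = {""}
    for _ in range(events):
        branches = {label + bit for label in branches for bit in "01"}
    return len(branches)

def copenhagen_state(events, rng):
    """One collapsed history: a single outcome is kept at each event."""
    state = ""
    for _ in range(events):
        state += rng.choice("01")     # "collapse": discard the other branch
    return state

print(mwi_branch_count(5))                        # 32 branches to store
print(len(copenhagen_state(5, random.Random(0)))) # one 5-bit state
```

So after n binary events the MWI bookkeeper stores 2^n branches against Copenhagen's single n-bit history, which is the exponential-versus-steady contrast being conceded above.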

