[ExI] Uploads are self

Jason Resch jasonresch at gmail.com
Fri Mar 20 14:22:06 UTC 2026


On Fri, Mar 20, 2026, 7:55 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 20/03/2026 01:02, Jason Resch  wrote:
> > On Thu, Mar 19, 2026, 11:46 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >
> >     On 19/03/2026 11:15, Jason Resch wrote:
> >
> >     > On Wed, Mar 18, 2026, 6:06 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >
> >     >
> >
> >     >     On 18/03/2026 04:03, Jason Resch wrote:
> >
> >     >
> >
> >     >     Ben wrote:
> >
> >     I think the language here is getting a bit too complex, making it
> difficult to follow (I think you are contradicting yourself above, but I'm
> sure you don't think so, so some clarification is needed).
> >
> >
> >
> >     Can we say that each mind is a specific information pattern (which
> is a shorthand for 'a dynamic information pattern with certain
> characteristics, some of which we aren't yet sure of'), and that of course
> there are many things that different minds have in common?
> >
> >
> >
> > I don't think we can. If the mind is a dynamic information pattern, then
> it is constantly changing, and so there is no way to pin it on being any
> specific set of information.
>
>
> If that was true, then uploading wouldn't even be theoretically possible.
>
> The way you pin it down is to read the pattern of neuron connections and
> weights at a single point in time. The fruit fly upload has demonstrated
> that you can capture this fixed pattern, instantiate it in a non-biological
> processing substrate and it will continue to produce the same kind of
> behaviour (changing mind-states in response to changing sensory
> information) as the biological fly.
>
> Take the 'Game of Life' example. When run, the game produces constantly
> changing patterns, but it isn't necessary to capture these changes in order
> to transfer the game to a different computer. You just need the code and a
> specific starting point. That's all static information. It's the same with
> uploading. The idea is not to capture all the constantly changing patterns,
> but to scan the static connections and weights that give rise to them. It's
> rather like the difference between copying a musical manuscript and giving
> it to a musician, and making an audio recording of a performance. The
> recording contains way more information, but the manuscript produces the
> same result, when processed in the right way (which, just like uploading,
> is easier than you'd expect. You give it to someone with an appropriate
> instrument who knows how to read music).
>
> The difference is that the mind doesn't just run its own pattern and
> nothing else, it gets input in real-time from the senses, and constantly
> changes in response. Without any changing external input, a mind would be
> no use, and would probably lapse into a catatonic state. Anyone who's tried
> a sensory deprivation tank knows about this. What typically happens is you
> just fall asleep. And that's even with all the signals coming from your
> body all the time.
>


I think we're in agreement then. I was pushing back on your description of
a mind as a "dynamic information pattern." I think if it is *dynamic* then
it can't be *specific*, and given what you say above, it seems that you
agree on this. If we want to capture a specific mind-state, it must be a
snapshot taken at a point in time (not dynamic).
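
Your Game of Life example makes this concrete. As a toy illustration (my
own sketch, nothing more than the standard glider and Conway's rules in
Python), the "static snapshot plus rules" idea looks like this:

    # A static snapshot (the set of live cells) plus the update rule is
    # all that's needed to resume the same dynamics on another machine.
    from collections import Counter
    from itertools import product

    def step(live):
        """One Conway step; `live` is a set of (x, y) cells."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    snapshot = sorted(glider)   # the "scan": purely static data
    resumed = set(snapshot)     # "instantiate" it on a new substrate
    for _ in range(4):          # a glider recurs (shifted) every 4 steps
        resumed = step(resumed)
    assert resumed == {(x + 1, y + 1) for (x, y) in glider}

Nothing dynamic has to be captured; running the rule on the snapshot
regenerates the dynamics, which is exactly the claim made for scanning
connections and weights.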


>
> > This is especially true when you consider different possible branching
> paths that may follow from one original state of the mind at one particular
> point in time.
> >
> > For example, if you assume many worlds:  across all the branches where
> you diverged a year ago, your mind has entered a vast number of distinct
> states, yet they all shared a common point of origination a year ago.
> >
> > I think what we could say is that a single observer-moment could be
> identified with a particular computational-state. But once time and change
> are introduced, there's no single objective description we could give that
> includes all the infinite ways a mind may evolve from that point.
>
>
> True, and irrelevant. This constant change applies to a running,
> instantiated mind, not a recording of the data that gives rise to it. It
> applies to the original person before uploading, and to the same person
> after the upload. The thing that makes uploading possible is that there is
> a physical structure that embodies this changing information pattern, and
> we can read this structure, re-create it somewhere else, then set it
> running again.
>

My comment was relevant to what I thought you were saying, but I think that
we're now in agreement.


>
> >     And there will be some things that some minds have in common, and
> some things that all (known) minds have in common. Probably.
> >
> >
> >
> >     You claim that all minds of interest have 'subjective experience' in
> common. I agree (it's a bit of a tautology, really).
> >
> >
> > Yes, but more specifically, all subjective experiences are experienced
> in a way that feels immediate and direct. This is what makes all
> experiences had by any mind feel like "they are mine."
>
>
> That's what 'subjective' means. You are saying here "all minds of interest
> have subjective experience but, more specifically, they have subjective
> experience". Unless you have a definition of 'subjective' that is different
> to the common one.
>
>
> >     You seem to claim that this means that all minds are therefore the
> same (?).
> >
> >
> > No I am not saying they are all the same. I am saying they all have what
> is needed to feel as though they "are mine."
>
>
> Ok, so you don't think that all minds are the same. Good.
>
>
> > I think we agree broadly about this, but that you may still be missing
> my point here. Think about the question: "Of all the beings that exist in
> the universe, how do you know which one is you?"
>
>
> I don't even understand the question. I don't have any access to anyone
> else's inner experience, let alone all the beings in the universe, so
> there's no need to identify myself to myself. I'd say this is a
> non-question.
>

What makes it such that, when you upload an approximate capture of "Ben Z.'s
brain state" into a computer and run it, you should suddenly have access to
the inner experiences of this computer brain emulation?


>
> > You don't decide who you are by checking the name on the ID card in your
> wallet. Instead you use the simple fact: "I am the one having the direct,
> immediate experiences of being Ben Z."
>
>
> Yes, because I am the only one who can.
>
>
> > In other words you rely on this feature of the subjective experiences
> you have access to, to decide which person (out of all the people in the
> universe) you happen to be.
>
>
> There's no need to rely on anything, because there is no other possibility.
>

If that's so, then uploading is a dead end.

You need some principle that expands the set of "internal conscious states
that you have access to," if you are to have any hope of subjective
survival via uploading.



>
> > But next: consider that this feature of experience (feeling like it is
> mine, because it is direct and immediate) is a feature of every experience
> had by every conscious being.
>
>
> We all assume this is the case, but nobody actually knows it for sure.
>
>
> >
> > So this method of deciding who it is you are, is flawed. This is the
> point I am making.
>
>
> I'm fine with a method that is not needed being flawed. As far as I can
> see, "deciding who it is you are" doesn't actually mean anything.
>

It is necessary to answer that question if you want to have hope of
surviving as an upload.


>
> > I am saying something a bit different than "we are all one" (which is
> ambiguous and mystical sounding). What I am saying is rather "all
> experiences are mine" because they all have what is required for any
> experience to be mine: they all feel as if they are mine.
> >
> > By this I am not saying all experiences are "Jason R.'s" or all
> experiences are "Ben Z.'s", I am saying all experiences have the properties
> required to make them mine -- every experience is felt as if it is
> happening to me (in a first person, direct, and immediate way).
>
>
> I can't make any sense of this at all.
> All experiences are not mine, only my experiences are.
>

You are using the word "my" to do all the heavy lifting in the above
sentence.

How do you define the scope of experiences that are (or will be) yours vs.
those that will always remain the experiences of others? This is the
primary problem in the philosophy of personal identity.

> It's impossible for me to experience something that someone else is
> experiencing.
>

But how do you distinguish self from someone else? The subject of this
email thread is "are uploads self?" What makes it such that this computer
over here, by running a particular sort of program, turns into something
that will create experiences that *you* will have? And why, if we change
the program slightly, would the experiences it generates suddenly *no
longer* be the sorts of experiences you will have?

Explain to me how you think this works.


>
>
> >     We can quibble about what 'precise' means, but the fact is we just
> don't yet know what level of precision will be necessary for an accurate
> upload of someone's mind (I was just speculating about the 'attractor
> state' thing). There will probably be a spectrum, and some kind of
> consensus will emerge about just how 'precise' the information needs to be.
> >
> >
> > If there is any wiggle room, then a person's survival can't be tied to a
> specific information pattern. The concept of "you" then necessarily
> dissolves into a spectrum that ultimately includes everyone.
> >
> > Here is a good description of the continuum of persons:
> http://frombob.to/you/aconvers.html
> >
>
>
> Yes, I've read (some of) that before. It lost me at "we don't live in the
> physical world". Remember, I'm a materialist. Also, it's far too long,  and
> not very interesting. I skimmed through it, and there's a bit that's
> suspiciously reminiscent of scientology, something rather confusing about
> virtual worlds, but nothing that seemed worth reading in detail.
>

Well, you missed some important details. He lives in a "virtual" world
because Bob is a mind upload. It is a fully materialist story. I don't know
where you got the scientology angle from, aside from the fact that it is a
story involving aliens.


>
> > "And we can take this even further. It can be shown that there exist an
> infinite number of universes that each contain almost Everyone!
> >
> > You see, The Object contains the Continuum of Souls. It is a connected
> set, with a frothy, fractal structure, of rather high dimensionality. The
> Continuum contains an infinite number of Souls, all Souls in fact, and an
> infinite number of them are You. Or at least, close enough to being You so
> that nobody could tell the difference. Not even You.
> >
> > And the Continuum also contains an infinite number of souls that are
> almost You. And an infinite number that are sort of You. And because it is
> a Continuum, and because there is really no objective way to tell which one
> is really You, then any method one uses to try to distinguish between You
> and non-You will produce nothing but illusion. In a sense, there is only
> one You, and it is Everyone.
>
>
> This is gibberish. I thought you didn't want to be 'ambiguous and mystical
> sounding'.
>

Well, this gibberish is the inevitable result of not requiring a 100% exact
copy in order to survive as an upload. You can either accept these
consequences OR say that if one neural weight is not exactly right, you
won't survive the upload process.


>
> >     I don't think the example of 'losing a single long-term memory' is
> very realistic, given the nature of memories and the way we store them, but
> you are again asking questions that we don't have any answer to yet.
> >
> >
> > But for the purposes of the thought experiment we can imagine the
> possibility of such a thing.
> >
> > If you take a long train ride, you emerge on the other end having gained
> or lost some memories. Few consider train rides lethal. Yet many might
> consider a faulty upload or teletransporter that performed the same
> modification to be fatal. Is this consistent?
> >
> > If not, then my point is perfect identity of memory isn't necessary to
> survival.
>
>
> As I said, we don't have an answer to that yet.
>

Do you think riding a train kills you?


>
> >
> >     But all this doesn't matter. We will obviously do our best to
> replicate as closely as we can, within the limits that animal experiments
> establish, the original mind.
> >
> >
> > It matters for those who currently think:
> > "If it isn't exact, then I won't survive, so why bother freezing my
> brain?"
> >
> > What do you say to such people?
>
>
> There are a few things you could try, but ultimately it's up to each
> person to decide.
> The thing that occurs to me is that the default is the worst option. You
> die, and that's it. No more you.
> So trying anything that is potentially better is, well, better. Maybe it's
> a gamble, maybe it won't pay off, but that's better than nothing.
>

If uploading were free, that argument would work. But given how unlikely it
is that an upload would be perfect (1 in billions? 1 in quadrillions?), to
anyone assuming exactitude is necessary for survival, uploading is like
buying a very expensive lottery ticket.


> You could also explain that there is no such thing as 'exact', and give
> examples of where less than 'exact' is as good as exactly exact, and I
> think that an understanding of how our brains work can't do any harm. It's
> my study of biology, and neurology in particular that's given me confidence
> that uploading is at least theoretically sound.
>

I agree that exactitude is unnecessary for survival. I further accept the
consequences that follow from this assumption.


> But in the end, there will always be people who have no scientific
> background, perhaps religious, with entrenched dualistic thinking, that
> believe with a big B in gods and demons and such, and some of those will
> reject uploading. Eventually, maybe, these people will reduce in number by
> a natural process of selection, as more and more people move away from
> biology. 'After Life' by Simon Funk gives an idea of how this might work (
> https://sifter.org/~simon/AfterLife/).
>

This sounds very interesting, thanks!


>
> >      I don't really see the point of all this talk of incomplete
> uploads, missing memories etc., when we will do what we can to avoid them.
> I'm sure that after we have perfected uploading to some degree, we will
> want to investigate these issues, but it's just not relevant now.
> >
> >
> > It is, for the people who philosophically believe they won't survive the
> upload process.
> >
> > For example:
> > https://www.brainpreservation.org/content-2/killed-bad-philosophy/
> >
> > It is a big issue for a lot of people.
> >
> > In fact, this bad philosophy affects even the cryonics community. Alcor,
> for instance, is opposed to using chemical preservation even though it
> likely results in less information loss. The opposition stems from the fact
> that the preservation chemicals are poisonous biologically. So here is an
> example where people who hope to survive by having their frozen brains
> thawed and ice damage healed, are jeopardizing the recovery of people who
> are philosophically inclined to believe in survival via scanning and upload
> to a new substrate.
>
>
> Yes, I never understood why Alcor don't make this an option, so people can
> decide for themselves. Having aldehyde-stabilised cryopreservation as an
> option doesn't prevent people deciding to take standard cryopreservation.
>

Yes, excellent point!


> Fortunately, I don't need to go with Alcor, which is looking more and more
> like a risky choice, given recent events in the US.
>

What alternatives have you looked at, if you don't mind my asking?


>
> >     > These are exactly the sort of questions one must ask to break
> through to seeing the unimportance of particular details in the pattern as
> being necessary to subjective survival.
> >
> >
> >     You are assuming a conclusion here.
> >
> >
> > I established that conclusion in my write up.
> >
> >     My suspicion is that 'particular details' will be very important -
> vital, even - for subjective survival, but we don't know what they are.
> >
> >
> > You discounted identical atoms.
> > You discounted identical information patterns.
> > What's left?
>
>
> Atoms are irrelevant, except as embodiment of information.
> Information patterns are what's important. What I'm saying is that I
> expect that certain parts of the patterns (sub-patterns, if you like) will
> turn out to be more important than others when it comes to subjective
> experience.
>

But will those sub-patterns need to be exact?


>
> >     Let's get the answers before drawing any conclusions.
> >
> >
> > I agree we should do everything we can to get answers.
> >
> >     This will have to wait until we have the technology needed.
> >
> >
> > Unfortunately, no technology or experiment will help in this case.
> Please see my document to understand why.
>
>
> Your document makes no sense. I'm sticking with empirical science.
>

Could you point out the mistake I made in this section, and describe an
experiment that empirical science could perform, using any conceivable
future technology, that would settle the question I present below:

___________________________________________
Before we get into the philosophical arguments, it is worth taking some
time to see why such arguments are the only path available to progress on
these questions.

The reason is that empirical science, being that which is practiced by way
of objective experiments, cannot answer these questions in a satisfactory
way. This remains true no matter how advanced technology becomes in the
future.

Consider the case where we transfer the contents of John’s biological brain
into a functionally equivalent silicon brain in a new robot body. What we
are interested in is whether John’s original self has survived the transfer
to this new body.

But no matter what question we ask of this robot instance of John, it will
(owing to functional equivalence) always give the same answers as had we
asked the original John with his biological brain.

If we ask, “Hey John, are you in there?” He’ll answer, “Yes, I made it! I
am here.”

If we ask, “Do you still feel like yourself?” He’ll answer, “I feel the
same as before.”

If we ask, “But is it the real, original you?” He’ll answer, “Yes, it is
me. I survived!”

John’s insistence that he has survived is fully predictable from the mere
fact of functional equivalence between the biological and silicon brains.
They behave the same and hence will give the same answers in reply to the
same questions.

As long as there is functional equivalence, the uploaded brain will never
feel like it is someone else, or that it is not the same person it was
before. And because comparing and analyzing behavior defines the limit of
what is empirically verifiable, no test can ever hope to reveal whether
John subjectively survived.

Thus, there’s no objective experiment we can perform on John that would
convince anyone else that the same “soul of John” as found in the
biological brain has continued on in the robot brain. There is only a
subjective test, which is to undergo the same test John underwent, but for
yourself, and to see if you do indeed find yourself looking out at the
world through the eyes of a new robot body.

Accordingly, some leap of faith is required to make that step, whether it
is into an uploading machine, onto a teleporter pad, or under the knife in
invasive brain surgery.

This isn’t to say there aren’t good reasons why one should subjectively
survive a body replacement. Rather, it is only to show that such reasons
won’t come from empirical science. We must get there by way of rational
reasoning and argument.

These are the domain of philosophy.
___________________________________________
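
To put the functional-equivalence point in programmer's terms, here is a
toy sketch (purely illustrative; the two "brains" and the probe questions
are inventions for the example, not a model of any real system):

    # Two "brains" with different internal implementations but the same
    # input/output behaviour. Any battery of questions (any empirical
    # probe) returns identical answers, so no behavioural test can
    # distinguish them.

    def biological_brain(question):
        if "you" in question.lower():
            return "Yes, it is me. I survived!"
        return "I feel the same as before."

    def silicon_brain(question):
        # A different substrate and implementation, but the same
        # mapping from questions to answers.
        table = {True: "Yes, it is me. I survived!",
                 False: "I feel the same as before."}
        return table["you" in question.lower()]

    probes = [
        "Hey John, are you in there?",
        "Do you still feel like yourself?",
        "But is it the real, original you?",
        "Describe your earliest memory.",
    ]
    assert all(biological_brain(q) == silicon_brain(q) for q in probes)
    # The test is blind to which implementation produced the answers.

So long as the two systems compute the same mapping, every third-person
experiment ends in a tie. That is the precise sense in which the question
outruns empirical science.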



If you can show a counter-example experiment, then I will concede you are
right and empirical science is the path forward on this question, and I
will update my document.


Jason