[ExI] Uploads are self
Ben Zaiboc
benzaiboc at proton.me
Fri Mar 20 11:54:36 UTC 2026
On 20/03/2026 01:02, Jason Resch wrote:
> On Thu, Mar 19, 2026, 11:46 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> On 19/03/2026 11:15, Jason Resch wrote:
> > On Wed, Mar 18, 2026, 6:06 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> > On 18/03/2026 04:03, Jason Resch wrote:
> > Ben wrote:
> I think the language here is getting a bit too complex, making it difficult to follow (I think you are contradicting yourself above, but I'm sure you don't think so, so some clarification is needed).
>
> Can we say that each mind is a specific information pattern (which is a shorthand for 'a dynamic information pattern with certain characteristics, some of which we aren't yet sure of'), and that of course there are many things that different minds have in common?
>
> I don't think we can. If the mind is a dynamic information pattern, then it is constantly changing, and so there is no way to pin it on being any specific set of information.
If that were true, then uploading wouldn't even be theoretically possible.
The way you pin it down is to read the pattern of neuron connections and weights at a single point in time. The fruit fly upload has demonstrated that you can capture this fixed pattern and instantiate it in a non-biological processing substrate, where it continues to produce the same kind of behaviour (changing mind-states in response to changing sensory information) as the biological fly.
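To illustrate the point (a toy sketch only; the `Network`, `snapshot` and `restore` names are made up for this example and don't correspond to any real scanning interface): the scan captures just the static connections and weights, not the moment-to-moment activity, yet a copy restored from that snapshot behaves identically when given the same input.

```python
import json

class Network:
    """A toy 'brain': static weighted connections plus dynamic activity."""
    def __init__(self, weights):
        self.weights = dict(weights)   # static structure: (src, dst) -> weight
        self.activity = {}             # dynamic state: not part of the scan

    def step(self, inputs):
        # Propagate input through the weighted connections.
        out = {}
        for (src, dst), w in self.weights.items():
            out[dst] = out.get(dst, 0.0) + w * inputs.get(src, 0.0)
        self.activity = out
        return out

def snapshot(net):
    # Capture only the static pattern, not the changing activity.
    return json.dumps(sorted((s, d, w) for (s, d), w in net.weights.items()))

def restore(blob):
    return Network({(s, d): w for s, d, w in json.loads(blob)})

original = Network({("eye", "n1"): 0.5, ("n1", "wing"): 2.0})
copy = restore(snapshot(original))

stimulus = {"eye": 1.0, "n1": 0.3}
assert original.step(stimulus) == copy.step(stimulus)  # same behaviour
```

The snapshot deliberately ignores `activity`: the claim is only that the static structure suffices to regenerate the same behaviour, not that the running states themselves need to be copied.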
Take the 'Game of Life' example. When run, the game produces constantly changing patterns, but it isn't necessary to capture those changes in order to transfer the game to a different computer. You just need the code and a specific starting point. That's all static information. It's the same with uploading: the idea is not to capture all the constantly changing patterns, but to scan the static connections and weights that give rise to them. It's rather like the difference between copying a musical manuscript and giving it to a musician, and making an audio recording of a performance. The recording contains far more information, but the manuscript produces the same result when processed in the right way (which, just like uploading, is easier than you'd expect: you give it to someone with an appropriate instrument who knows how to read music).
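The Game of Life point can be made concrete (standard Conway rules; the glider seed is just an example): the 'manuscript' is the rule plus a static starting configuration, and any machine that runs it from that description reproduces exactly the same evolution.

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 neighbours,
    # or 2 neighbours and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The 'manuscript': the rule above plus a static seed (a glider).
seed = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def run(seed, steps):
    state = set(seed)
    for _ in range(steps):
        state = life_step(state)
    return state

# Two independent runs from the same static description agree exactly.
assert run(seed, 10) == run(seed, 10)
```

Nothing about the intermediate generations needs to be recorded: the seed and the rule fully determine them, which is the analogy with scanning static connections and weights rather than the running activity.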
The difference is that the mind doesn't just run its own pattern and nothing else; it gets input in real-time from the senses, and constantly changes in response. Without any changing external input, a mind would be of no use, and would probably lapse into a catatonic state. Anyone who's tried a sensory deprivation tank knows about this. What typically happens is you just fall asleep. And that's even with all the signals coming from your body all the time.
> This is especially true when you consider different possible branching paths that may follow from one original state of the mind at one particular point in time.
>
> For example, if you assume many worlds: across all the branches where you diverged a year ago, your mind has entered a vast number of distinct states, yet they all shared a common point of origination a year ago.
>
> I think what we could say is that a single observer-moment could be identified with a particular computational-state. But once time and change are introduced, there's no single objective description we could give that includes all the infinite ways a mind may evolve from that point.
True, and irrelevant. This constant change applies to a running, instantiated mind, not a recording of the data that gives rise to it. It applies to the original person before uploading, and to the same person after the upload. The thing that makes uploading possible is that there is a physical structure that embodies this changing information pattern, and we can read this structure, re-create it somewhere else, then set it running again.
> And there will be some things that some minds have in common, and some things that all (known) minds have in common. Probably.
>
> You claim that all minds of interest have 'subjective experience' in common. I agree (it's a bit of a tautology, really).
>
>
> Yes, but more specifically, all subjective experiences are experienced in a way that feels immediate and direct. This is what makes all experiences had by any mind feel like "they are mine."
That's what 'subjective' means. You are saying here "all minds of interest have subjective experience but, more specifically, they have subjective experience". Unless you have a definition of 'subjective' that is different to the common one.
> You seem to claim that this means that all minds are therefore the same (?).
>
>
> No I am not saying they are all the same. I am saying they all have what is needed to feel as though they "are mine."
Ok, so you don't think that all minds are the same. Good.
> I think we agree broadly about this, but that you may still be missing my point here. Think about the question: "Of all the beings that exist in the universe, how do you know which one is you?"
I don't even understand the question. I don't have any access to anyone else's inner experience, let alone all the beings in the universe, so there's no need to identify myself to myself. I'd say this is a non-question.
> You don't decide who you are by checking the name on the ID card in your wallet. Instead you use the simple fact: "I am the one having the direct, immediate experiences of being Ben Z."
Yes, because I am the only one who can.
> In other words you rely on this feature of the subjective experiences you have access to, to decide which person (out of all the people in the universe) you happen to be.
There's no need to rely on anything, because there is no other possibility.
> But next: consider that this feature of experience (feeling like it is mine, because it is direct and immediate) is a feature of every experience had by every conscious being.
We all assume this is the case, but nobody actually knows it for sure.
>
> So this method of deciding who it is you are, is flawed. This is the point I am making.
I'm fine with a method that is not needed being flawed. As far as I can see, "deciding who it is you are" doesn't actually mean anything.
> I am saying something a bit different than "we are all one" (which is ambiguous and mystical sounding). What I am saying is rather "all experiences are mine" because they all have what is required for any experience to be mine: they all feel as if they are mine.
>
> By this I am not saying all experiences are "Jason R.'s" or all experiences are "Ben Z.'s", I am saying all experiences have the properties required to make them mine -- every experience is felt as if it is happening to me (in a first person, direct, and immediate way).
I can't make any sense of this at all.
All experiences are not mine, only my experiences are.
It's impossible for me to experience something that someone else is experiencing.
> We can quibble about what 'precise' means, but the fact is we just don't yet know what level of precision will be necessary for an accurate upload of someone's mind (I was just speculating about the 'attractor state' thing). There will probably be a spectrum, and some kind of consensus will emerge about just how 'precise' the information needs to be.
>
>
> If there is any wiggle room, then a person's survival can't be tied to a specific information pattern. The concept of "you" then necessarily dissolves into a spectrum that ultimately includes everyone.
>
> Here is a good description of the continuum of persons: http://frombob.to/you/aconvers.html
>
Yes, I've read (some of) that before. It lost me at "we don't live in the physical world". Remember, I'm a materialist. Also, it's far too long, and not very interesting. I skimmed through it, and there's a bit that's suspiciously reminiscent of Scientology, something rather confusing about virtual worlds, but nothing that seemed worth reading in detail.
> "And we can take this even further. It can be shown that there exist an infinite number of universes that each contain almost Everyone!
>
> You see, The Object contains the Continuum of Souls. It is a connected set, with a frothy, fractal structure, of rather high dimensionality. The Continuum contains an infinite number of Souls, all Souls in fact, and an infinite number of them are You. Or at least, close enough to being You so that nobody could tell the difference. Not even You.
>
> And the Continuum also contains an infinite number of souls that are almost You. And an infinite number that are sort of You. And because it is a Continuum, and because there is really no objective way to tell which one is really You, then any method one uses to try to distinguish between You and non-You will produce nothing but illusion. In a sense, there is only one You, and it is Everyone.
This is gibberish. I thought you didn't want to be 'ambiguous and mystical sounding'.
> I don't think the example of 'losing a single long-term memory' is very realistic, given the nature of memories and the way we store them, but you are again asking questions that we don't have any answer to yet.
>
>
> But for the purposes of the thought experiment we can imagine the possibility of such a thing.
>
> If you take a long train ride, you emerge on the other end having gained or lost some memories. Few consider train rides lethal. Yet many might consider a faulty upload or teletransporter that performed the same modification to be fatal. Is this consistent?
>
> If not, then my point is perfect identity of memory isn't necessary to survival.
As I said, we don't have an answer to that yet.
>
> But all this doesn't matter. We will obviously do our best to replicate as closely as we can, within the limits that animal experiments establish, the original mind.
>
>
> It matters for those who currently think:
> "If it isn't exact, then I won't survive, so why bother freezing my brain?"
>
> What do you say to such people?
There are a few things you could try, but ultimately it's up to each person to decide.
The thing that occurs to me is that the default is the worst option. You die, and that's it. No more you.
So trying anything that is potentially better is, well, better. Maybe it's a gamble, maybe it won't pay off, but that's better than nothing.
You could also explain that there is no such thing as 'exact', and give examples of where less than 'exact' is as good as exactly exact, and I think that an understanding of how our brains work can't do any harm. It's my study of biology, and neurology in particular, that's given me confidence that uploading is at least theoretically sound.
But in the end, there will always be people who have no scientific background, perhaps religious, with entrenched dualistic thinking, who believe with a big B in gods and demons and such, and some of those will reject uploading. Eventually, maybe, these people will dwindle in number by a natural process of selection, as more and more people move away from biology. 'After Life' by Simon Funk gives an idea of how this might work (https://sifter.org/~simon/AfterLife/).
> I don't really see the point of all this talk of incomplete uploads, missing memories etc., when we will do what we can to avoid them. I'm sure that after we have perfected uploading to some degree, we will want to investigate these issues, but it's just not relevant now.
>
>
> It is, for the people who philosophically believe they won't survive the upload process.
>
> For example:
> https://www.brainpreservation.org/content-2/killed-bad-philosophy/
>
> It is a big issue for a lot of people.
>
> In fact, this bad philosophy affects even the cryonics community. Alcor, for instance, is opposed to using chemical preservation even though it likely results in less information loss. The opposition stems from the fact that the preservation chemicals are poisonous biologically. So here is an example where people who hope to survive by having their frozen brains thawed and ice damage healed are jeopardizing the recovery of people who are philosophically inclined to believe in survival via scanning and upload to a new substrate.
Yes, I never understood why Alcor don't make this an option, so people can decide for themselves. Having aldehyde-stabilised cryonic preservation as an option doesn't prevent people deciding to take standard cryopreservation.
Fortunately, I don't need to go with Alcor, which is looking like more and more of a risky choice, given recent events in the US.
> > These are exactly the sort of questions one must ask to break through to seeing the unimportance of particular details in the pattern as being necessary to subjective survival.
>
>
> You are assuming a conclusion here.
>
>
> I established that conclusion in my write up.
>
> My suspicion is that 'particular details' will be very important - vital, even - for subjective survival, but we don't know what they are.
>
>
> You discounted identical atoms.
> You discounted identical information patterns.
> What's left?
Atoms are irrelevant, except as embodiment of information.
Information patterns are what's important. What I'm saying is that I expect that certain parts of the patterns (sub-patterns, if you like) will turn out to be more important than others when it comes to subjective experience.
> Let's get the answers before drawing any conclusions.
>
>
> I agree we should do everything we can to get answers.
>
> This will have to wait until we have the technology needed.
>
>
> Unfortunately, no technology or experiment will help in this case. Please see my document to understand why.
Your document makes no sense. I'm sticking with empirical science.
--
Ben