<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Fri, Mar 20, 2026, 7:55 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 20/03/2026 01:02, Jason Resch wrote:<br>
> On Thu, Mar 19, 2026, 11:46 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> On 19/03/2026 11:15, Jason Resch wrote:<br>
><br>
> > On Wed, Mar 18, 2026, 6:06 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> ><br>
><br>
> > On 18/03/2026 04:03, Jason Resch wrote:<br>
><br>
> ><br>
><br>
> > Ben wrote:<br>
><br>
> I think the language here is getting a bit too complex, making it difficult to follow (I think you are contradicting yourself above, but I'm sure you don't think so, so some clarification is needed).<br>
><br>
><br>
><br>
> Can we say that each mind is a specific information pattern (which is a shorthand for 'a dynamic information pattern with certain characteristics, some of which we aren't yet sure of'), and that of course there are many things that different minds have in common?<br>
><br>
><br>
><br>
> I don't think we can. If the mind is a dynamic information pattern, then it is constantly changing, and so there is no way to pin it on being any specific set of information. <br>
<br>
<br>
If that was true, then uploading wouldn't even be theoretically possible.<br>
<br>
The way you pin it down is to read the pattern of neuron connections and weights at a single point in time. The fruit fly upload has demonstrated that you can capture this fixed pattern, instantiate it in a non-biological processing substrate and it will continue to produce the same kind of behaviour (changing mind-states in response to changing sensory information) as the biological fly.<br>
<br>
Take the 'Game of Life' example. When run, the game produces constantly changing patterns, but it isn't necessary to capture these changes in order to transfer the game to a different computer. You just need the code and a specific starting point. That's all static information. It's the same with uploading. The idea is not to capture all the constantly changing patterns, but to scan the static connections and weights that give rise to them. It's rather like the difference between copying a musical manuscript and giving it to a musician, and making an audio recording of a performance. The recording contains way more information, but the manuscript produces the same result, when processed in the right way (which, just like uploading, is easier than you'd expect. You give it to someone with an appropriate instrument who knows how to read music).<br>
<br>
The difference is that the mind doesn't just run its own pattern and nothing else; it gets input in real-time from the senses, and constantly changes in response. Without any changing external input, a mind would be no use, and would probably lapse into a catatonic state. Anyone who's tried a sensory deprivation tank knows about this. What typically happens is you just fall asleep. And that's even with all the signals coming from your body all the time.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I think we're in agreement then. I was pushing back on your description of a mind as a "dynamic information pattern." I think if it is *dynamic* then it can't be *specific*, and it seems, given what you say above, that you agree on this. If we are to capture a specific mind-state, it must be a snapshot taken at a point in time (not dynamic).</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> This is especially true when you consider different possible branching paths that may follow from one original state of the mind at one particular point in time.<br>
><br>
> For example, if you assume many worlds: across all the branches where you diverged a year ago, your mind has entered a vast number of distinct states, yet they all shared a common point of origination a year ago.<br>
><br>
> I think what we could say is that a single observer-moment could be identified with a particular computational-state. But once time and change are introduced, there's no single objective description we could give that includes all the infinite ways a mind may evolve from that point.<br>
<br>
<br>
True, and irrelevant. This constant change applies to a running, instantiated mind, not a recording of the data that gives rise to it. It applies to the original person before uploading, and to the same person after the upload. The thing that makes uploading possible is that there is a physical structure that embodies this changing information pattern, and we can read this structure, re-create it somewhere else, then set it running again.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">My comment was relevant to what I thought you were saying, but I think that we're now in agreement.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> And there will be some things that some minds have in common, and some things that all (known) minds have in common. Probably.<br>
><br>
><br>
><br>
> You claim that all minds of interest have 'subjective experience' in common. I agree (it's a bit of a tautology, really).<br>
><br>
><br>
> Yes, but more specifically, all subjective experiences are experienced in a way that feels immediate and direct. This is what makes all experiences had by any mind feel like "they are mine."<br>
<br>
<br>
That's what 'subjective' means. You are saying here "all minds of interest have subjective experience but, more specifically, they have subjective experience". Unless you have a definition of 'subjective' that is different to the common one.<br>
<br>
<br>
> You seem to claim that this means that all minds are therefore the same (?). <br>
><br>
><br>
> No I am not saying they are all the same. I am saying they all have what is needed to feel as though they "are mine."<br>
<br>
<br>
Ok, so you don't think that all minds are the same. Good.<br>
<br>
<br>
> I think we agree broadly about this, but that you may still be missing my point here. Think about the question: "Of all the beings that exist in the universe, how do you know which one is you?"<br>
<br>
<br>
I don't even understand the question. I don't have any access to anyone else's inner experience, let alone all the beings in the universe, so there's no need to identify myself to myself. I'd say this is a non-question.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">What makes it such that, when you upload an approximate capture of "Ben Z.'s brain state" into a computer and run it, you should suddenly have access to the inner experiences of this computer brain emulation?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> You don't decide who you are by checking the name on the ID card in your wallet. Instead you use the simple fact: "I am the one having the direct, immediate experiences of being Ben Z."<br>
<br>
<br>
Yes, because I am the only one who can.<br>
<br>
<br>
> In other words you rely on this feature of the subjective experiences you have access to, to decide which person (out of all the people in the universe) you happen to be.<br>
<br>
<br>
There's no need to rely on anything, because there is no other possibility.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">If that's so, then uploading is a dead end.</div><div dir="auto"><br></div><div dir="auto">You need some principle that expands the set of "internal conscious states that you have access to," if you are to have any hope of subjective survival via uploading.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> But next: consider that this feature of experience (feeling like it is mine, because it is direct and immediate) is a feature of every experience had by every conscious being.<br>
<br>
<br>
We all assume this is the case, but nobody actually knows it for sure.<br>
<br>
<br>
><br>
> So this method of deciding who it is you are, is flawed. This is the point I am making.<br>
<br>
<br>
I'm fine with a method that is not needed being flawed. As far as I can see, "deciding who it is you are" doesn't actually mean anything.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">It is necessary to answer that question if you want to have hope of surviving as an upload.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> I am saying something a bit different than "we are all one" (which is ambiguous and mystical sounding). What I am saying is rather "all experiences are mine" because they all have what is required for any experience to be mine: they all feel as if they are mine.<br>
><br>
> By this I am not saying all experiences are "Jason R.'s" or all experiences are "Ben Z.'s", I am saying all experiences have the properties required to make them mine -- every experience is felt as if it is happening to me (in a first person, direct, and immediate way).<br>
<br>
<br>
I can't make any sense of this at all.<br>
All experiences are not mine, only my experiences are.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">You are using the word "my" to do all the heavy lifting in the above sentence.</div><div dir="auto"><br></div><div dir="auto">How do you define the scope of experiences that are (or will be) yours vs. those that will always remain the experiences of others? This is the primary problem in the philosophy of personal identity.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
It's impossible for me to experience something that someone else is experiencing.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">But how do you distinguish self from someone else? The subject of this email thread is "are uploads self?" What makes it such that this computer over here, by running a particular sort of program, turns into something that will create experiences that *you* will have? But then, if we change the program slightly, suddenly the experiences it generates are *no longer* the sorts of experiences you will have?</div><div dir="auto"><br></div><div dir="auto">Explain to me how you think this works.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
<br>
> We can quibble about what 'precise' means, but the fact is we just don't yet know what level of precision will be necessary for an accurate upload of someone's mind (I was just speculating about the 'attractor state' thing). There will probably be a spectrum, and some kind of consensus will emerge about just how 'precise' the information needs to be.<br>
><br>
><br>
> If there is any wiggle room, then a person's survival can't be tied to a specific information pattern. The concept of "you" then necessarily dissolves into a spectrum that ultimately includes everyone.<br>
><br>
> Here is a good description of the continuum of persons: <a href="http://frombob.to/you/aconvers.html" rel="noreferrer noreferrer" target="_blank">http://frombob.to/you/aconvers.html</a><br>
><br>
<br>
<br>
Yes, I've read (some of) that before. It lost me at "we don't live in the physical world". Remember, I'm a materialist. Also, it's far too long, and not very interesting. I skimmed through it, and there's a bit that's suspiciously reminiscent of scientology, something rather confusing about virtual worlds, but nothing that seemed worth reading in detail.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Well, you missed some important details. Bob lives in a "virtual" world; that is, Bob is a mind upload. It is a fully materialist story. I don't know where you got the scientology angle from, aside from the fact that it is a story involving aliens.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> "And we can take this even further. It can be shown that there exist an infinite number of universes that each contain almost Everyone!<br>
><br>
> You see, The Object contains the Continuum of Souls. It is a connected set, with a frothy, fractal structure, of rather high dimensionality. The Continuum contains an infinite number of Souls, all Souls in fact, and an infinite number of them are You. Or at least, close enough to being You so that nobody could tell the difference. Not even You.<br>
><br>
> And the Continuum also contains an infinite number of souls that are almost You. And an infinite number that are sort of You. And because it is a Continuum, and because there is really no objective way to tell which one is really You, then any method one uses to try to distinguish between You and non-You will produce nothing but illusion. In a sense, there is only one You, and it is Everyone.<br>
<br>
<br>
This is gibberish. I thought you didn't want to be 'ambiguous and mystical sounding'.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Well, this gibberish is the inevitable result that follows from not requiring 100% exact instances to survive as an upload. You can either accept these consequences OR say that if one neural weight is not exactly right, you won't survive the upload process.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> I don't think the example of 'losing a single long-term memory' is very realistic, given the nature of memories and the way we store them, but you are again asking questions that we don't have any answer to yet.<br>
><br>
><br>
> But for the purposes of the thought experiment we can imagine the possibility of such a thing.<br>
><br>
> If you take a long train ride, you emerge on the other end having gained or lost some memories. Few consider train rides lethal. Yet many might consider a faulty upload or teletransporter that performed the same modification to be fatal. Is this consistent?<br>
><br>
> If not, then my point is perfect identity of memory isn't necessary to survival.<br>
<br>
<br>
As I said, we don't have an answer to that yet.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Do you think riding a train kills you?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
><br>
> But all this doesn't matter. We will obviously do our best to replicate as closely as we can, within the limits that animal experiments establish, the original mind.<br>
><br>
><br>
> It matters for those who currently think:<br>
> "If it isn't exact, then I won't survive, so why bother freezing my brain?"<br>
><br>
> What do you say to such people?<br>
<br>
<br>
There are a few things you could try, but ultimately it's up to each person to decide.<br>
The thing that occurs to me is that the default is the worst option. You die, and that's it. No more you.<br>
So trying anything that is potentially better is, well, better. Maybe it's a gamble, maybe it won't pay off, but that's better than nothing.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">If uploading were free, that argument would work. But given how unlikely it is that an upload would be perfect (1 in billions? 1 in quadrillions?), to anyone assuming exactitude is necessary for survival, uploading amounts to buying a very expensive lottery ticket.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
You could also explain that there is no such thing as 'exact', and give examples of where less than 'exact' is as good as exactly exact, and I think that an understanding of how our brains work can't do any harm. It's my study of biology, and neurology in particular, that's given me confidence that uploading is at least theoretically sound.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I agree that exactitude is unnecessary for survival. I further accept the consequences that follow from this assumption.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
But in the end, there will always be people who have no scientific background, perhaps religious, with entrenched dualistic thinking, who believe, with a big B, in gods and demons and such, and some of those will reject uploading. Eventually, maybe, these people will reduce in number by a natural process of selection, as more and more people move away from biology. 'After Life' by Simon Funk gives an idea of how this might work (<a href="https://sifter.org/~simon/AfterLife/" rel="noreferrer noreferrer" target="_blank">https://sifter.org/~simon/AfterLife/</a>).<br>
<br>
<br>
> I don't really see the point of all this talk of incomplete uploads, missing memories etc., when we will do what we can to avoid them. I'm sure that after we have perfected uploading to some degree, we will want to investigate these issues, but it's just not relevant now. <br>
><br>
><br>
> It is, for the people who philosophically believe they won't survive the upload process.<br>
><br>
> For example:<br>
> <a href="https://www.brainpreservation.org/content-2/killed-bad-philosophy/" rel="noreferrer noreferrer" target="_blank">https://www.brainpreservation.org/content-2/killed-bad-philosophy/</a><br>
><br>
> It is a big issue for a lot of people.<br>
><br>
> In fact, this bad philosophy affects even the cryonics community. Alcor, for instance, is opposed to using chemical preservation even though it likely results in less information loss. The opposition stems from the fact that the preservation chemicals are biologically poisonous. So here is an example where people who hope to survive by having their frozen brains thawed and ice damage healed are jeopardizing the recovery of people who are philosophically inclined to believe in survival via scanning and upload to a new substrate.<br>
<br>
<br>
Yes, I never understood why Alcor don't make this an option, so people can decide themselves. Having Aldehyde-stabilised cryonic preservation as an option doesn't prevent people deciding to take standard cryopreservation.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Yes, excellent point!</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Fortunately, I don't need to go with Alcor, which is looking more and more like a risky choice, given recent events in the US.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">What alternatives have you looked at, if you don't mind my asking?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> > These are exactly the sort of questions one must ask to break through to seeing the unimportance of particular details in the pattern as being necessary to subjective survival.<br>
><br>
><br>
> You are assuming a conclusion here. <br>
><br>
><br>
> I established that conclusion in my write up.<br>
><br>
> My suspicion is that 'particular details' will be very important - vital, even - for subjective survival, but we don't know what they are.<br>
><br>
><br>
> You discounted identical atoms.<br>
> You discounted identical information patterns.<br>
> What's left?<br>
<br>
<br>
Atoms are irrelevant, except as embodiment of information.<br>
Information patterns are what's important. What I'm saying is that I expect that certain parts of the patterns (sub-patterns, if you like) will turn out to be more important than others when it comes to subjective experience.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">But will those sub-patterns need to be exact?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
> Let's get the answers before drawing any conclusions.<br>
><br>
><br>
> I agree we should do everything we can to get answers.<br>
><br>
> This will have to wait until we have the technology needed.<br>
><br>
><br>
> Unfortunately, no technology or experiment will help in this case. Please see my document to understand why.<br>
<br>
<br>
Your document makes no sense. I'm sticking with empirical science.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Could you point out the mistake I made in this section, and describe an experiment that empirical science could perform, using any conceivable future technology, that would settle the question I present below:</div><div dir="auto"><br></div><div dir="auto">___________________________________________</div><div dir="auto"><div dir="auto">Before we get into the philosophical arguments, it is worth taking some time to see why such arguments are the only path available to progress on these questions.</div><div dir="auto"><br></div><div dir="auto">The reason is that empirical science, being that which is practiced by way of objective experiments, cannot answer these questions in a satisfactory way. This remains true no matter how advanced technology becomes in the future.</div><div dir="auto"><br></div><div dir="auto">Consider the case where we transferred John’s biological brain into a functionally-equivalent silicon brain in a new robot body. What we are interested in is whether John’s original self has survived the transfer to this new body.</div><div dir="auto"><br></div><div dir="auto">But no matter what question we ask of this robot instance of John, it will (owing to functional equivalence) always give the same answers as had we asked the original John with his biological brain.</div><div dir="auto"><br></div><div dir="auto">If we ask, “Hey John, are you in there?” He’ll answer, “Yes, I made it! I am here.”</div><div dir="auto"><br></div><div dir="auto">If we ask, “Do you still feel like yourself?” He’ll answer, “I feel the same as before.”</div><div dir="auto"><br></div><div dir="auto">If we ask, “But is it the real, original you?” He’ll answer, “Yes, it is me. 
I survived!”</div><div dir="auto"><br></div><div dir="auto">John’s insistence that he has survived is fully predictable from the mere fact of functional-equivalence between the biological and silicon brains. They behave the same and hence will give the same answers in reply to the same questions.</div><div dir="auto"><br></div><div dir="auto">As long as there is functional equivalence, the uploaded brain will never feel like it is someone else, or that it is not the same person it was before. And because comparing and analyzing behavior defines the limit of what is empirically verifiable, no test can ever hope to expose the result of John’s subjective survival.</div><div dir="auto"><br></div><div dir="auto">Thus, there’s no objective experiment we can perform on John that would convince anyone else that the same “soul of John” as found in the biological brain has continued on in the robot brain. There is only a subjective test, which is to undergo the same test John underwent, but for yourself, and to see if you do indeed find yourself looking out at the world through the eyes of a new robot body.</div><div dir="auto"><br></div><div dir="auto">Accordingly, some leap of faith is required to make that step, whether it be into an uploading machine, a teleporter pad, or even to undergo invasive brain surgery.</div><div dir="auto"><br></div><div dir="auto">This isn’t to say there aren’t good reasons why one should subjectively survive a body replacement. Rather, it is only to show that such reasons won’t come from empirical science. 
We must get there by way of rational reasoning and argument.</div><div dir="auto"><br></div><div dir="auto">These are the domain of philosophy.</div><div dir="auto">___________________________________________</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">If you can show a counter-example experiment, then I will concede you are right and empirical science is the path forward on this question, and I will update my document.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Jason </div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
</blockquote></div></div></div>