[ExI] Uploads are self

Ben Zaiboc benzaiboc at proton.me
Sat Mar 21 19:54:13 UTC 2026


On 21/03/2026 18:06, Jason Resch wrote:
> On Sat, Mar 21, 2026, 12:53 PM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>     On 21/03/2026 12:38, Jason Resch wrote:
>     >
>     > On Fri, Mar 20, 2026, 7:55 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>     >
>     >     On 20/03/2026 01:02, Jason Resch  wrote:
>     >     > On Thu, Mar 19, 2026, 11:46 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>     etc (!)
>
>     >     > I think we agree broadly about this, but that you may still be missing my point here. Think about the question: "Of all the beings that exist in the universe, how do you know which one is you?"
>     >
>     >
>     >     I don't even understand the question. I don't have any access to anyone else's inner experience, let alone all the beings in the universe, so there's no need to identify myself to myself. I'd say this is a non-question.
>     >
>     >
>     > What makes it such that when you upload an approximate capture of "Ben Z.'s brain state" into a computer and run it that you should suddenly then have access to the inner experiences of this computer brain emulation?
>
>
>     Who are you talking about here: "you should suddenly then have access"?
>     I'm not sure what this means.
>
>
>
> I am talking about subjective survival. By this I mean:
>
>
> If Ben's biological brain dies, but it is scanned and uploaded into a computer and run, and it continues to act like Ben, can the Ben whose bio brain died expect to live on, *subjectively*, both "in" and "as" this upload -- in the same sense as Ben would have expected to carry on subjectively had his bio brain not failed him?
>
>
> So far, you seem to be avoiding this question


Well, I thought I was addressing the question all along.
Ok, let me be clear: The answer is YES!
Yes, OF COURSE that's what I expect. This is the whole point of uploading.


> , but it is the crux of what my paper is attempting to answer. At the bottom of this email you hand wave this question away and say it doesn't matter: so long as it is functionally-equivalent then that's the best we can do and nothing more can be said.


No, I'm not hand-waving the question away, I'm addressing it directly. Functional equivalence is what we're aiming for; if it's achieved, then job done, and if not, then we have a problem. That is the top and bottom of it.


> It is true no more can be said if we limit arguments to empiricism. But the whole point of my write-up is that *more can be said* if we expand to include rational arguments, but you seem allergic to these.
> As such I don't think we can make any more progress on this thread, unless you are willing to engage on the question above. You may continue to say it is an irrelevant meaningless question, and there is a certain logic in that position, but it avoids what most people seek to preserve by freezing their brain and later uploading. If you wonder why cryonics is less popular, it is in no small part because so little has been done to try to answer this very question which matters to so many.


What I'm allergic to is this barmy idea of 'Open Individualism', which 1) makes no sense to me, and 2) is irrelevant to uploading.

The rational argument is: IF functionalism is true (all signs point to 'Yes'), AND a successful copy of the brain's connectome and connection weights can be made and recreated in a suitable computing substrate capable of activating it (progress is being made towards this), THEN the original person lives on in the new substrate.

That's about as clear and simple as I can make it.

-- 
Ben


