[ExI] Mind Uploads need to be near perfect

Jason Resch jasonresch at gmail.com
Sat Mar 21 21:19:16 UTC 2026


Thanks Bill. A great idea, as always, to bring in AI to help clarify things.
While what the AI says here is true, I think it misses the essential point
of disagreement between my position and Ben's: namely, whether a
functionalist account of consciousness is all that matters to the question
of surviving via uploading. I put this question to ChatGPT Thinking and had
the following exchange:

*Jason:* When it comes to answering the question of whether a mind-upload
of my brain will still be me, in the personal, subjective sense -- that the
experiences of this upload will be mine, that I will personally survive the
transition to this new substrate -- what assumptions must I make? Are
theories in the philosophy of mind (e.g., functionalism or
computationalism) enough, or must one make further assumptions regarding
theories in the philosophy of personal identity?

*ChatGPT:* No. Functionalism and computationalism are not enough by
themselves.

They mainly address *what a mind is* and *whether mentality can be realized
in another substrate*. Your question adds a second issue: *what makes a
future subject of experience numerically identical to you*. Those are
different philosophical jobs. Functionalism says mental states are defined
by their causal/functional role rather than by their specific material
makeup, and computationalist versions say the relevant organization can in
principle be implemented in different physical media. That supports the
idea that an upload could have a mind at all. But it does not, by itself,
settle whether that mind is *you*. (Stanford Encyclopedia of Philosophy
<https://plato.stanford.edu/entries/functionalism/>)

The cleanest way to see the gap is this: a theory of mind answers, “Would
the upload think, feel, remember, deliberate, and perhaps be conscious?” A
theory of personal identity answers, “Would that future thinker literally
be me rather than merely a qualitatively similar successor?” The
personal-identity literature treats this as a separate question about the
conditions under which a person survives over time. (Internet Encyclopedia
of Philosophy <https://iep.utm.edu/person-i/>)

So, to answer *yes* to “the upload’s experiences will be mine; I will
personally survive,” you usually need at least these further assumptions:

   1. *Substrate independence of mind.*
   You must assume that the relevant mental organization can be realized
   outside the biological brain. Functionalism and computationalism are
   designed to support exactly that kind of claim. (Stanford Encyclopedia
   of Philosophy <https://plato.stanford.edu/entries/functionalism/>)

   2. *Substrate independence of consciousness, not just cognition.*
   It is not enough that the upload would behave like you or process
   information like you. For the upload’s experiences to be yours in the
   first-person sense, you must also assume that phenomenal consciousness is
   preserved by preserving the right functional/computational organization.
   That is a stronger assumption, because functionalism is famously contested
   on qualia and consciousness; even standard reference works treat
   consciousness as lacking any agreed-upon theory, and functionalism faces
   the zombie/explanatory-gap/qualia objections. (Stanford Encyclopedia of
   Philosophy <https://plato.stanford.edu/entries/functionalism/>)

   3. *A psychological or organizational criterion of personal identity.*
   You must assume that what makes a future person you is some form of
   psychological continuity, informational continuity, or continuity of
   first-person perspective, rather than biological continuity or sameness of
   organism. The standard alternatives in the literature include soul views,
   psychological-continuity views, and bodily/animalist views. If you adopt
   animalism or a bodily criterion, an upload is generally not you, because
   the biological organism does not go with the upload. (Internet
   Encyclopedia of Philosophy <https://iep.utm.edu/person-i/>)

   4. *No further hidden essence beyond the preserved structure.*
   You must reject the idea that there is some extra fact—an immaterial
   ego, soul-substance, or brute “thisness” of the subject—that cannot be
   copied or re-instantiated by preserving functional/computational structure.
   Otherwise the upload could be a perfect replica that still is not you.
   (Internet Encyclopedia of Philosophy <https://iep.utm.edu/person-i/>)

There is one more assumption that becomes crucial in upload cases:

   5. *A rule for duplication or branching.*
   If the original remains alive while a copy is created, then
   functional/psychological continuity points to both future beings. But
   numerical identity is normally one-one, not one-many. The personal-identity
   literature treats this as the classic fission problem: if two future beings
   are equally continuous with you, functional similarity alone does not tell
   you which one is you. Some philosophers therefore add a *non-branching
   condition*; others adopt more revisionary views; Parfit-style approaches
   say strict identity may fail even though what matters for survival-like
   concern is preserved. (Stanford Encyclopedia of Philosophy
   <https://plato.stanford.edu/archives/fall2017/entries/identity-personal/>)

That branching point is why “uploading” splits into two importantly
different cases:

*Destructive upload:*
If your brain is destroyed and only one upload results, then a
psychological-continuity theory has a cleaner route to saying you survive,
because there is a unique successor. But even here, you still need the
extra assumptions above: consciousness must transfer by preserved
organization, and identity must consist in that continuity rather than in
biological life. (Stanford Encyclopedia of Philosophy
<https://plato.stanford.edu/archives/fall2017/entries/identity-personal/>)

*Non-destructive copying:*
If the biological original remains and an upload is created, then
functionalism/computationalism alone are definitely not enough. They may
imply that the upload is minded and perhaps conscious, but they do not
decide why the original rather than the copy should remain “the” you—or why
both would not count equally. This is exactly where personal-identity
theory has to do additional work. (Stanford Encyclopedia of Philosophy
<https://plato.stanford.edu/archives/fall2017/entries/identity-personal/>)

So the most precise answer is:

   - *Functionalism/computationalism may be enough to argue that an upload
   could have a mind, and perhaps even consciousness.*

   - *They are not enough to show that the upload is personally you in the
   strict first-person survival sense.*

   - To get that stronger conclusion, you also need a theory of personal
   identity—typically a neo-Lockean or reductionist view on which
   psychological/functional/computational continuity is sufficient for
   survival, plus some answer to duplication cases. (Stanford Encyclopedia
   of Philosophy <https://plato.stanford.edu/entries/computational-mind/>)

My own compression of the landscape would be:

If you want to say *“yes, I survive upload”*, you are usually committing
yourself to something like this:

I am not essentially a biological organism or immaterial soul; I am a
conscious subject whose persistence is constituted by the right sort of
ongoing psychological/computational organization, and in non-branching
cases that organization is enough for survival.

If you are not willing to grant *all* of that, then functionalism alone
does not get you all the way to first-person survival. (Stanford
Encyclopedia of Philosophy
<https://plato.stanford.edu/entries/functionalism/>)

If you want, I can next give you a map of the main positions—animalism,
soul theory, Lockean continuity, Parfit reductionism, constitution view,
and open individualism—and show exactly what each one says about
destructive upload versus copied upload.

I continued the conversation with a follow-up question asking what
conclusions the different theories of personal identity lead to, for those
who are interested:
https://chatgpt.com/share/69bf0b05-33bc-8006-90ee-4c34f64d3131

Jason

On Sat, Mar 21, 2026 at 1:45 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Following from the discussion Ben and Jason had about mind uploads, I
> asked Grok about "good enough" or "perfect". Grok discussed the 80/20 rule,
> where "good enough" and polishing it later was often the best option. But
> Grok decided that mind uploads needed to be near perfect on the first
> attempt.
> See discussion below.
> BillK
>
> Question -
> For future human mind uploads, how near-perfect will be required?
>
> Grok 4.20 Expert -
>
> *Near-perfect fidelity will almost certainly be required* for human mind
> uploads (whole brain emulation or WBE), but “near-perfect” is defined at
> the *functional scale where mind emerges*, not at the impossible
> atomic/quantum level. Absolute 100 % perfect replication of every single
> molecule in every instant is unnecessary and probably irrelevant—thanks to
> “scale separation” (the brain’s hierarchical nature lets you ignore finer
> details once you’ve nailed the right layer).
>
> Here’s the breakdown, based on the most authoritative roadmap (Sandberg &
> Bostrom 2008, still the standard reference) plus modern neuroscience and
> recent successes:
> 1. Technical fidelity tiers (what experts actually model)
>
> The classic emulation success ladder:
> Level 3–4: Spiking neurons + full connectome + synapse types/weights.
>   Required for: species-typical behavior, basic learning.
>   Approx. compute: 10¹⁸ FLOPS. Consensus view: minimum viable “human-like”.
> Level 5: Electrophysiology (ion channels, voltages, currents) + ~5×5×50 nm
>   spatial scan. Required for: individual personality, memories, skills.
>   Approx. compute: 10²² FLOPS. Consensus view: probably sufficient for
>   most practical uploads.
> Level 6–7: Metabolome + proteome (neurotransmitters, proteins, gene
>   expression). Required for: full plasticity, long-term adaptation,
>   qualia/consciousness. Approx. compute: 10²⁵–10²⁶ FLOPS. Consensus view:
>   safe “near-perfect” zone for convincing uploads.
> Level 8–11: Protein complexes → single molecules → quantum. Required only
>   if no scale separation exists. Approx. compute: insane (10³⁰+).
>   Consensus view: almost certainly overkill; quantum effects in the brain
>   are negligible on relevant timescales.
>
> *Spatial resolution needed*: ~5 nm in key directions to see synapses and
> thin axons (current electron microscopy on small brains already hits this;
> fruit-fly whole-brain connectome + synapse prediction already produces
> multiple real behaviors at 95 % accuracy). *Temporal*: <1 ms to capture
> spikes and synaptic timing. *Chemical tags*: Which neurotransmitter,
> receptor type, phosphorylation state—critical for “who you are.”
>
> Kurzweil says neural/synaptic level is enough for uploads by ~2045.
> Sandberg/Bostrom workshop consensus: levels 4–6 get you to “individual
> brain emulation” (recognizably *you*, retains most memories/skills) and
> likely “mind emulation” (subjective experience). Higher levels only if the
> brain has no useful scale separation—which would make uploading impossible
> anyway.
>
> Recent proof: The 2024–2026 fruit-fly whole-brain emulation (125k neurons,
> 50M synapses) already drives real embodied behaviors when plugged into a
> physics-simulated body. That’s level ~4–5 fidelity—and it *works*. Human
> scale just needs the same approach, bigger.
> 2. Philosophical “is it really YOU?” requirement
>
> This is where “near-perfect” gets stricter:
>
>    - *Functionalist / pattern view* (most transhumanists, Koene,
>    Kurzweil, Parfit-inspired branching identity): If the *information-processing
>    pattern* is preserved at the right scale, the upload *is* a continuer
>    of you. A perfect synaptic + dynamics copy is sufficient. Small
>    approximations below noise level don’t matter—you don’t notice which exact
>    molecules are firing right now anyway.
>    - *Gradual replacement* (neuron-by-neuron swap): Highest
>    continuity—widely accepted as preserving identity even by skeptics.
>    Requires near-perfect tech but no destructive scan.
>    - *Destructive scan-and-copy*: The upload wakes up thinking it’s you
>    100 %, but the original dies. Many philosophers say it’s a perfect copy,
>    not “you.” Psychological branching or psychological-continuity theories
>    still say both branches can legitimately be “you.”
>    - *Substrate chauvinist / quantum views* (Penrose, some dualists):
>    You’d need literally perfect biology-level fidelity (or impossible quantum
>    state transfer). Most experts dismiss this—decoherence kills quantum
>    effects in <10⁻¹³ s.
>
> *Bottom line on identity*: Near-perfect at level 5–6 + gradual method or
> very high-fidelity non-destructive scan is what most serious thinkers would
> accept as “safe enough” for personal continuation. Anything coarser risks
> noticeable differences (amnesia, personality drift, or the dreaded “zombie”
> worry).
> 3. Why “good enough” is *not* better than near-perfect here (unlike your
> last question)
>
>    - *One-shot high stakes*: Unlike a blog post or MVP, you can’t easily
>    iterate on your own mind after upload. Errors compound (chaos + long-term
>    plasticity). Early uploads will need over-engineering.
>    - *Environment must also be near-perfect*: Sensory input and virtual
>    body simulation must be accurate enough that the upload can’t tell the
>    difference—otherwise psychosis or identity fracture. Minor distortions =
>    serious mental consequences.
>    - *Once mature*: Then “good enough + fast iteration” wins. Uploads can
>    be copied, edited, sped up, and A/B tested ethically in simulation. The
>    first successful ones just have to be close enough that the emulated person
>    says “yep, still me.”
>
> Practical threshold most experts converge on
>
>    - Scan at ~5–50 nm spatial + chemical tagging + 1 ms dynamics.
>    - Simulate at level 5–6 fidelity (synapses + electrophysiology + key
>    chemistry).
>    - Use gradual replacement or non-destructive methods where possible.
>    - Validate until the emulation passes personalized Turing tests *and*
>    internal self-report matches (under PAM—Principle of Assuming the Most).
>
> That’s “near-perfect” in engineering terms: accurate to within natural
> brain noise, preserving every functionally relevant bit. Absolute molecular
> perfection? Unnecessary and wasteful.
>
> In short: For mind uploads, *near-perfect at the synaptic-to-molecular
> functional layer will be required*—because the mind is delicate and
> identity is precious. But once you hit that threshold, the same principle
> you asked about last time kicks in: iterated “good enough” versions will
> rapidly outpace any biological human. The first upload has to be *really*
> close; everything after that gets to be gloriously imperfect and improvable.
>
> Motion (in silicon) will beat meditation (in carbon). But you still need
> to nail the launch.
> -----------------------------------
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
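As a rough sanity check on the figures Grok quotes (the ~5×5×50 nm scan
resolution and the fruit-fly connectome of 125k neurons and 50M synapses),
here is a back-of-envelope sketch in Python. The brain volume (~1.4 L), the
one byte per voxel, and the human neuron/synapse counts (~86 billion
neurons, ~10^14 synapses) are my own rough assumptions for illustration,
not numbers from the thread.

    # Figures quoted in the discussion above:
    voxel_nm3    = 5 * 5 * 50     # scan resolution of ~5 x 5 x 50 nm
    fly_neurons  = 1.25e5         # fruit-fly connectome: 125k neurons
    fly_synapses = 5.0e7          # fruit-fly connectome: 50M synapses

    # Rough assumptions for illustration (not from the thread):
    brain_volume_liters = 1.4     # approximate adult human brain volume
    bytes_per_voxel     = 1       # one byte of greyscale EM data per voxel
    human_neurons       = 8.6e10  # ~86 billion neurons
    human_synapses      = 1.0e14  # ~10^14 synapses (order of magnitude)

    # Raw scan data volume: 1 litre = 1e24 cubic nanometres.
    voxels    = brain_volume_liters * 1e24 / voxel_nm3
    raw_bytes = voxels * bytes_per_voxel
    print(f"voxels in scan  : {voxels:.1e}")               # ~1.1e21 voxels
    print(f"raw scan data   : {raw_bytes / 1e21:.1f} ZB")  # ~1 zettabyte, pre-compression

    # Scale-up required to go from the fruit-fly result to a human brain:
    print(f"neuron scale-up : {human_neurons / fly_neurons:.1e}x")    # ~6.9e5
    print(f"synapse scale-up: {human_synapses / fly_synapses:.1e}x")  # ~2.0e6

Even on these generous assumptions, the raw imaging data alone is on the
order of a zettabyte, which gives some intuition for why the compute
estimates in the fidelity table are so large.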