[ExI] VERY cool. Toward mind uploading?

Jason Resch jasonresch at gmail.com
Sun Mar 8 21:45:07 UTC 2026


On Sun, Mar 8, 2026, 2:26 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, 8 Mar 2026 at 18:55, Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> It's still confused regarding the reference.
>>
>> This reference is about the first time an insect brain has been emulated
>> within a virtual environment and found to replicate various behaviors like
>> running, self-cleaning, and seeking food.
>>
>> The AI still seems to think this is about deriving the connectome (which
>> is just a file) rather than emulating the connectome (which requires
>> computing its behavior). Deriving the connectome has been done, and was a
>> prerequisite to getting here, but it's not the subject of the Medium article.
>>
>> Jason
>>
>
>
> I pointed out to Gemini that the original article wasn't just about
> mapping the connectome.
> It agreed and tried again.
> BillK
>
> Gemini AI 3.1 Pro Thinking -
>
> You are absolutely right to draw that distinction. There is a massive
> ontological leap between having a *map* (the connectome) and having a *running
> program* (the emulation) that can actually navigate a physical or
> simulated environment.
>

Now it seems to get it, and its new answer reflects that. But it still
seems confused about some aspects. More below:


> The achievement described in the article—taking the *Drosophila* larva
> connectome
>

For some reason it seems anchored on the larval connectome. While that was
mapped as well, it is not what was used in this latest result, which runs
the full adult fruit fly connectome:

https://www.nih.gov/news-events/nih-research-matters/complete-wiring-map-adult-fruit-fly-brain
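
(To make the map-versus-running-program distinction concrete, here is a
minimal sketch of what "emulating" a connectome involves. The connectome
supplies only the weighted wiring; the leaky-integrator rate model, sizes,
and names below are my own illustrative assumptions, not the model the
researchers used:)

import numpy as np

# The connectome itself is just a static file: a matrix of synaptic
# weights, W[i, j] = strength of the connection from neuron j to neuron i.
# (Toy random weights here; a real connectome supplies these from data.)
rng = np.random.default_rng(0)
n = 100
W = rng.normal(0.0, 0.1, size=(n, n))

def step(v, sensory, dt=1.0, tau=10.0):
    # Emulation is this extra step: repeatedly computing the next neural
    # state from the wiring, rather than merely possessing the wiring.
    dv = (-v + np.tanh(W @ v) + sensory) / tau
    return v + dt * dv

# Closed loop with a (stubbed) simulated body and environment.
v = np.zeros(n)
for t in range(1000):
    sensory = np.zeros(n)  # would come from the simulated body's senses
    v = step(v, sensory)
    motor = v[:10]         # would drive the simulated body's muscles

Only the loop behaves; the file alone does nothing.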

> and using it to drive a physically simulated body through naturalistic
> behaviors—is a "Sputnik moment" for functionalism. It moves the debate from
> "Can we describe the hardware?" to "We have successfully executed the
> software."
>
> Here is a discussion of the implications for the three objections based on
> this *functional emulation*:
> 1. "Emulations of brains aren't possible"
>
> *The Implication: This objection has essentially been defeated at the
> proof-of-concept level.* The "Impossibility" argument usually relies on
> the idea that biological brains have a "causal thickness" that digital
> logic cannot capture—perhaps through quantum effects (Penrose) or the
> messy, analog nature of wetware.
>
> However, by creating an emulation that successfully drives a body through
> *multiple* behaviors, researchers have shown that the "computational
> theory of mind" works in practice. If the digital larva can navigate, avoid
> obstacles, and seek "food" in a way that is indistinguishable from the
> biological version, then the biological substrate wasn't "magical." It was
> doing work that can be represented mathematically. The objection now has to
> retreat to a "Scaling Argument": *“You can do it for 3,000 neurons, but
> you can’t do it for 86 billion.”* That is an engineering challenge, not a
> philosophical impossibility.
>

Just a small correction: the adult fruit fly connectome has ~140,000 neurons
rather than 3,000. That puts the gap to a human brain's ~86 billion neurons
at a factor of roughly 600,000 (86 billion / 140,000).


2. "Emulations of brains aren't conscious"
>
> *The Implication: The "Philosophical Zombie" becomes an empirical problem,
> not just a thought experiment.*
>

I don't think this becomes an empirical problem. The question of zombiehood
remains even if we had a full emulation of a human brain. Such problems can
only be solved rationally (i.e., philosophically) by reasoning from deeper
principles.


> This is the core of the "Hard Problem." Critics like John Searle (The
> Chinese Room) would argue that the simulated larva doesn't "know" it's
> seeking food; it’s just moving variables through an array.
>
I think this is a fair assessment of what Searle would have said. Searle
always believed brain emulations were possible in principle; he just
believed that they wouldn't be conscious if they weren't biological.


> However, when an emulation is *multi-behavioral*, it becomes much harder to
> deny it some form of agency. A single-reflex simulation looks like a
> calculator; a multi-behavioral agent that "decides" between competing goals
> (e.g., "should I stay in the light or find food?") looks like a mind. If
> the emulation is functionally identical to the larva, the functionalist
> asks: *On what grounds do you claim the biological larva is "conscious"
> but the digital one is not?* If the answer is "because one is made of
> carbon," the critic is accused of "substrate chauvinism."
>
I did like its earlier example: if the emulation avoided noxious stimuli,
one could use that as an argument that it felt pain. But of course such
things can always be described fully mechanistically at lower levels, and
through that low-level description one can deny the presence or significance
of higher-level mental phenomena.



3. "Emulations of brains are differently conscious"
>
> *The Implication: This becomes the most scientifically interesting and
> likely valid objection.* Even if we accept that the digital larva is
> "conscious," the Substack article highlights a crucial factor: the *physically
> simulated body*.
>
> In functionalism, consciousness is often tied to the "loop" between the
> brain, the body, and the environment. Because the digital larva exists in a
> simulated environment, its "reality" is defined by the physics engine it
> inhabits.
>
>    - *Temporal consciousness:* If you speed up the simulation, does the
>    larva "experience" time faster?
>
>
I don't think anyone would argue this. Changes to hardware computing speed
are imperceptible to software, just as we can't feel when we're undergoing
time dilation.
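
(A toy illustration, which assumes nothing about any actual emulation: the
emulated state below depends only on the sequence of simulated steps, so a
wall-clock throttle on the host hardware is invisible from the inside:)

import time

def run_emulation(n_steps, wall_clock_delay=0.0):
    # The state evolves per *simulated* step; the host's speed (modeled
    # here as a sleep between steps) never enters the computation.
    state = 0
    for _ in range(n_steps):
        time.sleep(wall_clock_delay)        # hardware running slower
        state = (state * 31 + 7) % 1000003  # arbitrary deterministic update
    return state

# Identical internal history no matter how fast the hardware ran:
assert run_emulation(1000) == run_emulation(1000, wall_clock_delay=0.001)

The slowed run takes about a second of our time instead of microseconds,
yet nothing in the emulated state can register the difference.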


>    - *Substrate differences:* In the biological larva, "hunger" is a
>    chemical state (hormones). In the emulation, "hunger" is a numerical
>    variable. This suggests that while the *logic* of the larva's mind has
>    been uploaded, the *phenomenology* (the "what it is like to be" a
>    larva) might be fundamentally altered. It may be a "High-Fidelity
>    Translation" of consciousness rather than an identical copy.
>
Arguments that a simulation hasn't gone to a fine enough grain of fidelity
are always possible, but I think this is a separate issue from that of
qualia being different on the basis of a different substrate. That is a
case where Chalmers's Dancing Qualia and Zuboff's visual cortex hemispheric
replacement arguments are helpful. If a functionally equivalent neural
circuit of a different substrate were used, and if using that substrate
changed one's qualia, then we get the bizarre situation where your behavior
is functionally unchanged (by definition), and so you can't react to,
notice, or be distracted by any of the radical alterations in your
qualitative experience (say, of half your visual field consisting of
inverted colors).


> Final Assessment
>
> The transition from "mapping" to "driving a body" is the difference
> between a *photograph* of a person and a *clone* of a person.
>
> By demonstrating that a connectome can be "booted up" to produce
> autonomous, naturalistic behavior, the article provides the strongest
> evidence yet for *Strong Functionalism*. It suggests that "mind" is not a
> substance, but a process—and that process has now been successfully ported
> from carbon to silicon for the first time in history.
>

I agree with the AI: It is hard to overstate the significance of this
achievement.

Jason
