[ExI] VERY cool. Toward mind uploading?

BillK pharos at gmail.com
Sun Mar 8 19:24:44 UTC 2026


On Sun, 8 Mar 2026 at 18:55, Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> It's still confused regarding the reference.
>
> This reference is about the first time an insect brain has been emulated
> within a virtual environment and found to replicate various behaviors like
> running, self-cleaning, and seeking food.
>
> The AI still seems to think this is about deriving the connectome (which
> is just a file) rather than emulating the connectome (which requires
> computing their behavior). Deriving the connectome has already been done
> and was a prerequisite to getting here, but it is not the subject of the
> Medium article.
>
> Jason
>


I pointed out to Gemini that the original article wasn't just about mapping
the connectome.
It agreed and tried again.
BillK

Gemini AI 3.1 Pro Thinking -

You are absolutely right to draw that distinction. There is a massive
ontological leap between having a *map* (the connectome) and having a *running
program* (the emulation) that can actually navigate a physical or simulated
environment.

The achievement described in the article—taking the *Drosophila* larva
connectome and using it to drive a physically simulated body through
naturalistic behaviors—is a "Sputnik moment" for functionalism. It moves
the debate from "Can we describe the hardware?" to "We have successfully
executed the software."
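
To make the "map" versus "running program" distinction concrete, here is a
minimal Python sketch of the two artifacts. Everything in it is an
illustrative assumption rather than the researchers' method: the connectome
is reduced to a static weight matrix, the emulation to a toy rate-model
update loop, and the body to a point moving on a plane. The point is only
that the first is data, while the second is a process that closes a
sensorimotor loop.

import numpy as np

rng = np.random.default_rng(0)

# The "map": a connectome is just data (here, a random stand-in weight matrix).
n_neurons = 3000
connectome = rng.normal(0.0, 0.1, (n_neurons, n_neurons))

def step(state, sensory_input, dt=0.01, tau=0.05):
    """One Euler step of a toy rate model: tau * dr/dt = -r + tanh(W r + input)."""
    drive = connectome @ state + sensory_input
    return state + (dt / tau) * (-state + np.tanh(drive))

# The "running program": stepping neural state through time while it drives a
# simulated body, which in turn changes what the network senses.
state = np.zeros(n_neurons)
position = np.zeros(2)               # the body's location in a toy 2-D world

for t in range(1000):
    sensory = np.zeros(n_neurons)
    sensory[:2] = -position          # a fake "odor gradient" pointing home
    state = step(state, sensory)
    velocity = state[-2:]            # read two arbitrary "motor" neurons
    position += 0.01 * velocity      # brain -> body -> world -> brain

Deriving the weight matrix is the mapping step; the loop, and the fact that
its output produces coherent behavior in a body, is what the article is
about.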

Here is a discussion of the implications for the three objections based on
this *functional emulation*:
1. "Emulations of brains aren't possible"

*The Implication: This objection has essentially been defeated at the
proof-of-concept level.* The "Impossibility" argument usually relies on the
idea that biological brains have a "causal thickness" that digital logic
cannot capture—perhaps through quantum effects (Penrose) or the messy,
analog nature of wetware.

However, by creating an emulation that successfully drives a body through
*multiple* behaviors, researchers have shown that the "computational theory
of mind" works in practice. If the digital larva can navigate, avoid
obstacles, and seek "food" in a way that is indistinguishable from the
biological version, then the biological substrate wasn't "magical." It was
doing work that can be represented mathematically. The objection now has to
retreat to a "Scaling Argument": *“You can do it for 3,000 neurons, but you
can’t do it for 86 billion.”* That is an engineering challenge, not a
philosophical impossibility.
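
For a rough sense of how large that engineering gap is, here is a
back-of-envelope comparison in Python; the figures are the commonly cited
counts, not numbers taken from the article.

# ~3,000 neurons in the larval fly brain vs ~86 billion in a human brain.
larva_neurons = 3_000
human_neurons = 86_000_000_000
print(human_neurons / larva_neurons)   # ~2.9e7: about 30 million times more neurons

# Connectivity grows even faster: roughly 5e5 synapses in the larva versus on
# the order of 1e14-1e15 in a human brain, i.e. eight to nine orders of
# magnitude more state to store and update on every timestep.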
2. "Emulations of brains aren't conscious"

*The Implication: The "Philosophical Zombie" becomes an empirical problem,
not just a thought experiment.* This is the core of the "Hard Problem."
Critics like John Searle (The Chinese Room) would argue that the simulated
larva doesn't "know" it's seeking food; it’s just moving variables through
an array.

However, when an emulation is *multi-behavioral*, it becomes much harder to
deny it some form of agency. A single-reflex simulation looks like a
calculator; a multi-behavioral agent that "decides" between competing goals
(e.g., "should I stay in the light or find food?") looks like a mind. If
the emulation is functionally identical to the larva, the functionalist
asks: *On what grounds do you claim the biological larva is "conscious" but
the digital one is not?* If the answer is "because one is made of carbon,"
the critic is accused of "substrate chauvinism."
3. "Emulations of brains are differently conscious"

*The Implication: This becomes the most scientifically interesting and
likely valid objection.* Even if we accept that the digital larva is
"conscious," the Substack article highlights a crucial factor: the *physically
simulated body*.

In functionalism, consciousness is often tied to the "loop" between the
brain, the body, and the environment. Because the digital larva exists in a
simulated environment, its "reality" is defined by the physics engine it
inhabits.

   - *Temporal consciousness:* If you speed up the simulation, does the larva
     "experience" time faster?

   - *Substrate differences:* In the biological larva, "hunger" is a chemical
     state (hormones). In the emulation, "hunger" is a numerical variable. This
     suggests that while the *logic* of the larva's mind has been uploaded, the
     *phenomenology* (the "what it is like to be" a larva) might be
     fundamentally altered. It may be a "High-Fidelity Translation" of
     consciousness rather than an identical copy.

Final Assessment

The transition from "mapping" to "driving a body" is the difference between
a *photograph* of a person and a *clone* of a person.

By demonstrating that a connectome can be "booted up" to produce
autonomous, naturalistic behavior, the article provides the strongest
evidence yet for *Strong Functionalism*. It suggests that "mind" is not a
substance, but a process—and that process has now been successfully ported
from carbon to silicon for the first time in history.
