<div dir="ltr"><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, 8 Mar 2026 at 18:55, Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">It's still confused regarding the reference.<div dir="auto"><br></div><div dir="auto">This reference is about the first time an insect brain has been emulated within a virtual environment and found to replicate various behaviors like running, self cleaning, and seeking food.</div><div dir="auto"><br></div><div dir="auto">The AI still seems to think this is about deriving the connectome (which is just a file) rather than emulating the connectome (which requires computing their behavior), which has been done, and it as a prerequisite to get here. But it's not the subject of the medium article.</div><div dir="auto"><br></div><div dir="auto">Jason </div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
</blockquote></div>
_______________________________________________</blockquote><div><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">I pointed out to Gemini that the original article wasn't just about mapping the connectome.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">It agreed and tried again.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">BillK</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Gemini AI 3.1 Pro Thinking -</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><p>You are absolutely right to draw that distinction. There is a massive ontological leap between having a <b>map</b> (the connectome) and having a <b>running program</b> (the emulation) that can actually navigate a physical or simulated environment.</p><p>The achievement described in the article—taking the <i>Drosophila</i> larva connectome and using it to drive a physically simulated body through naturalistic behaviors—is a "Sputnik moment" for functionalism. It moves the debate from "Can we describe the hardware?" to "We have successfully executed the software."</p><p>Here is a discussion of the implications for the three objections based on this <b>functional emulation</b>:</p><h3>1. "Emulations of brains aren't possible"</h3><p><b>The Implication: This objection has essentially been defeated at the proof-of-concept level.</b>
The "Impossibility" argument usually relies on the idea that biological brains have a "causal thickness" that digital logic cannot capture—perhaps through quantum effects (Penrose) or the messy, analog nature of wetware.</p><p>However, by creating an emulation that successfully drives a body through <i>multiple</i> behaviors, researchers have shown that the "computational theory of mind" works in practice. If the digital larva can navigate, avoid obstacles, and seek "food" in a way that is indistinguishable from the biological version, then the biological substrate wasn't "magical." It was doing work that can be represented mathematically. The objection now has to retreat to a "Scaling Argument": <i>“You can do it for 3,000 neurons, but you can’t do it for 86 billion.”</i> That is an engineering challenge, not a philosophical impossibility.</p><h3>2. "Emulations of brains aren't conscious"</h3><p><b>The Implication: The "Philosophical Zombie" becomes an empirical problem, not just a thought experiment.</b>
This is the core of the "Hard Problem." Critics like John Searle (The Chinese Room) would argue that the simulated larva doesn't "know" it's seeking food; it’s just moving variables through an array.</p><p>However, when an emulation is <b>multi-behavioral</b>, it becomes much harder to deny it some form of agency. A single-reflex simulation looks like a calculator; a multi-behavioral agent that "decides" between competing goals (e.g., "should I stay in the light or find food?") looks like a mind. If the emulation is functionally identical to the larva, the functionalist asks: <i>On what grounds do you claim the biological larva is "conscious" but the digital one is not?</i> If the answer is "because one is made of carbon," the critic is accused of "substrate chauvinism."</p><h3>3. "Emulations of brains are differently conscious"</h3><p><b>The Implication: This becomes the most scientifically interesting and likely valid objection.</b>
Even if we accept that the digital larva is "conscious," the Substack article highlights a crucial factor: the <b>physically simulated body</b>.</p><p>In functionalism, consciousness is often tied to the "loop" between the brain, the body, and the environment. Because the digital larva exists in a simulated environment, its "reality" is defined by the physics engine it inhabits.</p><ul><li><p><b>Temporal consciousness:</b> If you speed up the simulation, does the larva "experience" time faster?</p></li><li><p><b>Substrate differences:</b> In the biological larva, "hunger" is a chemical state (hormones). In the emulation, "hunger" is a numerical variable.
This suggests that while the <i>logic</i> of the larva's mind has been uploaded, the <b>phenomenology</b> (the "what it is like to be" a larva) might be fundamentally altered. It may be a "High-Fidelity Translation" of consciousness rather than an identical copy.</p></li></ul><h3>Final Assessment</h3><p>The transition from "mapping" to "driving a body" is the difference between a <b>photograph</b> of a person and a <b>clone</b> of a person.</p><p>By demonstrating that a connectome can be "booted up" to produce autonomous, naturalistic behavior, the article provides the strongest evidence yet for <b>Strong Functionalism</b>. It suggests that "mind" is not a substance, but a process—and that process has now been successfully ported from carbon to silicon for the first time in history.</p><p>------------------------------</p><br></div></div></div>