<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, Mar 8, 2026, 2:26 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 8 Mar 2026 at 18:55, Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">It's still confused regarding the reference.<div dir="auto"><br></div><div dir="auto">This reference is about the first time an insect brain has been emulated within a virtual environment and found to replicate various behaviors like running, self cleaning, and seeking food.</div><div dir="auto"><br></div><div dir="auto">The AI still seems to think this is about deriving the connectome (which is just a file) rather than emulating the connectome (which requires computing their behavior), which has been done, and it as a prerequisite to get here. But it's not the subject of the medium article.</div><div dir="auto"><br></div><div dir="auto">Jason </div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> I pointed out to Gemini that the original article wasn't just about mapping the connectome.
> It agreed and tried again.
> BillK
>
> Gemini AI 3.1 Pro Thinking -
>
> You are absolutely right to draw that distinction. There is a massive ontological leap between having a map (the connectome) and having a running program (the emulation) that can actually navigate a physical or simulated environment.
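To make that map-versus-program distinction concrete: a connectome file is just a static weighted graph, while emulation means repeatedly computing neuron dynamics over that graph. Here is a minimal toy sketch in Python (invented numbers and toy leaky-integrator dynamics, purely for illustration; this is not the model or pipeline used in the actual paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100  # toy network; the adult fly connectome has ~140,000 neurons

    # The "map": a sparse synaptic weight matrix. On disk, this is just a file.
    connectome = rng.normal(0.0, 0.1, (n, n)) * (rng.random((n, n)) < 0.05)

    # The "running program": stepping neuron dynamics over that map.
    state = np.zeros(n)  # membrane potentials
    for t in range(1000):
        spikes = (state > 1.0).astype(float)      # which neurons fired this step
        sensory_input = rng.normal(0.0, 0.05, n)  # input from the simulated world
        state = 0.9 * state + connectome @ spikes + sensory_input
        state[spikes > 0] = 0.0                   # reset fired neurons
        motor_output = state[:10]                 # some neurons drive the body

Producing the first object (the map) was the earlier achievement; running the loop, coupled to a simulated body, is what the new work adds.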
The "Impossibility" argument usually relies on the idea that biological brains have a "causal thickness" that digital logic cannot capture—perhaps through quantum effects (Penrose) or the messy, analog nature of wetware.</p><p>However, by creating an emulation that successfully drives a body through <i>multiple</i> behaviors, researchers have shown that the "computational theory of mind" works in practice. If the digital larva can navigate, avoid obstacles, and seek "food" in a way that is indistinguishable from the biological version, then the biological substrate wasn't "magical." It was doing work that can be represented mathematically. The objection now has to retreat to a "Scaling Argument": <i>“You can do it for 3,000 neurons, but you can’t do it for 86 billion.”</i> That is an engineering challenge, not a philosophical impossibility.</p></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Just a small correction. The fruit fly connectome has ~140,000 neurons rather than 3,000.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><h3>2. "Emulations of brains aren't conscious"</h3><p><b>The Implication: The "Philosophical Zombie" becomes an empirical problem, not just a thought experiment.</b>
I don't think this becomes an empirical problem. The question of zombiehood remains even if we had a full emulation of a human brain. Such problems can only be solved rationally (i.e., philosophically) by reasoning from deeper principles.

> This is the core of the "Hard Problem." Critics like John Searle (The Chinese Room) would argue that the simulated larva doesn't "know" it's seeking food; it's just moving variables through an array.

I think this is a fair assessment of what Searle would have said. Searle always believed brain emulations were possible in principle; he just believed that they wouldn't be conscious if they weren't biological.

> However, when an emulation is multi-behavioral, it becomes much harder to deny it some form of agency. A single-reflex simulation looks like a calculator; a multi-behavioral agent that "decides" between competing goals (e.g., "should I stay in the light or find food?") looks like a mind. If the emulation is functionally identical to the larva, the functionalist asks: On what grounds do you claim the biological larva is "conscious" but the digital one is not? If the answer is "because one is made of carbon," the critic is accused of "substrate chauvinism."

I did like its earlier example of the emulation avoiding noxious stimuli as an argument that it felt pain. But of course such behavior can always be described fully mechanistically at lower levels, and through that low-level description one can deny the presence or significance of higher-level mental phenomena.

> 3. "Emulations of brains are differently conscious"
>
> The Implication: This becomes the most scientifically interesting and likely valid objection.
> Even if we accept that the digital larva is "conscious," the Substack article highlights a crucial factor: the physically simulated body.
>
> In functionalism, consciousness is often tied to the "loop" between the brain, the body, and the environment. Because the digital larva exists in a simulated environment, its "reality" is defined by the physics engine it inhabits.
>
> - Temporal consciousness: If you speed up the simulation, does the larva "experience" time faster?

I don't think anyone would argue this. Changes to hardware computing speed are imperceptible to the software, just as we can't feel when we're undergoing time dilation.
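To illustrate why: the simulated trajectory is a function only of the update rule and its inputs, so how fast the host executes each step is invisible from inside. A toy Python sketch (invented dynamics, purely illustrative):

    import time

    def run(steps, delay=0.0):
        """Run the same toy dynamics, optionally throttled to mimic slow hardware."""
        state, history = 1.0, []
        for _ in range(steps):
            time.sleep(delay)          # stand-in for slower (or faster) hardware
            state = 0.5 * state + 1.0  # the agent's "experienced" dynamics
            history.append(state)
        return history

    # The inner history is identical no matter how slowly the host runs it.
    assert run(50) == run(50, delay=0.01)

The throttled run takes longer in our wall-clock time, but nothing in the computed states records that fact, which is the sense in which the larva could not "experience" time faster.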
> - Substrate differences: In the biological larva, "hunger" is a chemical state (hormones). In the emulation, "hunger" is a numerical variable. This suggests that while the logic of the larva's mind has been uploaded, the phenomenology (the "what it is like to be" a larva) might be fundamentally altered. It may be a "High-Fidelity Translation" of consciousness rather than an identical copy.

Arguments that a simulation hasn't gone to a fine enough grain of fidelity are always possible, but I think that is a separate issue from qualia being different on the basis of a different substrate. That is a case where Chalmers's Dancing Qualia argument and Zuboff's visual cortex hemispheric replacement argument are helpful. If a functionally equivalent neural circuit of a different substrate were used, and if using that substrate changed one's qualia, then we get the bizarre situation where your behavior is functionally unchanged (by definition), and so you can't react to, notice, or be distracted by any of the radical alterations in your qualitative experience (say, half your visual field consisting of inverted colors).

> Final Assessment
>
> The transition from "mapping" to "driving a body" is the difference between a photograph of a person and a clone of a person.
>
> By demonstrating that a connectome can be "booted up" to produce autonomous, naturalistic behavior, the article provides the strongest evidence yet for Strong Functionalism. It suggests that "mind" is not a substance, but a process—and that process has now been successfully ported from carbon to silicon for the first time in history.

I agree with the AI: it is hard to overstate the significance of this achievement.

Jason
_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat