<div dir="auto"><div dir="auto">To support Stathis's position:</div><div dir="auto"><br></div>Functionalism requires two things:<div dir="auto">1. That the physics used by the brain is computable</div><div dir="auto">2. That nothing in the brain requires an infinite amount of information</div><div dir="auto"><br></div><div dir="auto">For 1: No known law of physics is uncomputable. Some argue that wave function collapse is uncomputable, but you can simulate all possibilities (i.e. many worlds) either on a quantum computer or on a classical computer with exponential slowdown.</div><div dir="auto"><br></div><div dir="auto">For 2: The brain (and the rest of the body) is created from the finite information of the DNA (~700 MB) together with information learned through the senses, which is also finite (~1 gigabit/second). Moreover, quantum mechanics imposes a strict upper bound (the Bekenstein bound) on the information content of physical systems of finite energy and volume.</div><div dir="auto"><br></div><div dir="auto">So the only argument against the logical possibility of functionalism requires positing some new non-computable physics (like Penrose), or suggesting that the brain contains an infinite amount of information.</div><div dir="auto"><br></div><div dir="auto">If physics is computable and the brain's information content is finite, then in principle an appropriately programmed computer could perfectly emulate the behavior of the brain.</div><div dir="auto"><br></div><div dir="auto">This appears confirmed so far, as detailed brain simulations using existing knowledge of the biochemical properties of neurons have replicated behaviors and patterns of firing across large brain regions. 
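The finiteness claims in point 2 can be made concrete with a back-of-the-envelope calculation. This is only a rough sketch: the head radius, mass, and genome base-pair count used below are approximate assumed figures, and the Bekenstein bound gives an enormously loose upper limit, but it shows both numbers are finite.

```python
import math

# Bekenstein bound: a system of radius R and energy E can hold at most
#   I <= 2*pi*R*E / (hbar * c * ln 2)  bits.
HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.9979e8        # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on information content, taking E = m*c^2."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Assumed rough figures for a human head: radius ~0.08 m, mass ~1.4 kg.
brain_bits = bekenstein_bits(0.08, 1.4)

# DNA: ~3.2e9 base pairs, 2 bits per base (4 possible bases).
dna_bytes = 3.2e9 * 2 / 8

print(f"Bekenstein bound on a brain-sized system: ~{brain_bits:.1e} bits")
print(f"Raw DNA information: ~{dna_bytes/1e6:.0f} MB")
```

The Bekenstein figure comes out on the order of 10^42 bits, and the raw DNA count lands near the ~700 MB figure above (somewhat higher uncompressed, lower after compression). Astronomically large, but finite, which is all point 2 requires.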
See, for example, the Human Brain Project's results with mouse brains and whisker stimulation: <a href="https://youtu.be/ldXEuUVkDuw">https://youtu.be/ldXEuUVkDuw</a></div><div dir="auto"><br></div><div dir="auto">Jason</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, May 8, 2021, 12:05 PM Stathis Papaioannou via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 9 May 2021 at 02:35, Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><br><div>Hi Stathis,</div><div><br></div><div>On Sat, May 8, 2021 at 7:46 AM Stathis Papaioannou via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>The link takes me to the video, but then the video doesn’t play, perhaps because I am using a mobile device. Anyway, as I explained, the logical argument is independent of any particular physical theory. We could say that the brain works via magic from the god Zeus. If the behavior of the brain could be replicated via different magic from the god Apollo, then the associated consciousness would necessarily also be replicated. 
It isn’t possible to fix consciousness to a particular substrate, a particular physics or a particular process.</div></blockquote><div><br></div><div><div>Yes, <a href="http://slehar.com/wwwRel/HarmonicGestalt.mp4" rel="noreferrer noreferrer" target="_blank">http://slehar.com/wwwRel/HarmonicGestalt.mp4</a> is just an mp4 file; it's best downloaded and then watched.</div><div><br></div><div>The entire substitution argument, and your logic, is most definitely dependent on the assumption that the neurons being replaced, one at a time, via the method described, are: "<span style="color:rgb(80,0,80)">independent computational elements that communicate by electrical signals propagated down axons and collaterals and transmitted to other neurons through chemical synapses.</span>" (the "classic neuron doctrine").</div></div></div></div></blockquote><div dir="auto"><br></div><div dir="auto">It is presented that way in Chalmers’ paper, but the argument applies to any brain process. It is an argument from the definition of consciousness, showing that if consciousness is substrate dependent, then the idea of consciousness is absurd, because it would be logically possible to change it radically without the subject or anyone else realising.</div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><div dir="auto"></div></div><div>This is the simple definition of what the abstract computers of today are: "<span style="color:rgb(80,0,80)">independent computational elements that communicate by electrical signals transmitted down wires to other neurons through chemical synapses.</span>" Any such system requires interpretation or transducing systems to translate any one representation into something different representing the downstream link, in order to preserve the same abstract meaning; otherwise it wouldn't be "substrate independent". 
THAT is what the neuro substitution is working on, and it can't work on anything different from that type of computation. By design, there is nothing it is intrinsically like for any such computational system abstracted away from physical reality.</div></div></div></blockquote><div dir="auto"><br></div><div dir="auto">There is no necessity to preserve any abstract meaning anywhere in the chain as long as the output is identical for all inputs. The internal processing can be mangled a million ways, like one operating system emulating another operating system, and the consciousness will be preserved provided the emulation is done properly. The only empirical test we could do is to confirm that the emulation is actually done properly: that the consciousness is preserved is a deduction, not separately subject to experimental confirmation.</div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="auto"></div><div>The prediction is that consciousness computation is something completely different. It is "computationally bound elemental intrinsic qualities like redness and greenness." There must be something that has an intrinsic redness quality (let's call whatever it is a red Lego block) and something with a different greenness intrinsic quality (a green Lego block), and you must be able to bind these together into some kind of computational standing wave, representing information in a substrate-quality-dependent way. The system must be able to be consciously aware of when one of the red Lego blocks changes to a green Lego block, in a way that is dependent on those particular qualities; otherwise it isn't functioning correctly.</div></div></div></blockquote><div dir="auto"><br></div><div dir="auto">The standing wave must have some ultimate effect on the output of the system, i.e. on the muscles. 
If this is replicated in some other way, the consciousness will also be replicated. So you would have to claim that it is logically impossible to replicate the effect of the standing wave (or whatever it may be) on the muscles. Logical impossibility is a very strong restriction, meaning that not even a miracle could do it.</div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="auto"></div><div>By definition, it is simply a logical impossibility to do any kind of neuro substitution on any such system, so your "logical" argument simply doesn't apply, or at best isn't logically possible.</div><div><br></div><div><b style="color:rgb(0,0,0);font-size:large"><font face="comic sans ms, sans-serif" style="font-family:'comic sans ms',sans-serif;color:rgb(255,0,255)">"This Paradigm is Wrong!"</font></b><br></div><div> -- Steven Lehar</div><div><br></div></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer noreferrer" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div>-- <br><div dir="ltr" data-smartmail="gmail_signature">Stathis Papaioannou</div>
</blockquote></div>