<div dir="auto">Here's a simpler way to consider a substitution with a functionally equivalent parts in a way less subject to worrying whether or not all the important behaviors of neurons are captured:<div dir="auto"><br></div><div dir="auto">A) Consider a full brain simulation down to quark gluon level.</div><div dir="auto"><br></div><div dir="auto">1. Run on a Windows computer with AMD processors.</div><div dir="auto"><br></div><div dir="auto">2. Run on a Mac computer with Intel processors.</div><div dir="auto"><br></div><div dir="auto">The two computers can be changed without having any bearing on the behavior of the simulated brain.</div><div dir="auto"><br></div><div dir="auto">Jason</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">P.S. as I understand it, the generally described neuron substitution argument does not use generic and identically behaving neurons, but ones wired up in the same way as the neuron it replaces, with the same weights and biases as the original, such that it's spiking/firing behavior and interactions with it's neighboring connected neurons will be the same as it was with the original.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, May 22, 2022, 7:49 PM Stathis Papaioannou via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 23 May 2022 at 09:06, Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Quoting Stathis Papaioannou:<br>
<br>
<br>
>><br>
>> It might intuitively aid the understanding of my argument to examine a<br>
>> higher order network. The substitution argument suggests that a small<br>
>> part of my brain could be replaced by a functionally identical<br>
>> artificial part, and I would not be able to tell the difference. The<br>
>> problem with this argument is that the function of any neuron or<br>
>> neural circuit of the brain is not determined solely by the properties<br>
>> of the neuron or neural circuit, but by its holistic relationship with<br>
>> all the other neurons it is connected to. So not only could an<br>
>> artificial neuron not be an "indistinguishable substitute" for the<br>
>> native neuron, but even another identical biological neuron would not<br>
>> be a sufficient replacement unless it was somehow grown or developed<br>
>> in the context of a brain identical to yours.<br>
><br>
><br>
> "Functionally identical" means that the replacement interacts with the<br>
> remaining tissue exactly the same way as the original did. If it doesn't,<br>
> then it isn't functionally identical.<br>
<br>
I don't have an issue with the meaning of "functionally identical"; I<br>
just don't believe such a functionally identical replacement is<br>
possible. Not when the function of a neuron is so dependent on the<br>
neurons that it is connected to. It is a flawed premise that<br>
invalidates the argument.<br>
>> It might be more intuitively obvious to consider your family rather than a<br>
>> brain. If you were instantaneously replaced with a clone of yourself,<br>
>> even if that clone had been trained on your memories up until, let's<br>
>> say, last month, your family would notice some pretty jarring<br>
>> differences between you and your clone. Those problems could<br>
>> eventually go away as your family adapted to your clone, and your<br>
>> clone adapted to your family, but the actual replacement itself would<br>
>> be obvious to your family when it occurred.<br>
><br>
><br>
> How would your family notice a difference if your behaviour were exactly<br>
> the same?<br>
<br>
The clone would have no memory of any family interactions that <br>
happened since the clone was last updated. Commitments, arguments, <br>
obligations, promises, plans and other relationship details would <br>
seemingly be forgotten. At best, the family would wonder why you had <br>
lost your memories of the last 30 days; possibly assuming you were <br>
doing drugs or something. You can't behave correctly when that <br>
behavior is based on knowledge you don't possess.<br>
>> Similarly, an artificial replacement neuron/neural circuit (or even a<br>
>> biological one) would have to undergo "on the job training" to<br>
>> sufficiently substitute for the component it was replacing. And if the<br>
>> circuit was extensive enough, you and the people around you would<br>
>> notice a difference.<br>
><br>
><br>
> Technical difficulty is not a problem in a thought experiment. The argument<br>
> is that IF a part of your brain were replaced with a functionally identical<br>
> analogue THEN your consciousness would necessarily be preserved.<br>
<br>
Technical difficulty bordering on impossibility can make a thought <br>
experiment irrelevant. For example, IF I had a functional time <br>
machine, THEN I could undo all my mistakes in the past. The <br>
substitution argument is logically valid, but it stems from false<br>
premises.<br></blockquote><div><br></div><div>Well, if you had a time machine you could undo the mistakes of the past, and if that's all you want to show then the argument is valid (ignoring the logical paradoxes of time travel). The fact that a time machine may be impossible does not change this. In a similar way, all that is claimed in the substitution argument is that if you reproduced the behaviour of the substituted part then you would also reproduce any associated consciousness.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Secondly, it is a mistake to assume that one's consciousness cannot be<br>
changed while preserving one's identity. Your consciousness changes all<br>
the time, but you do not cease being you because of it. When I have too<br>
much to drink, my consciousness changes, but I am still me, albeit<br>
drunk. So a not-quite-functionally-identical analogue to a neural<br>
circuit that noticeably changes your consciousness would not suddenly<br>
render you no longer yourself. It would simply change you in ways<br>
similar to the numerous changes you have already experienced in the <br>
course of your life. The whole point of the brain is neural plasticity.<br></blockquote><div><br></div><div>The argument is not about identity, it is about consciousness not being substrate specific.<br></div></div><br>-- <br><div dir="ltr">Stathis Papaioannou</div></div><div id="m_-2657965232514762273DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"><br>
<table style="border-top:1px solid #d3d4de">
<tr>
<td style="width:55px;padding-top:13px"><a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" target="_blank" rel="noreferrer"><img src="https://ipmcdn.avast.com/images/icons/icon-envelope-tick-round-orange-animated-no-repeat-v1.gif" alt="" width="46" height="29" style="width:46px;height:29px"></a></td>
<td style="width:470px;padding-top:12px;color:#41424e;font-size:13px;font-family:Arial,Helvetica,sans-serif;line-height:18px">Virus-free. <a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" style="color:#4453ea" target="_blank" rel="noreferrer">www.avast.com</a>
</td>
</tr>
</table><a href="#m_-2657965232514762273_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2" width="1" height="1" rel="noreferrer"></a></div><div id="m_-2657965232514762273gmail-m_-924659830263318866DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"><br>
<table style="border-top:1px solid rgb(211,212,222)">
<tbody><tr>
<td style="width:55px;padding-top:13px"><a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" target="_blank" rel="noreferrer"><img src="https://ipmcdn.avast.com/images/icons/icon-envelope-tick-round-orange-animated-no-repeat-v1.gif" alt="" style="width:46px;height:29px" width="46" height="29"></a></td>
<td style="width:470px;padding-top:12px;color:rgb(65,66,78);font-size:13px;font-family:Arial,Helvetica,sans-serif;line-height:18px">Virus-free. <a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" style="color:rgb(68,83,234)" target="_blank" rel="noreferrer">www.avast.com</a>
</td>
</tr>
</tbody></table><a href="#m_-2657965232514762273_m_-924659830263318866_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2" width="1" height="1" rel="noreferrer"></a></div>
</blockquote></div>
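<div dir="auto"><br></div><div dir="auto">To make the point in the P.S. above concrete, here is a minimal sketch in Python. It is illustrative only (none of the class names, weights, or parameters come from the thread) and assumes a deterministic leaky integrate-and-fire model. Two independently written implementations that share the same weights, bias, and threshold emit identical spike trains on identical inputs, so the neurons downstream of a substitution could not tell which implementation is installed.</div>
<pre>
# Illustrative sketch, not from the thread: two differently implemented
# neurons with identical parameters behave identically to their neighbors.
# All names and numbers here are hypothetical.
from fractions import Fraction as F

class LoopNeuron:
    """'Original' neuron: integrates weighted inputs one synapse at a time."""
    def __init__(self, weights, bias, threshold=F(1), leak=F(9, 10)):
        self.weights, self.bias = list(weights), bias
        self.threshold, self.leak = threshold, leak
        self.potential = F(0)

    def step(self, inputs):
        self.potential *= self.leak              # passive decay
        for w, x in zip(self.weights, inputs):   # synaptic integration
            self.potential += w * x
        self.potential += self.bias
        if self.potential >= self.threshold:     # fire and reset
            self.potential = F(0)
            return 1
        return 0

class FoldNeuron:
    """'Replacement' neuron: same weights/bias/threshold, different internals."""
    def __init__(self, weights, bias, threshold=F(1), leak=F(9, 10)):
        self.weights, self.bias = tuple(weights), bias
        self.threshold, self.leak = threshold, leak
        self.potential = F(0)

    def step(self, inputs):
        drive = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        self.potential = self.potential * self.leak + drive
        fired = self.potential >= self.threshold
        if fired:
            self.potential = F(0)
        return int(fired)

weights, bias = [F(2, 5), F(3, 10), F(1, 5)], F(1, 20)
original, replacement = LoopNeuron(weights, bias), FoldNeuron(weights, bias)

stimulus = [(1, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0), (1, 0, 0)] * 4
trace_a = [original.step(x) for x in stimulus]
trace_b = [replacement.step(x) for x in stimulus]

# Identical spike trains: the substitution is invisible downstream.
assert trace_a == trace_b
print(trace_a)
</pre>
<div dir="auto">Exact rational arithmetic (fractions.Fraction) is used instead of floats so that the two implementations agree bit-for-bit regardless of summation order; this also mirrors points A.1 and A.2 above, since with deterministic arithmetic the result is independent of the hardware that computes it.</div>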