[ExI] Substitution argument

Brent Allsop brent.allsop at gmail.com
Mon May 23 14:58:53 UTC 2022


Hi Stuart,

What do you mean by "The substitution argument is logically sound but it
stems from false premises"?



On Sun, May 22, 2022 at 9:09 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Here's a simpler way to consider a substitution with functionally
> equivalent parts, one less subject to worries about whether all the
> important behaviors of neurons are captured:
>
> A) Consider a full brain simulation down to the quark-gluon level.
>
> 1. Run on a Windows computer with AMD processors.
>
> 2. Run on a Mac computer with Intel processors.
>
> The two computers can be swapped without having any bearing on the
> behavior of the simulated brain.
>
> Jason
>
>
> P.S. As I understand it, the generally described neuron substitution
> argument does not use generic, identically behaving neurons, but ones
> wired up in the same way as the neuron being replaced, with the same
> weights and biases as the original, such that its spiking/firing
> behavior and interactions with its neighboring connected neurons will
> be the same as the original's.
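
To make the P.S. concrete, here is a minimal sketch of such a
substitution in Python. The model and every name in it are illustrative
assumptions, not anything from the thread: a toy threshold neuron stands
in for a real one. The replacement is built from the original's weights,
bias, and threshold, so the rest of the network sees identical spiking
behavior; and because the update rule uses only integer arithmetic, the
output is bit-identical whatever processor runs it, which is the
AMD/Intel point above.

    # Toy sketch (illustrative only): a replacement part wired with the
    # original's parameters is indistinguishable to the rest of the network.
    # Integer arithmetic keeps the result bit-identical on any processor.

    class Neuron:
        def __init__(self, weights, bias, threshold):
            self.weights = weights      # synaptic weights, one per input
            self.bias = bias
            self.threshold = threshold

        def fire(self, inputs):
            # Weighted sum of incoming spikes (0 or 1), plus bias.
            total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
            return 1 if total >= self.threshold else 0

    class ReplacementNeuron(Neuron):
        # A differently built part carrying the original's parameters.
        def fire(self, inputs):
            total = self.bias           # same mapping, computed another way
            for w, x in zip(self.weights, inputs):
                total += w * x
            return 1 if total >= self.threshold else 0

    def run(network, stimulus):
        # Feed each spike pattern to every neuron; collect the spike trains.
        return [tuple(n.fire(s) for n in network) for s in stimulus]

    original = [Neuron([2, -1, 3], 1, 3), Neuron([1, 1, 1], 0, 2)]
    stimulus = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
    before = run(original, stimulus)

    # Substitute the first neuron with a functionally identical replacement.
    donor = original[0]
    original[0] = ReplacementNeuron(donor.weights, donor.bias, donor.threshold)

    assert run(original, stimulus) == before  # nothing downstream can tell

The subclassing is only for brevity; the point is that nothing observing
fire() can distinguish the two implementations, however differently they
are built inside.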
>
> On Sun, May 22, 2022, 7:49 PM Stathis Papaioannou via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Mon, 23 May 2022 at 09:06, Stuart LaForge via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Quoting Stathis Papaioannou:
>>>
>>>
>>> >>
>>> >> It might intuitively aid the understanding of my argument to examine
>>> >> a higher-order network. The substitution argument suggests that a small
>>> >> part of my brain could be replaced by a functionally identical
>>> >> artificial part, and I would not be able to tell the difference. The
>>> >> problem with this argument is that the function of any neuron or
>>> >> neural circuit of the brain is not determined solely by the properties
>>> >> of the neuron or neural circuit, but by its holistic relationship with
>>> >> all the other neurons it is connected to. So not only could an
>>> >> artificial neuron not be an "indistinguishable substitute" for the
>>> >> native neuron, but even another identical biological neuron would not
>>> >> be a sufficient replacement unless it was somehow grown or developed
>>> >> in the context of a brain identical to yours.
>>> >
>>> >
>>> > "Functionally identical" means that the replacement interacts with the
>>> > remaining tissue exactly the same way as the original did. If it
>>> > doesn't, then it isn't functionally identical.
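
Stathis's definition can be read operationally, and a hedged sketch may
help (everything here is an illustrative assumption, not anything from
the thread): treat the original part and the candidate replacement as
black boxes, drive both with the same stimulus patterns, and compare
every response. Any divergence disqualifies the candidate; note that a
finite test can only falsify equivalence, never prove it.

    import random

    def functionally_identical(original, replacement, trials=10_000, width=3):
        # Operational reading of the definition above: across many stimulus
        # patterns, the replacement must respond exactly as the original.
        rng = random.Random(0)              # fixed seed: reproducible test
        for _ in range(trials):
            stimulus = tuple(rng.randint(0, 1) for _ in range(width))
            if original(stimulus) != replacement(stimulus):
                return False                # any divergence disqualifies it
        return True

    # Two components that compute the same mapping by different routes.
    native  = lambda s: int(sum(s) >= 2)             # majority vote
    implant = lambda s: int(s[0] + s[1] + s[2] > 1)  # same mapping
    print(functionally_identical(native, implant))   # True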
>>>
>>> I don't have an issue with the meaning of "functionally identical"; I
>>> just don't believe such a functionally identical replacement is
>>> possible. Not when the function of a neuron is so dependent on the
>>> neurons that it is connected to. It is a flawed premise and
>>> invalidates the argument.
>>>
>>
>> It does not invalidate the argument since the argument does not depend on
>> it being practically possible to make the substitution.
>>
>>
>>> >> It might be more intuitively obvious to consider your family rather
>>> >> than a brain. If you were instantaneously replaced with a clone of
>>> >> yourself,
>>> >> even if that clone had been trained in your memories up until let's
>>> >> say last month, your family would notice some pretty jarring
>>> >> differences between you and your clone. Those problems could
>>> >> eventually go away as your family adapted to your clone, and your
>>> >> clone adapted to your family, but the actual replacement itself would
>>> >> be obvious to your family when it occurred.
>>> >
>>> >
>>> > How would your family notice a difference if your behaviour were
>>> > exactly the same?
>>>
>>> The clone would have no memory of any family interactions that
>>> happened since the clone was last updated. Commitments, arguments,
>>> obligations, promises, plans and other relationship details would
>>> seemingly be forgotten. At best, the family would wonder why you had
>>> lost your memories of the last 30 days, possibly assuming you were
>>> doing drugs or something. You can't behave correctly when that
>>> behavior is based on knowledge you don't possess.
>>>
>>
>> If it's missing memories, it isn't functionally identical.
>>
>>
>>> >> Similarly, an artificial replacement neuron/neural circuit (or even a
>>> >> biological one) would have to undergo "on the job training" to
>>> >> sufficiently substitute for the component it was replacing. And if the
>>> >> circuit was extensive enough, you and the people around you would
>>> >> notice a difference.
>>> >
>>> >
>>> > Technical difficulty is not a problem in a thought experiment. The
>>> > argument is that IF a part of your brain were replaced with a
>>> > functionally identical analogue THEN your consciousness would
>>> > necessarily be preserved.
>>>
>>> Technical difficulty bordering on impossibility can make a thought
>>> experiment irrelevant. For example, IF I had a functional time
>>> machine, THEN I could undo all my mistakes in the past. The
>>> substitution argument is logically sound but it stems from false
>>> premises.
>>>
>>
>> Well, if you had a time machine you could undo the mistakes of the past,
>> and if that's all you want to show then the argument is valid (ignoring the
>> logical paradoxes of time travel). The fact that a time machine may be
>> impossible does not change this. In a similar way, all that is claimed in
>> the substitution argument is that if you reproduced the behaviour of the
>> substituted part then you would also reproduce any associated consciousness.
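
Put schematically (the lettering here is mine, offered only as one way
to pin down the point):

    B = "the replacement reproduces the behaviour of the substituted part"
    C = "the associated consciousness is also reproduced"

    The substitution argument asserts only the conditional B -> C.

Whether B is physically achievable bears on whether the conditional ever
applies, not on whether it is true, exactly as with the time machine
example.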
>>
>>
>>> Secondly, it is a mistake to assume that one's consciousness cannot be
>>> changed while preserving one's identity. Your consciousness changes all
>>> the time, but you do not cease being you because of it. When I have too
>>> much to drink, my consciousness changes, but I am still me, albeit
>>> drunk. So a not-quite-functionally-identical analogue to a neural
>>> circuit that noticeably changes your consciousness would not suddenly
>>> render you no longer yourself. It would simply change you in ways
>>> similar to the numerous changes you have already experienced in the
>>> course of your life. The whole point of the brain is neural plasticity.
>>>
>>
>> The argument is not about identity; it is about consciousness not being
>> substrate-specific.
>>
>> --
>> Stathis Papaioannou
>>
>>