[ExI] Substitution argument
Stuart LaForge
avant at sollegro.com
Sun May 22 23:05:24 UTC 2022
Quoting Stathis Papaioannou:
>>
>> It might intuitively aid the understanding of my argument to examine a
>> higher order network. The substitution argument suggests that a small
>> part of my brain could be replaced by a functionally identical
>> artificial part, and I would not be able to tell the difference. The
>> problem with this argument is that the function of any neuron or
>> neural circuit of the brain is not determined solely by the properties
>> of the neuron or neural circuit, but by its holistic relationship with
>> all the other neurons it is connected to. So not only could an
>> artificial neuron not be an "indistinguishable substitute" for the
>> native neuron, but even another identical biological neuron would not
>> be a sufficient replacement unless it was somehow grown or developed
>> in the context of a brain identical to yours.
>
>
> "Functionally identical" means that the replacement interacts with the
> remaining tissue exactly the same way as the original did. If it doesn't,
> then it isn't functionally identical.
I don't have an issue with the meaning of "functionally identical"; I
just don't believe such a functionally identical replacement is
possible, not when the function of a neuron depends so heavily on the
neurons it is connected to. That is a flawed premise, and it
invalidates the argument.
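To make the disputed premise concrete, here is a minimal sketch
(assuming an idealized artificial network, not biology; the function
names are mine, for illustration only). In such a network,
"functionally identical" just means computing the same input-to-output
mapping, so a unit with different internals can be swapped in without
the rest of the network noticing:

    # Minimal sketch: two units with different internals but the
    # same input -> output mapping are interchangeable.
    import math

    def neuron_logistic(inputs, weights):
        # Original unit: weighted sum through a logistic squashing.
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1.0 / (1.0 + math.exp(-s))

    def neuron_tanh(inputs, weights):
        # Different internals, identical mapping: a rescaled tanh
        # is mathematically equal to the logistic function.
        s = sum(w * x for w, x in zip(weights, inputs))
        return 0.5 * (math.tanh(s / 2.0) + 1.0)

    inputs, weights = [0.3, -1.2, 0.7], [0.5, 0.1, -0.9]
    print(neuron_logistic(inputs, weights))  # 0.3543...
    print(neuron_tanh(inputs, weights))      # same, to floating-point error

The catch is that a biological neuron's mapping is not a fixed formula
like this; it is continually reshaped by the neurons around it, so
there is no stable function for a replacement to copy.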
>> It might be more intuitively obvious to consider your family rather than a
>> brain. If you were instantaneously replaced with a clone of yourself,
>> even if that clone had been trained in your memories up until let's
>> say last month, your family would notice some pretty jarring
>> differences between you and your clone. Those problems could
>> eventually go away as your family adapted to your clone, and your
>> clone adapted to your family, but the actual replacement itself would
>> be obvious to your family when it occurred.
>
>
> How would your family notice a difference if your behaviour were exactly
> the same?
The clone would have no memory of any family interactions that
happened since the clone was last updated. Commitments, arguments,
obligations, promises, plans and other relationship details would
seemingly be forgotten. At best, the family would wonder why you had
lost your memories of the last 30 days, possibly assuming you were
doing drugs or something. You can't behave correctly when that
behavior is based on knowledge you don't possess.
>> Similarly, an artificial replacement neuron/neural circuit (or even a
>> biological one) would have to undergo "on the job training" to
>> sufficiently substitute for the component it was replacing. And if the
>> circuit was extensive enough, you and the people around you would
>> notice a difference.
>
>
> Technical difficulty is not a problem in a thought experiment. The argument
> is that IF a part of your brain were replaced with a functionally identical
> analogue THEN your consciousness would necessarily be preserved.
Technical difficulty bordering on impossibility can make a thought
experiment irrelevant. For example, IF I had a functional time
machine, THEN I could undo all my mistakes in the past. The
substitution argument is logically valid, but it stems from false
premises, so it is not sound.
Secondly, it is a mistake to assume that one's consciousness cannot be
changed while preserving one's identity. Your consciousness changes
all the time, but you do not cease being you because of it. When I
have too much to drink, my consciousness changes, but I am still me,
albeit drunk. So a not-quite-functionally-identical analogue to a
neural circuit that noticeably changes your consciousness would not
suddenly render you no longer yourself. It would simply change you in
ways similar to the numerous changes you have already experienced in
the course of your life. The whole point of the brain is neural
plasticity.
Stuart LaForge