[ExI] Meaningless Symbols.

Stathis Papaioannou stathisp at gmail.com
Mon Jan 18 12:01:11 UTC 2010


2010/1/18 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 1/16/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> You keep assigning absolute atomic status to neurons
>>> and their behaviors, forgetting that just as the brain is
>>> made of neurons, neurons are made of objects too. Those
>>> intra-neuronal objects have as much right to claim atomic
>>> status as does the neuron, and larger inter-neuronal
>>> structures can also make that claim.
>>
>> Everything I've said applies equally well if you consider
>> simulating the behaviour of subneuronal or multineuronal structures.
>> Neurons are just a convenient unit to work with.
>
> If you really thought so then you would consider the brain as the atomic unit. This seems to me the only sensible approach given our limited knowledge of actual neuroscience. But it looks as if you prefer to draw conclusions from extremely speculative predictions about the experiences and behaviors of partial brain-replacement Frankenstein monsters. It just misses the point.

If it is possible to replace part of the brain while leaving behaviour
unchanged, then the obvious next step is to replace the whole brain,
and the patient with the whole-brain replacement would not be a zombie
either, since it is absurd to think that you would be 100% conscious
with 99% of your brain replaced and 0% conscious once the last 1% is
replaced.

The proposed experiments might be speculative insofar as they cannot
be carried out today, but I think they are perfectly easy to
understand, and they break no logical or physical law.

> Either the brain is a computer or it's not, and we can know the answer without torturing anyone in the hospital with crazy experiments. You don't yet see this, and I accept the blame for wasting so much time on the fun.

The brain may not be a computer, but its function, including
consciousness, may be replicable by another machine, including a
computer, just as machines can replicate the function of all sorts of
other things found in nature.

You don't need to do the experiments to draw conclusions from them,
just as you don't need to build a Chinese Room to draw conclusions
from that. Unlike with the CR, though, we will probably one day be in
a position to replace damaged neural tissue with electronic
prostheses. So it's important to know what you would actually make of
this, given that the patients will come out of the surgery saying they
feel well. You either have to say that they are partial zombies who
only think that they feel well (as Lee Corbin thinks is possible) or,
if you find that incoherent, that the electronic prostheses must have
consciousness in them as well, whether by virtue of their programming,
their matter, or because it must be instilled in them by God to keep
the universe consistent.


-- 
Stathis Papaioannou


