[ExI] Do digital computers feel?

Stathis Papaioannou stathisp at gmail.com
Sun Jan 22 01:21:56 UTC 2017


On 23 December 2016 at 06:23, Brent Allsop <brent.allsop at gmail.com> wrote:

>
> On 12/21/2016 4:21 PM, Stathis Papaioannou wrote:
>
> Your intuition is that in order to reproduce consciousness it may not be
> sufficient to just reproduce the behaviour of the human brain, because
> consciousness might reside in the actual brain substance. This, I think,
> is what Brent is claiming. He further claims that one day we may be able
> to work out the exact correlates of experience - glutamate for red
> experiences, for example (for illustrative purposes; it wouldn't be as
> simple as this). But there is an argument due to philosopher David
> Chalmers that assumes this common intuition to be true and shows that it
> leads to absurdity:
>
> http://consc.net/papers/qualia.html
>
>
> and
>
> On 12/22/2016 1:31 AM, Stathis Papaioannou wrote:
>
> The theory of mind called "functionalism" holds that consciousness results
> from the brain carrying out its business of cognition, rather than from the
> actual substrate of the brain. This would mean that if the function of the
> brain could be reproduced using another substrate, such as a digital
> computer, the associated consciousness would also be reproduced. The paper
> by Chalmers I cited is a reductio ad absurdum starting with the assumption
> that consciousness is substrate-dependent, thus establishing functionalism
> as the better theory.
>
>
> Thanks for bringing this up!  This neural substitution argument for
> functionalism was around well before Chalmers used it in his paper.  For
> example, Hans Moravec made the same argument back in 1988, in his book
> Mind Children:
>
> https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187
>
> So at least Stathis Papaioannou, Hans Moravec, David Chalmers, James
> Carroll (CC-ed), and a bunch of others think this argument is sound,
> which leads them to conclude that "functionalism is the better theory"
> and lands them in the apparent "hard problem" conundrum.  I think all
> these people are among the world's leading minds, understanding-wise, in
> this field, so we need to take this argument seriously.  But, despite
> this, it seems obvious to me that this so-called "hard" problem is a
> simple misunderstanding of how phenomenal computation works below the
> abstracted layer - at the hardware-quality-dependent layer.
>

The "hard problem" and functionalism are not really related. The "hard
problem" can still be stated if consciousness is substrate dependent or if
it is due to an immortal soul.


> Let me describe the hardware-quality-dependent layer in today's computers
> in a slightly qualitatively advanced way, to illustrate how this
> misunderstanding arises.  One of the fundamental operations of a
> computing device is comparison: is a 1 different from a 0?  So,
> fundamentally, today's computer circuits are composed of lots of such
> comparison gates that let you know whether the voltage on one wire is the
> same as the voltage on another wire.  In other words, we are talking
> about a simple exclusive-or (XOR) functional operation:
>
> https://en.wikipedia.org/wiki/XOR_gate
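>
> As a minimal sketch of the comparison being described (plain Python,
> purely illustrative), an XOR gate fires exactly when its two inputs
> differ:
>
>     def xor_gate(a: int, b: int) -> int:
>         # Fire (1) only when the two input voltages differ.
>         return 1 if a != b else 0
>
>     # Truth table: (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
>     for a in (0, 1):
>         for b in (0, 1):
>             print(a, b, xor_gate(a, b))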
>
> So, instead of just implementing our XOR logical comparison function with
> simple voltages that are not physically very qualitatively different,
> let's use neurotransmitter molecule comparisons, say between glutamate
> and glycine.  Let's implement our XOR function with a comparison neuron
> that fires if two of its input synapses are chemically the same and does
> not fire if they are different.  In effect, this comparison neuron is a
> good glutamate detector: if glutamate is being fed to one of its input
> synapses, nothing but glutamate in the other will cause it to fire.
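>
> A rough sketch of that comparison neuron (hypothetical Python; the names
> are illustrative, not a model of real neurochemistry):
>
>     def comparison_neuron(synapse_a: str, synapse_b: str) -> bool:
>         # Fires only when the two synapses receive the same transmitter.
>         # Note: firing on sameness is the complement of XOR (i.e. XNOR).
>         return synapse_a == synapse_b
>
>     comparison_neuron("glutamate", "glutamate")  # True: fires
>     comparison_neuron("glutamate", "glycine")    # False: does not fire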
>
> So, the complete XOR neural setup is composed of three significant
> neurons: two input neurons that can dump different neurotransmitters into
> the two input synapses, and a third comparison neuron that fires if the
> two input synapses are chemically the same.  So let's perform the neural
> substitution on this XOR gate.  We first replace one of the input neurons
> with a silicon system that functions identically.  When it outputs a
> positive voltage, that is considered to represent what glutamate is
> chemically like; outputting a zero voltage is considered to represent
> dumping something chemically different from glutamate into the synapse of
> the comparator neuron.  At this point, you have to add a physical
> translator between this first silicon neuron substitution and the real
> comparator neuron, so that when the silicon neuron outputs a positive
> voltage, the translation mechanism feeds glutamate to the comparison
> neuron.  Obviously, since the real neuron is receiving glutamate, it is
> happy, and it fires, since its two inputs are chemically, or
> qualitatively, the same.  Now, obviously, in order to replace the
> comparator neuron itself, you also need to replace the other input with a
> translator system.  This system translates glutamate, coming from the
> second input neuron, into a positive voltage fed into the newly
> artificial comparator neuron.  With that, this artificial XOR gate
> functions identically to the original comparator setup: it fires if the
> two inputs are the same, but doesn't fire if they are different.
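>
> Extending the same hypothetical sketch to the substitution (assuming the
> translator simply maps the voltage back to a transmitter):
>
>     def silicon_input_neuron(signal: bool) -> int:
>         # Positive voltage (1) stands in for glutamate; 0 for anything else.
>         return 1 if signal else 0
>
>     def translator(voltage: int) -> str:
>         # Feeds the still-biological comparator the transmitter that the
>         # silicon neuron's voltage is taken to represent.
>         return "glutamate" if voltage == 1 else "glycine"
>
>     # Original pathway: transmitters fed directly to the comparator.
>     comparison_neuron("glutamate", "glutamate")  # fires
>
>     # Substituted pathway: silicon neuron -> translator -> comparator.
>     comparison_neuron(translator(silicon_input_neuron(True)),
>                       "glutamate")               # also fires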
>
> With that, you should be able to see the flaw in this neural substitution
> logic.  The physical qualities being compared in these two functionally
> identical XOR systems are critically important when it comes to our
> consciousness.  That is why Thomas Nagel wants to know what the two
> comparison systems are physically and qualitatively like.  The two inputs
> being compared, and what they are physically, chemically, and
> qualitatively like, are important to understanding the nature of physical
> qualitative comparison.  The two systems can be thought of as
> functionally the same, but the qualities of what they are comparing are
> physically very different.
>

Well, I don't see the flaw. If just one of the input neurons in the XOR
system is changed, but it behaves in the same way, then the system behaves
in the same way. The artificial neuron detects glutamate when the original
neuron would have and sends output to the comparator neuron when the
original neuron would have. That is what "functionally identical" means.
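
One way to make this concrete (a sketch reusing the hypothetical functions
above): check the original and substituted pathways against every input
combination.

    # Both pathways realise the same input-output function, which is
    # all that "functionally identical" requires.
    for a in (True, False):
        for b in (True, False):
            original = comparison_neuron(
                "glutamate" if a else "glycine",
                "glutamate" if b else "glycine")
            substituted = comparison_neuron(
                translator(silicon_input_neuron(a)),
                "glutamate" if b else "glycine")
            assert original == substituted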


-- 
Stathis Papaioannou