[ExI] Do digital computers feel?

Brent Allsop brent.allsop at gmail.com
Mon Jan 30 03:59:26 UTC 2017



On 1/21/2017 6:21 PM, Stathis Papaioannou wrote:
>
>
> On 23 December 2016 at 06:23, Brent Allsop <brent.allsop at gmail.com> wrote:
>
>
>     On 12/21/2016 4:21 PM, Stathis Papaioannou wrote:
>>     Your intuition is that in order to reproduce consciousness it may
>>     not be sufficient to just reproduce the behaviour of the human
>>     brain, because consciousness might reside in the actual brain
>>     substance. This, I think, is what Brent is claiming. He further
>>     claims that one day we may be able to work out the exact
>>     correlates of experience - glutamate for red experiences for
>>     example (for illustrative purposes - it wouldn't be as simple as
>>     this). But there is an argument due to philosopher David Chalmers
>>     that assumes this common intuition to be true and shows that it
>>     leads to absurdity:
>>
>>     http://consc.net/papers/qualia.html
>>
>>
>
>     and
>
>     On 12/22/2016 1:31 AM, Stathis Papaioannou wrote:
>>     The theory of mind called "functionalism" holds that
>>     consciousness results from the brain carrying out its business of
>>     cognition, rather than from the actual substrate of the brain.
>>     This would mean that if the function of the brain could be
>>     reproduced using another substrate, such as a digital computer,
>>     the associated consciousness would also be reproduced. The paper
>>     by Chalmers I cited is a reductio ad absurdum starting with the
>>     assumption that consciousness is substrate-dependent, thus
>>     establishing functionalism as the better theory.
>
>     Thanks for bringing this up!  This neural substitution argument
>     for functionalism was around well before Chalmers used it in his
>     paper.  Hans Moravec, for example, made the same argument back in
>     1988 in his book Mind Children.
>
>     https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187
>
>     So at least Stathis Papaioannou, Hans Moravec, David Chalmers,
>     James Carroll (CC-ed), and a bunch of others think this argument
>     is sound, leading them to conclude that "functionalism is the
>     better theory" and producing the apparent "hard problem"
>     conundrum.  I think all these people are world leaders,
>     understanding-wise, in this field, so we need to take this
>     argument seriously.  But, despite this, it seems obvious to me
>     that this so-called "hard" problem is a simple misunderstanding
>     of how phenomenal computation works below the abstracted layer,
>     at the hardware-quality-dependent layer.
>
>
> The "hard problem" and functionalism are not really related. The "hard 
> problem" can still be stated if consciousness is substrate dependent 
> or if it is due to an immortal soul.

The so-called "hard problem" has lots of possible meanings.  I was 
referring to the problem Chalmers described when he used the neural 
substitution argument to argue for functionalism.  I'd love to know 
what you mean by "hard problem" and how it can be stated in a 
substrate-dependent way.

>     Let me describe the hardware-quality-dependent layer in today's
>     computers in a slightly qualitatively advanced way to illustrate
>     how this misunderstanding arises.  One of the fundamental
>     operations of a computing device is comparison: is a 1 different
>     from a 0?  So, fundamentally, today's computer circuits are
>     composed of lots of comparison gates that let you know whether
>     the voltage on one wire is the same as the voltage on another
>     wire.  In other words, we are talking about a simple
>     exclusive-or (XOR) style comparison operation:
>
>     https://en.wikipedia.org/wiki/XOR_gate
>
>     So, instead of implementing our XOR logical comparison function
>     with simple voltages that are not physically very qualitatively
>     different, let's use neurotransmitter molecule comparisons, say
>     between glutamate and glycine.  Let's implement our XOR function
>     with a comparison neuron that fires if two of its input synapses
>     are chemically the same and does not fire if they are different.
>     In effect, this comparison neuron is a good glutamate detector:
>     if glutamate is being fed to one of its input synapses, nothing
>     but glutamate in the other will cause it to fire.
>
>     So the complete XOR neural setup is composed of three
>     significant neurons: two input neurons that can dump different
>     neurotransmitters into the two input synapses, and a third
>     comparison neuron that fires if the two input synapses are
>     chemically the same.  So let's perform the neural substitution
>     on this XOR gate.  We first replace one of the input neurons
>     with a silicon system that can function identically.  When it
>     outputs a positive voltage, that is taken to represent what
>     glutamate is chemically like.  Outputting a zero voltage is
>     taken to represent dumping something chemically different from
>     glutamate into the synapse of the comparator neuron.  At this
>     point, you have to add a physical translator between this first
>     silicon neuron substitution and the real comparator neuron.  So
>     when the silicon neuron outputs a positive voltage, the
>     translation mechanism feeds glutamate to the comparison neuron.
>     Obviously, since the real neuron is receiving glutamate, it is
>     happy, and it fires, since its two inputs are chemically or
>     qualitatively the same.  Now, in order to replace the comparator
>     neuron as well, you also need to replace the other input with a
>     translator system.  This system translates glutamate, coming
>     from the second input neuron, into a positive voltage fed into
>     the newly artificial comparator neuron.  So this simple
>     artificial XOR gate functions identically to the original
>     comparator neuron: it fires if the two inputs are the same, and
>     doesn't fire if they are different.
>
>     With that, you should be able to see the flaw in this neural
>     substitution logic.  The physical qualities being compared in
>     these two functionally identical XOR systems are critically
>     important when it comes to our consciousness.  That is why
>     Thomas Nagel wants to know what the two comparison systems are
>     physically and qualitatively like.  The two inputs being
>     compared, and what they are physically, chemically, and
>     qualitatively like, are important to understanding the nature of
>     physical qualitative comparison.  The two systems can be thought
>     of as functionally the same, but the qualities of what they are
>     comparing are physically very different.
>
>
> Well, I don't see the flaw. If just one of the input neurons in the 
> XOR system is changed, but it behaves in the same way, then the system 
> behaves in the same way. The artificial neuron detects glutamate when 
> the original neuron would have and sends output to the comparator 
> neuron when the original neuron would have. That is what 
> "functionally identical" means.
>
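
To make the quoted setup concrete, here is a minimal sketch in Python.  
It is purely illustrative: the function names and the glutamate/glycine 
simplification are mine, following the toy example above rather than 
real neurophysiology.  It models the original chemical comparator and 
the silicon substitution with its translation hardware, and shows that 
the two produce identical firing behaviour for every input:

    GLUTAMATE = "glutamate"
    GLYCINE = "glycine"

    def chemical_comparator(synapse_a, synapse_b):
        # The comparison neuron fires only if its two input synapses
        # are chemically the same (in effect, a glutamate detector
        # when one input is glutamate).
        return synapse_a == synapse_b

    def original_system(chemical_a, chemical_b):
        # Two biological input neurons dump real neurotransmitters
        # into the comparator's synapses.
        return chemical_comparator(chemical_a, chemical_b)

    def silicon_input_neuron(represents_glutamate):
        # The substituted neuron only has a voltage: 1 is *taken to
        # represent* glutamate, 0 represents something chemically
        # different.  No glutamate exists at this stage.
        return 1 if represents_glutamate else 0

    def translator(voltage):
        # Translation hardware: converts the abstract voltage back
        # into a real neurotransmitter for the remaining biological
        # comparator neuron.
        return GLUTAMATE if voltage == 1 else GLYCINE

    def substituted_system(chemical_a, chemical_b):
        # Input neuron A has been replaced by silicon plus a
        # translator; input neuron B and the comparator are still
        # biological.
        voltage = silicon_input_neuron(chemical_a == GLUTAMATE)
        synapse_a = translator(voltage)
        return chemical_comparator(synapse_a, chemical_b)

    for a in (GLUTAMATE, GLYCINE):
        for b in (GLUTAMATE, GLYCINE):
            assert original_system(a, b) == substituted_system(a, b)
    print("Outward firing behaviour is identical for all inputs.")

The assertion is the whole of what the substitution argument checks: 
the fire / not-fire behaviour.  What the sketch leaves visible is that 
the substituted path only ever contains a voltage until the translator 
manufactures glutamate for the remaining biological neuron.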

And in another e-mail you said:

"You are using several terms that are confusing, and might be seen as 
begging the question: "representation", "qualities", "awareness". We can 
agree on what behaviour is: it is that which is observable from the 
outside. We can agree on what qualia are: they are private experiences 
that, unlike behaviour, can only be guessed at by an external observer. 
I pointed out in my previous post that by "function" I meant behaviour, 
while you perhaps took it as also including qualia. So you see, it can 
quickly get confusing."


Yes, it can get confusing, and I am just not yet communicating 
adequately, because you are completely missing and abstracting away the 
functionality I'm trying to talk about.  You do this when you say: "The 
artificial neuron detects glutamate when the original neuron would 
have."  This is incorrect: it does not detect real physical glutamate, 
nor its real qualitative functionality.  It only detects an abstracted 
representation of glutamate, represented by something physically very 
different, and it only works the way it does (so you can think of it as 
if it were behaving like glutamate) because of a hardware translation 
system.
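
Continuing the same toy picture (again just an illustration, with 
made-up stage names), here is a rough way to see what I mean: trace 
what is physically present at each stage of the substituted input 
path.  The silicon neuron never contains glutamate, only a voltage we 
interpret as standing in for glutamate; real glutamate only appears 
because the translation hardware puts it there:

    def trace_substituted_path(input_is_glutamate):
        # What is physically present at each stage of the substituted
        # input path, from silicon neuron to biological comparator.
        voltage = 1 if input_is_glutamate else 0
        chemical = "glutamate" if voltage == 1 else "glycine"
        return [
            ("silicon input neuron",
             f"voltage {voltage}, an abstract stand-in"),
            ("translation hardware",
             f"converts voltage {voltage} into {chemical}"),
            ("comparator synapse",
             f"real {chemical}, supplied by the translator"),
        ]

    for stage, present in trace_substituted_path(True):
        print(f"{stage:22}: {present}")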

Remember that, in this simplified example world, it is glutamate that 
has the redness quality.  The physical behavior of glutamate is the 
only thing in that world that physically behaves or functions like 
redness.  So when you detect real glutamate, you are detecting the 
physical qualities, or functionality, of redness.  But when you swap it 
out for something different, you are replacing it with a physical 
device that behaves in a very different functional way, one that, by 
definition, is not the functionality or physical quality of real 
glutamate.  It is some different physical function, with added 
translation hardware that lets you think of it as if it were behaving 
like real glutamate; but it is not real glutamate at all, nor is there 
any real redness functionality going on in the artificial system.  The 
artificial system doesn't have redness any more than the word "red" 
does.  You can think of whatever is representing redness, something 
that doesn't itself have redness, as if it did, but only if you have 
adequate translation hardware.

In the case of qualia, "functionally identical" means preserving the 
functionality of whatever the physical, detectable attributes of 
redness are, and that is exactly what you abstract away when you do 
this kind of substitution.  You are ignoring, and swapping out, the 
important functionality: the functionality of a redness experience.
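
To put the disagreement in code terms, here is one more small, purely 
illustrative sketch (the names are mine): a test that looks only at 
outward behaviour, fire or not fire for each pair of inputs, will 
report two systems as "functionally identical" even though the 
physical qualities of what is being compared never enter into the test 
at all:

    def behaviourally_identical(system_1, system_2, inputs):
        # An external observer's test: compare only the observable
        # fire / not-fire output for every pair of inputs.  Nothing
        # about what is physically inside either system is examined.
        return all(system_1(a, b) == system_2(a, b)
                   for a in inputs for b in inputs)

    # Two toy systems: one compares the chemicals themselves, the
    # other compares abstract codes that merely stand in for the
    # chemicals.  The behaviour-only test cannot tell them apart.
    def compare_chemicals(a, b):
        return a == b

    code_for = {"glutamate": 1, "glycine": 0}

    def compare_codes(a, b):
        return code_for[a] == code_for[b]

    print(behaviourally_identical(compare_chemicals, compare_codes,
                                  ("glutamate", "glycine")))  # True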

Brent Allsop


