[ExI] Do digital computers feel?
stathisp at gmail.com
Thu Dec 22 22:35:08 UTC 2016
> On 23 Dec. 2016, at 6:23 am, Brent Allsop <brent.allsop at gmail.com> wrote:
>> On 12/21/2016 4:21 PM, Stathis Papaioannou wrote:
>> Your intuition is that in order to reproduce consciousness it may not be sufficient to just reproduce the behaviour of the human brain, because consciousness might reside in the actual brain substance. This, I think, is what Brent is claiming. He further claims that one day we may be able to work out the exact correlates of experience - glutamate for red experiences for example (for illustrative purposes - it wouldn't be as simple as this). But there is an argument due to philosopher David Chalmers that assumes this common intuition to be true and shows that it leads to absurdity:
>> On 12/22/2016 1:31 AM, Stathis Papaioannou wrote:
>> The theory of mind called "functionalism" holds that consciousness results from the brain carrying out its business of cognition, rather than from the actual substrate of the brain. This would mean that if the function of the brain could be reproduced using another substrate, such as a digital computer, the associated consciousness would also be reproduced. The paper by Chalmers I cited is a reductio ad absurdum starting with the assumption that consciousness is substrate-dependent, thus establishing functionalism as the better theory.
> Thanks for bringing this up! This neural substitution argument for functionalism was around well before Chalmers used it in his paper. For example, Hans Moravec made the same argument back in 1988, in his book Mind Children.
> So at least Stathis Papaioannou, Hans Moravec, David Chalmers, James Carroll (CC-ed), and a bunch of others think this argument is sound, leading them to conclude that "functionalism is the better theory" and hence to the apparent "hard problem" conundrum. These people are world-leading in their understanding of this field, so we need to take the argument seriously. But, despite this, it seems obvious to me that this so-called "hard" problem is a simple misunderstanding of how phenomenal computation works below the abstracted layer - at the hardware-quality-dependent layer.
> Let me describe the hardware-quality-dependent layer in today's computers in a slightly qualitatively enhanced way to illustrate how this misunderstanding arises. One of the fundamental operations of a computing device is comparison: is a 1 different from a 0? So, fundamentally, today's computer circuits are composed of lots of such comparison gates, which tell you whether the voltage on one wire is the same as the voltage on another wire. In other words, we are talking about a simple exclusive-or (XOR) style comparison, as in the sketch below:
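> (A minimal Python sketch of this comparison, purely illustrative, with names I just made up. Pedantically, a gate that signals "same" is XNOR, the complement of XOR, but nothing in the argument hangs on that.)
>
> def same_voltage(v1, v2):
>     """Comparison gate: 'fires' (True) iff the two wire voltages match."""
>     return v1 == v2
>
> # Exercise the gate over all four input combinations.
> for v1 in (0, 1):
>     for v2 in (0, 1):
>         print(v1, v2, "->", same_voltage(v1, v2))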
> So, instead of implementing our XOR-style comparison function with simple voltages, which are not very different from each other physically or qualitatively, let's use neurotransmitter molecule comparisons, say between glutamate and glycine. Let's implement our XOR function with a comparison neuron that fires if its two input synapses are chemically the same and doesn't fire if they are different. In effect, this comparison neuron is a good glutamate detector: if glutamate is being fed to one of its input synapses, nothing but glutamate in the other will cause it to fire.
> So, the complete XOR neural setup is composed of three significant neurons: two input neurons that can dump different neurotransmitters into the two input synapses, and a third comparison neuron that fires if the two input synapses are chemically the same.
> Now let's perform the neural substitution on this XOR gate. We first replace one of the input neurons with a silicon system that can function identically. When it outputs a positive voltage, that is taken to represent what glutamate is chemically like; outputting zero voltage represents dumping something chemically different from glutamate into the synapse of the comparator neuron. At this point, you have to add a physical translator between this first silicon substitute and the real comparator neuron, so that when the silicon neuron outputs a positive voltage, the translation mechanism feeds glutamate to the comparison neuron. Obviously, since the real neuron is receiving glutamate, it is happy, and it fires, since its two inputs are chemically and qualitatively the same. Now, in order to also replace the comparator neuron itself, you need to fit the other input with a translator system as well, one that converts glutamate coming from the second input neuron into a positive voltage fed into the newly artificial comparator neuron. This simple silicon XOR gate now functions identically to the comparator neuron it replaced: it fires if the two inputs are the same, but doesn't fire if they are different.
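> Here is the whole substitution sketched in Python (again purely illustrative; every name below is my own invention, not anyone's real model). The assertion passes for every input pair, i.e. all three stages are functionally identical:
>
> # Stage 0: fully biochemical. Stage 1: one input neuron replaced by
> # silicon plus a voltage-to-glutamate translator. Stage 2: the
> # comparator itself replaced, with a glutamate-to-voltage translator
> # on the remaining biochemical input.
>
> GLUTAMATE, GLYCINE = "glutamate", "glycine"
>
> def bio_comparator(chem_a, chem_b):
>     """Real neuron: fires iff its two synapses receive the same chemical."""
>     return chem_a == chem_b
>
> def silicon_comparator(volts_a, volts_b):
>     """Artificial replacement: fires iff its two input voltages match."""
>     return volts_a == volts_b
>
> def volts_to_chemical(volts):
>     """Translator: a positive voltage stands in for glutamate."""
>     return GLUTAMATE if volts == 1 else GLYCINE
>
> def chemical_to_volts(chemical):
>     """Translator: glutamate stands in for a positive voltage."""
>     return 1 if chemical == GLUTAMATE else 0
>
> for a in (GLUTAMATE, GLYCINE):
>     for b in (GLUTAMATE, GLYCINE):
>         stage0 = bio_comparator(a, b)
>         # Stage 1: input A is now silicon; its voltage is translated
>         # back into a chemical for the still-real comparator neuron.
>         stage1 = bio_comparator(volts_to_chemical(chemical_to_volts(a)), b)
>         # Stage 2: the comparator is silicon too; input B's chemical
>         # is translated into a voltage on the way in.
>         stage2 = silicon_comparator(chemical_to_volts(a), chemical_to_volts(b))
>         assert stage0 == stage1 == stage2  # functionally identical
>
> Nothing in the firing behaviour distinguishes the three stages; the only thing that changes is what is physically doing the comparing and what is physically being compared.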
> With that, you should be able to see the flaw in this neural substitution logic. The physical qualities being compared in these two functionally identical XOR systems are critically important when it comes to our consciousness. That is why Thomas Nagel wants to know what the two comparison systems are physically and qualitatively like. What the two inputs being compared are like, physically, chemically, and qualitatively, is essential to understanding the nature of physical qualitative comparison. The two systems can be thought of as functionally the same, but the qualities of what they are comparing are physically very different.
Yes, but do you agree that, despite the silicon-based comparator neurone you describe being physically different, the rest of the brain will function exactly the same?