<div><div class="gmail_msg"><div class="gmail_msg">On 30 January 2017 at 14:59, Brent Allsop <span class="gmail_msg"><<a href="mailto:brent.allsop@gmail.com" class="gmail_msg" target="_blank">brent.allsop@gmail.com</a>></span> wrote:<br class="gmail_msg"></div></div><div class="gmail_msg"><div class="gmail_extra gmail_msg"><div class="gmail_quote gmail_msg"><blockquote class="gmail_quote gmail_msg" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" class="gmail_msg"><span class="m_-8225378424083781406gmail- gmail_msg">
<p class="gmail_msg"><br class="gmail_msg">
</p>
<br class="gmail_msg">
<div class="m_-8225378424083781406gmail-m_2088364404348181593moz-cite-prefix gmail_msg">On 1/21/2017 6:21 PM, Stathis
Papaioannou wrote:<br class="gmail_msg">
</div>
<blockquote type="cite" class="gmail_msg">
<div class="gmail_msg"><br class="gmail_msg">
<div class="gmail_extra gmail_msg"><br class="gmail_msg">
<div class="gmail_quote gmail_msg">On 23 December 2016 at 06:23, Brent
Allsop <span class="gmail_msg"><<a href="mailto:brent.allsop@gmail.com" class="gmail_msg" target="_blank">brent.allsop@gmail.com</a>></span>
wrote:<br class="gmail_msg">
<blockquote class="gmail_quote gmail_msg" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" class="gmail_msg"><span class="gmail_msg"> <br class="gmail_msg">
<div class="m_-8225378424083781406gmail-m_2088364404348181593m_6647584519020618404moz-cite-prefix gmail_msg">On
12/21/2016 4:21 PM, Stathis Papaioannou wrote:<br class="gmail_msg">
</div>
<blockquote type="cite" class="gmail_msg">Your intuition is that in
order to reproduce consciousness it may not be
sufficient to just reproduce the behaviour of the
human brain, because consciousness might reside in
the actual brain substance. This, I think, is what
Brent is claiming. He further claims that one day we
may be able to work out the exact correlates of
experience - glutamate for red experiences for
example (for illustrative purposes - it wouldn't be
as simple as this). But there is an argument due to
philosopher David Chalmers that assumes this common
intuition to be true and shows that it leads to
absurdity:
<div class="gmail_msg"><br class="gmail_msg">
</div>
<div class="gmail_msg">
<p style="margin:0px;font-size:12px;line-height:normal;font-family:helvetica" class="gmail_msg"><span style="font-size:12pt" class="gmail_msg"><a href="http://consc.net/papers/qualia.html" class="gmail_msg" target="_blank">http://consc.net/papers/qualia.html</a></span></p>
</div>
<br class="gmail_msg">
</blockquote>
<br class="gmail_msg">
</span> and<span class="gmail_msg"><br class="gmail_msg">
<br class="gmail_msg">
<div class="m_-8225378424083781406gmail-m_2088364404348181593m_6647584519020618404moz-cite-prefix gmail_msg">On
12/22/2016 1:31 AM, Stathis Papaioannou wrote:<br class="gmail_msg">
</div>
<blockquote type="cite" class="gmail_msg">The theory of mind called
"functionalism" holds that consciousness results
from the brain carrying out its business of
cognition, rather than from the actual substrate of
the brain. This would mean that if the function of
the brain could be reproduced using another
substrate, such as a digital computer, the
associated consciousness would also be reproduced.
The paper by Chalmers I cited is a reductio ad
absurdum starting with the assumption that
consciousness is substrate-dependent, thus
establishing functionalism as the better theory. <br class="gmail_msg">
</blockquote>
<br class="gmail_msg">
</span> Thanks for bringing this up! This neural
substitution argument for functionalism was around way
before Chalmers used the argument in his paper. For
example Hans Moravec made this same argument way back in
1988, in his book Mind Children.<br class="gmail_msg">
<br class="gmail_msg">
<a class="m_-8225378424083781406gmail-m_2088364404348181593m_6647584519020618404moz-txt-link-freetext gmail_msg" href="https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187" target="_blank">https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187</a><br class="gmail_msg">
<br class="gmail_msg">
So at least Stathis Papaioannou, Hans Moravec, David
Chalmers, James Carroll (CC-ed), and a bunch of others
think this argument is sound, causing them to think
"functionalism is the better theory" resulting in the
apparent "hard problem" conundrum. I think all these
people are world leading, understanding wise, in this
field, so we need to take this argument seriously. But,
despite this, it seems obvious to me that this so called
"hard" problem is a simple misunderstanding of how
phenomenal computation works below the abstracted layer
- at the hardware quality dependent layer.<br class="gmail_msg">
</div>
</blockquote>
<div class="gmail_msg"><br class="gmail_msg">
</div>
<div class="gmail_msg">The "hard problem" and functionalism are not really
related. The "hard problem" can still be stated if
consciousness is substrate dependent or if it is due to an
immortal soul.<br class="gmail_msg">
</div>
</div>
</div>
</div>
</blockquote>
<br class="gmail_msg"></span>
The so called "hard problem" has lots of possible meanings. I was
referring to the problem that Chalmers refereed to and used the
neural substitution to argue for functionality. I'd love to know
what you mean by "hard problem" and how it can be stated in a
substrate dependent way.</div></blockquote><div class="gmail_msg"><br class="gmail_msg"></div></div></div></div></div><div><div class="gmail_msg"><div class="gmail_extra gmail_msg"><div class="gmail_quote gmail_msg"><div class="gmail_msg">The "hard problem" is the question of why there should be any qualia at all. If you show that redness is associated with glutamate, you have not answered this question.</div></div></div></div></div><div><div class="gmail_msg"><div class="gmail_extra gmail_msg"><div class="gmail_quote gmail_msg"><div class="gmail_msg"> </div><blockquote class="gmail_quote gmail_msg" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF" class="gmail_msg"><span class="m_-8225378424083781406gmail- gmail_msg">
<blockquote type="cite" class="gmail_msg">
<div class="gmail_msg">
<div class="gmail_extra gmail_msg">
<div class="gmail_quote gmail_msg">
<div class="gmail_msg"> </div>
<blockquote class="gmail_quote gmail_msg" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" class="gmail_msg"> Let me describe the
hardware quality dependent layer in today's computers in
a slightly qualitatively advanced way to illustrate how
this misunderstanding results. One of the fundamental
operations of a computation device is comparisons: Is a
1 different than a zero? So fundamentally, today's
computer circuits are composed of lots of such
comparison gates that let you know if the voltage on one
wire is the same as the voltage on another wire. In
other words, we are talking about a simple exclusive or
functional operation:<br class="gmail_msg">
<br class="gmail_msg">
<a class="m_-8225378424083781406gmail-m_2088364404348181593m_6647584519020618404moz-txt-link-freetext gmail_msg" href="https://en.wikipedia.org/wiki/XOR_gate" target="_blank">https://en.wikipedia.org/wiki/XOR_gate</a><br class="gmail_msg">
<br class="gmail_msg">
So, instead of just implementing our XOR logical
comparison function with simple voltages that are not
physically very qualitatively different lets use
neurotransmitter molecule comparisons like between
glutamate and glycine. Let's implement our XOR
function with a comparison neuron that fires if two of
it's input synapses are chemically the same and not fire
if they are different. In effect, this comparison
neuron is a good glutamate detector. If glutamate is
being fed to one of it's input synapses, nothing but
glutamate in the other will cause it to fire.<br class="gmail_msg">
<br class="gmail_msg">
So, the complete XOR neural setup is composed of 3
significant neurons. There are two input neurons that
can dump different nero transmitters into the two input
synapses. and the third comparison neuron that fires,
if the two input synapses are chemically the same. So
let's perform the neural substitution on this xor gate.
We first replace one of the input neurons with a
silicone system that can function identically. When it
outputs a positive voltage, it is considered as
representing what glutamate is chemically like.
Outputting a zero voltage is considered to represent
dumping something chemically different than glutamate
into the synapse of the comparitor neuron. At this
point, you have to add a physical translator between
this first silicone neuron substitutuion and the real
comparitor neuron. So when the silicone neuron outputs
a positive voltage, the translation mechanism feeds
glutamate to the comparison neuron. Obviously, since
the real neuron is receiving glutamate, it is happy, and
it fires since it's two inputs are chemically or
qualitatively the same. Now, obviously, in order to
replace the comparitor neuron, you also need to replace
the other input with a translator system. This system
translates glutamate, coming from the second input
neuron, into a positive voltage being fed into the newly
artificial comparitor neuron. So, this simple XOR gate
is functioning identically to the comparitor neuron. It
fires if the two inputs are the same, but doesn't fire
if they are different.<br class="gmail_msg">
<br class="gmail_msg">
With that, you should be able to see the flaw in this
neural substitution logic. The physical qualities being
compared between these two functionally identical XOR
systems is critically important when it comes to our
consciousness. That is why Thomas Nagel is wanting to
know what the two comparison systems are physically and
qualitatively like. The two inputs being compared, and
what they are physically, chemichally, and qualitatively
like is important to understanding the nature of
physical qualitative comparison. The two systems can be
thought of as functionally the same, but the qualities
of what they are comparing is physically very different.<span class="m_-8225378424083781406gmail-m_2088364404348181593HOEnZb gmail_msg"></span></div>
>> Well, I don't see the flaw. If just one of the input neurons in the XOR system is changed, but it behaves in the same way, then the system behaves in the same way. The artificial neuron detects glutamate when the original neuron would have, and sends output to the comparator neuron when the original neuron would have. That is what "functionally identical" means.
</span><span class="m_-8225378424083781406gmail- gmail_msg"> And in another e-mail you said:<br class="gmail_msg">
<br class="gmail_msg">
"You are using several terms that are confusing, and might be seen
as begging the question: "representation", "qualities", "awareness".
We can agree on what behaviour is: it is that which is observable
from the outside. We can agree on what qualia are: they are private
experiences that, unlike behaviour, can only be guessed at by an
external observer. I pointed out in my previous post that by
"function" I meant behaviour, while you perhaps took it as also
including qualia. So you see, it can quickly get confusing."<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg"></span>
Yes it can get confusing, and I am just not yet communicating
adequately, as you are completely missing and abstracting away the
functionality I'm trying to talk about. You do this when you say:
"The artificial neuron detects glutamate when the original neuron
would have." This is incorrect as it does not detect real physical
glutamate nor it's real qualitative functionality, it is only
detecting an abstracted representation of glutamate, represented by
something physically very different, and only working the way it
does (so you can think of it as if it is behaving like glutamate),
because of a hardware translation system.<br class="gmail_msg"></div></blockquote><div class="gmail_msg"><br class="gmail_msg"></div></div></div></div></div><div><div class="gmail_msg"><div class="gmail_extra gmail_msg"><div class="gmail_quote gmail_msg"><div class="gmail_msg">The job of the original neuron is to fire when it detects a certain concentration of glutamate in the synapse, so the job of the artificial neuron is to do the same. This is "real physical glutamate" that it is detecting; otherwise, it wouldn't work.</div></div></div></div></div><div><div class="gmail_msg"><div class="gmail_extra gmail_msg"><div class="gmail_quote gmail_msg"><div class="gmail_msg"> </div><blockquote class="gmail_quote gmail_msg" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF" class="gmail_msg">
> Remember that it is glutamate that has the redness quality.

That is what we are debating. I assume for the sake of argument that this is so, in order to see where it leads. But whether glutamate has the redness quality or not has no bearing on our ability to detect its presence and measure its concentration, for example by chromatography techniques.
> The physical behavior of glutamate is the only thing in the simplified example world that physically behaves or functions like redness. So, when you are "detecting" real glutamate, you are detecting the physical qualities or functionality of redness. But when you swap this out with something different, you are replacing it with a physical device that behaves in a very different functional way and that, by definition, does not have the functionality or physical quality of real glutamate. It is some different physical function, with some different translation hardware, which enables you to think of it as if it were behaving like real glutamate; but it is not at all real glutamate, nor is there any real redness functionality going on in the artificial system. The artificial system doesn't have redness, any more than the word "red" does. But you can think of whatever is representing it, which doesn't have redness, as if it did, only if you have adequate translation hardware.

If the artificial neuron detects real glutamate, the whole brain will behave normally. Do you disagree with this? Do you disagree that the artificial neuron can detect real glutamate, or do you disagree that even if it does, the whole brain will behave normally?
> In the case of qualia, "functionally identical" means having the same functionality as whatever the physical or detectable attributes of redness are, which you abstract away when you do this kind of substitution. You are ignoring and swapping out the important functionality that is the functionality of a redness experience.

Yes - I'm assuming that you're right for the sake of argument, and that the redness experience is eliminated by putting in artificial neurons. But if the artificial neurons fire at the same time the original neurons would, as is the design requirement, the whole brain will behave normally and the muscles it controls will behave normally. So the subject will say, "I see red strawberries; they look exactly the same as they did before my brain implant." Here is the problem: if the artificial neurons can behave the same, but lack qualia, how is it that the subject cannot notice?

--
Stathis Papaioannou