<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p><br>
</p>
<br>
<div class="moz-cite-prefix">On 1/21/2017 6:21 PM, Stathis
Papaioannou wrote:<br>
</div>
<blockquote
cite="mid:CAH=2ypVvtT0Xha3wmGfp-PrXXS-d=18byz4rztmag8hHUZy7Bg@mail.gmail.com"
type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 23 December 2016 at 06:23, Brent
Allsop <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:brent.allsop@gmail.com" target="_blank">brent.allsop@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class=""> <br>
<div class="m_6647584519020618404moz-cite-prefix">On
12/21/2016 4:21 PM, Stathis Papaioannou wrote:<br>
</div>
<blockquote type="cite">Your intuition is that in
order to reproduce consciousness it may not be
sufficient to just reproduce the behaviour of the
human brain, because consciousness might reside in
the actual brain substance. This, I think, is what
Brent is claiming. He further claims that one day we
may be able to work out the exact correlates of
experience - glutamate for red experiences for
example (for illustrative purposes - it wouldn't be
as simple as this). But there is an argument due to
philosopher David Chalmers that assumes this common
intuition to be true and shows that it leads to
absurdity:
<div><br>
</div>
<div>
<p
style="margin:0px;font-size:12px;line-height:normal;font-family:Helvetica"><span
style="font-size:12pt"><a
moz-do-not-send="true"
href="http://consc.net/papers/qualia.html"
target="_blank">http://consc.net/papers/<wbr>qualia.html</a></span></p>
</div>
<br>
</blockquote>
<br>
</span> and<span class=""><br>
<br>
<div class="m_6647584519020618404moz-cite-prefix">On
12/22/2016 1:31 AM, Stathis Papaioannou wrote:<br>
</div>
<blockquote type="cite">The theory of mind called
"functionalism" holds that consciousness results
from the brain carrying out its business of
cognition, rather than from the actual substrate of
the brain. This would mean that if the function of
the brain could be reproduced using another
substrate, such as a digital computer, the
associated consciousness would also be reproduced.
The paper by Chalmers I cited is a reductio ad
absurdum starting with the assumption that
consciousness is substrate-dependent, thus
establishing functionalism as the better theory. <br>
</blockquote>
<br>
</span> Thanks for bringing this up! This neural
substitution argument for functionalism was around well
before Chalmers used it in his paper. For example, Hans
Moravec made the same argument back in 1988 in his book
Mind Children.<br>
<br>
<a moz-do-not-send="true"
class="m_6647584519020618404moz-txt-link-freetext"
href="https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187"
target="_blank">https://www.amazon.com/Mind-<wbr>Children-Future-Robot-<wbr>Intelligence/dp/0674576187</a><br>
<br>
So at least Stathis Papaioannou, Hans Moravec, David
Chalmers, James Carroll (CC-ed), and a bunch of others
think this argument is sound, which leads them to conclude
that "functionalism is the better theory" and results in
the apparent "hard problem" conundrum. I think all these
people are world leaders, understanding-wise, in this
field, so we need to take this argument seriously. But
despite this, it seems obvious to me that this so-called
"hard" problem is a simple misunderstanding of how
phenomenal computation works below the abstracted layer,
at the hardware quality dependent layer.<br>
</div>
</blockquote>
<div><br>
</div>
<div>The "hard problem" and functionalism are not really
related. The "hard problem" can still be stated if
consciousness is substrate dependent or if it is due to an
immortal soul.<br>
</div>
</div>
</div>
</div>
</blockquote>
<br>
The so called "hard problem" has lots of possible meanings. I was
referring to the problem that Chalmers refereed to and used the
neural substitution to argue for functionality. I'd love to know
what you mean by "hard problem" and how it can be stated in a
substrate dependent way.<br>
<br>
<blockquote
cite="mid:CAH=2ypVvtT0Xha3wmGfp-PrXXS-d=18byz4rztmag8hHUZy7Bg@mail.gmail.com"
type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Let me describe the
hardware quality dependent layer in today's computers in
a slightly qualitatively advanced way to illustrate how
this misunderstanding results. One of the fundamental
operations of a computation device is comparisons: Is a
1 different than a zero? So fundamentally, today's
computer circuits are composed of lots of such
comparison gates that let you know if the voltage on one
wire is the same as the voltage on another wire. In
other words, we are talking about a simple exclusive or
functional operation:<br>
<br>
<a moz-do-not-send="true"
class="m_6647584519020618404moz-txt-link-freetext"
href="https://en.wikipedia.org/wiki/XOR_gate"
target="_blank">https://en.wikipedia.org/wiki/<wbr>XOR_gate</a><br>
<br>
So, instead of implementing our XOR comparison function
with simple voltages that are not very qualitatively
different physically, let's use neurotransmitter molecule
comparisons, such as between glutamate and glycine.
Let's implement our XOR function with a comparison neuron
that fires if two of its input synapses are chemically
the same and does not fire if they are different. In
effect, this comparison neuron is a good glutamate
detector: if glutamate is being fed to one of its input
synapses, nothing but glutamate in the other will cause
it to fire.<br>
<br>
The complete XOR neural setup is composed of three
significant neurons: two input neurons that can dump
different neurotransmitters into the two input synapses,
and a third comparison neuron that fires if the two input
synapses are chemically the same. So let's perform the
neural substitution on this XOR gate. We first replace
one of the input neurons with a silicon system that
functions identically. When it outputs a positive
voltage, that is considered to represent what glutamate
is chemically like; outputting a zero voltage is
considered to represent dumping something chemically
different from glutamate into the synapse of the
comparator neuron. At this point, you have to add a
physical translator between this first silicon neuron
substitution and the real comparator neuron. When the
silicon neuron outputs a positive voltage, the
translation mechanism feeds glutamate to the comparison
neuron. Since the real neuron is receiving glutamate, it
is happy, and it fires because its two inputs are
chemically, or qualitatively, the same. Now, in order to
replace the comparator neuron, you also need to replace
the other input with a translator system. This system
translates glutamate, coming from the second input
neuron, into a positive voltage fed into the newly
artificial comparator neuron. This simple XOR gate now
functions identically to the comparator neuron: it fires
if the two inputs are the same, and doesn't fire if they
are different.<br>
<br>
With that, you should be able to see the flaw in this
neural substitution logic. The physical qualities being
compared by these two functionally identical XOR systems
are critically important when it comes to our
consciousness. That is why Thomas Nagel wants to know
what the two comparison systems are physically and
qualitatively like. The two inputs being compared, and
what they are physically, chemically, and qualitatively
like, are important to understanding the nature of
physical qualitative comparison. The two systems can be
thought of as functionally the same, but the qualities
of what they are comparing are physically very different.<span
class="HOEnZb"></span></div>
</blockquote>
<div><br>
</div>
<div>Well, I don't see the flaw. If just one of the input
neurons in the XOR system is changed, but it behaves in
the same way, then the system behaves in the same way. The
artificial neuron detects glutamate when the original
neuron would have and sends output to the comparator
neuron when the original neuron would have. That is
what "functionally identical" means. <br>
</div>
</div>
<br>
</div>
</div>
</blockquote>
<br>
And in another e-mail you said:<br>
<br>
"You are using several terms that are confusing, and might be seen
as begging the question: "representation", "qualities", "awareness".
We can agree on what behaviour is: it is that which is observable
from the outside. We can agree on what qualia are: they are private
experiences that, unlike behaviour, can only be guessed at by an
external observer. I pointed out in my previous post that by
"function" I meant behaviour, while you perhaps took it as also
including qualia. So you see, it can quickly get confusing."<br>
<br>
<br>
Yes, it can get confusing, and I am just not yet communicating
adequately, because you are completely missing and abstracting away
the functionality I'm trying to talk about. You do this when you say:
"The artificial neuron detects glutamate when the original neuron
would have." This is incorrect: it does not detect real physical
glutamate or its real qualitative functionality. It only detects an
abstracted representation of glutamate, represented by something
physically very different, and it only works the way it does (so that
you can think of it as if it were behaving like glutamate) because of
a hardware translation system.<br>
<br>
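To make this concrete, here is a rough Python sketch of the simplified
example as I understand it. The molecule names, the voltage
convention, and the function names are just illustrative stand-ins for
the thought experiment, not a model of real neurochemistry:<br>
<pre>
# One input neuron has been replaced by a silicon part plus a
# translator, while the comparator neuron is still the original
# chemical one.  All names and values are illustrative stand-ins.

# In this simplified world, glutamate is the molecule that has the
# redness quality; glycine is just "something chemically different".
GLUTAMATE = "glutamate"
GLYCINE = "glycine"

def chemical_comparator(molecule_a, molecule_b):
    """Original comparator neuron: fires only when the two input
    synapses receive the chemically same neurotransmitter."""
    return molecule_a == molecule_b

def silicon_input_neuron(signal_is_red):
    """Substituted input neuron: outputs only an abstract voltage
    (1 or 0) that is *defined* to represent glutamate or not."""
    return 1 if signal_is_red else 0

def translator(voltage):
    """Translation hardware: converts the abstract voltage back into
    a real molecule so the remaining chemical comparator still works."""
    return GLUTAMATE if voltage == 1 else GLYCINE

# The hybrid system behaves exactly like the original...
original = chemical_comparator(GLUTAMATE, GLUTAMATE)
hybrid = chemical_comparator(translator(silicon_input_neuron(True)), GLUTAMATE)
assert hybrid == original   # both comparators fire
# ...but the silicon part never produces or detects glutamate itself;
# only the translator's convention lets the rest of the system treat
# its voltage *as if* it were glutamate.
</pre>
<br>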
Remember that it is glutamate that has the redness quality. The
physical behavior of glutamate is the only thing in the simplified
example world that physically behaves or functions like redness. So
when you are "detecting" real glutamate, you are detecting the
physical qualities or functionality of redness. But when you swap
this out with something different, you are replacing it with a
physical device that behaves in a very different functional way, one
that, by definition, is not the functionality or physical quality of
real glutamate. It is a different physical function, with different
translation hardware that enables you to think of it as if it were
behaving like real glutamate; but it is not real glutamate at all,
nor is there any real redness functionality going on in the
artificial system. The artificial system doesn't have redness any
more than the word "red" does. You can think of whatever is
representing it, even though it doesn't have redness, as if it did,
but only if you have adequate translation hardware.<br>
<br>
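And here is the same sketch extended to the fully substituted system,
once the comparator neuron itself has been replaced (again, purely
illustrative):<br>
<pre>
# Fully substituted system: the comparator itself is now artificial,
# so nothing in the system ever touches a molecule; it only compares
# abstract representations.  Names and values are illustrative only.

GLUTAMATE = "glutamate"
GLYCINE = "glycine"

def chemical_comparator(molecule_a, molecule_b):
    # fires when the two synapses receive the chemically same molecule
    return molecule_a == molecule_b

def artificial_comparator(voltage_a, voltage_b):
    # fires when the two abstract voltages carry the same representation
    return voltage_a == voltage_b

def translator(voltage):
    # the convention that lets a bare voltage stand in for a molecule
    return GLUTAMATE if voltage == 1 else GLYCINE

# The two comparators are behaviourally interchangeable...
for a in (0, 1):
    for b in (0, 1):
        assert artificial_comparator(a, b) == \
               chemical_comparator(translator(a), translator(b))
# ...but only the chemical comparator ever compares the molecules
# themselves; the artificial one compares arbitrary stand-ins whose
# link to glutamate exists only in the translation convention.
</pre>
<br>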
In the case of qualia, "functionally identical" means having the same
functionality as whatever the physical or detectable attributes of
redness are, and that is exactly what you abstract away when you do
this kind of substitution. You are ignoring and swapping out the
important functionality: the functionality of a redness experience.<br>
<br>
Brent Allsop<br>
<br>
<br>
<br>
</body>
</html>