<div dir="ltr"><br><br><br><br>Stathis Said:<br><br><<<<<br>A volume of neural tissue in the visual cortex is replaced with a<br>black box that reproduces the I/O behaviour at the interface with the<br>surrounding neurons. Do you see how "reproduces the I/O behaviour at<br>
the interface with the surrounding neurons" means the subject must<br>behave normally?<br>>>>><br><br>Exactly. But if one box is representing the strawberries and leaves with inverted red-green qualities, then even though it is behaving exactly the same, this qualitative difference is all-important to consciousness.<br>
<br><br>I’ve attempted to describe one final key point, but you guys are providing lots of evidence that you still don’t get this one very important thing.<br><br>This evidence includes when Kelly replied to my saying:<br>
<br><<<<<br>The prediction is, you will not be able to replace any single neuron, or even large sets of neurons that are the neural correlates of a redness quality, without also replacing significant portions of the rest of the system that is aware of what that redness experience is like. <br>
>>>><br><br>With:<br><br><<<<<br>Replacing a single neuron is going to change the qualia of redness? Really? You can't replace a single neuron without losing something? You better not play soccer, you risk losing your consciousness.<br>
<br>Saying something like this undercuts your credibility, Brent.<br><br>You absolutely can replace small parts of the brain without changing how the person feels. Ask anyone with a cochlear implant. This is a silly claim.<br>
>>>><br><br>I also don’t think Stathis fully understands this. His response to Spike could be evidence of this:<br><br><<<<<br>It's difficult if you try to define or explain qualia. If you stick to<br>
a minimal operational definition - you know you have an experience<br>when you have it - qualia are stupidly simple. The question is, if a<br>part of your brain is replaced with an electronic component that<br>reproduces the I/O behaviour of the biological tissue (something that<br>
engineers can measure and understand), will you continue to have the<br>same experiences or not? If not, that would lead to what Brent has<br>admitted is an absurd situation. Therefore the qualia, whatever the<br>hell they are, must be reproduced if the observable behavior of the<br>
neural tissue is reproduced. No equations, but no complex theories of<br>consciousness or attempts to define the ineffable either.<br>>>>><br><br>So let me try to explain it in more detail to see if that helps.<br>
<br>Let’s just imagine how the transmigration experiment would work in an idealized material-property-dualism theoretical world, even though reality is likely something different and more complex.<br><br>In this idealized theoretical world, it is glutamate that has a redness quality, and this glutamate behaves the way it does because of this redness quality. Also, imagine that there are multiple other neurotransmitters in this world that are like different color crayons. Brains in this world use these colorful neurotransmitters to paint qualitative conscious knowledge with.<br>
<br>To simplify, let’s also imagine that it is a single large neuron that is binding all these synapses representing voxel elements in a 3D space, so we can be aware of all of the colors all at once. If the upstream neurons fire glutamate, for that 3D element, this large binding neuron knows there is a redness quality at that particular location in 3D space. When another upstream neuron fires with another neurotransmitter, it will know there is a leaf there, represented with its greenness quality, at the voxel element representing a point on the surface of the leaf. In other words, all these crayons are just standing alone, unless there is also some system that is binding them all together, so we can be aware of all of them, and their qualitative differences, at one time.<br>
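To make that binding idea concrete, here is a toy sketch in Python, valid only inside my idealized theoretical world. The neurotransmitter names, the quality map, and the voxel coordinates are all hypothetical illustrations (I've paired "glycine" with greenness purely for the example), not real neuroscience:

```python
# Hypothetical "crayon" palette: which quality each neurotransmitter paints with.
QUALITY_OF = {
    "glutamate": "redness",   # paints a point on the strawberry
    "glycine": "greenness",   # paints a point on the leaf (hypothetical pairing)
}

def bind(voxel_inputs):
    """The large binding neuron: compose per-voxel neurotransmitter firings
    into one unified scene, so all the qualities are known at once."""
    return {voxel: QUALITY_OF[nt] for voxel, nt in voxel_inputs.items()}

scene = bind({(1, 2, 3): "glutamate", (1, 2, 4): "glycine"})

# Because both voxels are bound into one scene, the system can compare them
# and report that the two qualities are qualitatively different:
assert scene[(1, 2, 3)] != scene[(1, 2, 4)]
```

The point of the sketch is only that comparison in the last line: without some system holding both qualities at once, there is nothing that can say whether they are the same or different.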
<br>When we look at only the behavior of this large binding neuron, and only think of it abstractly, this neuron which is able to tell you whether a particular neuro transmitter has a redness quality or not, will simply look like a high fidelity glutamate detector. Nothing but the glutamate will result in the neuron firing with the ‘yes that is my redness quality’ result.<br>
<br>Now, within this theoretical world, think of the transmigration process when it replaces this one large binding neuron. Of course the argument admits that the original neuron can’t deal with being presented with ones and zeros. So, when it replaces the glutamate with anything else, it specifies that you also need to replace the neuron detecting the glutamate with something that includes the translation hardware, which interprets the specific set of ones and zeros representing glutamate as the real thing. And this virtual neuron only gives the ‘yes, that is my redness’ when this predefined set of ones and zeros is present.<br>
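Here is that replacement step as a toy sketch, again purely illustrative: both detectors are hypothetical, and the bit pattern is an arbitrary stand-in for whatever set of ones and zeros the translation hardware treats as glutamate:

```python
class RealBindingNeuron:
    """Fires 'redness' only when real glutamate arrives."""
    def detect(self, signal):
        return "redness" if signal == "glutamate" else None

class VirtualBindingNeuron:
    """Replacement with translation hardware: a predefined set of ones and
    zeros (an arbitrary stand-in code here) is interpreted as glutamate."""
    GLUTAMATE_CODE = "1010"
    def detect(self, signal):
        return "redness" if signal == self.GLUTAMATE_CODE else None

real = RealBindingNeuron()
virtual = VirtualBindingNeuron()

# At the interface, each gives the same 'yes, that is my redness' answer to
# its own encoding of the input, so the outward behaviour is identical...
assert real.detect("glutamate") == virtual.detect("1010") == "redness"
# ...even though what is actually being detected is very different in kind.
```

The identical I/O at the interface is exactly what the substitution argument checks, and it is also exactly what hides the difference in what each detector is actually responding to.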
<br>In other words, when people think about this transmigration argument of replacing one neuron at a time in this way, they are explicitly throwing out and ignoring what is important to the ‘that is real glutamate’ detecting system. They are ignoring the additional hardware system that is required that binds all this stuff together, so we can be aware of redness, at the same time as we are aware of greenness, so we can say, yes they are the same, or no, they are qualitatively different.<br>
<br>If a single neuron is what our brain uses to detect glutamate (or whatever the neural correlate of redness turns out to be), then you can see the obvious fallacy in the transmigration thought experiment. It is also theoretically possible that more than a single neuron is involved in detecting the causal properties of glutamate, so that the system says “That is my redness” only when real glutamate (or whatever really is responsible for a redness quality) is present. Not until you replace the entire binding system, the complex process of detecting real glutamate, with an abstracted version that can interpret a specific set of ones and zeros as if it were glutamate, will it finally start behaving the same. And of course, there will be lots of fading qualia while many single neurons sit partway between these two states. Additionally, you will never be able to ‘eff’ whether the abstracted stuff, which is something very different from redness and is only being interpreted as redness, really has a redness experience, unlike the real thing.<br>
<br>That’s at least how I think about it. Does this help you guys at all?<br><br>Brent Allsop<br><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sun, Apr 28, 2013 at 11:54 PM, Stathis Papaioannou <span dir="ltr"><<a href="mailto:stathisp@gmail.com" target="_blank">stathisp@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Mon, Apr 29, 2013 at 3:25 AM, Brent Allsop<br>
<<a href="mailto:brent.allsop@canonizer.com">brent.allsop@canonizer.com</a>> wrote:<br>
<br>
> Hi Stathis,<br>
><br>
> Thanks for putting so much effort towards this, and I apologize that I<br>
> still, despite significant progress thanks to everyone's help, have so much<br>
> difficulty communicating.<br>
><br>
> Yes, I don't believe that "qualia can fade without you noticing" and that it<br>
> will not be possible for you to notice, without changing your behavior, or<br>
> any other way qualitative natures are disconnected from consciousness, and<br>
> its underlying neural correlates.<br>
<br>
</div>Good.<br>
<div><div class="h5"><br>
> You still believe that my real problem is that I still don't understand why<br>
> our behavior can't change due to the replacement. I fully understand all<br>
> this, and you're still jumping back to a straw man that I also do not accept;<br>
> there I agree with you. There is yet another option that you aren't fully<br>
> getting yet, that is not anything like any of these epiphenomenal qualities<br>
> that are disconnected from reality. In this way, the qualities are very<br>
> real, and they have very real causal properties. These causal properties<br>
> are properties we already know, abstractly, all about its behavior. We<br>
> just don't know about its qualitative nature. We will think the system is<br>
> detecting glutamate, merely because of its causal behavior, when in fact,<br>
> it is detecting it because of the qualitative nature, of which the causal<br>
> behavior is a mere symptom, and all we know are abstracted interpretations<br>
> of the same.<br>
><br>
> Let me try to put it this way. James has admitted that an abstracted<br>
> representation of the causal properties of glutamate is just something very<br>
> different than the causal properties of glutamate, configured in a way so<br>
> that these very different causal properties, can be interpreted as real<br>
> glutamate. In other words, he has admitted that the map is not like the<br>
> territory, other than it can be interpreted as such. Do you agree with<br>
> that? And if you do, is it not a very real theoretical possibility, that<br>
> the reason glutamate is behaving the way that it is, is because of its<br>
> redness quality. And is it not also a very real possibility that the real<br>
> glutamate has this ineffable quality (blind to abstracting observation or<br>
> requires mapping back to the real thing) for which the abstracted<br>
> representation of the same, though it can be interpreted as having it,<br>
> doesn't really have it.<br>
<br>
</div></div>A volume of neural tissue in the visual cortex is replaced with a<br>
black box that reproduces the I/O behaviour at the interface with the<br>
surrounding neurons. Do you see how "reproduces the I/O behaviour at<br>
the interface with the surrounding neurons" means the subject must<br>
behave normally?<br>
<span class="HOEnZb"><font color="#888888"><br>
<br>
--<br>
Stathis Papaioannou<br>
</font></span></blockquote></div><br></div>