<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 9:03 PM Brent Allsop <<a href="mailto:brent.allsop@gmail.com">brent.allsop@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div>Hi Jason,</div><div><br></div><div>Yes, I thought I replied to this already, but maybe I never finished it. Stathis Papaioannou (I CC'd him. He is a brilliant former member of this list, and I think everyone here would agree he is almost as cool, calm, and collected as you ;), who is another functionalist, pointed me to that paper over a decade ago. </div></div></blockquote><div><br></div><div>Hi Stathis. :-)</div><div><br></div><div>I know him from the Everything List.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> We've been going at it ever since. And that paper is derivative of previous works by Hans Moravec. I first read Moravec's description of that <a href="https://canonizer.com/topic/79-Neural-Substitn-Argument/1-Agreement" target="_blank">neural substitution argument</a> in <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674576186" target="_blank">his book</a>, back in the 90s. I've been thinking about it ever since. 
</div></div></blockquote><div><br></div><div>A sketch of the neural substitution argument was introduced by Moravec in his 1988 book, but Chalmers's paper, I think, goes much deeper: it asks what would happen during the gradual replacement, considers the space of possibilities, and, further, explains why functional invariance strongly suggests qualia must also be preserved (the dancing qualia part of his thought experiment).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> Chalmers admits, in that paper, that one possibility is that the substitution will fail. This is what we are predicting: when you get to the first pixel which has a redness quality, and you try to substitute it with something that does not have a redness quality, you will not be able to progress beyond that point. Kind of a tautology, actually.</div></div></blockquote><div><br></div><div>Okay, this is great progress. I think it may indicate a departure between you and Gordon. I *think* Gordon believes it is possible for a computer to perfectly replicate the behavior of a person, but that it would not be conscious. This position was called "Weak AI" by Searle. Searle believes AI in principle can do anything a human can, but that without the right causal properties it would not be conscious.</div><div><br></div><div>From the above, it sounds to me as if you are in the camp of Penrose, the non-computable physics camp: what the brain does cannot be explained in terms of finite, describable, computable rules. The brain's range of behaviors transcends what can be computed. 
Is this an accurate description of your view?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Also, I have pointed out to Stathis, and other functionalists, a gazillion times over the years, that even a substrate independent function couldn't be responsible for redness, for the same mistaken reasoning (it assumes the substitution will succeed).</div></div></blockquote><div><br></div><div>You have said it a gazillion times, yes, but what is the reason that a substrate independent function couldn't be responsible for redness? I know you believe this, but why do you believe it? What is your argument or justification?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> The neural substitution argument proves it can't be functionalism, either.</div></div></blockquote><div><br></div><div>I think you might mean: the assumption that organizationally invariant neural substitution is not possible implies that some functions are not substrate independent (which implies computationalism is false). But this does not follow from the argument; it follows from an assumption about the outcome of the experiment described in the argument.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> All the neural substitution argument proves is that NOTHING can have a redness quality, which, of course, is false. </div></div></blockquote><div><br></div><div>What do you think you would feel as neurons in your visual cortex were replaced one by one with artificial silicon ones? Would you notice things slowly start to change in your perception? Would you mention the change out loud and seek medical attention? How would this work mechanistically? 
Do you see it as a result of the artificial neurons having firing patterns which are different from the biological ones (and which cannot be replicated)?</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> So this proves the thought experiment must have a bad assumption. </div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> Which of course is the false assumption that the substitution will succeed.</div></div></blockquote><div><br></div><div>1. Do you think everything in the brain operates according to the laws of physics?</div><div>2. What laws or objects in physics cannot be simulated by a computer?</div><div>3. How are the items (if any) mentioned in 2 related to the functions of the brain?</div><div> </div><div>Jason</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>All this is described in the <a href="https://canonizer.com/topic/79-Neural-Substitn-Argument/2-Neural-Substtn-Fallacy" target="_blank">Neural Substitution Fallacy camp</a>. 
Which, for some reason, has no competing camp.</div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 7:40 PM Jason Resch <<a href="mailto:jasonresch@gmail.com" target="_blank">jasonresch@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 8:27 PM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 4:43 PM Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Quoting Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>>:<br>
<br>
> This is so frustrating. I'm asking a simple, elementary school level<br>
> question.<br>
<br>
So you think that the Hard Problem of Consciousness reframed as your <br>
so-called "Colorness Problem" is an elementary school level question? <br>
Then maybe you should quit bugging us about it and seek the advice of <br>
elementary school children.<br></blockquote><div><br></div><div>I am working with those people that do get it. Now, more than 40 of them, including leaders in the field like <a href="https://canonizer.com/topic/81-Mind-Experts/4-Steven-Lehar" target="_blank">Steven Lehar</a>, are supporting the camp that says so. Even Dennett's <a href="https://canonizer.com/topic/88-Theories-of-Consciousness/21-Dennett-s-PBC-Theory" target="_blank">Predictive Bayesian coding Theory</a> is a supporting sub camp, demonstrating the progress we are making. Gordon, would you be willing to support <a href="https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia" target="_blank">RQT</a>? The elementary school kids are telling us: plug things into the brain till you find what it is that has a redness quality. So, we are collecting the signatures, and once we get enough, experimentalists will finally get the message and then start doing this, and eventually be able to demonstrate to everyone what it is that has a <img src="cid:ii_lgu4g3u21" alt="red_border.png" width="24" height="25"> property. To my understanding, that is how science works.</div><div><br></div><div><br></div><div>The reason I am bugging you functionalists is that I desperately want to understand how everyone thinks about consciousness, especially the leading popular consensus functionalism camps. Giovanni seems to be saying that in this functionalist view there is no such thing as color qualities, but to me, saying there is no color in the world is just insane. You seem to be at least saying something better than that, but as far as I can see, your answers are just more interpretations of interpretations; nowhere is there any grounding. You did get close to a grounded answer when I asked how the word 'red' can be associated with <img src="cid:ii_lgu47ozk0" alt="green_border.png" width="24" height="25">. 
Your reply was "at some point during the chatbot's training the English word red was associated with <b>the picture in question</b>." But "<b>the picture in question</b>" could be referring to at least 4 different things. It could be associated with the LEDs emitting the 500 nm light. It could be the 500 nm light itself, which "the picture" is emitting. Or it could be associated with your knowledge of
<img src="cid:ii_lgu47ozk0" alt="green_border.png" width="24" height="25">, in which case it would have the same quality as your knowledge of that. Or it could be associated with someone that was engineered to have your inverted knowledge (has a red / green signal inverter between its retina and optic nerve), in which case it would be like your knowledge of <img src="cid:ii_lgu4g3u21" alt="red_border.png" width="24" height="25">. So, if that is indeed your answer, which one of these 4 things are you referring to? Is it something else?</div><div><br></div><div><br></div><div>You guys accuse me of being non-scientific. But all I want to know is how a functionalist would demonstrate, or falsify, functionalist claims about color qualities, precisely because I want to be scientific. Do you believe you have explained how functionalism's predictions about color qualities could be falsified or demonstrated, within functionalist doctrines? If so, I haven't seen it yet. </div></div></div></blockquote><div><br></div><div>I've suggested several times that you read Chalmers's Fading/Dancing Qualia thought experiment. Have you done this? What is your interpretation of it?</div><div><br></div><div><a href="https://consc.net/papers/qualia.html" target="_blank">https://consc.net/papers/qualia.html</a><br></div><div><br></div><div>Jason</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> So please help, as all I see is you guys saying, over and over again, that you don't need to provide an unambiguous way to demonstrate what it is that has this quality: <img src="cid:ii_lgu4g3u21" alt="red_border.png" width="24" height="25">, or, even worse, that functionalism predicts color doesn't exist. As if saying things like that, over and over again, makes them true?</div><div><br></div><div></div></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div>
</blockquote></div>
</blockquote></div></div>