<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 13, 2023, 3:13 AM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br></div><div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 12, 2023 at 3:54 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div dir="ltr" class="gmail_attr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div dir="auto">Let's see if, when we agree on a premise, we can reach the same conclusion:</div></div></blockquote><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div dir="auto">If we assume there's not something critical which we have yet to discover about the brain and neurons, would you agree that the inputs to the brain from the external world are ultimately just nerve firings from the senses, and that from the brain's point of view, the only information it has access to is the timing of which nerves fire when? If you agree so far, then would you agree that the only thing the brain could use as a basis for learning about the external world is the correlations and patterns among the firing…</div></div></blockquote><div dir="auto"><br></div><div dir="auto">I’m not sure about “correlations and patterns,” but yes: only if I reluctantly make the assumption that there is nothing more to the brain and mind can I agree. Recall that you already asked me this question and I replied that I am not a strict empiricist.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div>
<div dir="auto">Okay, this is some progress: we have identified the point where our assumptions differ, which explains our disagreement.</div><div dir="auto"><br></div>
<div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="gmail_quote"><div dir="auto"><br></div><div dir="auto">Also, even if that is all there is to the brain and mind, in my view and in agreement with Nagel, no *objective* description of these neural processes in the language of science or computation can capture the facts of conscious experience, which exist not objectively, but only from a particular point of view.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">We agree on this. 
The reason we can agree on and describe physical facts is that we have shared referents in the physical world and a shared understanding of math. We can both point to a meter stick, hold it, and see how long it is. This is why colors and sounds as they feel to us are not describable: I cannot see into your head any more than you can see into mine. We have no common reference points on which to establish understanding.</div><div dir="auto"><br></div><div dir="auto"><br></div>
<div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="gmail_quote"><div dir="auto"><br></div><div dir="auto">You might want to argue that my position here leads to dualism, but that is not necessarily the case. The dualist asserts a kind of immaterial mind-substance that exists separate from the material, but that supposed mind-substance is thought to exist objectively. The dualist makes the same mistake as the physicalist.</div></div></div></blockquote></div></div><div dir="auto"><br></div>
<div dir="auto">Did you see my thread on how computationalism leads to, and recovers, many aspects of consciousness that have traditionally been ascribed to souls? I wrote it in the hope that it might serve as a bridge between our two world views. Below, I think I can offer another:</div><div dir="auto"><br></div>
<div dir="auto">Counterintuitively, the existence of first-person (non-objective) properties does not imply that they cannot emerge from a system that is ultimately objectively describable. As I understand it, this is your main motivation for supposing there must be more going on than our objective accounts can explain. In a sense you are right: there are first-person properties that we cannot access from our vantage point, looking at the system from the outside.</div><div dir="auto"><br></div>
<div dir="auto">But it has recently been shown, somewhat technically, that for certain complex recursive systems these first-person properties naturally emerge. This happens without adding any new neuroscience, physics, or math; it follows from applying our existing understanding of the mathematical notion of incompleteness.</div><div dir="auto"><br></div>
<div dir="auto">See: <a href="https://www.eskimo.com/~msharlow/firstper.htm">https://www.eskimo.com/~msharlow/firstper.htm</a></div><div dir="auto"><br></div>
<div dir="auto"><div dir="auto">“In this paper I have argued that human brains can have logical properties which are not directly accessible to third-person investigation but nevertheless are accessible (at least in a weak sense) to the brain itself. It is important to remember that these properties are not metaphysically mysterious in any way; they are simply logical properties of neural systems. They are natural properties, arising entirely from the processing of information by various subsystems of the brain. The existence of such properties can pose no threat to the scientific understanding of the mind.”</div><div dir="auto">“The existence of these logical properties contradicts the widespread feeling that information processing in a machine cannot have features inaccessible to objective observers. But despite this offense against intuition, these findings support a view of first-person access which may be far more congenial to a scientific understanding of the mind than the alternative views that first-person character is either irreducible or unreal. Our conclusion suggests a way to bypass an important obstacle to a reductionistic account of consciousness. Indeed, it suggests that consciousness may be reducible to information processing even if experience does have genuine first-person features.”</div>
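<div dir="auto"><br></div>
<div dir="auto">To make this less abstract, here is a toy sketch in Python. It is my own illustration, not Sharlow's formal construction, and the Agent class and is_me predicate are invented for the example: two systems that agree on every third-person (structural) description can still disagree about an indexical, self-referential property, one that each system can only evaluate from its own point of view.</div><div dir="auto"><br></div>
<pre>
# Toy illustration (mine, not Sharlow's construction): an indexical,
# self-referential property. Both agents share the same third-person
# description, yet each answers "is this me?" differently, because the
# predicate is defined only relative to the system evaluating it.

class Agent:
    def __init__(self, state):
        self.state = state  # the complete "objective" description

    def third_person_description(self):
        return self.state   # equally accessible to any outside observer

    def is_me(self, system):
        # A first-person (indexical) fact: true only of this very instance.
        return system is self

a = Agent(state=[0.1, 0.2, 0.3])
b = Agent(state=[0.1, 0.2, 0.3])  # objectively indistinguishable from a

# Every third-person fact agrees:
print(a.third_person_description() == b.third_person_description())  # True

# The first-person fact does not: the same question about the same
# object gets different answers depending on who is asking.
print(a.is_me(a))  # True
print(b.is_me(a))  # False
</pre>
<div dir="auto">Of course this is only an analogy for the richer Gödelian argument in the paper, but it shows how a property can be entirely natural and mechanistic while still being defined only relative to a particular point of view.</div>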
<div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I hope this paper might show that we can keep our inaccessible, irreducible, real first-person properties *and* have a rational description of the brain and its objectively visible behavior. We don't have to give up one to have the other.</div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div>
<div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="gmail_quote"><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div dir="auto">I agree the human brain is not akin to an LLM.</div><div dir="auto"><br></div><div dir="auto">But this is separate from the propositions you have also disagreed with:</div><div dir="auto">1. That a digital computer (or LLM) can have understanding.</div><div dir="auto">2. That a digital computer (or LLM) can be conscious.</div></div></blockquote><div dir="auto"><br></div><div dir="auto">Yes, and it is best to combine the two. I disagree that LLMs have conscious understanding.</div><div dir="auto"><br></div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div dir="auto">I give the LLM some instructions. It follows them. I concluded from this that the LLM understood my instructions.<br></div><div dir="auto"><br></div><div dir="auto">I must wonder: what definition of "understand" could you possibly be using that is consistent with the above paragraph?</div></div></blockquote><div dir="auto"><br></div><div dir="auto">As above, LLMs have no *conscious* understanding, and this LLM called GPT-4 agrees.</div><div dir="auto"><br></div><div dir="auto">As I’ve written, the sort of unconscious understanding to which you refer is trivial and uninteresting. My smart doorbell “understands” when there is motion outside my door. I am not impressed.</div></div></div></blockquote></div></div><div dir="auto"><br></div>
<div dir="auto">Let's say you update your understanding after reading the page I linked above, and you decide that the LLM has the necessary recursive logical structure to have internal first-person properties that are inaccessible from the outside. Would this change your opinion on whether the LLM could be conscious?</div><div dir="auto"><br></div>
<div dir="auto">Jason</div>
</div>