<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 13, 2023 at 3:14 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Apr 13, 2023 at 4:23 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br><br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto">But recently it's been shown, somewhat technically, how for certain complex recursive systems, these first person properties naturally emerge. This happens without having to add new neuroscience, physics, or math, just applying our existing understanding of the mathematical notion of incompleteness.</div><div dir="auto"><br></div><div dir="auto">See: <a href="https://www.eskimo.com/~msharlow/firstper.htm" target="_blank">https://www.eskimo.com/~msharlow/firstper.htm</a></div></div></blockquote><div><br>Thank you for this. I have spent several hours studying this paper. As you say, it is somewhat technical. I used GPT-4 as a research partner (a fantastic tool even if it has no idea what it is saying).</div></div></div></blockquote><div><br></div><div>Great idea. I'll have to try that.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> I conclude that while it is interesting and might aid in understanding the brain and mind, and how subjectivity works in objective terms, it does not overcome the explanatory gap. 
Even if this author is correct on every point, it is still the case that for a reductionist account of consciousness like this to be successful, it must provide an explanation of how subjective experience arises from objective processes. </div></div></div></blockquote><div><br></div><div>Indeed. I don't know if it was the author's goal to fully answer that problem (which I think might require a different answer for each possible mind); rather, I think he was trying to show that the reductionists who deny there is any room for subjectivity, as well as those who say the existence of subjectivity proves the mind can't be described objectively, are both missing an important piece of the puzzle: namely, that objective processes can have properties which are not accessible to external analysis. This understanding, if accepted as true (if it is true), should close the gap between computationalists and weak-AI theorists (such as Searle).</div><div><br></div><div>The author's examples were hard to follow, but I think I can come up with a simpler example:</div><div>Imagine a machine that searches for a counterexample to <a href="https://en.wikipedia.org/wiki/Goldbach%27s_conjecture">Goldbach's conjecture</a> (an even number > 2 that's not the sum of two primes) and, once it finds one, turns itself off. The program that does this can be defined in just 4 or 5 lines of code; its behavior is incredibly simple when looked at objectively. But let's say we want to know: does this machine have the property that it runs forever? We have no way to determine this objectively given our present mathematical knowledge (since it's unknown, and may not even be provable under existing mathematical theories, whether there is or isn't any such counterexample). 
Then, even if we know everything there is to know objectively about this simple machine and simple computer program, there remain truths and properties about it which exist beyond our capacity to determine.</div><div><br></div><div>Example code below:</div><div>Step 1: Set X = 4<br>Step 2: Set R = 0<br>Step 3: For each Y from 2 to X – 2, if both Y and (X – Y) are prime, set R = 1<br>Step 4: If R = 1, set X = X + 2 and go to Step 2<br>Step 5: If R = 0, print X and halt<br></div><div><br></div><div>Note that around the year 2000, $1,000,000 was offered to anyone who could prove or disprove the Goldbach conjecture. This is equivalent to determining whether or not the above program ever reaches Step 5. It's an incredibly simple program, yet no one in the world was able to figure out whether it ever gets to Step 5. So we arguably have a property which is true of the program (it either halts or it doesn't), but one which is inaccessible to us even when we know everything there is to know about the code itself.</div><div><br></div><div>I think "What is it like" questions concerning other people's qualia are inaccessible in the same sense: even when we know every neuron in their brain, the what-is-it-like property of their subjectivity is unknowable to us who are not them (and who have different brains from the person whose subjectivity is in question). It's much like two different mathematical systems being able to prove, or not prove, certain things about each other or themselves. If you had System A and System B, A could prove "This sentence cannot consistently be proved by B", but B could not prove that. Likewise, I can consistently accept as true the sentence:</div><div><br></div><div>"People named Gordon Swobe cannot consistently believe this sentence is true."</div><div><br></div><div>Others, not named Gordon Swobe, can also consistently believe it is true, but those named Gordon Swobe cannot. 
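For the curious, here is the five-step Goldbach machine as a short runnable Python sketch (my own translation, not code from the paper). I've added a search limit so the demo actually terminates; the "real" machine would drop the limit and run until it finds a counterexample, if one exists, which is exactly the open question:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_search(limit):
    """Run the five-step machine, but only up to `limit` so it terminates.

    Returns the first even X > 2 that is NOT a sum of two primes, or None
    if every even number up to `limit` checks out. The unlimited machine
    has no `limit`; whether it ever halts is the Goldbach conjecture.
    """
    x = 4  # Step 1
    while x <= limit:
        # Steps 2-3: does any prime pair (Y, X - Y) sum to X?
        r = any(is_prime(y) and is_prime(x - y) for y in range(2, x - 1))
        if not r:
            return x  # Step 5: counterexample found; "print X and halt"
        x += 2  # Step 4: advance to the next even number and repeat
    return None  # no counterexample below limit; the machine is still running
```

Running goldbach_search(1000) returns None, as every even number up to 1000 has a prime decomposition; the conjecture has in fact been machine-verified far beyond 10^18, yet that still tells us nothing about whether the unlimited version halts.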
This is only to illustrate that from different vantage points (different conscious minds, or different mathematical systems), certain things are not knowable or provable. This opens up the door to there being subjective truths for one subject, which remain unknowable or unprovable to those who are not that subject.</div><div><br></div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><div dir="auto">I hope this paper might show that we can keep our inaccessible, irreducible, real first person properties *and* have a rational description of the brain and its objectively visible behavior. We don't have to give up one to have the other.</div></div></div></blockquote><div><br>I suppose the real question is about one *or* the other. If the latter does not explain the former then I would say it is incomplete, and I think it is.</div></div></div></blockquote><div><br></div><div>I can agree with that; there's still a lot more to answer. This was just a demonstration that those two views can be compatible.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>I would like to revisit a topic we discussed when I first (re)-entered this forum a few weeks ago: <br><br>You were making the argument that because GPT can "understand" English words about mathematical relationships and translate them into the language of mathematics and even draw diagrams of houses and so on, that this was evidence that it had solved the grounding problem for itself with respect to mathematics. 
Is that still your contention? </div></div></div></blockquote><div><br></div><div>I wouldn't say that it <b><i>solved</i></b> the symbol grounding problem. It would be more accurate to say it demonstrates that it has <b><i>overcome</i></b> the symbol grounding problem. It shows that it has grounded the meaning of English words down to objective mathematical structures (which is about as far down as anything can be grounded). So it is no longer trading symbols for symbols; it is converting symbols into objective mathematical structures (such as connected graphs).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>My thought at the time was that you must not have the knowledge to understand the problem, and so I let it go, but I've since learned that you are very intelligent and very knowledgeable. I am wondering how you could make what appears, at least to me, an obvious mistake.</div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Perhaps you can tell me why you think I am mistaken to say you are mistaken.<br><br></div></div></div></blockquote><div><br></div><div>My mistake is not obvious to me. If it is obvious to you, can you please point it out?</div><div><br></div><div>Jason </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
</blockquote></div></div>