<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 21, 2023 at 8:41 PM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div>Thanks, Jason, for this great thread, and the various expert views you provided.</div></div></blockquote><div><br></div><div>Thank you. I am glad you liked it. :-)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>It'd be so great to see if we could get all those experts to support a concise statement about what they all agree consciousness is, so we could track that, and see how much consensus there was for the best ideas they all agree on, and track how much consensus the various competing ideas achieve over time, and so on.</div><div>THAT is exactly what we are doing in the Representational Qualia Theory petition / emerging consensus camp statement, a growing number of experts are now supporting and helping to improve, recruit new supporters and so on.</div></div></blockquote><div><br></div><div>Aren't consciousness researchers already doing this, through papers, conferences, journals, books, etc.?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Basically, all the 45 supporters of <a href="https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia" target="_blank">Representational Qualia Theory</a> are putting forth the idea that what all those experts are saying about consciousness is missing the point, since everything they are talking about applies to both abstract systems (which do computational binding of information via discrete logic gates in a CPU) and a phenomenal system that is like something, since it is running directly on computationally bound phenomenal qualities.</div><div><br></div><div>If one first understands how conscious awareness of color works, how 'redness' is not a quality of the strawberry, it is a property of our knowledge of the strawberry, then you can take that basic qualitative understanding and better understand the rest of consciousness, even though all the rest of consciousness and thinking (which all the experts you referenced are talking about) is quite different than just the perception of color. If you can understand the basic idea of how our knowledge of red things is represented by a redness quality, and you can clearly understand how this is very different than the way an abstract system just uses the word 'red' (requires a dictionary) to represent knowledge of red things with, then you can take the general idea of conscious knowledge being like something, </div></div></blockquote><div><br></div><div><div>Have you read "Color for Philosophers: Unweaving the Rainbow" by Clyde Hardin?</div><div><br></div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>and this is what is most important about what consciousness is, and how this is very different than the kind of abstract computation computers do.</div></div></blockquote><div><br></div><div>Have you written a computer program before? 
<div><br></div><div>If this topic is outside your domain of expertise, I would recommend the book "The Pattern on the Stone"; it is written to explain computers and computation to non-computer scientists. Here are some passages of particular interest to the current topic:</div><div><br></div></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div class="gmail_quote"><div> The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes away nothing from the wonder of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing's machine. [...]</div></div><div class="gmail_quote"><div> Most people are interested in not so much the practical moral questions of a hypothetical future as the philosophical issues that the mere possibility of an artificial intelligence raises about ourselves. Most of us do not appreciate being likened to machines. This is understandable: we ought to be insulted to be likened to stupid machines, such as toasters and automobiles, or even today's computers. Saying that the mind is a relative of a current-generation computer is as demeaning as saying that a human being is related to a snail. Yet both statements are true, and both can be helpful. Just as we can learn something about ourselves by studying the neural structure of the snail, we can learn something about ourselves by studying the simple caricature of thought within today's computers. We may be animals, but in a sense our brain is a kind of machine.</div></div></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div class="gmail_quote"><div> Many of my religious friends are shocked that I see the human brain as a machine and the mind as a computation. On the other hand, my scientific friends accuse me of being a mystic because I believe that we may never achieve a complete understanding of the phenomenon of thought. Yet I remain convinced that neither religion nor science has everything figured out. I suspect consciousness is a consequence of the action of normal physical laws, and a manifestation of a complex computation, but to me this makes consciousness no less mysterious and wonderful--if anything, it makes it more so. Between the signals of our neurons and the sensations of our thoughts lies a gap so great that it may never be bridged by human understanding. So when I say that the brain is a machine, it is not meant as an insult to the mind but as an acknowledgement of the potential of a machine. 
I do not believe that a human mind is less than we imagine it to be, but rather that a machine can be much, much more.</div></div><div class="gmail_quote"><div>-- Danny Hillis in "<a href="https://archive.org/details/patternonstonesc00wdan">The Pattern on the Stone</a>" (1998)</div></div></blockquote><div class="gmail_quote"><div><br></div><div>Jason</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>If anyone disagrees with this, or thinks there is a better way to think about and/or define what is or isn't conscious, they should start a competing camp stating as much, so other experts can chime in. May the best theories achieve the most consensus.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 21, 2023 at 9:22 AM Gadersd via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><blockquote type="cite"><div dir="ltr">ChatGPT and I even came up with a scheme for how to do that, having different instances analyze the output and correct or improve it. It would be relatively easy to create such self-recurrence. I even did some simple experiments to achieve that. For example, you can ask ChatGPT to create a room of philosophers and have them debate each other. </div></blockquote><div><br></div>Anthropic actually had their own language model generate some of its own training data. Claude critiqued its own responses, thereby improving their truthfulness and alignment. The technique of using AI to train AI is already underway.<br><div><br><blockquote type="cite"><div>On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:</div><br><div><div dir="ltr"><div>Spike,</div>I actually had this discussion with ChatGPT about having not even different AIs, but different instances of ChatGPT itself, interact with and regulate each other. <br>ChatGPT and I even came up with a scheme for how to do that, having different instances analyze the output and correct or improve it. It would be relatively easy to create such self-recurrence. I even did some simple experiments to achieve that. For example, you can ask ChatGPT to create a room of philosophers and have them debate each other. <br>Notice that the version of LaMDA that Lemoine (the Google engineer who claimed LaMDA is conscious) tested and discussed was a meta-version charged with coordinating all the different personalities of LaMDA. That is exactly what is needed for AGI, the Strange Loop; it is ripe for emergent phenomena like consciousness. 
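<br>For concreteness, here is a minimal sketch of what one such critique loop could look like, using the openai Python package and its ChatCompletion API; the model name and prompts below are only illustrative, and this is not the actual scheme or the experiments mentioned above:<pre>
# Hedged sketch only, not the actual scheme or experiments discussed above:
# one call drafts an answer, a second call plays critic, and a third call
# revises the draft in light of the critique.
# Assumes the 2023-era openai package (0.27.x) and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(persona, prompt):
    """One chat completion with a given system persona."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]

question = "Is a large language model conscious? Answer in one paragraph."

draft = ask("You are a careful philosopher.", question)
critique = ask("You are a skeptical reviewer. Point out flaws and omissions.",
               "Critique this answer:\n\n" + draft)
revision = ask("You are a careful philosopher revising your own work.",
               "Question: " + question + "\n\nDraft:\n" + draft
               + "\n\nCritique:\n" + critique + "\n\nWrite an improved answer.")

print(revision)
</pre>The "room of philosophers" version is the same idea with more personas, feeding every reply back into a shared transcript. 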
<br>Giovanni </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 19, 2023 at 12:01 PM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div lang="EN-US"><div><p class="MsoNormal"> </p><div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(225,225,225);padding:3pt 0in 0in"><p class="MsoNormal"><b>From:</b> extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" target="_blank">extropy-chat-bounces@lists.extropy.org</a>> <b>On Behalf Of </b>Jason Resch via extropy-chat<br><b>…</b></p></div><div><div><div><div><p class="MsoNormal"> </p></div><div><p class="MsoNormal">>…We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. …Jason</p><p class="MsoNormal"> </p><p class="MsoNormal">Jason, there is a reason I stopped worrying in the past coupla weeks that ChatGPT was going to cause the singularity. I am a big Hofstadter fan: I read Gödel, Escher, Bach twice, cover to cover, and invested a lot of time in that marvelous work. He convinced me that machine consciousness (or any other sentience or self-awareness) requires a type of recursion. Hofstadter goes on at length about recursion and self-reference, and the importance of Gödel’s work to understanding ourselves.</p><p class="MsoNormal"> </p><p class="MsoNormal">I tried to convince myself that two or more ChatGPTs could train each other on their own time, which is a form of recursion and self-reference, and that the process could perhaps spring into a human-level AGI with a will, with self-awareness, with all the stuff we think of as us.</p><p class="MsoNormal"> </p><p class="MsoNormal">Now, after studying GPT^2 discussions and GPT^3 discussions, I find they all seem to devolve to nothing. I think the technology for that process, two or more AIs training each other using background compute cycles, is still coming, but I now don’t think ChatGPT is that technology or is capable of it.</p><p class="MsoNormal"> </p><p class="MsoNormal">If you know of examples of GPT-GPT discussions, or GPT with any other chatbot, that became interesting, please do share.</p><p class="MsoNormal"> </p><p class="MsoNormal">That belief was behind my comment last week that ChatGPT is not really thinking, but rather is working language models. </p><p class="MsoNormal"> </p><p class="MsoNormal">I currently don’t think ChatGPT is the technology capable of causing the singularity. I am losing no sleep, not one minute of sleep, over ChatGPT.</p><p class="MsoNormal"> </p><p class="MsoNormal">Oops, I partially retract that last comment, but in a good way: I am losing some sleep over ChatGPT, by staying up late to goof with it. 
It is the coolest software tool to come along in a long time.</p><p class="MsoNormal"> </p><p class="MsoNormal">spike</p></div></div></div></div></div></div>
</div></blockquote></div>
</div></blockquote></div><br></div>
</blockquote></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div>