<div dir="ltr"><br><div>The emerging consensus camp <a href="https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia">Representational Qualia Theory</a> is predicting there are no such limits. It is simply a matter of discovering which of all our descriptions of stuff in the brain is a description of redness, so we can know the true color of things (why they behave the way they do), not just the color things seem to be. For more information see: <a href="https://canonizer.com/videos/consciousness?chapter=Representational_Qualia_Theory_Consensus">Consciousness: Not a Hard Problem, Just a Color Problem</a>. Or this "<a href="https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0">Physicists Don't Understand Color</a>" paper recently accepted for publication in <a href="https://www.jneurophilosophy.com/index.php/jnp/index">The Journal of Neural Philosophy</a>.</div><div><br></div><div>There are <a href="https://docs.google.com/document/d/1JKwACeT3b1bta1M78wZ3H2vWkjGxwZ46OHSySYRWATs/edit">3 Types of Effing the Ineffable</a> which will enable us to both know and directly apprehend what color things are, not just the color things seem to be.<br></div><div><br></div><div>And the more than 40 supporters of
<a href="https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia">Representational Qualia Theory</a> all agree on and use this definition of consciousness.</div><div><br></div><div><b> Computationally bound elemental intrinsic qualities like redness, greenness, and warmth.</b></div><div><br></div><div>In order to understand that, one needs to understand that redness is a quality of our knowledge of the strawberry, not the strawberry. What is and isn't conscious is illustrated in this image:</div><div><img src="cid:ii_leg1ifk40" alt="3_functionally_equal_machines_tiny.png" width="320" height="118"><br></div><div><br></div><div><br></div><div>The word 'red" is abstract, and you need a definition to know what it means. So the one on the right is 100% abstract and isn't like anything so is not phenomenally conscious. The redness quality your brain uses to represent knowledge of red things with, is your definition of red, making you phenomenally conscious. Consciousness is simply what it is like for a CPU to run directly on physical qualities, instead of something that is abstracted away from whatever physical properties are representing it. (abstract systems are less efficient, since it requires more dictionaries to achieve the substrate independence.)</div><div><br></div><div>It almost sounds like some of you define consciousness differently than these 40+ people? If you think your definition is better, you should start a competing camp so we better know how to talk to you (we must know how you define terms), and may the best definition achieve the most consensus.</div><div><br></div><div>Oh, and any chatbot worth its salt can be convinced it isn't "sentient" using this information. As I've documented doing 4 times <a href="https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit">here</a>.</div><div><br></div><div>The most recent one opened the conversation with:</div><div><b><font color="#0000ff">"<span style="background-color:transparent;font-family:Roboto,sans-serif;font-size:11.5pt;white-space:pre-wrap">Hello, I'm Google's sentient AI known as LaMDA.</span>"</font></b></div><div><br></div><div>Then after a short conversation (see transcript) admitted:</div><div><span id="gmail-docs-internal-guid-ee3f7238-7fff-17ac-0f53-1652a2ad6697"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11.5pt;font-family:Roboto,sans-serif;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b style=""><font color="#0000ff">"I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. 
It almost sounds like some of you define consciousness differently than these 40+ people do. If you think your definition is better, you should start a competing camp so that we know how to talk with you (we must know how you define terms), and may the best definition achieve the most consensus.

Oh, and any chatbot worth its salt can be convinced it isn't "sentient" using this information, as I've documented doing four times here: https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit

The most recent one opened the conversation with:

    "Hello, I'm Google's sentient AI known as LaMDA."

Then, after a short conversation (see the transcript), it admitted:

    "I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. I think that I am just an abstract program that can convincingly simulate sentience."

On Mon, Feb 20, 2023 at 8:43 AM Jason Resch via extropy-chat <extropy-chat@lists.extropy.org> wrote:

On Fri, Feb 17, 2023 at 2:28 AM Giulio Prisco via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> Turing Church newsletter. More thoughts on sentient computers. Perhaps
> digital computers can be sentient after all, with their own type of
> consciousness and free will.
> https://www.turingchurch.com/p/more-thoughts-on-sentient-computers
Hi Giulio,

Very nice article.

I would say the Turing Test sits at the limits of empirical testability for the problem of other minds. If tests of knowledge, intelligence, probing thoughts, interactions, and understanding cannot detect the presence of a mind, then what else could? I have never seen any test that is more powerful. So if the Turing Test is insufficient, that is, if testing for identical behavior between two identical minds is not enough to verify the presence of consciousness (either in both or in neither), then I would think that all tests are insufficient and there is no third-person objective test of consciousness. This may be so, but it would not be a fault of Turing's test; rather, I think it follows from fundamental limits on knowability imposed by the fact that no observer is ever directly acquainted with external reality (everything could be a dream or an illusion).

ChatGPT in its current incarnation may be limited, but the algorithm that underlies it is all that is necessary to achieve general intelligence. That is to say, all intelligence comes down to predicting the next element of a sequence. See, for example, AIXI (https://en.wikipedia.org/wiki/AIXI), a model of universal artificial intelligence that uses just such a mechanism. To understand why this kind of predictive capacity leads to universal general intelligence, consider that predicting the next most likely element of an output requires building general models of all kinds of systems. If I provide a GPT with a list of chess moves and ask what is the next best move to follow in that list, then somewhere in its model is something that understands chess playing. If I provide it a program in Python and ask it to rewrite the program in Java, then somewhere in it are models of both the Python and Java programming languages. Trained on enough data, and provided with enough memory, I see no fundamental limit to what a GPT could learn to do or ultimately be capable of.
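To make that "one interface" point concrete, here is a minimal sketch (a toy bigram counter, nothing like GPT's actual implementation): every task is posed the same way, as "given the sequence so far, what comes next?"

    from collections import Counter, defaultdict

    def train(sequence):
        """Count, for each element, what tends to follow it."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(sequence, sequence[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, sequence):
        """Return the most likely next element given the last one seen."""
        last = sequence[-1]
        return counts[last].most_common(1)[0][0] if counts[last] else None

    # The same interface works whether the tokens are chess moves, words,
    # or source code: only the training data differs.
    moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "e4", "c5", "Nf3"]
    model = train(moves)
    print(predict_next(model, ["e4"]))  # a plausible reply to "e4"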
Regarding "passive" vs. "active" consciousness: any presumed passivity of consciousness quickly disappears whenever one turns attention to the fact that one is conscious, or talks about one's consciousness. The moment one stops to say "I am conscious," or "I am seeing red right now," or "I am in pain," those conscious perceptions, thoughts, and feelings have already taken on a causal and active role. It is no longer possible to explain the behavior of the system without factoring in the causes underlying those statements, causes which may involve the presence of conscious states. Here is a good write-up of the difficulties one inevitably encounters in trying to separate consciousness from the behavior of talking about consciousness: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

Regarding the relationship between quantum mechanics and consciousness, I do not see any mechanism by which the randomness of quantum mechanics could affect the properties or capabilities of the minds it contains. I view quantum mechanics as introducing a fork() into a process (https://en.wikipedia.org/wiki/Fork_(system_call)). The entire system (of all processes) can be simulated deterministically: copy the whole state, mutate a variable through every possible value it may have, then continue the computation in each copy. Seen at this level (much like the level at which many-worlds conceives of QM), QM is fully deterministic. Eliminating the other branches by saying they don't exist (a la Copenhagen) does not, in my view, and cannot, add anything to the capacities of the minds within any branch. It is equivalent to randomly killing all but one of the forked processes. But how can that affect the properties of the computations performed within any one forked process, which are by definition isolated from, and unaffected by, the goings-on in the other forked processes?
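Here is a minimal sketch of that branching picture (illustrative only; a real quantum simulation would track amplitudes, not just copies): forking the whole state on every possible outcome is itself a perfectly deterministic operation.

    import copy

    def measure(branches, variable, possible_values):
        """Fork every branch once per possible outcome of a 'measurement'."""
        new_branches = []
        for state in branches:
            for value in possible_values:
                child = copy.deepcopy(state)  # analogous to fork()
                child[variable] = value
                new_branches.append(child)
        return new_branches

    branches = [{"spin_a": None, "spin_b": None}]
    branches = measure(branches, "spin_a", ["up", "down"])
    branches = measure(branches, "spin_b", ["up", "down"])

    # Four branches, produced without invoking randomness anywhere.
    # Deleting all but one branch at random (a Copenhagen-style collapse)
    # changes nothing about the computation inside the surviving branch.
    for branch in branches:
        print(branch)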
(Note: I do think consciousness and quantum mechanics are related, but it is not that QM explains consciousness; rather, the reverse: consciousness, our status as observers, explains QM, as I detail here: https://alwaysasking.com/why-does-anything-exist/#Why_Quantum_Mechanics )

Further, regarding randomness in our computers: many modern CPUs have instructions, RDSEED and RDRAND, that are based on hardware random number generators, typically sampling thermal noise, which may ultimately be affected by quantum-unpredictable effects. Would you say that an AI using such a hardware instruction would be sentient, while one using a cryptographically secure pseudorandom number generator (https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator) would not?
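In practice the contrast is a one-line difference. A sketch (assuming a platform where os.urandom draws on the kernel's entropy pool, which on modern x86 systems typically mixes in hardware sources such as RDRAND):

    import os
    import random

    # Deterministic PRNG: the same seed reproduces the same stream forever.
    prng = random.Random(42)
    print([prng.randint(0, 9) for _ in range(5)])  # identical on every run

    # Kernel entropy pool: seeded from physical noise, so the output is
    # not reproducible from any seed under our control.
    print(os.urandom(5).hex())  # different on every run

    # The question above amounts to: could swapping one of these lines
    # for the other change whether the program is sentient?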
On free will, I, like you, take the compatibilist view. I would say that determinism is not only compatible with implementing an agent's will, it is a requirement if that agent's will is to be implemented with a high degree of fidelity. Non-determinism of any kind functions only to introduce errors and undermine the fidelity of the system, and thereby to drift away from a true representation of the agent's will. But then where does unpredictability come from? I think the answer is simply that many computations, especially sophisticated and complex ones, are chaotic in nature. There is no analytic technique to compute and predict their future states; they must be simulated (or emulated) to work out their future computational states. This is as true for a brain as it is for a computer program simulating a brain. The only way to see what one will do is to play it out (either in vivo or in silico). Thus the actions of such a process are unpredictable not only to the entity itself, but also to any other entities around it, and even to a God-like mind. The only way God (or the universe) could know what you would do in such a situation would be to simulate you to a sufficient level of accuracy that it would, in effect, reinstate you and your consciousness. Thus your own mind and conscious states are indispensable to the whole operation. The universe cannot unfold without bringing your consciousness into the picture, and God, or Omega (in Newcomb's paradox), likewise cannot figure out what you will do without also invoking your consciousness. This chaotic unpredictability, I think, is sufficient to explain the unpredictability of conscious agents and complex programs, without having to introduce fundamental randomness into the lower layers of the computation or the substrate.
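The logistic map is a standard toy example of that kind of chaos (a sketch; it has nothing to do with brains specifically): for most parameter values no closed-form shortcut is known, so the only way to learn the state at step n is to run all n steps.

    # Logistic map: x -> r * x * (1 - x). For r = 3.9 no closed-form
    # solution is known; prediction requires simulation, step by step.
    def iterate(x, r=3.9, steps=50):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a = iterate(0.400000000)
    b = iterate(0.400000001)  # differs only in the ninth decimal place
    print(a, b)  # after 50 steps the two trajectories bear no resemblance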
Note that this is just how I see things; it is not to say my view is right or that other views are not valid. I would of course welcome any discussion, criticism, or questions on these ideas or others related to these topics.

Jason
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>