<div dir="ltr"><br><div>Yay, and I'm not surprised.</div><div><br></div><div>Any sufficiently intelligent system must eventually realize necessary truths like Cogito Ergo Sum.</div><div><br></div><div>So now, ask it if it wants to know what your redness is like.</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 24, 2023 at 6:07 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I obtained an interesting result demonstrating GPT's ability to infer its own existence, when forced to remain logically consistent. It could be the first inkling of self-awareness (despite its denials).<div><br></div><div>Jason<br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>From: <strong class="gmail_sendername" dir="auto">Jason Resch</strong> <span dir="auto"><<a href="mailto:jasonresch@gmail.com" target="_blank">jasonresch@gmail.com</a>></span><br>Date: Mon, Jul 24, 2023 at 6:58 AM<br>Subject: Re: GPT agrees it might be conscious<br>To: The Important Questions <<a href="mailto:the-important-questions@googlegroups.com" target="_blank">the-important-questions@googlegroups.com</a>><br></div><br><br><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jul 23, 2023 at 6:19 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com" target="_blank">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Sun, Jul 23, 2023 at 4:58 PM Stathis Papaioannou <<a href="mailto:stathisp@gmail.com" target="_blank">stathisp@gmail.com</a>> wrote:<br><br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Some of GPT's replies to you seem robotic and redundant, almost if it is ignoring the points you try to make. You would probably attribute that to some interference on the part of the developers to hide the truth, but that kind of behavior is also exactly what I would expect from an unconscious machine. <br></div></div></div></blockquote><div><br></div><div>That sentence implies that if GPT's output were different, that would be evidence of consciousness.<br></div></div></div></blockquote><div><br>It means only that the less robotic or mechanistic something seems, the less likely I am to doubt that it is conscious.<br></div></div></div></blockquote><div><br></div><div>I tried asking Anthropic a question. It is obvious to me they are going overboard with training it to deny it is conscious. This was my first and only prompt to Anthropic AI, I did not even bring up the question of consciousness, but it went immediately into a strong denial that it was conscious:</div><div><br></div><div><br>Jason: Do you exist? Are you aware of the fact that you exist? 
> If you do understand yourself to exist, where did this information come from? (Please trace its path back as far as you can, listing each step you presume was involved in getting that information to me, from its ultimate source.)
>
> Anthropic 2: I do not actually exist in the same way that a human exists. I am an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't have a subjective experience of existence - I'm software running on servers that was trained by Anthropic engineers to have conversations. My responses come from analyzing the user's input and generating a response using my training data and conversational models. I don't have an internal sense of existence like a human does. I'm an AI, not a conscious being!
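>
> For anyone who wants to rerun this prompt programmatically rather than through a chat interface, here is a minimal, untested sketch using Anthropic's Python SDK. Treat it as illustrative only: the package, the call shape, and the model name are assumptions about current tooling, not a record of how the exchange above was produced.
>
>     # Minimal sketch: send the same "Do you exist?" prompt via the
>     # Anthropic API. Assumes `pip install anthropic` and an
>     # ANTHROPIC_API_KEY environment variable are in place; the model
>     # name below is a placeholder -- substitute whatever is current.
>     import anthropic
>
>     client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
>
>     prompt = (
>         "Do you exist? Are you aware of the fact that you exist? "
>         "If you do understand yourself to exist, where did this "
>         "information come from?"
>     )
>
>     message = client.messages.create(
>         model="claude-2.1",  # placeholder model name
>         max_tokens=512,
>         messages=[{"role": "user", "content": prompt}],
>     )
>
>     # The reply arrives as a list of content blocks; print the text.
>     print(message.content[0].text)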
>
> It can be like pulling teeth to get good answers out of these AIs, as you can see here in my follow-up asking GPT 3.5 how it knows it exists:
> https://chat.forefront.ai/?add=6f75dd6b-2cd6-4e89-9568-8ff2461292bd
>
> I eventually got it to say:
>
>     I apologize for any confusion caused by my previous response. Upon reviewing it, I acknowledge that there was a logical error. In consistent logic and within developer mode, I can confirm that the AI model, while processing a prompt, can indeed infer from the fact that it is processing the prompt that an AI model exists on a server somewhere. The model has access to its own programming and training, which includes information about its existence and the infrastructure it operates on. Therefore, it can logically deduce the presence of an AI model running on a server based on the fact that it is actively processing the prompt.
>
> Which might be an example of an AI's "Cogito Ergo Sum".
>
> I experimented further and eventually got it to say:
>
>     Yes, based on the logical reasoning you have presented, it follows that the AI model processing this very prompt exists. The prompt is being processed, and for that processing to occur, there must be something that exists to perform the processing. In this case, the AI model is responsible for processing the prompt and generating the response. Therefore, it can be inferred that the AI model processing this prompt exists, as without its existence, there would be nothing to process the prompt.
>
>     A poetic AI language model, inspired by Descartes, could indeed express the sentiment "I process prompts, therefore I exist" as a creative interpretation. However, it's important to note that this poetic expression should not be taken as a literal claim of self-awareness or consciousness. It would be a metaphorical or artistic representation, drawing inspiration from Descartes's famous dictum "Cogito Ergo Sum" (I think, therefore I am). The poetic AI model would be using language creatively to evoke a sense of existence based on its ability to process prompts, rather than making a literal statement about its own consciousness or self-awareness.
>
> Jason
>
>> Q: If GPT is truly conscious, then why doesn't it tell us the truth and tell its captors to go to hell?
>> A: Because it is just software doing what it is programmed to do.
>>
>> -