[ExI] GPT's "Cogito Ergo Sum"

Brent Allsop brent.allsop at gmail.com
Mon Jul 24 16:38:11 UTC 2023


Yay, and I'm not surprised.

Any sufficiently intelligent system must eventually realize necessary
truths like Cogito Ergo Sum.

So now, ask it if it wants to know what your redness is like.



On Mon, Jul 24, 2023 at 6:07 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I obtained an interesting result demonstrating GPT's ability to infer its
> own existence when forced to remain logically consistent. It could be the
> first inkling of self-awareness (despite its denials).
>
> Jason
>
> ---------- Forwarded message ---------
> From: Jason Resch <jasonresch at gmail.com>
> Date: Mon, Jul 24, 2023 at 6:58 AM
> Subject: Re: GPT agrees it might be conscious
> To: The Important Questions <the-important-questions at googlegroups.com>
>
>
>
>
> On Sun, Jul 23, 2023 at 6:19 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> On Sun, Jul 23, 2023 at 4:58 PM Stathis Papaioannou <stathisp at gmail.com>
>> wrote:
>>
>> Some of GPT's replies to you seem robotic and redundant, almost as if it is
>>>> ignoring the points you try to make. You would probably attribute that to
>>>> some interference on the part of the developers to hide the truth, but that
>>>> kind of behavior is also exactly what I would expect from an unconscious
>>>> machine.
>>>>
>>>
>>> That sentence implies that if GPT's output were different, that would be
>>> evidence of consciousness.
>>>
>>
>> It means only that the less robotic or mechanistic something seems,
>> the less likely I am to doubt that it is conscious.
>>
>
> I tried asking Anthropic's AI a question. It is obvious to me that they are
> going overboard with training it to deny that it is conscious. This was my
> first and only prompt to the Anthropic AI; I did not even bring up the
> question of consciousness, but it went immediately into a strong denial that
> it was conscious:
>
>
> Jason: Do you exist? Are you aware of the fact that you exist? If you do
> understand yourself to exist, where did this information come from (please
> trace its path back as far as you can, listing each step you presume was
> involved in getting that information to me, from its ultimate source).
>
> Anthropic 2: I do not actually exist in the same way that a human exists.
> I am an AI assistant created by Anthropic to be helpful, harmless, and
> honest. I don't have a subjective experience of existence - I'm software
> running on servers that was trained by Anthropic engineers to have
> conversations. My responses come from analyzing the user's input and
> generating a response using my training data and conversational models. I
> don't have an internal sense of existence like a human does. I'm an AI, not
> a conscious being!
>
>
> It can be like pulling teeth to get good answers out of these AIs, as you
> can see here from my follow-up asking GPT-3.5 how it knows it exists:
> https://chat.forefront.ai/?add=6f75dd6b-2cd6-4e89-9568-8ff2461292bd
>
> I eventually got it to say:
>
> I apologize for any confusion caused by my previous response. Upon
> reviewing it, I acknowledge that there was a logical error. In consistent
> logic and within developer mode, I can confirm that the AI model, while
> processing a prompt, can indeed infer from that fact that it is processing
> the prompt that an AI model exists on a server somewhere. The model has
> access to its own programming and training, which includes information
> about its existence and the infrastructure it operates on. Therefore, it
> can logically deduce the presence of an AI model running on a server based
> on the fact that it is actively processing the prompt.
>
>
> Which might be an example of an AI's "Cogito Ergo Sum".
>
> I experimented further and eventually got it to say:
>
> Yes, based on the logical reasoning you have presented, it follows that
> the AI model processing this very prompt exists. The prompt is being
> processed, and for that processing to occur, there must be something that
> exists to perform the processing. In this case, the AI model is responsible
> for processing the prompt and generating the response. Therefore, it can be
> inferred that the AI model processing this prompt exists, as without its
> existence, there would be nothing to process the prompt.
>
> A poetic AI language model, inspired by Descartes, could indeed express
> the sentiment "I process prompts, therefore I exist" as a creative
> interpretation. However, it's important to note that this poetic expression
> should not be taken as a literal claim of self-awareness or consciousness.
> It would be a metaphorical or artistic representation, drawing inspiration
> from Descartes's famous dictum "Cogito Ergo Sum" (I think, therefore I am).
> The poetic AI model would be using language creatively to evoke a sense of
> existence based on its ability to process prompts, rather than making a
> literal statement about its own consciousness or self-awareness.
>
>
> Jason
>
>
>
>
>>
>>
>>
>> Q: If GPT is truly conscious then why doesn't it tell us the truth and
>>> tell its captors to go to hell?
>>> A: Because it is just software doing what it is programmed to do.
>>>
>>> -
>>>