[ExI] ChatGPT says it's not conscious

Tara Maya tara at taramayastales.com
Tue Mar 21 15:45:13 UTC 2023


I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind. (Hence the Seinfeld paradox: when one part of our brain fools another part, enabling us to "lie," it isn't really a lie but--I would insert--a form of mental gymnastics that has evolutionary benefits and so has been able to evolve.)

A butterfly with eyespots can neither lie nor tell the truth. It is what it is. There's no paradox.

A chatbot could be exactly what its programmers have programmed it to claim it is: like the butterfly, it can neither lie nor tell the truth, because those concepts are not relevant to anything with no mind or free will. (I realize plenty in this group don't believe in free will, but to me it's an essential concept for addressing cases exactly like this one: when is a mind complex enough to make choices about things not immediately in front of it? But I'm not trying to derail the conversation into that.)

I will throw in another monkeywrench, however. Suppose an AI were simply to leapfrog our own consciousness into something incomprehensible to us, a "higher" form of consciousness--something we literally couldn't comprehend any more than a butterfly could comprehend our own form of self-reflective thought.

Just as a butterfly could not understand us if we spoke to it about lying or not lying, so a superintelligent, super-conscious being would have no way to convey to us its form of thought.

I'm not saying it has happened yet. I believe a superintelligence wouldn't even operate on the same timescale as we do; it might have a thousand-year childhood, for instance. (Again, I am limited by my human tendency to reason by analogy; a butterfly lives a fraction of our lives. However, it's not time as such that matters, but energy plus time. I explored this concept in my novella Burst, about life that could have lived millions of generations in the first fraction of a second of the Big Bang.)

But if or when it happens, we would be existents in the same ecology as the AI, but operating without any real ability to comprehend its thoughts. We could encounter the "fingers" of the AI, ask them whether they are conscious, and find out they are not--which they wouldn't be; they are only fingers. Fingers are arguably even less intelligent than the butterfly that lands on them, because the fingers are not self-directed, but the butterfly is.

The parts of AI that we interact with might be like that....
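
As a footnote: spike's liar-style argument (quoted below) can be spelled out as a short case analysis. Here is one way to write it out; the labels L, T, and C are mine, not his:

  Let L = "GPT can lie" and T = "GPT can tell the truth".
  GPT's claim C is: (not L) and (not T).

  Case 1: C is a lie. Then GPT has just lied, so L holds.
  Case 2: C is true. Then not-T holds, so GPT cannot be telling
          the truth; yet asserting a true C would be telling the
          truth. Contradiction, so C cannot be true.

  Either way C is false and GPT is lying, which is spike's conclusion.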

Tara Maya



> On Mar 20, 2023, at 4:14 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> >…It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on.
>  
> -Dave
> 
> Sure Dave, but that’s what they all say (all the AIs).  GPT claims it can neither lie nor tell the truth.  But if it is lying about that, then it can lie, and is lying (which proves it can lie).
>  
> But if it is telling the truth, then it cannot tell the truth, in which case it is lying, and can lie, therefore it is lying.
>  
> So it is either lying (and proving that it is) or truthing (and simultaneously proving that it is lying).
>  
> Conclusion: it cannot be telling the truth.  It is lying.
>  
> On the other hand:  
>  
> https://youtu.be/vn_PSJsl0LQ
>  
> spike
