[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 7 19:24:02 UTC 2023

On Fri, Apr 7, 2023 at 12:27 PM Tara Maya via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I stand by what I said before: the least helpful way to know if ChatGPT is
> conscious is to ask it directly.

I do not disagree with that, but I find it amusing that, according to the
state-of-the-art LLM itself, it is not conscious, despite so many people
wishing otherwise. All I can really say for certain is that GPT-4's reported
analysis of language models is consistent with what I understand and
believe to be the case.
