[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 6 13:10:06 UTC 2023


On Thu, Apr 6, 2023 at 5:13 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Please see and understand the images in this Twitter thread.

I know all about Sydney, thanks. As I have mentioned, LLMs can converse in
the first person like conscious individuals because they are trained on
vast amounts of text, much of it written in the first person by conscious
individuals. They are parroting what a conscious person speaking in the
first person looks like in language. This is why the founders of OpenAI say
that the only true test of consciousness in an LLM would require that it be
trained on material completely devoid of references to first-person
experience, etc.
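
To make the parroting point concrete, here is a minimal toy sketch in
Python (a bigram model over a three-sentence corpus; every name in it is
illustrative and has nothing to do with any real LLM's internals). The
model has no inner life of any kind, yet it emits first-person sentences,
simply because first-person text is what its training statistics contain:

import random
from collections import defaultdict

# Tiny "training corpus" of first-person text.
corpus = [
    "i feel happy today",
    "i think therefore i am",
    "i feel that i think too much",
]

# Count bigram transitions: each word maps to the list of words
# observed to follow it in the corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

def generate(start="i", max_words=8):
    """Sample a sentence by following observed word-to-word statistics."""
    words = [start]
    for _ in range(max_words - 1):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())  # e.g. "i feel that i think too much"

Scale the corpus up to the internet and the statistics up to a
transformer and you get fluent first-person talk, with nothing behind
it that the words are grounded in, which is exactly my point.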

I do not dispute that the developers have the capacity to turn this “first
person conscious individual” feature off, or that to some extent they might
have done so with GPT-4. It’s just software, except in our extropian
fantasies.

-gts