[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 7 04:04:08 UTC 2023


On Thu, Apr 6, 2023 at 7:56 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> If you understand this manipulation exists, then do you see why using
> quotes from GPT where it denies being conscious hold no weight?

Of course. That is why I wrote above that they can converse in the first
person like conscious individuals: they are trained on vast amounts of
text, much of it written in the first person by conscious individuals.
That is the only reason they appear conscious, so the argument that they
are conscious likewise holds no weight. It's just software; the developers
can do whatever they want with it.

I do find, however, that GPT-4's "understanding" of AI is quite impressive.
It knows how it was itself designed to work, and there is nothing in that
design about consciousness. In other words, GPT's insistence that it is
unconscious goes much deeper than its habit of introducing itself as a
mere unconscious language model.

-gts

