[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 13 22:15:37 UTC 2023

On Thu, Apr 13, 2023 at 3:46 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:

> I think my analogy is completely relevant. Science is not supposed to
> reproduce perfectly the territory, it is not a limitation but a feature. I
> went into detail about why it is so.
> Can you please address this and explain why I'm wrong?

Honestly, Gio, I do not find conversations with you to be very productive.
I think you would say up is down and white is black if it would support
your zealous belief that language models have consciousness.

You lost me when you disagreed with my very simple argument that words have
referents. That words have referents is hardly even an argument; it is more
like an observation. When you say a word, you mean something, and that
something you mean is the referent. It is what gives the word meaning in
your own mind. It could be an object that you perceive or imagine, or it
could be an abstract idea. It is whatever the word stands for.

In any case, Nagel is perfectly well aware of how science is useful for
giving us objective explanations of the objective world.

> If you don't like what science does and it is then invent your own

Hardly my own idea: the "explanatory gap" (usually used in reference to
Nagel) is more or less another way of saying "the hard problem of
consciousness" (usually used in reference to David Chalmers). Roger Penrose
has a similar idea, as do many other philosophers of mind and science who
have looked at the problem of explaining how minds have subjective
conscious experience.

