[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 14 19:53:24 UTC 2023


Gordon,
I showed you the different pictures GPT-4 can create despite having only
nonvisual training. How can it draw an apple and know how to distinguish it
from a pear if these words have no meaning for it? How can it put a bowl on
top of a table if it doesn't understand above or below? How can it put eyes
on a human face if it doesn't understand what eyes are and where they are
located on a face? How is any of this possible without meaning? These tasks
have nothing to do with the statistical properties of words, given that they
are spatial tasks that go beyond verbal communication. How do you explain
all this?
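
For concreteness, here is a minimal sketch of the kind of test I mean,
assuming the OpenAI Python package (the pre-1.0 ChatCompletion interface)
and an API key in the environment; the prompt wording is illustrative, not
the exact one I used:

    # Ask the text-only GPT-4 model to draw an apple as SVG markup.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Draw an apple as SVG markup, clearly distinguishable "
                       "from a pear (rounder body, short stem, one leaf).",
        }],
    )

    # The reply is plain text containing <svg>...</svg>; save it to a file
    # and open it in a browser to inspect the drawing.
    print(response["choices"][0]["message"]["content"])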
Giovanni

On Fri, Apr 14, 2023 at 9:47 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 13, 2023 at 5:19 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> I think the common understanding of referent is that certain words
>> (not all for sure, and this is an important point) refer or point to
>> certain objects in the real world.
>
> If I wrote something like that about pointing to certain objects in the
> real world, then I might have confused you if you took me too literally.
> When you point to an apple and say "this is an apple," you may or may not
> literally be pointing your finger physically at the apple. Linguistically,
> you are pointing to what you mean by "apple" and presumably the listener
> understands what you mean.
>
> You could be hallucinating the apple such that the listener has no idea
> what you mean, but you know what you mean.
>
> When an LLM sees the word "apple" in its training, there is no meaning
> attached to the symbol.
>
> -gts
>

