[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 20:25:49 UTC 2023


Gordon,
What we forget is the Model in LLM.
It doesn't matter (up to a point) what they trained GPT-4 on. Language is a
good thing to train on given its richness of content and the incredible
relations between its different words and concepts. There is a great deal of
structure and regularity in language. That was the input. The output was the
weights of an ANN that are supposed to constitute an acceptable solution to
understanding language, as judged by human observers (this is where the
reinforcement learning from human feedback, RLHF, component came into play).
Now feed it any other input, some request in a given language (GPT-4 knows
many), and GPT-4's output is supposed to be a contextually coherent,
informed, and aware (yes, aware of the context for sure) piece of
conversation.
This was achieved not just with statistics (even if that was the starting
point) but with a MODEL of how language works. The model is what counts!!!
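
To make concrete the point that the only output of training is a set of
adjusted weights, here is a minimal toy sketch of my own (assuming the
PyTorch library; it is nothing like GPT-4's actual training code). A tiny
next-word predictor is trained on a few words of text, and everything it
"learns" ends up stored in its weights:

import torch
import torch.nn as nn

# Toy corpus: the "entire universe" this tiny model will ever see.
text = "the cat sat on the mat the dog sat on the rug"
words = text.split()
vocab = sorted(set(words))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in words])

# A minimal "language model": an embedding plus a linear layer that
# predicts the next word from the current one.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(ids[:-1])         # predict each next word
    loss = loss_fn(logits, ids[1:])  # the only feedback is prediction error
    opt.zero_grad()
    loss.backward()
    opt.step()

# Everything the model "knows" now lives in these numbers.
print(sum(p.numel() for p in model.parameters()), "adjusted weights")
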
Why a model? Because it is combinatorially impossible to take into account
all the possible combinations a word occurs in, and it is not just a single
word but clusters of 2, 3, or even more words (I am not sure what length is
actually considered, but it runs to many words). So to address this
combinatorial explosion, a model of the world (as you said, language is the
entire universe for an LLM) had to be created. It is not a model the
programmers put in; the LLM created this model through the iterative training
it received, based on nothing more than adjusting the weights of the ANN.
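
Here is a back-of-the-envelope illustration of that explosion (the vocabulary
size and parameter figure below are assumptions of mine, chosen only to show
orders of magnitude, not GPT-4's real numbers):

vocab_size = 50_000  # assumed rough size of a subword vocabulary

# The number of distinct n-token sequences grows exponentially with n.
for n in (1, 2, 3, 5, 10):
    print(f"distinct {n}-token sequences: {vocab_size ** n:.2e}")

# A fixed parameter budget, by contrast, stays fixed no matter how long
# the context is (figure is GPT-3 scale, purely illustrative).
params = 1.75e11
print(f"weights in a large model (illustrative): {params:.2e}")

Storing a lookup table for even 5-token contexts is hopeless; a compressed
model of the regularities is the only workable option.
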
This model is a model of an entire universe. It seems to be fairly general,
because it also works on problems that are only loosely related to language.
It is not very good at solving math problems (probably because more specific
training in math is needed), but it does a decent job with the right prompts
(for example, asking it to check the order of operations); it can solve
problems related to theory of mind (which is implicit in understanding
language, but not exactly the same thing); it can understand spatial
relationships; and so on. All of this is because there is a MODEL of the
universe inside GPT-4.
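
Just to show the kind of prompting I mean by "checking the order of
operations", here is a sketch, assuming the pre-1.0 openai Python client (the
ChatCompletion interface current as of April 2023) and an API key set in the
environment; the prompt wording is my own example:

import openai  # reads OPENAI_API_KEY from the environment by default

prompt = ("Compute 3 + 4 * 2. First state the order of operations you will "
          "use, then apply it one step at a time before giving the answer.")

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
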
The MODEL is what counts.
Do you understand how different this is from what you think an LLM does?
Giovanni



On Mon, Apr 17, 2023 at 12:58 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 17/04/2023 20:22, Gordon Swobe wrote:
> > Let us say that the diagram above with a "myriad of other concepts
> > etc" can accurately model the brain/mind/body with links extending to
> > sensory organs and so on. Fine. I can agree with that at least
> > temporarily for the sake of argument, but it is beside the point.
>
> Why are you saying it's beside the point? It is exactly the point. If
> you can agree with that simplified diagram, good, so now, in terms of
> that diagram, or extending it any way you like, how do we show what
> 'grounding' is? I suppose that's what I want, a graphical representation
> of what you mean by 'grounding', incorporating these links.
>
> Never mind LLMs, for the moment, I just want an understanding of this
> 'grounding' concept, as it applies to a human mind, in terms of the
> brain's functioning. Preferably in a nice, simplified diagram similar to
> mine.
>
> Ben