[ExI] e: GPT-4 on its inability to solve the symbol grounding problem
Ben Zaiboc
ben at zaiboc.net
Mon Apr 17 19:56:46 UTC 2023
On 17/04/2023 20:22, Gordon Swobe wrote:
> Let us say that the diagram above with a "myriad of other concepts
> etc" can accurately model the brain/mind/body with links extending to
> sensory organs and so on. Fine. I can agree with that at least
> temporarily for the sake of argument, but it is beside the point.
Why do you say it's beside the point? It is exactly the point. If
you can agree with that simplified diagram, good. Now, in terms of
that diagram, or extending it any way you like, how do we show what
'grounding' is? I suppose that's what I want: a graphical representation
of what you mean by 'grounding', incorporating these links.
Never mind LLMs for the moment; I just want an understanding of this
'grounding' concept as it applies to a human mind, in terms of the
brain's functioning. Preferably in a nice, simplified diagram similar to
mine.
Ben