[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Keith Henson hkeithhenson at gmail.com
Wed Apr 19 12:51:13 UTC 2023

On Tue, Apr 18, 2023 at 11:45 PM Gordon Swobe via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

There are valid objections to LLM AI.

However, the question you should be considering is whether these
objections will still hold in a few more iterations.


> Ben, I can't locate the message, but you asked my thoughts on the difference between a language model solving what you have called the word association problem and its solving the symbol grounding problem. In my view, the difference lies in the fact that understanding statistical associations between words does not require knowledge of their meanings. While this distinction might not make a practical difference, it becomes important if the question is whether the model genuinely understands the content of its inputs and outputs or merely simulates that understanding.
> -gts
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
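Gordon's distinction between word association and grounding can be made concrete with a minimal sketch: a bigram model that predicts the next word purely from co-occurrence counts. The toy corpus and function names below are illustrative assumptions, not anything from this thread; the point is only that nothing in the code represents what any word refers to.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees token sequences, never referents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram co-occurrences -- purely distributional statistics.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    """Return the most frequent follower of `word`; no semantics involved."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # prints "cat" (it follows "the" 2 times out of 4)
```

Whether scaling this kind of statistical machinery up to an LLM yields genuine understanding, or only a simulation of it, is exactly the question under dispute.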
