[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Tue Apr 18 19:19:22 UTC 2023


All this gets away from the main point of this thread, which is that an LLM
has no access to the referents from which words acquire meanings, i.e., no
access to the experiences that allow for the grounding of symbols. And not
only do I think so, but GPT-4 itself reports this to be the case.

An advanced LLM like GPT can nonetheless do a fine job of faking it, which
to me is evidence of an important achievement in software engineering, not
evidence that the application somehow has a conscious mind of its own.

-gts




On Tue, Apr 18, 2023 at 12:37 PM Gordon Swobe <gordon.swobe at gmail.com>
wrote:

>
>
> On Tue, Apr 18, 2023 at 3:55 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On 18/04/2023 06:59, Gordon Swobe wrote:
>>
>> There is no direct perception of anything.
>>
>
>
> I did not mean that as a philosophical statement about what is called
> Direct or Naive Realism — just making the distinction between an apple in
> your hand and something entirely imaginary.
>
>
> We don't perceive apples, we construct them.
>>
>
> Your argument about how we construct mental objects is perfectly fine with
> me, but the word “perception” still has meaning.
>
> When an ordinary person, not busy doing philosophy about how we construct
> mental models, refers to an apple that he sees, he is referring to his
> perception of it and not to the physical apple itself, a distinction I
> should have made clear at the outset but did not.
>
>
> -gts
>
>>