[ExI] GPT-4 on its inability to solve the symbol grounding problem

Brent Allsop brent.allsop at gmail.com
Fri Apr 14 21:05:51 UTC 2023

Yeah, that is a good point.  Our spatial situational awareness of the 3D
world should be vastly more powerful than anything a large language model
could easily achieve.
99% of the computation we do is that kind of processing.  The cognitive
thinking we do, and our chats with bots, is minor compared to that.

On Fri, Apr 14, 2023 at 2:47 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I’ll bet if you ask it to draw a perfect circle, it will draw one without
> ever having “seen” one. It should have learned, from the language about
> circles, including the mathematical language of circles, how to draw one.
> Is that really so amazing?
> -gts
> On Fri, Apr 14, 2023 at 2:17 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>> On Fri, Apr 14, 2023 at 1:54 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>> I showed you the different pics GPT-4 can create given nonvisual
>>> training. How can it draw an apple and know how to distinguish it from a
>>> pear…
>>> These tasks have nothing to do with the statistical properties of words
>>> given they are spatial tasks and go beyond verbal communication. How do you
>>> explain all this?
>> They *do* have to do with the statistical properties of words and symbols
>> and the relations and patterns between them. The shapes of pears and apples
>> (and eyes etc) are describable and distinguishable in the language of
>> mathematics.
>> I agree it is amazing, but the “meaning” is something we assign to the
>> output.
>> -gts
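
[Editor's note: the claim that circle shapes are "describable in the
language of mathematics" can be made concrete with a minimal, purely
illustrative sketch, not taken from this thread.  A model that has only
read the parametric form x = r·cos(t), y = r·sin(t) has, in principle,
everything needed to produce a circle's coordinates:]

```python
import math

def circle_points(radius, n=8):
    """Generate n evenly spaced (x, y) points on a circle of the given
    radius, using the parametric form x = r*cos(t), y = r*sin(t)."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Every generated point lies exactly `radius` away from the origin.
pts = circle_points(1.0)
assert all(abs(math.hypot(x, y) - 1.0) < 1e-9 for x, y in pts)
```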
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat