[ExI] GPT-4 on its inability to solve the symbol grounding problem
brent.allsop at gmail.com
Fri Apr 14 21:05:51 UTC 2023
Yeah, that is a good point. Our spatial situational awareness of the 3D
world should be vastly more powerful than anything a large language model could
achieve; 99% of the computation we do is that kind of processing. The cognitive
thinking we do, and our chats with bots, are minor compared to that.
On Fri, Apr 14, 2023 at 2:47 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I’ll bet if you ask it to draw a perfect circle, it will draw one without
> ever having “seen” one. It should have learned how to draw one from the
> language about circles, including the mathematical language describing
> circles. Is that really so amazing?
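As an aside, the "language of mathematics of circles" really is sufficient to produce one with no visual input at all. A minimal Python sketch (the helper name `circle_points` is my own, not from the thread) generates points directly from the parametric equation x = r cos t, y = r sin t:

```python
import math

def circle_points(r=1.0, n=12):
    # Sample n points on a circle of radius r, using only the
    # parametric equation x = r*cos(t), y = r*sin(t).
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

pts = circle_points()
# Every generated point lies exactly r away from the origin,
# which is the defining property of a circle.
assert all(abs(math.hypot(x, y) - 1.0) < 1e-9 for x, y in pts)
```

The point being that the symbolic description alone fixes the shape; nothing in the procedure requires ever having seen a circle.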
> On Fri, Apr 14, 2023 at 2:17 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>> On Fri, Apr 14, 2023 at 1:54 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>> I showed you the different pics GPT-4 can create given nonvisual
>>> training. How can it draw an apple and know how to distinguish it from a
>>> pear? These tasks have nothing to do with the statistical properties of
>>> words, given they are spatial tasks and go beyond verbal communication.
>>> How do you explain all this?
>> They *do* have to do with the statistical properties of words and symbols
>> and the relations and patterns between them. The shapes of pears and apples
>> (and eyes, etc.) are describable and distinguishable in the language of
>> mathematics. I agree it is amazing, but the “meaning” is something we
>> assign to the symbols.