[ExI] e: GPT-4 on its inability to solve the symbol grounding problem
jasonresch at gmail.com
Mon Apr 17 16:48:44 UTC 2023
On Mon, Apr 17, 2023, 11:27 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sun, Apr 16, 2023 at 7:32 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> *Nowhere in the process is the word "chair" directly linked to an
>> actual chair. There is no 'grounding'; there are multiple associations.*
>> It is mind-blowing that somebody as smart as Gordon doesn't understand
>> what you explained.
> It is mind-blowing that even after all my attempts to explain Linguistics
> 101, you guys still fail to understand the meaning of the word "referent."
You must feel about as frustrated as John Searle did here:
Searle: “The single most surprising discovery that I have made in
discussing these issues is that many AI workers are quite shocked by my
idea that actual human mental phenomena might be dependent on actual
physical-chemical properties of actual human brains. [...]
The mental gymnastics that partisans of strong AI have performed in their
attempts to refute this rather simple argument are truly extraordinary.”
Dennett: “Here we have the spectacle of an eminent philosopher going around
the country trotting out a "rather simple argument" and then marveling at
the obtuseness of his audiences, who keep trying to show him what's wrong
with it. He apparently cannot bring himself to contemplate the possibility
that he might be missing a point or two, or underestimating the opposition.
As he notes in his review, no less than twenty-seven rather eminent people
responded to his article when it first appeared in Behavioral and Brain
Sciences, but since he repeats its claims almost verbatim in the review, it
seems that the only lesson he has learned from the response was that there
are several dozen fools in the world.”