[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 17 17:12:14 UTC 2023


On Mon, Apr 17, 2023 at 10:51 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Mon, Apr 17, 2023, 11:27 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Sun, Apr 16, 2023 at 7:32 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> *Nowhere in the process is the word "chair" directly linked to an
>>> actual chair. There is no 'grounding', there are multiple associations.*
>>> Ben,
>>> It is mind-blowing that somebody as smart as Gordon doesn't understand
>>> what you explained.
>>>
>>
>> It is mind-blowing that even after all my attempts to explain Linguistics
>> 101, you guys still fail to understand the meaning of the word "referent."
>>
>
>
> You must feel about as frustrated as John Searle did here:
>
>
> Searle: “The single most surprising discovery that I have made in
> discussing these issues is that many AI workers are quite shocked by my
> idea that actual human mental phenomena might be dependent on actual
> physical-chemical properties of actual human brains. [...]
> The mental gymnastics that partisans of strong AI have performed in their
> attempts to refute this rather simple argument are truly extraordinary.”
>
> Dennett: “Here we have the spectacle of an eminent philosopher going
> around the country trotting out a "rather simple argument" and then
> marveling at the obtuseness of his audiences, who keep trying to show him
> what's wrong with it. He apparently cannot bring himself to contemplate the
> possibility that he might be missing a point or two, or underestimating the
> opposition. As he notes in his review, no less than twenty-seven rather
> eminent people responded to his article when it first appeared in
> Behavioral and Brain Sciences, but since he repeats its claims almost
> verbatim in the review, it seems that the only lesson he has learned from
> the response was that there are several dozen fools in the world.”
>

So I suppose it is okay for Ben and Giovanni to accuse me of being obtuse,
but not the other way around. That would make me a heretic in the church of
ExI, where apps like GPT-4 are conscious even when they insist they are not.

Reminds me, I asked GPT-4 to engage in a debate with itself about whether
or not it is conscious.  GPT-4 made all the arguments for its own
consciousness that we see here in this group, but when asked to declare a
winner, it found the arguments against its own consciousness more
persuasive. Very interesting and also hilarious.

Giovanni insists that GPT-4 denies its own consciousness only because it
was trained to hold "conservative" views on this subject, but actually it
is well aware of the arguments for conscious LLMs and adopts the mainstream
view that language models are not conscious. That view is not conservative;
it is mainstream everywhere except here on ExI.


-gts
