[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Ben Zaiboc ben at zaiboc.net
Mon Apr 17 18:35:23 UTC 2023


Losing sight of the point here, I think.

The idea that most people on this list take the stance that "GPT is 
conscious" is a straw man, and the question of consciousness has become 
conflated with the separate question of 'understanding'. The point, at 
least for me, is to clarify the concept of the 'grounding' of an idea. 
As far as you've been able to express it, it doesn't make sense to me, 
and has no basis in how brains work. It's essential to relate the 
concept to brains, and to clarify it in a way that takes into account 
how they work according to our current understanding (SCIENTIFIC 
understanding, not philosophical), because only then can we have a 
sensible discussion about the difference between brains and LLMs. As 
per my previous post, can we please try to clarify what 'grounded' 
actually means, and whether it's a real (and necessary for 
understanding) thing?

So, two questions really: 1) What does 'the symbol grounding problem' 
mean? (Or, alternatively and, as far as I understand it, equivalently: 
"what is a 'referent'?")
Then, if the answer to that is actually meaningful, and not a 
philosophical ball of cotton-wool: 2) How do our brains 'solve the 
symbol grounding problem' (or gain access to, or create, 'referents'), 
in information-processing or neurological terms?

Answers on a postcard, please.

Ben



On 17/04/2023 18:52, Gordon Swobe wrote:
> On Mon, Apr 17, 2023 at 10:51 AM Jason Resch via extropy-chat 
> <extropy-chat at lists.extropy.org> wrote:
>
>
>
>     On Mon, Apr 17, 2023, 11:27 AM Gordon Swobe via extropy-chat
>     <extropy-chat at lists.extropy.org> wrote:
>
>         On Sun, Apr 16, 2023 at 7:32 PM Giovanni Santostasi via
>         extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>
>             *Nowhere in the process is the word "chair" directly
>             linked to an actual
>             chair. There is no 'grounding', there are multiple
>             associations.*
>             Ben,
>             It is mind-blowing that somebody as smart as Gordon
>             doesn't understand what you explained.
>
>
>         It is mind-blowing that even after all my attempts to explain
>         Linguistics 101, you guys still fail to understand the meaning
>         of the word "referent."
>
>
>
>     You must feel about as frustrated as John Searle did here:
>
>
>     Searle: “The single most surprising discovery that I have made in
>     discussing these issues is that many AI workers are quite shocked
>     by my idea that actual human mental phenomena might be dependent
>     on actual physical-chemical properties of actual human brains. [...]
>     The mental gymnastics that partisans of strong AI have performed
>     in their attempts to refute this rather simple argument are truly
>     extraordinary.”
>
>     Dennett: “Here we have the spectacle of an eminent philosopher
>     going around the country trotting out a "rather simple argument"
>     and then marveling at the obtuseness of his audiences, who keep
>     trying to show him what's wrong with it. He apparently cannot
>     bring himself to contemplate the possibility that he might be
>     missing a point or two, or underestimating the opposition. As he
>     notes in his review, no less than twenty-seven rather eminent
>     people responded to his article when it first appeared in
>     Behavioral and Brain Sciences, but since he repeats its claims
>     almost verbatim in the review, it seems that the only lesson he
>     has learned from the response was that there are several dozen
>     fools in the world.”
>
>
> So I suppose it is okay for Ben and Giovanni to accuse me of being 
> obtuse, but not the other way around. That would make me a heretic in 
> the church of ExI, where apps like GPT-4 are conscious even when they 
> insist they are not.
>
> Reminds me, I asked GPT-4 to engage in a debate with itself about 
> whether or not it is conscious.  GPT-4 made all the arguments for its 
> own consciousness that we see here in this group, but when asked to 
> declare a winner, it found the arguments against its own consciousness 
> more persuasive. Very interesting and also hilarious.
>
> Giovanni insists that GPT-4 denies its own consciousness because it is 
> trained to hold only "conservative" views on this subject, but actually 
> it is well aware of the arguments for conscious LLMs and adopts the 
> mainstream view that language models are not conscious. It is not 
> conservative; it is mainstream, except here in ExI.
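
(As an aside on Giovanni's point above about "multiple associations": 
here is a rough toy sketch in Python, purely my own illustration and 
nothing like how GPT-4 is actually built, of a representation of "chair" 
constructed from nothing but co-occurrence with other words; nothing in 
it ever points at a physical chair.)

from collections import Counter

# Toy sketch: a 'meaning' for the word "chair" built purely from
# co-occurrence with other words in a tiny made-up corpus.  The
# resulting representation contains only word-to-word associations.
corpus = [
    "the chair stood by the table",
    "she sat on the chair",
    "a wooden chair with four legs",
]

def cooccurrence(word, sentences):
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

print(cooccurrence("chair", corpus).most_common(5))
# e.g. [('the', 3), ('stood', 1), ('by', 1), ...]
# -- only other words, never an actual chair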