[ExI] all we are is just llms was: RE: Re: GPT-4 on its inability to solve the symbol grounding problem

Ben Zaiboc ben at zaiboc.net
Fri Apr 21 07:39:13 UTC 2023


On 21/04/2023 05:28, Gordon Swobe wrote:
> LLMs have no access to the referents from which words derive their 
> meanings. Another way to say this is that they have no access to 
> experiences by which symbols are grounded. 

Really, Gordon? Still?

Did you watch that video? Did you read what I wrote about it? (the bit 
about 'language', not the excitable hype about the singularity, which I 
expect you to dismiss).

If so, and you still stand by the above, please explain how (apart from 
one being biological and the other not) the inputs that GPT-4 receives 
and the inputs that human brains receive are different.

Our previous discussions were based on the misunderstanding that these 
LLMs receive only text inputs. Now we know that's not true: they 
receive text, visual, auditory, and other types of input, including 
some that humans aren't capable of receiving.

Plus, we are told they do use internal models, which you agreed is what 
our own 'grounding' is based on.

So LLMs *do* have access to the referents from which words derive their 
meanings.

Why, then, do you still think they don't? They have just as much access 
as we do, and more, it seems.

Again, I'm making no claims about their consciousness, as that is a 
thing yet to be defined, but they definitely have the basis to 'ground' 
the symbols they use in meaningful models constructed from a variety of 
sensory inputs. Just like humans.

Or are you moving your own goalposts now, and claiming (by shifting to 
the term 'experiences') that referents must be based on conscious 
experience? Because that wasn't your argument before.

Ben
