<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<div class="moz-cite-prefix">On 21/04/2023 05:28, Gordon Swobe
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:mailman.493.1682051296.847.extropy-chat@lists.extropy.org">LLMs
have no access to the referents from which words derive their
meanings. Another way to say this is that they have no access to
experiences by which symbols are grounded. </blockquote>
<br>
Really, Gordon? Still?<br>
<br>
Did you watch that video? Did you read what I wrote about it (the
bit about 'language', not the excitable hype about the singularity,
which I expect you to dismiss)?<br>
<br>
If so, and you still stand by the above, please explain how the
inputs that GPT-4 receives and the inputs that human brains receive
are different (apart from one being biological and the other not)?<br>
<br>
Our previous discussions were based on the misunderstanding that
these LLMs received only text inputs. Now we know that's not true:
they receive text, visual, auditory, and other types of input, even
some that humans aren't capable of receiving.<br>
<br>
Plus we are told they do use internal models, which you agreed our
own 'grounding' is based on.<br>
<br>
So LLMs <b>do</b> have access to the referents from which words
derive their meanings.<br>
<br>
So why do you still think they don't? They have just as much access
as we do, and more, it seems.<br>
<br>
Again, I'm making no claims about their consciousness, as that is a
thing yet to be defined, but they definitely have the basis to
'ground' the symbols they use in meaningful models constructed from
a variety of sensory inputs. Just like humans.<br>
<br>
Or are you moving your own goalposts now, and claiming (by shifting
to the term 'experiences') that referents must be based on conscious
experience? Because that wasn't your argument before.<br>
<br>
Ben<br>
</body>
</html>