<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 01/04/2023 13:43, Gordon Swobe wrote:<br>
<blockquote type="cite"
cite="mid:mailman.127.1680353017.847.extropy-chat@lists.extropy.org">Unlike
these virtual LLMs, we have access also to the referents in the
world that give the words in language meaning. </blockquote>
<br>
<br>
I don't understand why this argument keeps recurring, despite having
been demolished more than once.<br>
<br>
Here's another take on it:<br>
<br>
LLMs like ChatGPT have access only to symbols that are associated
with more distant sources (articles on the internet, text input
from users, etc.).<br>
<br>
Our brains have access only to symbols that are associated with
more distant sources (sensory inputs and memories, including
articles on the internet and text; for quite a few things, articles
on the internet and text are the <i>only</i> sources).<br>
<br>
The meanings of these symbols are created within the respective
systems (computers and brains) by their associations and
cross-associations with other symbols that have their own sources.<br>
<br>
An example: my knowledge of dinosaurs comes from words, pictures,
speech, articles on the internet, and their interaction with other
information that I have about the world. I've never met a dinosaur,
but I have a pretty firm idea of what, for example, an ankylosaur
would have been like. I may be wrong, of course; there are things
we still don't know about ankylosaurs. But that doesn't matter: I
have a meaningful model of one in my head, by virtue of a symbol
being linked to other symbols, which are in turn linked... (insert a
few thousand neural links here). And none of them come from my
direct experience of an ankylosaur.<br>
<br>
I fail to see any significant difference between my brain and an
LLM in these respects, except that my brain is made of water and
fats and proteins, and an LLM isn't. And perhaps the degree of
complexity and the number of links. Perhaps. (That's something
subject to constant change, and if they don't already, these AI
systems will soon outstrip the human brain in the number and
complexity of their links.)<br>
<br>
We both do have access to the 'referents in the world', but only
indirectly. Rather, it's the references within the systems (which
link to many other things) that give the words meaning.<br>
<br>
The various links to text and internet articles that an LLM has
have links to other things, which have links to other things, which
have links to other things, and so on, <i>that originate in the
world</i>. Of course they do; where else could they come from?<br>
<br>
Just as my brain has links to links, etc., that originate in the
world.<br>
<br>
LLMs <b>do</b> have access to the referents that give words
meaning, in much the same way that we do.<br>
<br>
<br>
Ben<br>
</body>
</html>