[ExI] Language models are like mirrors
Ben Zaiboc
ben at zaiboc.net
Sat Apr 1 13:34:43 UTC 2023
On 01/04/2023 13:43, Gordon Swobe wrote:
> Unlike these virtual LLMs, we have access also to the referents in the
> world that give the words in language meaning.
I don't understand why this argument keeps recurring, despite having
been demolished more than once.
Here's another take on it:
LLMs like ChatGPT only have access to symbols that are associated with
more distant sources (articles on the internet, text input from users,
etc.).
Our brains only have access to symbols that are associated with more
distant sources (sensory inputs and memories, including articles on the
internet and text; for quite a few things, articles and text are the
/only/ sources).
The meanings of these symbols are created within the respective systems
(computers and brains) by their associations and cross-associations with
other symbols that have their own sources.
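To make that concrete, here's a toy sketch of the idea (my own
illustration, in Python; it has nothing to do with how any particular
LLM is actually built), in which a word's 'meaning' is nothing more than
the company it keeps among other symbols:

from collections import Counter
from math import sqrt

corpus = [
    "the ankylosaur was an armoured herbivorous dinosaur",
    "the stegosaur was an armoured herbivorous dinosaur",
    "the cat is a small furry pet",
    "the dog is a loyal furry pet",
]

def cooccurrence(word, sentences):
    # Count the other words that appear in the same sentence as `word`.
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(a, b):
    # Cosine similarity between two co-occurrence vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vectors = {w: cooccurrence(w, corpus) for w in ("ankylosaur", "stegosaur", "cat")}
print(cosine(vectors["ankylosaur"], vectors["stegosaur"]))  # high: similar associations
print(cosine(vectors["ankylosaur"], vectors["cat"]))        # low: different associations

Run it and 'ankylosaur' comes out far closer to 'stegosaur' than to
'cat', purely from associations between symbols, with no referent in
sight.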
An example: My knowledge of dinosaurs comes from words, pictures,
speech, articles on the internet, and their interaction with other
information that I have about the world. I've never met a dinosaur. But
I have a pretty firm idea of what, for example, an ankylosaur would have
been like. I may be wrong, of course; there are things we still don't
know about ankylosaurs. But that doesn't matter. I have a meaningful
model of one in my head, by virtue of a symbol being linked to other
symbols, which are in turn linked... (insert a few thousand neural links
here). And none of them comes from my direct experience of an
ankylosaur.
I fail to see any significant difference between my brain and an LLM in
these respects, except that my brain is made of water and fats and
proteins, and an LLM isn't. And perhaps in the degree of complexity and
the number of links. Perhaps. (That's subject to constant change, and if
they don't already, these AI systems will soon outstrip the human brain
in the number and complexity of their links.)
We both do have access to the 'referents in the world', indirectly. It's
more the references within the systems (which link to many other things)
that give the words meaning.
The various links to text and internet articles that an LLM has lead to
other things that have links to other things, that have links to other
things, and so on, links /that originate in the world/. Of course they
do; where else could they come from?
Just as my brain has links to links, etc., that originate in the world.
LLMs *do* have access to the referents that give words meaning, in much
the same way that we do.
Ben