<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<div class="moz-cite-prefix">On 21/04/2023 23:39, Gordon Swobe
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAJvaNPmVveWtW=c-UtjToiCQODQeDCgwp7B6i=FN1wNz7QUm8g@mail.gmail.com">
<div dir="auto"> I find it obvious that words point to things that
are not themselves words; that the referents exist outside of
language. It is basic linguistics and has nothing to do with
LLMs or AI. </div>
<div dir="auto"><br>
</div>
<div dir="auto">Some Paleolithic ancestors discovered that
uttering certain noises to represent things is more efficient
than pointing fingers at those things. On that day, language was
born. </div>
<div dir="auto"><br>
</div>
</blockquote>
(You think that pointing is not a language? I suspect many deaf
people would disagree.)<br>
<br>
This is why referring to linguistics is not helping. As I said
earlier, it's the wrong discipline here. It's like bringing in an
accountant to explain the workings of a fairground ride. All they
can do is talk about cashflow, but that's no help in understanding
the mechanics, and thus inferring what the ride is capable of
doing.<br>
<br>
Forget the accounting, think about the mechanics.<br>
<br>
Referents, being internal conceptual models, <i>are made of
language</i>. They must be, because there's nothing else in the
brain to work with.<br>
<br>
<br>
<blockquote type="cite"
cite="mid:CAJvaNPm7ds8h0hueNvZXtZ6TDo=3F+AzCBzLSv7W-gqOetz6+A@mail.gmail.com">Converting
digital images into language is exactly how I might also describe
it to someone unfamiliar with computer programming. The LLM is
then only processing more text similar in principle to English
text that describes the colors and shapes in the image. Each pixel
in the image is described in symbolic language as "red" or "blue"
and so on. The LLM then goes on to do what might be amazing things
with that symbolic information, but the problem remains that these
language models have no access to the referents. In the case of
colors, it can process whatever symbolic representation it uses
for "red" in whatever programming language in which it is written,
but it cannot actually see the color red to ground the symbol
"red."</blockquote>
<br>
Well, we use pictures to represent things that are not themselves
pictures, sound to represent things that are not themselves sounds,
and so on. 'Language' doesn't mean just text or spoken words.
Musical notation is a language; we have sign language, body
language, and a ton of chemical languages. (I was just reading about
certain tadpoles that hatch with stronger jaws than usual if they
"sense prey in the water while they are still embryos". They are
getting a chemical signal from their environment that tells them
"food is near". What's that, if not communication in a language?)<br>
<br>
Languages are about communication, and are not restricted to any
specific medium. In fact, we could replace the word "language" with
"means of communication", although it's a bit unwieldy. We could
call these AI systems "Large Means of Communication Models" (LMCMs),
and then perhaps people wouldn't assume they can only deal with text
inputs.<br>
<br>
You know where this is going, right?<br>
<br>
Yes. The language of the brain.<br>
<br>
Our brains convert all our sensory inputs into a common language:
spike trains in axons. Every part of our sensorium is described in a
symbolic language as "|_||_|__|||_|___||" etc., in many parallel
channels, and this is the common language used throughout the brain.
Can't get more abstract than that, can you? It's effectively a type
of Morse code, or binary. And this happens right up at the front, in
the retina, the cochlea, the Pacinian corpuscles, the olfactory
bulbs, etc., right at the interface between the environment and our
nervous systems. These spike trains have no access to the referents,
but they don't need to; in fact, the referents are constructed from
them. These internal models I keep mentioning are made of 'nothing
more than' circuits of neurons using this language. The referents <i>are
made of language</i>. Now I'm sure this is just so much recursive
nonsense to a linguist, but it's how the mechanics work. (Remember
that our eyes do not "see a horse". They receive a mass of light
signals that are sorted into many detailed features, which are
linked together and passed up a complex looping chain of signals to
the visual cortex and many other areas, eventually resulting in (or
contributing to) an internal conceptual model. THEN we 'see a
horse'. This becomes a referent for the word "horse". So it's the
complex associations between many, many spike trains that actually
give meaning to the word "horse".)<br>
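<br>
To make the mechanics concrete, here's a minimal sketch in Python of
the kind of thing I mean. The function, the slot count, and the
numbers are all made up for illustration, and a crude rate code
(stronger stimulus, more spikes per time window) stands in for real
neural coding, which is far richer. The point is that everything
downstream only ever sees the abstract spike pattern, never the
light itself:<br>
<pre>
def rate_code(intensity, n_slots=18):
    """Crude illustrative rate code: stronger stimulus, more spikes.

    intensity: 0.0 to 1.0, e.g. the normalised response of a 'red'
    cone. Returns a string like "_|||_||||_|||_||||" where '|' is a
    spike and '_' a silent time slot. Real neurons use precise spike
    timing across many parallel axons; this is only a cartoon.
    """
    spikes = round(intensity * n_slots)
    train, emitted = "", 0
    for i in range(1, n_slots + 1):
        # Integer arithmetic: emit a spike each time the ideal
        # cumulative spike count crosses a whole number.
        target = i * spikes // n_slots
        if target > emitted:
            train += "|"
            emitted = target
        else:
            train += "_"
    return train

print(rate_code(0.8))   # bright patch:  _|||_||||_|||_||||
print(rate_code(0.2))   # dim patch:     ____|___|____|___|
</pre>
The spike train itself carries no redness; "red" only exists in the
web of associations built on top of signals like these.<br>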
<br>
What is it about the neural signal "|_||_|__|||_|___||" (etc.) that
results in the sensation of seeing a colour? There must be
something, because we undeniably do experience these sensations of
seeing colours, and the brain undeniably uses spike trains as its
way of processing information. We have our spike trains, LMCMs have
their ASCII codes, and both can output coherent utterances about
colours, horses, linguists, fairground rides and a whole host of
other things, utterances that seem to indicate that the system in
question knows what it's talking about.<br>
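<br>
To underline the parallel, here's the same comparison in miniature
(a trivial Python sketch; real LLMs work on learned token IDs rather
than raw ASCII, but the point is the same): the word "red" reaches
the model as nothing but abstract numbers, just as the colour
reaches the cortex as nothing but abstract spikes.<br>
<pre>
word = "red"
print([ord(c) for c in word])   # [114, 101, 100]
print(" ".join(format(ord(c), "08b") for c in word))
# 01110010 01100101 01100100
</pre>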
<br>
So your argument can be applied to human brains, as well as to
LMCMs. You are effectively arguing that <b>we</b> don't understand
things because our brains are 'just' making correlations between
abstract, ungrounded streams of binary signals.<br>
<br>
Ben<br>
<br>
</body>
</html>