[ExI] all we are is just llms

Ben Zaiboc ben at zaiboc.net
Sat Apr 22 08:40:33 UTC 2023


On 21/04/2023 23:39, Gordon Swobe wrote:
> I find it obvious that words point to things that are not themselves 
> words; that the referents exist outside of language. It is basic 
> linguistics and has nothing to do with LLMs or AI.
>
> Some Paleolithic ancestors discovered that uttering certain noises to 
> represent things is more efficient than pointing fingers at those 
> things. On that day, language was born.
>
(You think that pointing is not a language? I suspect many deaf people 
would disagree.)

This is why referring to linguistics is not helping. As I said earlier, 
it's the wrong discipline here. It's like bringing in an accountant to 
explain the workings of a fairground ride. All they can do is talk about 
cashflow, which is no help in understanding the mechanics, and thus in 
inferring what the ride is capable of doing.

Forget the accounting, think about the mechanics.

Referents, being internal conceptual models, /are made of language/. 
They must be, because there's nothing else to work with in the brain.


> Converting digital images into language is exactly how I might also 
> describe it to someone unfamiliar with computer programming. The LLM 
> is then only processing more text similar in principle to English text 
> that describes the colors and shapes in the image. Each pixel in the 
> image is described in symbolic language as "red" or "blue" and so on. 
> The LLM then goes on to do what might be amazing things with that 
> symbolic information, but the problem remains that these language 
> models have no access to the referents. In the case of colors, it can 
> process whatever symbolic representation it uses for "red" in whatever 
> programming language in which it is written, but it cannot actually 
> see the color red to ground the symbol "red."

Well, we use pictures to represent things that are not themselves 
pictures, sounds to represent things that are not themselves sounds, and 
so on. 'Language' doesn't mean just text or spoken words. Musical 
notation is a language; we have sign language, body language, and a ton 
of chemical languages. (I was just reading about certain tadpoles that 
hatch with stronger jaws than usual if they "sense prey in the water 
while they are still embryos". They are getting a chemical signal from 
their environment that tells them "food is near". What's that if not 
communication in a language?)

Languages are about communication, and are not restricted to any 
specific medium. In fact, we could replace the word "language" with 
"means of communication", although it's a bit unwieldy. We could call 
these AI systems "Large Means of Communication Models" (LMCMs), and then 
perhaps people wouldn't assume they can only deal with text inputs.

You know where this is going, right?

Yes. The language of the brain.

Our brains convert all our sensory inputs into a common language: spike 
trains in axons. Every part of our sensorium is described in a symbolic 
language as "|_||_|__|||_|___||" etc., in many parallel channels, and 
this is the common language used throughout the brain. Can't get more 
abstract than that, can you? It's effectively a type of Morse code, or 
binary. And this happens right up at the front, in the retina, the 
cochlea, the Pacinian corpuscles, olfactory bulbs, etc. Right at the 
interface between the environment and our nervous systems. These spike 
trains have no access to the referents, but they don't need to; in fact 
the referents are constructed from them. These internal models I keep 
mentioning are made of 'nothing more than' circuits of neurons using 
this language. The referents /are made of language/. Now I'm sure this 
is just so much recursive nonsense to a linguist, but it's how the 
mechanics work. (Remember that our eyes do not "see a horse". They 
receive a mass of light signals that are sorted into many detailed 
features, which are linked together and passed up a complex looping 
chain of signals to the visual cortex and many other areas, eventually 
resulting in (or contributing to) an internal conceptual model. THEN we 
'see a horse'. This becomes a referent for the word "horse". So it's 
actually the complex associations between many, many spike trains that 
give meaning to the word "horse".)
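
To make the mechanics concrete, here is a toy sketch in Python (purely 
illustrative; the rate-coding scheme, the number of time steps and the 
example intensities are my own arbitrary assumptions, not real retinal 
parameters) of how a graded stimulus could end up as a spike-train 
string of the "|_||_|" sort:

import random

def rate_code(intensity, steps=24, seed=0):
    # Toy rate coder: higher intensity -> more spikes per unit time.
    # 'intensity' is a number in [0, 1]; returns a string like
    # "|_||_|__|||_|" where '|' is a spike and '_' is silence.
    rng = random.Random(seed)
    return "".join("|" if rng.random() < intensity else "_"
                   for _ in range(steps))

# The same abstract code can carry very different referents:
print(rate_code(0.8))   # e.g. a bright patch on the retina
print(rate_code(0.2))   # e.g. a faint odour at the olfactory bulb

Nothing in either string is 'bright' or 'smelly'; the meaning comes from 
which channel the train arrives on and what it gets associated with 
downstream.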

What is it about the neural signal "|_||_|__|||_|___||" (etc.) that 
results in the sensation of seeing a colour? There must be something, 
because we undeniably do experience these sensations of seeing colours, 
and the brain undeniably uses spike trains as its way of processing 
information. We have our spike trains, LMCMs have their ASCII codes, and 
both can output coherent utterances about colours, horses, linguists, 
fairground rides and a whole host of other things, in a way that seems 
to indicate that the system in question knows what it's talking about.
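
To make that comparison literal, another toy sketch, again in Python and 
again only illustrative: the word "red", as an LMCM receives it, is just 
a stream of ASCII numbers, and those numbers can be drawn in exactly the 
same spike-train style:

word = "red"
ascii_codes = [ord(c) for c in word]                   # [114, 101, 100]
bits = "".join(format(c, "08b") for c in ascii_codes)
spike_style = bits.replace("1", "|").replace("0", "_")

print(ascii_codes)   # the abstract numbers the model actually gets
print(spike_style)   # the same information drawn as a spike-train-like string

Neither stream contains any redness; both are ungrounded symbol 
sequences until something correlates them with everything else the 
system has encountered.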

So your argument can be applied to human brains, as well as to LMCMs. 
You are effectively arguing that *we* don't understand things because 
our brains are 'just' making correlations between abstract, ungrounded 
streams of binary signals.

Ben

