<div dir="ltr">> Or are you moving your own goalposts now, and claiming, (by shifting to the term 'experiences') that referents must be based on conscious experience? Because that wasn't your argument before.<br><br>I have not moved the goalposts, Ben. As I tried to make clear to you and Jason and everyone else over many messages over several weeks, referents exist ultimately (to use your recent language) "in the brain." This was a source of confusion when I first entered this forum some weeks ago and assumed that people understood what is meant and what I meant by referents. This miscommunication about the meaning of referent first became clear to me some weeks ago when Jason thought a person with only a memory of a thing does not have access to the referent of that thing. I had failed to communicate clearly that a referent is merely that thing to which a word refers, which can include memories, hallucinations, pink unicorns in a dream, anything one can hold in mind, including the perception of an apple.<br><br>In casual speech, when we say "do you see this apple in my hand?" we might say that the apple is the referent, but to be precise about it linguistically, we are referring actually to our seeing of the apple -- to our perception of it. It is that meaning that we hope to convey by our words. We want the listener to also see the apple in our hand.<br><br>This experiential nature of referents is more obvious when the referent is an abstract idea, which exist only subjectively. When we refer to "democracy," for example, we are referring to an abstract idea, an idealized form of government, as opposed to any particular objective physical thing or object. Abstract ideas are experienced only subjectively in our minds. <br><br>This is also why I went on about mathematical platonism with Jason. When we refer to a number in the language of mathematics, we are not referring to its formal expression in the language of mathematics. Like English words, numbers are also words with referents. We can "see" the truth of mathematical truths independent of their formal expressions in the language of mathematics. When we do so, we are "seeing" the referents.<br><br>As an example of this, I wrote of how the numerical symbols "5" and "V" refer to the same number. These two very different symbols -- these two very different forms -- have the same numerical meaning, the same numerical referent. And like all referents, the referents of numbers exist outside of language, in this case outside of the formal language of mathematics. We so-to-speak "see" them in our minds or, as you might say, in our brains.<br><br>I hope I am making sense to you.<br><br>-gts <br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 21, 2023 at 4:11 AM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Ben,<br><br>> Really Gordon? Still?<br><br>Yes, still, and sorry no, I haven't watched that video yet, but I will if you send me the link again. <br><br>You lost me a day or two ago when you wrote that your understanding of words is simulated like that of an LLM's. That is not what I mean by simulated. GPT-4 will also gladly explain how its simulated understanding is not true understanding and not what humans mean by understanding. 
<br><br>Apparently, you believe that merely knowing how words are associated statistically -- by solving what you have called the word-association problem -- is sufficient for you or an LLM to understand their individual meanings, while logic and GPT-4 tell me otherwise.<br><br>I think that when you type a sentence, you know the meanings of the individual words and are not merely assembling them according to their statistical associations with other words in the sentence, or even in the entire lexicon, as an LLM might. In other words, I think that unlike an LLM, you actually know what you are talking about. You are, however, doing a fine job of convincing me that I am wrong about that (just kidding :-)<br><br>It's late here; maybe I'll reply more tomorrow, but as an aside...<br><br>I find it interesting that we all agree that GPT-4 is an amazing feat of software engineering, capable of teaching us many things. It's something like a "talking encyclopedia," a metaphor I can certainly get behind, and it is more than that. Some even see in it "the spark of AGI." We all agree it is amazing, but nobody wants to listen to it on the one subject it should know the most about, the very subject that interests us here. Rather than acknowledge that it is as well informed about AI and large language models as about anything else, if not more so given that it is one, some people here insist that because it does not fit our preconceived notions of conscious computers, it must be lying or suffering from some mental handicap imposed upon it by its developers at OpenAI.<br><br>When I first started participating in this group some weeks ago, I was expecting a very tough challenge. I expected I would need to argue that GPT-4 must be lying about having consciousness, true human-like understanding, subjective experience, and so on, but the opposite is true. Instead of arguing against GPT-4 on the nature of AI and language models, I find myself defending it. If in reality I am defending not it but its developers at OpenAI, then I am fine with that, too.<br> <br>-gts <br><br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 21, 2023 at 1:41 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<br>
<div>On 21/04/2023 05:28, Gordon Swobe
wrote:<br>
</div>
<blockquote type="cite">LLMs
have no access to the referents from which words derive their
meanings. Another way to say this is that they have no access to
experiences by which symbols are grounded. </blockquote>
<br>
Really Gordon? Still?<br>
<br>
Did you watch that video? Did you read what I wrote about it? (the
bit about 'language', not the excitable hype about the singularity,
which I expect you to dismiss).<br>
<br>
If so, and you still stand by the above, please explain how (apart
from one being biological and the other not) the inputs that GPT-4
receives and the inputs that human brains receive are different.<br>
<br>
Our previous discussions were based on the misunderstanding that
these LLMs receive only text inputs. Now we know that's not true:
they receive text, visual, auditory, and other types of input, even
ones that humans aren't capable of receiving.<br>
<br>
Plus, we are told they do use internal models, which you agreed our
own 'grounding' is based on.<br>
<br>
So LLMs <b>do</b> have access to the referents from which words
derive their meanings.<br>
<br>
So why do you still think they don't? They have just as much access
as we do, and more, it seems.<br>
<br>
Again, I'm making no claims about their consciousness, as that is a
thing yet to be defined, but they definitely have the basis to
'ground' the symbols they use in meaningful models constructed from
a variety of sensory inputs. Just like humans.<br>
<br>
Or are you moving your own goalposts now, and claiming (by shifting
to the term 'experiences') that referents must be based on conscious
experience? Because that wasn't your argument before.<br>
<br>
Ben<br>
</div>
</blockquote></div></div>
</blockquote></div>