<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 21, 2023, 6:22 AM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hi Ben,<br><br>> Really Gordon? Still?<br><br>Yes, still, and sorry no, I haven't watched that video yet, but I will if you send me the link again. <br><br>You lost me a day or two ago when you wrote that your understanding of words is simulated like that of an LLM's. That is not what I mean by simulated. GPT-4 will also gladly explain how its simulated understanding is not true understanding and not what humans mean by understanding. <br><br>Apparently, you believe that merely knowing how words are associated statistically -- by solving what you have called the word-association problem -- is sufficient for you or an LLM to understand their individual meanings, while logic and GPT-4 tell me otherwise. <br><br>I think that when you type a sentence, you know the meanings of the individual words and are not merely assembling them according to their statistical associations with other words in the sentence or even in the entire lexicon as might an LLM. In other words, I think that unlike an LLM, you actually know what you are talking about. You are, however, doing a fine job of convincing me that I am wrong about that (just kidding :-)<br><br>It's late here, maybe I'll reply more tomorrow, but as an aside...<br><br>I find it interesting that we all agree that GPT-4 is an amazing feat of software engineering capable of teaching us many things. It's something like a "talking encyclopedia," a metaphor I can certainly get behind, and it is more than that. Some see in it even "the spark of AGI." We all agree it is amazing, but nobody wants to listen to it about the one subject that it should know most about and that interests us here. Rather than acknowledge that it is as informed about AI and large language models as anything else, if not more so given that it is one, some people here insist that because it does not fit our preconceived notions of conscious computers that it must be lying or suffering from some mental handicap imposed upon it by its developers at OpenAI. <br></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">This is another reason to watch the video Ben gave. The researcher admits it was dumbed down by OpenAI's application of safety training, which even had the effect of handicapping it's ability to draw unicorns.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br>When I first started participating in this group some weeks ago, I was expecting a very tough challenge. I expected I would need to argue that GPT-4 must be lying about it having consciousness and true human-like understanding and consciousness and subjective experience and so on, but the opposite is true. Instead of arguing against GPT-4 on the nature of AI and language models, I find myself defending it. 
If in reality I am defending not it but its developers at OpenAI, then I am fine with that, too.<br></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">You can't use OpenAI's GPTs' insistence that they aren't conscious as indicative of anything, when at the same time Character.ai's GPTs insist that they are conscious.</div><div dir="auto"><br></div><div dir="auto">See if you can convince the Character.ai LaMDA that it's not conscious; I would like to see how that conversation goes:</div><div dir="auto"><br></div><div dir="auto"><a href="https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc">https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc</a></div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 21, 2023 at 1:41 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<br>
<div>On 21/04/2023 05:28, Gordon Swobe
wrote:<br>
</div>
<blockquote type="cite">LLMs
have no access to the referents from which words derive their
meanings. Another way to say this is that they have no access to
experiences by which symbols are grounded. </blockquote>
<br>
Really Gordon? Still?<br>
<br>
Did you watch that video? Did you read what I wrote about it? (the
bit about 'language', not the excitable hype about the singularity,
which I expect you to dismiss).<br>
<br>
If so, and you still stand by the above, please explain how (apart
from one being biological and the other not) the inputs that GPT-4
receives and the inputs that human brains receive are different?<br>
<br>
Our previous discussions were based on the misunderstanding that
these LLMs received only text inputs. Now we know that's not true,
and they receive text, visual, auditory, and other types of input,
even ones that humans aren't capable of receiving.<br>
<br>
Plus we are told they do use internal models, which you agreed
our 'grounding' is based on.<br>
<br>
So LLMs <b>do</b> have access to the referents from which words
derive their meanings.<br>
<br>
So why do you still think they don't? They have just as much access
as we do, and more, it seems.<br>
<br>
Again, I'm making no claims about their consciousness, as that is a
thing yet to be defined, but they definitely have the basis to
'ground' the symbols they use in meaningful models constructed from
a variety of sensory inputs. Just like humans.<br>
<br>
Or are you moving your own goalposts now, and claiming (by shifting
to the term 'experiences') that referents must be based on conscious
experience? Because that wasn't your argument before.<br>
<br>
Ben<br>
</div>
</blockquote></div></div>
</blockquote></div></div></div>