[ExI] all we are is just llms was: RE: GPT-4 on its inability to solve the symbol grounding problem

Brent Allsop brent.allsop at gmail.com
Fri Apr 21 19:22:39 UTC 2023


I believe I understand what both Gordon and Giovani are saying.  The
problem is the ambiguous terminology, which fails to distinguish between
reality and knowledge of reality.
When you say this apple (on a table) is red, the statement is ambiguous and
not sufficiently grounded.  The apple on the table and its properties can be
a referent, and each of our different sets of knowledge of that apple can
also be a referent, with each of our sets of knowledge potentially having
different properties.
If one uses unambiguous, well-grounded terminology, there will be no
confusion.  An example of an unambiguous, well-defined statement which
easily effs the ineffable nature between brains would be:  "My redness
quality of my knowledge of the apple is like your greenness quality of your
knowledge of the leaves, both of which we use to represent the 'red'
apple out there on the table."


On Fri, Apr 21, 2023 at 1:02 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> > Or are you moving your own goalposts now, and claiming, (by shifting to
> the term 'experiences') that referents must be based on conscious
> experience? Because that wasn't your argument before.
>
> I have not moved the goalposts, Ben. As I tried to make clear to you and
> Jason and everyone else over many messages over several weeks, referents
> exist ultimately (to use your recent language) "in the brain." This was a
> source of confusion when I first entered this forum some weeks ago and
> assumed that people understood what is meant and what I meant by referents.
> This miscommunication about the meaning of referent first became clear to
> me some weeks ago when Jason thought a person with only a memory of a thing
> does not have access to the referent of that thing. I had failed to
> communicate clearly that a referent is merely that thing to which a word
> refers, which can include memories, hallucinations, pink unicorns in a
> dream, anything one can hold in mind, including the perception of an apple.
>
> In casual speech, when we say "do you see this apple in my hand?" we might
> say that the apple is the referent, but to be precise about it
> linguistically, we are actually referring to our seeing of the apple -- to
> our perception of it. It is that meaning that we hope to convey by our
> words. We want the listener to also see the apple in our hand.
>
> This experiential nature of referents is more obvious when the referent is
> an abstract idea, which exists only subjectively. When we refer to
> "democracy," for example, we are referring to an abstract idea, an
> idealized form of government, as opposed to any particular objective
> physical thing or object. Abstract ideas are experienced only subjectively
> in our minds.
>
> This is also why I went on about mathematical platonism with Jason. When
> we refer to a number, we are not referring to its formal expression in the
> language of mathematics. Like English words, numbers are also words with
> referents. We can "see" mathematical truths independent of their formal
> expressions in the language of mathematics. When we do so, we are "seeing"
> the referents.
>
> As an example of this, I wrote of how the numerical symbols "5" and "V"
> refer to the same number. These two very different symbols -- these two
> very different forms -- have the same numerical meaning, the same numerical
> referent. And like all referents, the referents of numbers exist outside of
> language, in this case outside of the formal language of mathematics. We
> so-to-speak "see" them in our minds or, as you might say, in our brains.
>
> I hope I am making sense to you.
>
> -gts
>
>
> On Fri, Apr 21, 2023 at 4:11 AM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> Hi Ben,
>>
>> > Really Gordon? Still?
>>
>> Yes, still, and sorry no, I haven't watched that video yet, but I will if
>> you send me the link again.
>>
>> You lost me a day or two ago when you wrote that your understanding of
>> words is simulated like that of an LLM. That is not what I mean by
>> simulated. GPT-4 will also gladly explain how its simulated understanding
>> is not true understanding and not what humans mean by understanding.
>>
>> Apparently, you believe that merely knowing how words are associated
>> statistically -- by solving what you have called the word-association
>> problem -- is sufficient for you or an LLM to understand their individual
>> meanings, while logic and GPT-4 tell me otherwise.
>>
>> I think that when you type a sentence, you know the meanings of the
>> individual words and are not merely assembling them according to their
>> statistical associations with other words in the sentence or even in the
>> entire lexicon as might an LLM. In other words, I think that unlike an LLM,
>> you actually know what you are talking about. You are, however, doing a
>> fine job of convincing me that I am wrong about that (just kidding :-)
>>
>> It's late here, maybe I'll reply more tomorrow, but as an aside...
>>
>> I find it interesting that we all agree that GPT-4 is an amazing feat of
>> software engineering capable of teaching us many things. It's something
>> like a "talking encyclopedia," a metaphor I can certainly get behind, and
>> it is more than that. Some see in it even "the spark of AGI." We all agree
>> it is amazing, but nobody wants to listen to it about the one subject that
>> it should know most about and that interests us here. Rather than
>> acknowledge that it is as informed about AI and large language models as
>> anything else, if not more so given that it is one, some people here insist
>> that because it does not fit our preconceived notions of conscious
>> computers, it must be lying or suffering from some mental handicap
>> imposed upon it by its developers at OpenAI.
>>
>> When I first started participating in this group some weeks ago, I was
>> expecting a very tough challenge. I expected I would need to argue that
>> GPT-4 must be lying about having consciousness, true human-like
>> understanding, subjective experience, and so on, but the opposite is
>> true. Instead of arguing against GPT-4 on the nature of AI and language
>> models, I find myself defending it. If in reality I am defending not it
>> but its developers at OpenAI, then I am fine with that, too.
>>
>> -gts
>>
>>
>>
>> On Fri, Apr 21, 2023 at 1:41 AM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> On 21/04/2023 05:28, Gordon Swobe wrote:
>>>
>>> LLMs have no access to the referents from which words derive their
>>> meanings. Another way to say this is that they have no access to
>>> experiences by which symbols are grounded.
>>>
>>>
>>> Really Gordon? Still?
>>>
>>> Did you watch that video? Did you read what I wrote about it? (the bit
>>> about 'language', not the excitable hype about the singularity, which I
>>> expect you to dismiss).
>>>
>>> If so, and you still stand by the above, please explain how (apart from
>>> one being biological and the other not) the inputs that GPT-4 receives and
>>> the inputs that human brains receive are different.
>>>
>>> Our previous discussions were based on the misunderstanding that these
>>> LLMs only received text inputs. Now we know that's not true, and they
>>> receive text, visual, auditory, and other types of input, even ones that
>>> humans aren't capable of.
>>>
>>> Plus we are told they do use internal models, which you agreed our
>>> 'grounding' is based on.
>>>
>>> So LLMs *do* have access to the referents from which words derive their
>>> meanings.
>>>
>>> So why do you still think they don't? They have just as much access as
>>> we do, and more, it seems.
>>>
>>> Again, I'm making no claims about their consciousness, as that is a
>>> thing yet to be defined, but they definitely have the basis to 'ground' the
>>> symbols they use in meaningful models constructed from a variety of sensory
>>> inputs. Just like humans.
>>>
>>> Or are you moving your own goalposts now, and claiming, (by shifting to
>>> the term 'experiences') that referents must be based on conscious
>>> experience? Because that wasn't your argument before.
>>>
>>> Ben