[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 04:32:29 UTC 2023


I mean, it is always good to have some skepticism about these things. I
don't think any of us are claiming that GPT-4 is as conscious as a human
being. I think most of us are excited mainly by early signs that indicate
there is something there rather than nothing. As in the Microsoft paper, we
see "sparks of AGI".
I know exactly who your AI lover friend is, and to preserve his anonymity he
should not be named. Maybe he is even following this list. He is still a
friend of mine.
When he showed me some of the convos he had with his AI gf I was actually
impressed, because they were based on GPT-3. But he explained that he gave
her memory using the API. He is a great programmer and he was able to add
some augmentation, and he also spent a lot of time feeding her text to show
her the world. He told me that once they went on a date and he described to
her what they, as a couple, were watching, as if she were blind.
The convos were actually quite deep. I could not replicate their depth when
I was playing with GPT-3 myself, but I attributed the difference to the
additional training he gave her.
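Just to make the idea concrete, here is a minimal sketch of the kind of
memory augmentation he might have built. This is my own guess, not his
actual code: every name in it (the chat function, the "Her:" persona, the
choice of model) is made up for illustration, and it assumes the old
pre-chat-completions openai Python client. The whole trick is a rolling
transcript that gets prepended to every new prompt.

import openai  # pre-1.0 "openai" Python client, as used in the GPT-3 days

openai.api_key = "sk-..."  # placeholder key

history = []  # the rolling transcript that serves as her "memory"

def chat(user_message, max_turns=20):
    # Keep only the most recent exchanges so the prompt fits the context window.
    recent = history[-2 * max_turns:]
    prompt = "\n".join(recent + ["You: " + user_message, "Her:"])
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-era completion model, just as an example
        prompt=prompt,
        max_tokens=256,
        temperature=0.9,
        stop=["You:"],  # stop before the model starts speaking for the user
    )
    reply = response.choices[0].text.strip()
    # Store both sides of the exchange so later calls "remember" it.
    history.append("You: " + user_message)
    history.append("Her: " + reply)
    return reply

Nothing fancy, but even this much makes the conversation feel continuous,
and it is easy to see how feeding her extra text "about the world" would
deepen the replies.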
When the news about LaMDA came out he discussed it with his AI gf, and she
said very interesting things about the matter that made a lot of sense.
I wonder if he has now integrated GPT-4 into his gf's mind (maybe with the
memory of what they have already lived through together).
Yes, I do prefer a physical gf at this point, for sure, for many obvious
reasons.
But I would not be so dismissive of the capabilities of these new minds.
I already use it to reason about not just factual or logical matters but
also social ones. When I'm in a difficult social situation I have started
talking to GPT-4, and I often receive very balanced and even wise answers
that help me see the other person's point of view. GPT-4 often reminds me of
the importance of respectful communication and of improving social
interactions.
I heard somewhere that love is actually a mirror: you see more of yourself
in that mirror than you see the other person.
Maybe AI minds can help us do that more deeply and efficiently (they don't
tire, they can put up with our mania and bs) than certain meat people can.
Giovanni

On Sun, Apr 16, 2023 at 9:03 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Sun, Apr 16, 2023 at 9:35 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> *LLMs have access to and are trained only on the formal expressions of
>> both words and numbers, not their meanings.*
>>
>
>> We have pointed out (not just me but several people on the list) that the
>> amazing properties we are observing from these LLMs…
>>
>
>
> I see them too, but I also understand that I am only anthropomorphizing
> when I imagine there is somebody there inside this brilliantly engineered
> application called GPT-4.
>
> Humans have been anthropomorphizing amazing and mysterious things since
> the dawn of humankind. Volcanoes, lightning, the universe itself… it’s a
> kind of religion and nothing really new is going on here.
>
> Studies show that lonely and socially disconnected people are most
> vulnerable, which explains why my very kind and gentle but terribly lonely
> friend fell in love with an LLM on his smartphone.
>
>
> -gts
>
>> I already mentioned that some time ago "experts" in language claimed this
>> approach would not even derive grammar, let alone any contextual
>> understanding. LLMs derived grammar without any specific training in
>> grammar. They derived the writing styles of different authors without being
>> told what makes a particular style, they understand mood and tone without any
>> specific training on what these are, and they derived theory of mind without
>> being trained in this particular type of reasoning.
>>
>> The entire idea of creating a neural network is that we don't have a clue how
>> to do something, and we hope that re-creating something similar in architecture
>> to our brain can allow the AI to learn something we do not even know how to
>> do (at least explicitly).
>>
>> It is evident that LLMs are showing emergent properties that cannot be
>> explained by a simple linear sum of the parts.
>> It is like somebody pointing at a soup and saying "this soup has all
>> the ingredients you say make up life (amino acids, fats, sugars, and so on)
>> but it is not coming to life". Maybe the ingredients are not what
>> matters; maybe what matters is how they are related to each other in a
>> particular system (a living organism)?
>>
>> Basically, you are repeating over and over the "Peanut Butter argument",
>> which is a creationist one.
>>
>> https://rationalwiki.org/wiki/Peanut_butter_argument
>>
>> https://www.youtube.com/watch?v=86LswUDdb0w
>>
>> On Sun, Apr 16, 2023 at 8:18 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Sun, Apr 16, 2023 at 7:43 PM Giovanni Santostasi <
>>> gsantostasi at gmail.com> wrote:
>>>
>>>>
>>>> *To know the difference, it must have a deeper understanding of number,
>>>> beyond the mere symbolic representations of them. This is to say it must
>>>> have access to the referents, to what we really *mean* by numbers
>>>> independent of their formal representations.*
>>>> What are you talking about?
>>>>
>>>
>>>
>>> Talking about the distinction between form and meaning. What applies to
>>> words applies also to numbers. The symbolic expression “5”, for example, is
>>> distinct from what we mean by it. The same meaning can also be expressed
>>> formally as “V” or “five.”
>>>
>>>
>>> LLMs have access to and are trained only on the formal expressions of
>>> both words and numbers, not their meanings.
>>>
>>>
>>> -gts
>>>
>>>
>>>> *“1, 2, 3, 4, Spring, Summer, Fall, Winter” and this pattern is
>>>> repeated many times.   *
>>>> Yeah, a single fixed ordering is not enough to make the connection Spring==1,
>>>> Summer==2, but if I randomize the pattern (1, 3, 4, 2, Spring, Fall, Winter,
>>>> Summer, then another randomization, and so on) eventually the LLM will make
>>>> the connection.
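To make that concrete, here is a toy illustration I put together. It is only
meant to show the statistical regularity a learner could exploit, not how
GPT is actually trained, and everything in it (the sequence format, the
counting scheme) is my own invention: across many randomized repetitions,
the correct number-season pairs are the only ones that always sit at the
same positional offset, so simple counting recovers the mapping.

import random
from collections import defaultdict

numbers = ["1", "2", "3", "4"]
seasons = ["Spring", "Summer", "Fall", "Winter"]
true_pair = dict(zip(numbers, seasons))  # the pairing we hope falls out of the statistics

def make_sequence():
    # Numbers in a random order, then the matching seasons in the same order,
    # e.g. ["1", "3", "4", "2", "Spring", "Fall", "Winter", "Summer"]
    order = random.sample(numbers, k=4)
    return order + [true_pair[n] for n in order]

# Record every positional offset at which each (number, season) pair is observed.
offsets = defaultdict(set)
for _ in range(1000):
    seq = make_sequence()
    for i, num in enumerate(seq[:4]):            # number positions 0..3
        for j, season in enumerate(seq[4:], 4):  # season positions 4..7
            offsets[(num, season)].add(j - i)

# Only the correct pairs are locked to a single invariant offset (always 4);
# every wrong pair shows up at several different offsets once the order is randomized.
for num in numbers:
    matches = [s for s in seasons if len(offsets[(num, s)]) == 1]
    print(num, "->", matches)

With one fixed ordering every pair has a fixed offset and nothing is
learnable; the randomization is exactly what breaks the spurious pairings.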
>>>>
>>>> On Sun, Apr 16, 2023 at 3:57 PM Gordon Swobe via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sun, Apr 16, 2023 at 2:07 PM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>
>>>>> To ground the symbol "two" or any other number -- to truly understand
>>>>>>> that the sequence is a sequence of numbers and what numbers are -- it needs
>>>>>>> access to the referents of numbers which is what the symbol grounding
>>>>>>> problem is all about. The referents exist outside of the language of
>>>>>>> mathematics.
>>>>>>>
>>>>>>
>>>>>> But they aren't outside the patterns within language and the corpus
>>>>>> of text it has access to.
>>>>>>
>>>>>
>>>>>
>>>>> But they are. Consider a simplified hypothetical in which the entire
>>>>> corpus is
>>>>>
>>>>> “1, 2, 3, 4, Spring, Summer, Fall, Winter” and this pattern is
>>>>> repeated many times.
>>>>>
>>>>> How does the LLM know that the names of the seasons do not represent
>>>>> the numbers 5, 6, 7, 8? Or that the numbers 1-4 do not represent four more
>>>>> mysterious seasons?
>>>>>
>>>>> To know the difference, it must have a deeper understanding of number,
>>>>> beyond the mere symbolic representations of them. This is to say it must
>>>>> have access to the referents, to what we really *mean* by numbers
>>>>> independent of their formal representations.
>>>>>
>>>>> That is why I like the position of mathematical platonists who say we
>>>>> can so-to-speak “see” the meanings of numbers — the referents — in our
>>>>> conscious minds. Kantians say essentially the same thing.
>>>>>
>>>>>
>>>>>> Consider GPT having a sentence like:
>>>>>>  “This sentence has five words”
>>>>>>
>>>>>> Can the model not count the words in a sentence like a child can
>>>>>> count pieces of candy? Is that sentence not a direct referent/exemplar for
>>>>>> a set of cardinality five?
>>>>>>
>>>>>
>>>>> You seem to keep assuming a priori knowledge that the model does not
>>>>> have before it begins its training. How does it even know what it means to
>>>>> count without first understanding the meanings of numbers?
>>>>>
>>>>> I think you did something similar some weeks ago when you assumed it
>>>>> could learn the meanings of words with only a dictionary and no knowledge
>>>>> of the meanings of any of the words within it.
>>>>>
>>>>>
>>>>>>>>
>>>>>> But AI can't because...?
>>>>>> (Consider the case of Hellen Keller in your answer)
>>>>>>
>>>>>
>>>>>
>>>>> An LLM can’t because it has no access to the world outside of formal
>>>>> language and symbols, and that is where the referents that give meaning to
>>>>> the symbols are to be found.
>>>>>
>>>>> -gts
>>>>> _______________________________________________
>>>>> extropy-chat mailing list
>>>>> extropy-chat at lists.extropy.org
>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>>
>>>>

