[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Gordon Swobe gordon.swobe at gmail.com
Fri Mar 24 07:18:31 UTC 2023


You wrote to GPT: "A friend of mine says NLM or any other AI cannot
understand anything without concrete referents. I disagree; what arguments
could you give to support my position?"
That is not actually what I am saying. It makes no difference whether the
referent is "concrete" or not.

As GPT says in supposed defense of your position:

They can "simulate" understanding by leveraging their knowledge of how
certain words or concepts are used in different contexts.
That's right. Language models can *simulate* understanding. It is not real
understanding. It is simulated understanding. They mimic what understanding
looks like objectively.
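
To make the disagreement concrete, here is a minimal, purely illustrative
sketch in Python. It is a toy bigram model, nothing like the transformer
behind ChatGPT (which I am not claiming to describe); it only shows how a
program can emit plausible-looking word sequences from co-occurrence
statistics over word forms, with no access to anything the words refer to.

    # Toy bigram "language model": it records only which word forms follow
    # which in the training text. Purely illustrative.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    follows = defaultdict(list)               # word -> words seen after it
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        # Emit a plausible-looking sequence using only these statistics;
        # the program never sees a cat, a mat, or a rug.
        word, out = start, [start]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # e.g. "the dog sat on the mat and the rug"

Whether that kind of pattern-following counts as understanding, or only as
a simulation of it, is exactly the question in dispute.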

I remember writing at length on this list some 15 years ago about how the
digital simulation of a thing is not the same as the thing simulated.

-gts



On Fri, Mar 24, 2023 at 12:57 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> A friend of mine says NLM or any other AI cannot understand anything
> without concrete referents. I disagree; what arguments could you give to
> support my position?
>
> While it is true that NLMs and other AI models do not have direct
> experiences or access to concrete referents like humans do, they can
> still demonstrate a certain level of understanding of abstract concepts.
> Here are some arguments to support your position:
>
>    1. Pattern recognition and statistical relationships: NLMs, like
>    ChatGPT, are trained on vast amounts of text data, and they learn by
>    recognizing patterns and statistical relationships within that data.
>    They can generate responses that demonstrate an understanding of
>    abstract concepts based on these patterns, even without direct
>    experience of concrete referents.
>
>    2. Contextual understanding: Through their training, NLMs develop an
>    ability to understand context and respond appropriately to various
>    situations. They can "simulate" understanding by leveraging their
>    knowledge of how certain words or concepts are used in different
>    contexts.
>
>    3. Flexibility and adaptability: NLMs can adapt to new information and
>    generate responses that take into account various factors, including
>    the context, prior knowledge, and the specific question or statement
>    they are responding to. This adaptability allows them to address
>    abstract concepts and ideas, even without direct access to concrete
>    referents.
>
>    4. Success in various tasks: NLMs have shown success in a range of
>    natural language processing tasks, such as question-answering,
>    translation, and summarization, which involve understanding and
>    manipulation of abstract concepts. Their ability to perform well on
>    these tasks is evidence of their capacity to work with abstract ideas
>    without direct access to concrete referents.
>
>
> On Thu, Mar 23, 2023 at 11:44 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> I can already hear someone saying "but we will include photographs of
>> objects in the training so they have referents," but this still does not
>> do the trick. These digital photographs can be displayed to the human
>> operator of the chatbot, but the bot itself sees only 1s and 0s, ons and
>> offs. It can detect colors by wavelength, but still this is only digital
>> data. It does not see the colors. Likewise with shapes. It is turtles
>> (ones and zeros) all the way down, with no referents.
>>
>> -gts
>>
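
(To spell out the "only digital data" point with a toy sketch, not a claim
about how any real chatbot processes images: to a program, a photograph is
just an array of numbers, and "detecting red" is arithmetic on those
numbers.)

    # A 2x2 "photograph" as the program receives it: nothing here is red;
    # (255, 0, 0) is just data. Illustrative values only.
    image = [
        [(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)],
    ]

    def looks_red(pixel):
        # "Detecting" a color is only a numeric test on the data.
        r, g, b = pixel
        return r > 200 and g < 80 and b < 80

    print([[looks_red(p) for p in row] for row in image])
    # [[True, False], [False, False]]
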
>> On Fri, Mar 24, 2023 at 12:18 AM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>>> Nobody, least of all me, questions that GPT-4 will be capable of amazing
>>> feats, and that eventually these language models will surpass humans in
>>> terms of what we can call intelligence, or what I might for the sake of
>>> clarity prefer to call apparent intelligence. The question here is
>>> whether they will know what they are saying, given that they are trained
>>> only on the forms of words with no access to the meanings or referents.
>>>
>>> Adrian has made the excellent point a couple of times that this is like
>>> the first contact problem in science fiction, and actually like the first
>>> contact problem between any two cultures with completely different
>>> languages. Q: When Kirk and Spock beam down to a new planet with
>>> intelligent alien life, how will they learn to communicate? A: With
>>> referents.
>>>
>>> Spock will point to himself and say "Spock." Kirk will point to himself
>>> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock
>>> use these kinds of referents to initiate communication. If our alien
>>> friend wants to communicate, he will point to the rock and say "fwerasa"
>>> (or whatever his word for rock is). He will point to himself and say his
>>> name, and so on. Eventually, Spock and the alien will learn how to
>>> translate a few words, and from there the process of understanding
>>> begins.
>>>
>>> Now, what if they never beam down to the planet, and instead only
>>> listen to digital radio signals coming from it and send digital radio
>>> signals in return? No communication is possible, as there are no
>>> referents. It's all noise.
>>>
>>> -gts
>>>
