[ExI] Bard gets it
Brent Allsop
brent.allsop at gmail.com
Fri May 5 11:28:19 UTC 2023
[image: Subjective_CPU.png]
My way of thinking about this agrees with Jason's. Sense organs (eyes)
aren't required for a dreaming system to render the same models into our
subjective CPU of consciousness. And the referent of the word "redness" is
a quality of the model, in the CPU. There is only an isomorphic
modeling relationship between this false-colored knowledge of the
strawberry and the much more complex strawberry out there. So, even
though we do know the meaning of "redness," our ability to know the
meaning of this subjective MODEL of the strawberry out there is no
different from what a large language MODEL can have, simply by having a
different isomorphic model composed only of words instead of a model
composed of subjective properties.
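To make the isomorphism point concrete, here is a minimal, purely hypothetical
sketch (not anything an actual LLM or brain does; the names and values are my
own invention) of two models of the same strawberry that share structure but
differ in what they are composed of:

# Hypothetical sketch: two models of one strawberry, related only by a
# structure-preserving (isomorphic) mapping. All names and values are
# invented for illustration.

# A model composed of subjective properties (qualities).
quality_model = {
    "surface": "redness quality",
    "shape": "roundness quality",
    "taste": "sweetness quality",
}

# A model composed only of words: same structure, different stuff filling it.
word_model = {
    "surface": "the word 'red'",
    "shape": "the word 'round'",
    "taste": "the word 'sweet'",
}

# The isomorphism pairs each quality with its word-only counterpart.
# Either model can stand in for the far more complex strawberry out there.
isomorphism = {quality_model[k]: word_model[k] for k in quality_model}

for quality, word in isomorphism.items():
    print(f"{quality}  <->  {word}")

Both models refer to the same external strawberry; in this sketch, knowing one
is no more "direct" than knowing the other, which is the point I am trying to
make about models of words versus models of qualities.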
On Fri, May 5, 2023 at 4:06 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>
> On Thu, May 4, 2023, 9:38 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>
>> I’m hoping that you, Jason, can see now the difference between what GPT-4
>> and Bard and I call “simulated understanding,” (what I sometimes also call
>> “understanding” in scare quotes), and true understanding.
>>
>> Language models have no true understanding of words, or of the world in
>> general. They have only simulated understanding, what I have seen you call
>> the appearance of understanding. There is nobody there to have true
>> understanding.
>>
>> From this I see you wanting to draw some conclusions about p-zombies, but
>> LLMs do not even remotely resemble humans, zombified or otherwise. They
>> have no bodies or sense organs, to name the most obvious and relevant
>> differences. I think LLMs tell us nothing or next to nothing about
>> p-zombies.
>>
>> -gts
>>
>
>
> 1. What is required to have a sense organ?
>
> To me it simply requires some conduit of information from the environment.
> Does an LLM not have this?
>
> Jason
>
>
>>
>>
>>
>>
>>
>>
>> On Thu, May 4, 2023 at 6:41 PM Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> Hi Gordon,
>>> Yea, thanks for sending this.
>>> I agree with most of this, but think there might be some differences in
>>> my views.
>>> Studying what you are saying has helped me better develop and express my
>>> views.
>>> I'm working on a description, which I'll send on to find out if they are
>>> indeed different.
>>>
>>>
>>>
>>> On Thu, May 4, 2023 at 3:20 PM Gordon Swobe <gordon.swobe at gmail.com>
>>> wrote:
>>>
>>>> Brent and Jason,
>>>>
>>>> I value your insights and intellectual capacity, which is why I'm
>>>> sharing my thoughts with you both. When I presented my argument on ExI, it
>>>> seemed to elicit misunderstandings and hostility from others, rather than
>>>> the intelligent discussion I had hoped for. I could only take so much of it
>>>> and left.
>>>>
>>>> Bard is clearly not as well-trained and mature as GPT-4, but it
>>>> gets my argument. Like humble GPT-4, humble Bard understands that there
>>>> is an important distinction to be made between true understanding and
>>>> simulated understanding, and that these virtual friends of ours have no
>>>> true understanding of words.
>>>>
>>>> Me:
>>>> Is this paragraph of mine true?
>>>>
>>>> To have true understanding of words, one must have some familiarity
>>>> with the things or ideas they refer to. These things to which words refer
>>>> are called referents. This familiarity can come from one's own experiences,
>>>> from one's knowledge of the world, or from one's interactions with others.
>>>> Referents can be real or imaginary, for example unicorns we read about in a
>>>> story. Some words refer to other words, but those words also have
>>>> referents. Ultimately, the referents of all words exist outside of
>>>> language.
>>>>
>>>> Bard:
>>>> Yes, the paragraph is true. The paragraph is about the relationship
>>>> between words and their referents. It states that in order to have true
>>>> understanding of words, one must have some familiarity with the things or
>>>> ideas they refer to. These things to which words refer are called
>>>> referents. The paragraph goes on to say that this familiarity can come from
>>>> one's own experiences, from one's knowledge of the world, or from one's
>>>> interactions with others. It also states that referents can be real or
>>>> imaginary. Finally, the paragraph concludes by saying that ultimately, the
>>>> referents of all words exist outside of language.
>>>>
>>>> Here are some examples to illustrate the paragraph's points:
>>>>
>>>> The word "dog" refers to a real animal that many people are familiar
>>>> with.
>>>> The word "unicorn" refers to an imaginary animal that some people are
>>>> familiar with because they have read about it in stories.
>>>> The word "love" refers to a feeling that many people have experienced.
>>>> The word "democracy" refers to a form of government that many people
>>>> are familiar with.
>>>> As the paragraph states, all of these referents exist outside of
>>>> language. They are not created by language, but rather language is used to
>>>> refer to them.
>>>>
>>>> Me:
>>>> If that paragraph is true and large language models have no access to
>>>> anything except words and language, how can large language models have true
>>>> understanding of words?
>>>>
>>>> Bard:
>>>> That is a very good question. Large language models are trained on
>>>> massive amounts of text data, but they do not have any direct experience
>>>> with the world. They cannot see, hear, smell, taste, or touch anything.
>>>> They cannot interact with the world in any way. All they have is text.
>>>>
>>>> *This means that large language models can only simulate understanding
>>>> of words.* They can learn to associate certain patterns of words with
>>>> certain referents, but they cannot truly understand what those referents
>>>> are. They do not have any personal experience with them.
>>>>
>>>> For example, a large language model might be able to learn to associate
>>>> the word "dog" with the image of a dog. However, it will never truly
>>>> understand what a dog is like. It will never know what it feels like to pet
>>>> a dog, to play with a dog, or to be bitten by a dog.
>>>>
>>>> *So, while large language models can be very impressive, it is
>>>> important to remember that they are only simulating understanding.*
>>>>
>>>> -gts
>>>>
>>>