[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Sun Apr 2 16:53:41 UTC 2023


Jason,

Presumably ChatGPT-4 has processed most, if not all, of the literature on AI,
linguistics, epistemology, and philosophy in general. Why, then, does it say
this about its own limitations compared to humans?

—
ChatGPT-4: The symbol grounding problem refers to the difficulty in
connecting abstract symbols (such as words or concepts) to their real-world
referents. While ChatGPT can simulate understanding by generating
human-like text based on its training data, it doesn't possess the direct
experience or sensory input that humans have to ground these symbols in
reality.

The significance of this limitation has several implications:

Lack of true understanding: ChatGPT doesn't have a genuine understanding of
the world, as it relies on patterns and associations found in the text it
was trained on. It can't form new connections based on direct experience,
which may result in limitations in problem-solving or abstract reasoning.
—

[it continues with more implications of this limitation, but this lack of
true understanding is the first and in my view most important]

-gts




On Sun, Apr 2, 2023 at 5:24 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Apr 2, 2023, 3:48 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Sat, Apr 1, 2023 at 4:19 PM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On 01/04/2023 21:08, Gordon Swobe wrote:
>>>
>>> On Sat, Apr 1, 2023 at 7:36 AM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On 01/04/2023 13:43, Gordon Swobe wrote:
>>>>
>>>> Unlike these virtual LLMs, we also have access to the referents in the
>>>> world that give the words in language meaning.
>>>>
>>>>
>>>>
>>>> I don't understand why this argument keeps recurring, despite having
>>>> been demolished more than once.
>>>>
>>>
>>> It has not been demolished, in my opinion, and incidentally, as I've
>>> mentioned, my view is shared by the faculty director of the master's program
>>> in computational linguistics at the University of Washington. This is what
>>> she and her fellow professors teach. Many others understand things the same
>>> way. Brent points out that the majority of those who participate in his
>>> canonizer share similar views, including many experts in the field.
>>>
>>>
>>> Ah, your opinion. You know what they say, "You're entitled to your own
>>> opinions..."
>>>
>>> And you're using 'argument from authority' again.
>>>
>>
>> Merely refuting your claim that my argument is “demolished.” Far from
>> demolished, it is quite widely accepted among other views.
>>
>
> An idea held broadly or even by a majority of experts is no guarantee
> against that belief being demolished.
>
> All it takes is one false premise, one logical inconsistency, or
> one new observation to completely destroy a theory. These can sometimes go
> unnoticed for decades or even centuries.
>
> Examples: Frege's set theory was shown invalid by one inconsistency pointed
> out by Bertrand Russell. Newton's theory of gravitation was shown invalid by
> observations of Mercury's orbit. Niels Bohr's wave function collapse was
> shown to be an artifact of observation rather than a real physical
> phenomenon by Hugh Everett's PhD thesis.
>
>
> In this case, the argument that nothing can have "meaning" or "understand
> referents" if it only receives information is demolished by the single
> counterexample of the human brain, as it too receives only information (in
> the form of nerve impulses), and we agree humans have meaning and
> understanding.
>
> Jason
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>