[ExI] Language models are like mirrors

Ben Zaiboc ben at zaiboc.net
Mon Apr 3 10:07:26 UTC 2023


On 02/04/2023 18:46, Gordon Swobe wrote:
> Jason,
>
> Presumably ChatGPT-4 has processed most, if not all, of the literature on 
> AI, linguistics, epistemology, and philosophy in general. Why does it 
> say this about its own limitations compared to humans?
>
>> ChatGPT-4: The symbol grounding problem refers to the difficulty in 
> connecting abstract symbols (such as words or concepts) to their 
> real-world referents. While ChatGPT can simulate understanding by 
> generating human-like text based on its training data, it doesn't 
> possess the direct experience or sensory input that humans have to 
> ground these symbols in reality.
>
> The significance of this limitation has several implications:
>
> Lack of true understanding: ChatGPT doesn't have a genuine 
> understanding of the world, as it relies on patterns and associations 
> found in the text it was trained on. It can't form new connections 
> based on direct experience, which may result in limitations in 
> problem-solving or abstract reasoning.
> ---
>
> [it continues with more implications of this limitation, but this lack 
> of true understanding is the first and, in my view, the most important]


I don't know if someone else has already noted this (I'm still catching 
up on the recent flood of posts), but don't you consider it ironic that 
you are using the system's own apparent understanding of itself to show 
that it doesn't understand things?

Ben
