[ExI] Symbol Grounding

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 29 06:48:20 UTC 2023


On Sat, Apr 29, 2023 at 12:24 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I don't think I'd go this far. The fact that GPT-4 is "merely
> generating symbols that it has a high degree of confidence will have
> meaning to us" says, to me, that it has the ability to model exactly
> that meaning, and to know what that meaning is.
>

Yes, it models what meaning is to us, but I think it has no conscious
understanding of those meanings. GPT-4 reports the same about language
models generally:

Prompt:
You are an expert in the field of AI and large language models and will
answer as such. Your answers will be concise, consisting of a simple "yes"
or "no" followed by no more than one brief paragraph of explanation.

Do large language models have the ability to have true understandings of
individual words?

GPT-4:
No, large language models do not have true understanding of individual
words. They are based on pattern recognition and statistical associations
between words, which enables them to generate coherent and contextually
appropriate responses, but they lack actual comprehension or consciousness.
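
To illustrate the "statistical associations between words" GPT-4
mentions, here is a toy sketch of my own (a bigram model in Python,
vastly simpler than a transformer, and purely illustrative): it strings
words together from raw co-occurrence counts alone, with nothing
anywhere in it that could be called understanding.

import random
from collections import Counter, defaultdict

# Tiny corpus; count which word follows which.
corpus = "the cat sat on the mat and the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Generate text by sampling each next word from follower counts.
def generate(word, length=8):
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat and the cat sat"

The output can look locally coherent, yet the model manipulates only
symbol statistics, which is the distinction at issue here.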

-gts