[ExI] all we are is just llms was: Re: GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 21 02:28:25 UTC 2023


On Thu, Apr 20, 2023 at 8:12 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> I mentioned this claim because it came directly from Google's CEO. It is
> not a scientific claim, and it is not mentioned in a scientific article,
> so some level of skepticism is needed. At the same time, Gordon is jumping
> on it to discredit supporters of the emergent capabilities of AIs, as
> expected.
>

If you would only read what I've written, you would know that I do not deny
that emergent properties might explain some of the amazing results we see.
What I do deny is that LLMs have any conscious understanding of the meanings
of the words they input and output. LLMs have no access to the referents from
which words derive their meanings. Another way to say this is that they have
no access to the experiences by which symbols are grounded.

GPT-4 agrees completely and claims, quite understandably, that it lacks
consciousness.

-gts