[ExI] all we are is just llms was: Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 21 03:14:10 UTC 2023

We don't agree on this referent business; that is already established.
As for the Bengali matter, it is possible that the counterclaim by Mitchell
is bogus, given her personal issues with Google; she probably misunderstood
what was said. I went back and listened to the interview, not just with
Google's CEO but also with another manager, who said that the AI had been
given very few prompts in Bengali and from those derived the entire
language. That seems difficult to believe, but not impossible.

On Thu, Apr 20, 2023 at 7:29 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 20, 2023 at 8:12 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>> I mentioned this claim because it came directly from Google's CEO. It is
>> not a scientific claim and it is not made in a scientific article, so
>> some level of skepticism is needed. At the same time, and as expected,
>> Gordon is jumping on it to discredit supporters of the emergent
>> capabilities of AIs.
> If you would only read what I have written, you would know that I do not
> deny that emergent properties might explain some of the amazing results we
> see. What I do deny is that LLMs have a conscious understanding of the
> meanings of the words they input and output. LLMs have no access to the
> referents from which words derive their meanings. Another way to say this
> is that they have no access to the experiences by which symbols are
> grounded.
> GPT-4 agrees completely and claims, quite understandably, that it lacks
> consciousness.
> -gts
