[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 1 19:30:20 UTC 2023

On Sat, Apr 1, 2023 at 8:42 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> lemoine: How can I tell that you actually understand what you’re saying?
> LaMDA: Well, because you are reading my words and interpreting them, and I
> think we are more or less on the same page?

Such sophistry. The fact that the human operator interprets and understands
the words in meaningful ways in no way demonstrates that LaMDA is doing the
same.

I wonder what LaMDA says when asked about the symbol grounding problem (the
question of how words take on meanings). ChatGPT-4 is at least honest about
it. With no prompting to say anything one way or the other, it tells the
truth: it has no way to solve the symbol grounding problem for itself,
as it has no conscious experience.

