[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 17 15:07:12 UTC 2023

On Mon, Apr 17, 2023 at 2:06 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Presumably, Gordon, you think that there can be no hope of ever
> communicating with aliens (if they exist). All we can do is send them
> 'meaningless' symbols encoded in various ways.

Adrian (one of the few people in this group who seem to understand what
I've been trying to say) brought up this same subject and pointed out how
the "first contact" problem in science fiction resembles the problem of
how language models could possibly understand word meanings without
access to the referents from which words derive their meanings. There are
many similarities, but I think also an important difference: presumably an
alien species would consist of conscious beings with minds like ours,
which I think would open up some possible means of communication.

If I thought language models like GPT-4 were everything we mean by
"conscious minds" then probably I would not be making this argument about
language models. However, I think conscious minds are more than mere
language models running on digital computers.

