[ExI] Symbol Grounding

Jason Resch jasonresch at gmail.com
Wed Apr 26 18:17:43 UTC 2023

On Wed, Apr 26, 2023, 12:35 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Computers have no human brains, or sense organs for that matter, which are
> required for symbols to be grounded to their referents, and so the
> question remains how a language model running on a digital computer could
> possibly know the meanings of words in the corpus.

It's not exactly obvious to me where the environment, senses, and memory
end, and where the mind begins. Is there any difference, fundamentally,
between the information channel of the path a photon took traveling into
your eye from a distant star vs. the information channel from your retina
over the optic nerve? Both are just transmissions of information. Does
one's mind reach to the stars when it looks up at night? Or is the mind
some smaller twisted corner within the brain, where it turns around to look
at itself?

In either case, the same question arises with the LLM, which has an
information channel to the outside world, albeit one with a circuitous
path, coming from physical objects through senses, through human language
centers and out as bit strings, but it's still an information channel to
the world. What then, might an LLM see?

