[ExI] Symbol Grounding

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 28 05:10:07 UTC 2023


On Thu, Apr 27, 2023 at 7:58 PM Stuart LaForge via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

>
> Ok, wise ones. Interpret this experiment with ChatGPT:
>
> -------------begin--------------
> Stuart: Create and define a word that means "to turn your mind inside out."
>
> ChatGPT: The word I would suggest for "to turn your mind inside out"
> is "mentivert."
>
> Mentivert can be defined as the act of flipping or reversing one's
> thought process or mental state, causing a complete change in
> perspective or mindset. It implies a deep introspection or
> self-reflection, leading to a thorough examination and re-evaluation
> of one's beliefs and values.
>
> For example, "After experiencing a series of setbacks, he had to
> mentivert and question his approach to life in order to move forward."
> ---------end----------
>
> Does ChatGPT understand the meaning of the word/symbol "mentivert"?
> Is its understanding of the word it created grounded?


I think you should ask GPT whether it considers an AI coining this
portmanteau "mentivert" to be evidence that the AI has solved the symbol
grounding problem, or has somehow overcome the fact that language models
have no access to the referents from which words derive their meanings.
And if you don't like its answer, argue with it. :)

-gts