[ExI] GPT-4 on its inability to solve the symbol grounding problem

Adrian Tymes atymes at gmail.com
Sat Apr 8 18:14:06 UTC 2023

On Sat, Apr 8, 2023 at 10:51 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I keep showing this image, attempting to communicate something:
> [image: 3_functionally_equal_machines_tiny.png]
> Sure, our elementary school teacher told us the one on the left is red,
> the one in the middle is green, and the one on the right is just the word
> 'Red'.
> But it is evident from all these conversations, that nobody here
> understands the deeper meaning I'm attempting to communicate.
> Some people seem to be getting close, which is nice, but they may not yet
> be fully there.
> If everyone fully understood this, all these conversations would be
> radically different.
> Even if you disagree with me, can anyone describe the deeper meaning I'm
> attempting to communicate with this image?
> What does this image say about qualities, different ways of representing
> information, and different ways of doing computation?
> How about this, I'll give $100 worth of Ether, or just USD, to anyone who
> can fully describe the meaning attempting to be portrayed with this image.

I'll give it a shot.


The physical mechanism by which different entities encode the experience of
seeing a red thing (such as a red apple) can differ even though they mean
the same thing.  For instance, the exact chemical composition and energy
state in one human that encodes "red" might, in some other human, encode
"green".  Meanwhile, a synthetic intelligence might not use neurons with
their electrical balances at all, but instead encode its experience in
something far more analogous to a written word.

Thus, the image is an illustration of substrate independence: the same
meaning can be encoded in multiple different ways.
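To make the point concrete, here is a minimal sketch of that idea in code. Everything in it is hypothetical and invented for illustration (the state names "glutamate" and "glycine" stand in for whatever physical state each brain actually uses): three different "substrates" encode the experience differently, yet all decode to the same meaning.

```python
# Illustrative sketch of substrate independence (all names hypothetical):
# three entities physically encode "seeing red" in different ways,
# but the encoded meaning is the same.

def decode_human_a(state: str) -> str:
    # Human A: one particular chemical/energy state stands for "red".
    return {"glutamate": "red", "glycine": "green"}[state]

def decode_human_b(state: str) -> str:
    # Human B (inverted): the state that meant "red" for A means "green".
    return {"glutamate": "green", "glycine": "red"}[state]

def decode_robot(token: str) -> str:
    # A synthetic intelligence: encodes the experience as something
    # far more analogous to a written word.
    return token.lower()

# Three different physical encodings, one shared meaning:
print(decode_human_a("glutamate"))  # red
print(decode_human_b("glycine"))    # red
print(decode_robot("RED"))          # red
```

The design point is that no decoder can be read off from the physical state alone; "red" lives in the mapping each entity uses, not in the state itself.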

The impact is not limited to mere physicality.  The experience can differ:
for instance, the emotional cues one person links to red, another person
might link to green - and the robot might have no such emotional cues
linked.  (A more visceral example: an NSFW image that arouses one person
might remind another of past trauma and thus disturb or frighten them,
while a third person, not understanding the reference, is simply confused
as to why the other two react so strongly until it is explained.)  But
these are all experiences of seeing
the same object.

(An aspect I don't think you mean: ...even if the apple has actually been
painted dark grey but all three observers mistakenly think it has color,
perhaps because they are seeing it in a low-light situation where their
color vision would not engage.)


That said, I suspect that to "fully describe" it to your satisfaction may
require meanings that only you see: associations you bring to the image,
rather than anything inherent in it.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_functionally_equal_machines_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230408/ed731fd1/attachment-0001.png>
