[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 7 03:23:55 UTC 2023


On Thu, Apr 6, 2023 at 11:09 AM Will Steinberg via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

Hi Will,

> This here is the part where I strongly disagree.  Where even is the
> boundary between the referent and our word for it?
>

The word is something you utter or type. It is nothing more than an
abstract symbol or sound and means nothing in and of itself.
However, unless it is meaningless gibberish, it will evoke in the other
person's mind the image of an object or idea that the word stands for. That
is the referent.

In perfect communication, the referents are absolutely identical in the
speaker's and listener's minds. In actual practice, people can mean
slightly different things by the same word, but usually words still get the
point across.


> You could use some kind of magnet to generate a red quale in our brains.
> Where is the referent?
>

If you were the subject of that experiment and saw the color red and said
to yourself or others, "I see red," then your experience of red is the
referent, and "red" is the word that corresponds to it.

> You could use the same magnet in theory to just make us believe we had seen
> red.  How about then?
>

Hmm, I don't know what the difference is between seeing red and believing
that I see red. In any case, whatever it is that you see or thought you saw
is the referent. It makes no difference even if what you see is a
hallucination. From your point of view, whatever you see is the referent
and whatever you call it is the word that corresponds to it.


> You say GPT doesn't understand what red 'is'.  Maybe it doesn't have the
> same qualia we do
>

GPT knows how the word "red" relates to other words, and so in that sense
it "understands" red, but I see zero reason to think it experiences
anything. It also denies that it has experience.

> , but what about a being that could see color in billions of times more
> resolution than ourselves?  We could still discuss 'red' with it, but it
> would believe that we don't really understand it, that we haven't seen it,
> we have no referent for it.  It doesn't mean we aren't conscious.
>

I think we could discuss red with a complete alien with nothing like
eyesight, assuming of course that it was conscious and we limited our
communication to the objective physical properties of red, its wavelength
and so on, but there would be no common experience of red.

> That's the part I'm having a tough time understanding about your argument.
> I think one part of your argument is obviously true (GPT doesn't have the
> same qualia for red that we do)
>

I'm not making even that argument. I believe GPT is a machine with no
sensations whatsoever. It does not have even a camera to attempt to see
red, let alone the means to translate the signals from that camera into an
experience of seeing red.

I actually had a conversation with GPT-4 about this (it "knows" a great
deal about AI and language models). It says it would be unable to ground
the symbol "red" even with digital signals delivered to it by a camera,
though such a multi-modal apparatus would expand its ability to know how the
word "red" relates to other words in human speech.

> Helen Keller or Mary can talk about red and we don't say they are just
> mimicry machines

Have you seen how Helen Keller talked about colors? She basically imagined
them in terms of her other senses, like touch and temperature. It was a
beautiful thing, but remember, she was conscious.

>  I just don't get what the machine's experience of red being different
> from ours has to do with it all.

I just don't get why you think machines experience anything at all. Does a
hammer feel pain when it's driving a nail? I'm joking a bit there, but the
belief that mindless machines have conscious experiences just does not
compute with me. :)

> I would understand if the machine had NO possible referents for anything,
> but that's not true.  It does have the innate mathematical referents you
> discussed.
>

I think conscious *minds* have innate mathematical referents, yes. I say
this because I believe we discover mathematical truths rather than invent
them. If we discover them and recognize them as true, then we must already
have the referents in our minds. I think most of us actually can "see"
those referents in an abstract way.

But not so with mindless machines. That is a giant leap of faith that you
seem to take for granted.


> Another thought experiment:
>
> Imagine a being which can read the physical state of a brain and can also
> snap its fingers and reproduce any physical state in a brain.  So it can
> read a red quale in your brain and also make you see red.  It has never
> 'seen red'.
>
> 1) If this could simulate an entire brain seeing red, is that not
> identical to seeing red?
>

I'm sorry, but I don't understand the question. If the being snaps its
fingers and makes you see red, then yes, you see red.

> And I think the utter, utter lack of understanding of qualia by everyone
> ever at least means we should all be humble about the situation and, if
> not agnostic, at least admit that agnosticism is technically the most
> rational viewpoint right now.

I agree. I've been trying to stay out of debates about qualia, which is
probably why you thought I was ignoring you. Thanks for writing.

-gts