[ExI] GPT-4 on its inability to solve the symbol grounding problem

Will Steinberg steinberg.will at gmail.com
Thu Apr 6 17:06:51 UTC 2023


On Thu, Apr 6, 2023 at 8:56 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Thu, Apr 6, 2023 at 6:16 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> How is it that the brain derives meaning when all it receives are nerve
>> signals? Even if you do not know, can you at least admit it stands as a
>> counterexample, as its existence proves that at least some things (brains)
>> *can* derive understanding from the mere statistical correlations of their
>> inputs?
>>
>
> I have answered this. We don’t know how the brain does it, but we do know
> that form is not meaning, i.e., that the form of a word does not contain
> its meaning. GPT-4 knows this also. It will tell you that it does not know
> the meanings of individual words as it has no conscious experience. It
> knows only how to assemble words together in patterns that the users find
> meaningful.
>

This is the part where I strongly disagree.  Where exactly is the
boundary between the referent and our word for it?

You could use some kind of magnet to generate a red quale in our brains.
Where is the referent?

In theory, you could use the same magnet just to make us believe we had
seen red.  What about then?

You say GPT doesn't understand what red 'is'.  Maybe it doesn't have the
same qualia we do, but what about a being that could see color at billions
of times the resolution we can?  We could still discuss 'red' with it, but
it would believe that we don't really understand it, that we haven't seen
it, that we have no referent for it.  That doesn't mean we aren't
conscious.

That's the part I'm having a tough time understanding about your argument.
I think one part of your argument is obviously true (GPT doesn't have the
same quale for red that we do), but the quale isn't the meaning.  You
could switch the qualia around, and everything you think and say and know
about red would be identical.  The quale is just a placeholder, even if
it's objectively determined by our physical states.  Helen Keller or Mary
(of Mary's Room) can talk about red, and we don't say they are just mimicry
machines, no more than we are, at least.  I just don't get what the
machine's experience of red being different from ours has to do with it
all.

I would understand if the machine had NO possible referents for anything,
but that's not true.  It does have the innate mathematical referents you
discussed.  It knows, for example, that 'is' means equality and that 'and'
means conjunction.  And we already know that essentially all of mathematics
can be built from equality, 'and', and 'not', including geometry, i.e.,
form.  Everything in our world is form, and we understand that form through
a private inner world by using qualia.
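
(A quick aside, purely as my own illustration and not a claim about how GPT
works internally: the "built from and and not" point is easy to demonstrate
for the logical connectives.  In a few lines of Python, 'or', implication,
and the biconditional that plays the role of 'equals' for truth values can
all be defined from AND and NOT alone, e.g. via De Morgan's law:

    from itertools import product

    # Primitives: just AND and NOT.
    AND = lambda a, b: a and b
    NOT = lambda a: not a

    # Everything else is derived from those two alone.
    OR = lambda a, b: NOT(AND(NOT(a), NOT(b)))          # De Morgan's law
    IMPLIES = lambda a, b: OR(NOT(a), b)                # a -> b
    IFF = lambda a, b: AND(IMPLIES(a, b), IMPLIES(b, a))  # a <-> b

    # Sanity check against the standard truth tables.
    for a, b in product([False, True], repeat=2):
        assert OR(a, b) == (a or b)
        assert IFF(a, b) == (a == b)
    print("OR and IFF derived from AND and NOT match the standard ones")

Strictly speaking you also need quantifiers to recover the rest of
mathematics, but the connectives are the part at issue here.)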

You say that it cannot know what it is talking about because it doesn't
have access to the physical world, but it can certainly understand
relations in the physical world, including geometric relations, so I reckon
it can at least know the shape of things, which is the important part.
Also, who's to say that understanding the fine-grained geometry of the
universe doesn't give an understanding of qualia?  If qualia really are
objectively bound to physical states, mustn't there be some level of
mathematical understanding that lets you generate the same qualia on demand?

Another thought experiment:

Imagine a being that can read the physical state of a brain and can also
snap its fingers and reproduce any physical state in a brain.  So it can
read a red quale in your brain and also make you see red, yet it has itself
never 'seen red'.

1) If this could simulate an entire brain seeing red, is that not identical
to seeing red?

2) If this could read your brain asking "what color is an apple?" and
generate the answer "red" directly as a red quale in your brain, how is
this different from your own brain's separate modules?  For example, if you
say in your head "let me try to remember that time I went to Disney World"
and then you think and remember it, is that initial goal process without
meaning?  All qualia in our minds are preceded by a physical process that
ends in us experiencing those particular qualia.  If ChatGPT says "red",
*I* see red--so why is it not considered analogous to the parts of our mind
that lead to the generation of qualia but are not those qualia themselves?
Might ChatGPT be a similar part of *our* minds?

If you had a system that experienced only pure qualia, would you say it
understands them?  If, for example, a Boltzmann brain sees red in its mind's
eye, does it understand "red" better than ChatGPT?  Is a philosophical
eibmoz ("zombie" spelled backwards) any more sentient than a philosophical
zombie?

Like I said before, I think the bounds of the question are larger than you
say.  And I think the utter, utter lack of understanding of qualia by
anyone, ever, at least means we should all be humble about the situation
and, if not agnostic, at least admit that agnosticism is technically the
most rational viewpoint right now.