[ExI] GPT-4 on its inability to solve the symbol grounding problem
jasonresch at gmail.com
Sat Apr 8 19:01:18 UTC 2023
On Sat, Apr 8, 2023, 1:50 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I keep showing this image, attempting to communicate something:
> [image: 3_functionally_equal_machines_tiny.png]
> Sure, our elementary school teacher told us the one on the left is red,
> the one in the middle is green, and the one on the right is just the word
> "red".
> But it is evident from all these conversations, that nobody here
> understands the deeper meaning I'm attempting to communicate.
I will be the first to admit that, try as I have, I understand neither the
diagram nor the explanations that have accompanied it. I also feel that in
our exchanges you may have failed to understand my points. We have had a
general failure of communication.
My questions related to the diagram:
1. Why is the strawberry gray? Is it supposed to be gray to signal that
photons are colorless? Are all three seeing the same strawberry?
2. The image file is called "functionally equal machines", but how are they
functionally equal when they each have a different mental state from the
others?
3. Why is the same person seeing a green strawberry? Is it meant to be the
same person, or a different person with inverted qualia?
4. What do you mean by a dictionary conveying the meaning of red?
Dictionaries say nothing of the quale of red. They can only refer to things
that look red, but we have no proof that people even see colors the same as
each other.
My only takeaway from this image is that different beings can have
different experiences from the same physical stimulus, but I don't think
that's controversial or significant, so you must be trying to say something
else. But what that is, I don't know.
> Some people seem to be getting close, which is nice, but they may not yet
> be fully there.
> If everyone fully understood this, all these conversations would be
> radically different.
> Even if you disagree with me, can anyone describe the deeper meaning I'm
> attempting to communicate with this image?
> What does this image say about qualities, different ways of representing
> information, and different ways of doing computation?
> How about this, I'll give $100 worth of Ether, or just USD, to anyone who
> can fully describe the meaning I am attempting to portray with this image.
> On Sat, Apr 8, 2023 at 10:27 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> On Sat, Apr 8, 2023 at 9:31 AM Jason Resch <jasonresch at gmail.com> wrote:
>>> On Sat, Apr 8, 2023, 10:45 AM Gordon Swobe <gordon.swobe at gmail.com>
>>> wrote:
>>>> On Sat, Apr 8, 2023 at 3:43 AM Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>> There is phenomenal consciousness. That I would call awareness of
>>>>> first person non-sharable information concerning one's internal states of
>>>> It is this phenomenal consciousness to which I refer. If you do not
>>>> think there is something it is like to be a large language model, then
>>>> we have no disagreement.
>>> I believe there is something it is like to be, for either the LLM or
>>> something inside it.
>> Not sure what you mean by something inside it. A philosopher named Thomas
>> Nagel wrote a famous paper titled something like “What is it like to be a
>> bat?” That is the sense that I mean here. Do you think there is something
>> it is like to be GPT-4? When you ask it a question and it replies, is it aware
>> of its own private first person experience in the sense that we are aware
>> of our private experience? Or does it have no awareness of any supposed