[ExI] GPT-4 on its inability to solve the symbol grounding problem
brent.allsop at gmail.com
Sat Apr 8 21:44:58 UTC 2023
Yay, Darin, you got some of the core ideas. Thanks.
You are thinking about qualia in the popular way, as something that produces
redness. This is similar to the way everyone talks about the "neural
correlate" of redness, and so on.
But this all separates qualities from physical reality.
Even if redness is produced by something, this is still a physical fact.
Redness would still be a property of whatever system is producing it.
It comes down to a fundamental assumption about what is more fundamental.
Is redness what is fundamental, behaving the way it does because of what it is?
Or is the function what is fundamental, so that it looks red because of some
particular red function (whatever that could be) from which redness arises?
The philosophical zombie problem also separates qualities from physical
reality. A description of a zombie (which doesn't have redness) is
defined to be physically identical to one that does.
But that of course is absurd. A zombie is simply an abstract system that
is physically different: it represents red information with an abstract word
like "red".
On Sat, Apr 8, 2023 at 3:30 PM Darin Sunley via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> A bit late to the party, but I'll take my swing at it:
> The phenomenal conscious experience of redness is a thing our brain does,
> not a thing 700 nm light does.
> Not only this, but there is no actual causal link between any specific
> phenomenal conscious experience that we have been taught to label
> "redness", and photons of 700 nm light. Different neural architectures can,
> and very well may, generate different phenomenal conscious experiences
> (qualia) in response to 700 nm light, and many neural architectures, while
> capable of detecting 700 nm light striking their visual sensors, may
> generate no phenomenal conscious experience in response thereto at all.
> The question of what a phenomenal conscious experience is, what generates
> it, how it is generated in response to photons of a specific energy
> striking a sensor, and what causes it to be one thing and not something
> else, is all under the umbrella of Chalmers' "hard problem" of consciousness.
> The first hard thing about the hard problem of consciousness is convincing
> some people that it exists. Or as someone (it may have been Yudkowsky or
> Scott Alexander) pointed out, p-zombies are indistinguishable from normal
> humans, /except/ in the specific case where they happen to be philosophers
> writing about phenomenal conscious experience and qualia. :)
> On Sat, Apr 8, 2023 at 11:51 AM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> I keep showing this image, attempting to communicate something:
>> [image: 3_functionally_equal_machines_tiny.png]
>> Sure, our elementary school teacher told us the one on the left is red,
>> the one in the middle is green, and the one on the right is just the word
>> "red".
>> But it is evident from all these conversations, that nobody here
>> understands the deeper meaning I'm attempting to communicate.
>> Some people seem to be getting close, which is nice, but they may not yet
>> be fully there.
>> If everyone fully understood this, all these conversations would be
>> radically different.
>> Even if you disagree with me, can anyone describe the deeper meaning I'm
>> attempting to communicate with this image?
>> What does this image say about qualities, different ways of representing
>> information, and different ways of doing computation?
>> How about this, I'll give $100 worth of Ether, or just USD, to anyone who
>> can fully describe the meaning I'm attempting to portray with this image.
>> On Sat, Apr 8, 2023 at 10:27 AM Gordon Swobe via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>> On Sat, Apr 8, 2023 at 9:31 AM Jason Resch <jasonresch at gmail.com> wrote:
>>>> On Sat, Apr 8, 2023, 10:45 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>>>>> On Sat, Apr 8, 2023 at 3:43 AM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>> There is phenomenal consciousness. That I would call awareness of
>>>>>> first-person, non-sharable information concerning one's internal states.
>>>>> It is this phenomenal consciousness to which I refer. If you do not
>>>>> think there is something it is like to be a large language model, then we have
>>>>> no disagreement.
>>>> I believe there is something it is like to be for either the LLM, or
>>>> something inside it.
>>> Not sure what you mean by something inside it. A philosopher named
>>> Thomas Nagel wrote a famous paper titled something like “What is it like to
>>> be a bat?” That is the sense that I mean here. Do you think there something
>>> it is like to be GPT-4? When you ask it a question and it replies, is it
>>> aware of its own private first person experience in the sense that we are
>>> aware of our private experience? Or does it have no awareness of any
>>> supposed experience?
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org