[ExI] GPT-4 on its inability to solve the symbol grounding problem
Darin Sunley
dsunley at gmail.com
Sat Apr 8 22:26:49 UTC 2023
I think I'm prepared to bite the bullet on qualia being immaterial and
aphysical. There's no physical reason why a particular set of
neurotransmitter densities scattered in particular patterns across a
particular set of neurons should be experienced via one quale, rather than
another. Certainly no reason intrinsic to the molecular structures of the
neurotransmitters or the connectome of the neurons.
And if qualia are indeed nonphysical, while there are almost certainly
facts of the matter that govern their behavior, those will definitionally
not be "physical facts", and it is not even obvious that they would be
constrained by mathematically expressible logic subvenient to the Peano
axioms.
On Sat, Apr 8, 2023 at 3:54 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> Yay, Darin, you got some of the core ideas. Thanks.
> You are thinking about qualia in the popular way, in which something
> produces redness.
> This is similar to the way everyone talks about the "neural correlate" of
> redness, and so on.
> But this all separates qualities from physical reality.
> Even if redness is produced by something, this is still a physical fact.
> Redness would still be a property of whatever system is producing it.
> It comes down to a fundamental assumption about what is more fundamental.
> Is redness fundamental, behaving the way it does because it is red?
> Or is the function fundamental, so that it looks red because of the
> particular red function (whatever that could be) from which redness arises?
>
> The philosophical zombie problem also separates qualities from physical
> reality. A zombie, which (by definition) doesn't have redness, is
> described as physically identical to one that does.
> But that is of course absurd. A zombie is simply an abstract system that
> is physically different. It represents red information with the abstract
> word 'red'.
>
>
>
>
> On Sat, Apr 8, 2023 at 3:30 PM Darin Sunley via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> A bit late to the party, but I'll take my swing at it:
>>
>> The phenomenal conscious experience of redness is a thing our brain does,
>> not a thing 700 nm light does.
>>
>> Not only this, but there is no actual causal link between any specific
>> phenomenal conscious experience that we have been taught to label
>> "redness" and photons of 700 nm light. Different neural architectures can,
>> and very well may, generate different phenomenal conscious experiences
>> (qualia) in response to 700 nm light, and many neural architectures, while
>> capable of detecting 700 nm light striking their visual sensors, may
>> generate no phenomenal conscious experience in response thereto at all.
>>
>> The question of what a phenomenal conscious experience is, what generates
>> it, how it is generated in response to photons of a specific energy
>> striking a sensor, and what causes it to be one thing and not something
>> else, is all under the umbrella of Chalmers' "hard problem" of
>> consciousness.
>>
>> The first hard thing about the hard problem of consciousness is
>> convincing some people that it exists. Or as someone (it may have been
>> Yudkowsky or Scott Alexander) pointed out, p-zombies are indistinguishable
>> from normal humans, /except/ in the specific case where they happen to be
>> philosophers writing about phenomenal conscious experience and qualia. :)
>>
>>
>>
>> On Sat, Apr 8, 2023 at 11:51 AM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> I keep showing this image, attempting to communicate something:
>>>
>>> [image: 3_functionally_equal_machines_tiny.png]
>>> Sure, our elementary school teacher told us the one on the left is red,
>>> the one in the middle is green, and the one on the right is just the word
>>> 'Red'.
>>>
>>> But it is evident from all these conversations that nobody here
>>> understands the deeper meaning I'm attempting to communicate.
>>> Some people seem to be getting close, which is nice, but they may not
>>> yet be fully there.
>>> If everyone fully understood this, all these conversations would be
>>> radically different.
>>> Even if you disagree with me, can anyone describe the deeper meaning I'm
>>> attempting to communicate with this image?
>>> What does this image say about qualities, different ways of representing
>>> information, and different ways of doing computation?
>>>
>>> How about this: I'll give $100 worth of Ether, or just USD, to anyone
>>> who can fully describe the meaning I'm attempting to portray with this
>>> image.
>>>
>>>
>>> On Sat, Apr 8, 2023 at 10:27 AM Gordon Swobe via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>> On Sat, Apr 8, 2023 at 9:31 AM Jason Resch <jasonresch at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sat, Apr 8, 2023, 10:45 AM Gordon Swobe <gordon.swobe at gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> On Sat, Apr 8, 2023 at 3:43 AM Jason Resch via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>
>>>>>>> There is phenomenal consciousness. That is what I would call
>>>>>>> awareness of first-person, non-sharable information concerning one's
>>>>>>> internal states of mind.
>>>>>>>
>>>>>>
>>>>>> It is this phenomenal consciousness to which I refer. If you do not
>>>>>> think there is something it is like to be a large language model, then
>>>>>> we have no disagreement.
>>>>>>
>>>>>
>>>>> I believe there is something it is like to be either the LLM, or
>>>>> something inside it.
>>>>>
>>>>
>>>> Not sure what you mean by something inside it. A philosopher named
>>>> Thomas Nagel wrote a famous paper titled something like “What is it like
>>>> to be a bat?” That is the sense that I mean here. Do you think there is
>>>> something it is like to be GPT-4? When you ask it a question and it
>>>> replies, is it aware of its own private first-person experience in the
>>>> sense that we are aware of our private experience? Or does it have no
>>>> awareness of any supposed experience?
>>>>
>>>> -gts
>>>>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_functionally_equal_machines_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230408/73535a90/attachment-0001.png>