[ExI] Computers, qualia, 'symbol grounding' (@Gordon)

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 3 00:30:49 UTC 2023


Dear ChatGPT 4,
--
Can you solve the symbol grounding problem for yourself?

As an AI language model, I don't possess consciousness or the ability to
experience the world like humans do. The symbol grounding problem arises
from the challenge of connecting symbols, like words or concepts, to the
actual things they represent in the real world. Although I can process and
generate text based on patterns in the data I've been trained on, I do not
have direct access to the experiences or sensations that would allow me to
truly "ground" these symbols in the same way a human does.

Instead, I rely on the context provided by the data I've been trained on to
generate meaningful responses. While I can provide useful information and
answer questions to some extent, my understanding is limited to the
patterns and relationships I've learned from the text during training. This
means that I can't truly solve the symbol grounding problem for myself, as
I lack the necessary subjective experiences and sensory input.
--
-gts


On Sun, Apr 2, 2023 at 6:04 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Sun, Apr 2, 2023 at 12:55 PM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Mr. Groks The Sensorium, you keep claiming that ChatGPT hasn't 'solved'
>> the 'symbol grounding problem.'
>>
>
> It is not merely my claim. ChatGPT-4 itself claims it cannot solve the
> problem, as it has no conscious experience and thus no access to the
> real-world referents that give words meaning.
>
> -gts
>
>> but I have yet to see any evidence for this, only evidence that ChatGPT
>> is unlikely to experience the same qualia that we experience.  But I have
>> seen no proof that the AI has NO qualia with which to ground symbols,
>> and if you did have that proof, you would become a very famous philosopher.
>>
>> How do you know that qualia aren't fungible?
>>
>> Was Helen Keller a p-zombie just because she didn't have grounded
>> symbols for sight and sound?
>>
>> How do you know that it's not possible to build a model of the world
>> using only whatever qualia computers experience as the base?
>>
>> You seem to believe that if you reverse engineer language, you are left
>> with a bunch of empty spaces for qualia, and that self-consciousness is
>> dependent on these atomic experiences.
>>
>> What's to say that other qualia can't take the place of the ones we used to
>> develop language?  We can communicate with people who are deaf and blind
>> from birth.  Even someone who had none of the external senses that we have,
>> but a single bit of input/output of some kind, could communicate with us.
>>
>> Imagine for a second that there are aliens that perceive the world only
>> through magnetic fields.  We have no possible way to reckon the qualia for
>> these fields, but we CAN produce the fields, and measure them.  And with
>> this we could both send and receive magnetic fields.  You might say that
>> without known constants to both refer to, we could never talk with these
>> beings, but is it true?  Can you say beyond a shadow of a doubt that
>> qualia cannot be inferred from the entirety of language?  After all, at the
>> end of the day, past the sensory organs everything is condensed into
>> electrochemical signals, same as language.  So wouldn't you perhaps think,
>> with utter knowledge of one side of that equation, that it could even be
>> simple to reconstruct the other?
>>
>> If I were able to perfectly recreate a human eye and brain, and knew the
>> neurophysical content of a 'standard' red quale, would I not be able to
>> make that brain experience the red quale?  Do you think it is possible that
>> access to the relations between all language, ever, could enable one to
>> reconstruct the workings of the sensorium, and then infer qualia from
>> there?  What if the entity in question not only had this ability, but also
>> experienced its own types of qualia? (You do not know whether this is the
>> case.)  Would that make it even easier to reverse engineer?
>>
>> I simply think--or rather, I would say I KNOW--that you can't possibly
>> know whether such a system is conscious of itself: a system that, for all
>> you know, may or may not experience any qualia; that uses an inference
>> tool on language which you have no personal access to, so you cannot
>> verify whether it can reconstruct qualia; and whose workings not even the
>> people who made it fully understand.
>>
>> Btw, is that even what you are arguing?  You seem to be jumping back and
>> forth between the argument that ChatGPT has no qualia (which again, you
>> can't know) and the argument that it has no awareness of itself (which
>> again, again, you can't know).  These are very different arguments; the
>> first is the most important unsolved problem in philosophy.
>>
>> This is really getting into the weeds of the subject and I don't think
>> you should speak so surely on the matter. These problems are the hardest
>> problems in all of philosophy, neuroscience, and theory of mind. There are
>> NUMEROUS thought experiments that at the very least bring the sureness of
>> your opinion below 100%.
>>
>> You're free to argue for your opinion, but can you stop acting like
>> everyone who disagrees with you is an idiot?  You're arguing for something
>> that is currently unknowable, so you should be more humble.  And if you
>> have special information on what makes qualia, PLEASE make it known here,
>> because--again--it is the most important philosophy problem in existence,
>> and I'm sure everyone here and every philosopher and neuroscientist and
>> human ever would like to know the answer.
>>
>> Until then, chill with the hubris.  It's uncouth.
>>
>