[ExI] Computers, qualia, 'symbol grounding' (@Gordon)

Will Steinberg steinberg.will at gmail.com
Sun Apr 2 22:55:34 UTC 2023


I'm not sure why you are so sure qualia are physical.  It's perfectly
possible, and seems likely, that qualia are objective informational
constructs.  My opinion is that it is perfectly possible to emulate a quale
if you can mimic these elemental properties, even if the physical system is
different.  You don't know what that elemental world is, so you don't know
whether a computer brain (a...brain?) might have the same underlying
elements as a human brain.

I know you like your site and...your argument...on your site...but do you
think perhaps there is a bias there?  I witnessed you asking Grok to
support the camp you support on your site.  It just seems a bit less than
objective, is all.

On Sun, Apr 2, 2023 at 6:01 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Apr 2, 2023 at 12:54 PM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Mr. Groks The Sensorium, you keep claiming that ChatGPT hasn't 'solved'
>> the 'symbol grounding problem', but I have yet to see any evidence for this,
>> only evidence that ChatGPT is unlikely to experience the same qualia that
>> we experience.  But I have seen no proof that the AI has NO qualia with
>> which to ground symbols, and if you did have that proof, you would
>> become a very famous philosopher.
>>
>
> If you have never experienced the new color 'grue', you are still able to
> be abstractly aware of a 'grue' quality.  You could objectively observe and
> describe all the causal properties of grue.  At that point, you would be
> like an abstract computer, and you could know everything abstractly.
> Now, when you take that grue stuff and computationally bind it into your
> consciousness, you will finally be able to directly experience it, and
> finally know what all your abstract descriptions of grueness were merely
> describing.  Your definition of grue will finally be grounded, and you will
> be able to say: "Oh, THAT is what grueness is like."   It's not a matter of
> 'proving' that you are different before you experience it.  It is simply a
> grounded definition.
>
>> How do you know that qualia aren't fungible?
>>
>
> Redness is simply a physical quality of something.
>
>
>> Was Helen Keller a p-zombie just because she didn't have grounded
>> symbols for sight and sound?
>>
>
> See the grue example above to understand how, as far as grue goes, you
> are like Helen Keller or a computer: different from someone who has a
> grounded definition of grueness.
>
>> How do you know that it's not possible to build a model of the world using
>> only whatever qualia computers experience as the base?
>>
>
> You can represent red things in the world with a redness quality.  Or you
> can use a greenness quality.  Or you can use the abstract word 'red'.  But
> the abstract word 'red' is not a quality; it is only an abstract word.
> You can build a model of the world, using any and all of these.  The
> different models just won't be qualitatively like each other.
>
> You seem to believe that if you reverse engineer language, you are left
>> with a bunch of empty spaces for qualia, and that self-consciousness is
>> dependent on these atomic experiences.
>>
>
> I prefer the term 'elemental' to atomic.  After all, some people predict
> that qualities are of something at the subatomic, or quantum, level
> <https://canonizer.com/topic/88-Theories-of-Consciousness/20-Orch-OR>.
> The 'elemental' level is simply whatever physical level is required to
> fully describe a composite conscious colored experience.  There could be
> infinitely more physics below redness, but you need not model below the
> elemental level to fully describe elemental redness.
>
>
>> What's to say that any qualia can't take the spots of the ones we used to
>> develop language?  We can communicate with people who are deaf and blind
>> from birth.  Even someone who had none of the external senses that we have,
>> but a single bit of input/output of some kind, could communicate with us.
>>
>> Imagine for a second there are aliens which only perceive the world
>> through magnetic fields.  We have no possible way to reckon the qualia for
>> these fields, but we CAN produce the fields, and measure them.  And with
>> this we could both send and receive magnetic fields.  You might say that
>> without known constants to both refer to, we could never talk with these
>> beings, but is that true?  Can you say without a shadow of a doubt that
>> qualia cannot be inferred from the entirety of language?  After all, at the
>> end of the day, past the sensory organs everything is condensed into
>> electrochemical signals, same as language.  So wouldn't you perhaps think,
>> with utter knowledge of one side of that equation, that it could even be
>> simple to reconstruct the other?
>>
>
> You're missing the point.  Redness is simply a physical property of
> something in the world.  You simply computationally bind whatever that is
> into someone's consciousness, then you tell them: "THAT is what I use to
> represent red information."
> Or, if they already use redness to represent something else, say green,
> then you could simply say: "My redness is like your greenness, both of
> which we call red."   The point being, you simply need to define your
> symbols in a physically grounded way.
>
>> If I were able to perfectly recreate a human eye and brain, and knew the
>> neurophysical content of a 'standard' red quale, would I not be able to
>> make that brain experience the red quale?
>>
>
> Only if you use whatever physics has a redness quality.  Otherwise, no,
> although you could use some other physics to 'code' for that, as long as
> you had a grounded dictionary so you could know what that code represented.
>
>
>>   Do you think it is possible that access to the relations between all
>> language, ever, could enable one to reconstruct the workings of the
>> sensorium, and then infer qualia from there?  What if the entity in
>> question not only had this ability, but also experienced its own types of
>> qualia? (You do not know whether this is the case.)  Would that make it
>> even easier to reverse engineer?
>>
>> I simply think--or rather, I would say I KNOW--that you can't possibly
>> know whether such a system is conscious of itself: a system that you do
>> not know experiences any qualia, that runs an inference tool over language
>> you have no personal access to (so you cannot verify whether it can
>> reconstruct qualia), and whose inner workings not even the people who
>> built it fully understand.
>>
>> Btw, is that even what you are arguing?  You seem to be jumping back and
>> forth between the argument that ChatGPT has no qualia (which again, you
>> can't know) and the argument that it has no awareness of itself (which
>> again, again, you can't know).  These are very different arguments; the
>> first is the most important unsolved problem in philosophy.
>>
>
> You are wrong; you can know this.  There are the 1. weak, 2. strong, and
> 3. strongest forms of knowing this.  See the "Ways to Eff the Ineffable"
> section in the "Physicists Don't Understand Color
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>"
> paper.
>
>
>> This is really getting into the weeds of the subject and I don't think
>> you should speak so surely on the matter. These problems are the hardest
>> problems in all of philosophy, neuroscience, theory of mind. There are
>> NUMEROUS thought experiments that at the very least bring the sureness of
>> your opinion below 100%.
>>
>
> There is evidence of a consensus supporting RQT
> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>,
> and all these people are predicting this isn't a hard problem at all;
> it's just a color quality problem
> <https://canonizer.com/videos/consciousness>.  And all we need to solve
> this problem is physically grounded definitions for the names of physical
> qualities (not the qualities things seem to be).
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>