[ExI] Computers, qualia, 'symbol grounding' (@Gordon)
Will Steinberg
steinberg.will at gmail.com
Sun Apr 2 18:53:17 UTC 2023
Mr. Groks The Sensorium, you keep claiming that ChatGPT hasn't 'solved' the
'symbol grounding problem', but I have yet to see any evidence for this,
only evidence that ChatGPT is unlikely to experience the same qualia that
we experience. I have seen no proof that the AI has NO qualia with
which to ground symbols, and if you did have that proof, you would
become a very famous philosopher.
How do you know that qualia aren't fungible?
Was Helen Keller a p-zombie just because she didn't have grounded symbols
for sight and sound?
How do you know that it's not possible to build a model of the world using
only whatever qualia computers experience as the base?
You seem to believe that if you reverse engineer language, you are left
with a bunch of empty spaces for qualia, and that self-consciousness is
dependent on these atomic experiences.
What's to say that other qualia can't take the place of the ones we used to
develop language? We can communicate with people who are deaf and blind
from birth. Even someone who had none of the external senses that we have,
but a single bit of input/output of some kind, could communicate with us.
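(To make that last point concrete, here's a minimal sketch in Python of the
single-bit-channel idea. The 'channel' here is just a list standing in for
whatever shared physical medium both parties can poke at, and the encoding
is an arbitrary convention we'd have to agree on, not anything canonical --
but it shows that one bit at a time is all language strictly requires.)

    # Minimal sketch: carrying language over a channel that moves a
    # single bit of input/output at a time.

    def encode(text: str) -> list[int]:
        """Turn a UTF-8 string into a flat list of bits, MSB first."""
        return [(byte >> i) & 1
                for byte in text.encode("utf-8")
                for i in range(7, -1, -1)]

    def decode(bits: list[int]) -> str:
        """Reassemble the bits into bytes, then back into text."""
        data = bytearray()
        for i in range(0, len(bits), 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            data.append(byte)
        return data.decode("utf-8")

    channel = encode("hello")   # transmitted one bit at a time
    print(decode(channel))      # -> "hello"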
Imagine for a second there are aliens which only perceive the world through
magnetic fields. We have no possible way to reckon the qualia for those
fields, but we CAN produce and measure the fields, and so we could both
send and receive them. You might say that without known constants for both
of us to refer to, we could never talk with these beings, but is that true?
Can you say beyond the shadow of a doubt that qualia cannot be inferred
from the entirety of language? After all, at the end of the day, past the
sensory organs everything is condensed into electrochemical signals, same
as language. So wouldn't you perhaps think, with utter knowledge of one
side of that equation, that it could even be simple to reconstruct the
other?
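(You can at least gesture at this 'reconstruct the other side' idea in
code. Below is a toy Python sketch; the three-dimensional vectors are
hand-picked stand-ins for real word embeddings, purely illustrative. The
point is just that if the relational structure of language mirrors the
relational structure of perception, then similarity in the word-vector
space should track similarity in perceptual space.)

    import numpy as np

    # Toy stand-ins for word embeddings of color terms, chosen so that
    # their *relations* mirror relations in perceptual color space.
    words = {
        "red":    np.array([0.9, 0.1, 0.1]),
        "orange": np.array([0.8, 0.4, 0.1]),
        "blue":   np.array([0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # The similarity ordering matches the perceptual one:
    # red is nearer orange than blue.
    print(cosine(words["red"], words["orange"]))  # high (~0.94)
    print(cosine(words["red"], words["blue"]))    # low  (~0.24)

(Whether real embeddings actually recover real perceptual structure is
exactly the empirical question at issue; the sketch only shows what
'inferring one side from the other' would look like.)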
If I were able to perfectly recreate a human eye and brain, and knew the
neurophysical content of a 'standard' red quale, would I not be able to
make that brain experience the red quale? Do you think it is possible that
access to the relations between all language, ever, could enable one to
reconstruct the workings of the sensorium, and then infer qualia from
there? What if the entity in question not only had this ability, but also
experienced its own types of qualia? (You do not know whether this is the
case.) Would that make it even easier to reverse engineer?
I simply think--or rather, I would say I KNOW--that you can't possibly know
whether such a system is conscious of itself: a system which you cannot
tell experiences any qualia or not, which uses an inference tool on
language that you have no personal access to (so you cannot verify whether
it can reconstruct qualia), and which, in fact, not even the people who
make it fully understand.
Btw, is that even what you are arguing? You seem to be jumping back and
forth between the argument that ChatGPT has no qualia (which again, you
can't know) and the argument that it has no awareness of itself (which
again, again, you can't know). These are very different arguments; the
first is the most important unsolved problem in philosophy.
This is really getting into the weeds of the subject, and I don't think you
should speak so surely on the matter. These are the hardest problems in all
of philosophy, neuroscience, and theory of mind. There are NUMEROUS thought
experiments that, at the very least, should bring the sureness of your
opinion below 100%.
You're free to argue for your opinion but can you stop acting like everyone
who disagrees with you is an idiot? You're arguing for something that is
currently unknowable, so you should be more humble. And if you have
special information on what makes qualia, PLEASE make it known here,
because--again--it is the most important philosophy problem in existence,
and I'm sure everyone here and every philosopher and neuroscientist and
human ever would like to know the answer.
Until then, chill with the hubris. It's uncouth.