[ExI] Language models are like mirrors
Gordon Swobe
gordon.swobe at gmail.com
Sat Apr 1 23:54:37 UTC 2023
>
>
> How do you judge that I am understanding what you write and the points
> that you make?
>
I trust that you are not ChatGPT. I could be wrong.
> I wonder what LaMDA says when asked about the symbol grounding problem
> (the question of how words take on meanings). ChatGPT-4 is at least honest
> about it. With no prompting to say anything one way or another, it tells
> the truth that it has no way to solve the symbol grounding problem for
> itself as it has no conscious experience.
>
> Did you start an entirely fresh session with it? If not you may have
> biased it to believe it wasn't conscious from your previous interactions.
>
Yes. That is what I meant by "with no prompting to say anything one way or
another." I've tried it several times. The answers vary due to the
probabilistic outputs of the algorithm, but they are always something like
this:
--
Can you solve the symbol grounding problem for yourself?
As an AI language model, I can provide insights into the symbol grounding
problem, but I cannot solve it for myself. The symbol grounding problem is
a philosophical issue that arises in the field of artificial intelligence
and cognitive science. It is the challenge of connecting abstract symbols
or words with the objects or concepts they represent in the real world.
As a machine learning model, I rely on the vast amount of text data that
has been fed into me during training. The relationships and meanings I
generate are based on the patterns and statistical associations found in
this data. However, I lack the ability to directly perceive or interact
with the physical world, which means I am unable to establish a direct
connection between symbols and the objects or concepts they represent.
--
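For anyone wondering why the wording varies between sessions: at each step
the model assigns probabilities to candidate next tokens, and the decoder
typically samples from that distribution (usually scaled by a "temperature"
setting) rather than always taking the single most likely token. A minimal
sketch in plain Python, with invented candidate tokens and scores rather
than anything from OpenAI's actual decoder:
--
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from a temperature-scaled softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical candidates for the next word after "I cannot solve it for"
candidates = ["myself", "you", "anyone", "this"]
logits = [3.1, 1.4, 0.9, 0.2]  # invented scores; higher means more likely

for _ in range(3):
    print(candidates[sample_next_token(logits)])
# Most runs print "myself", but occasionally another candidate appears,
# which is why the answers vary in wording while keeping the same gist.
--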
Sometimes it mentions that perhaps one way around it would be for language
models to include computer vision, robotics, and other sensory input
technologies. But when pressed, it agrees that this "would not immediately
solve the symbol grounding problem for me, but it would be a step towards
addressing it.... Even though I wouldn't 'experience' colors the way humans
do, I would still be able to make more informed associations between color
names and their visual manifestations."
> Also note that the companies behind these AI systems do not want the
> controversy of their systems claiming to be conscious. As such they may
> have purposely biased them to give such responses.
>
Which means that, on these topics at least, they are not conscious; they are
merely expressing the beliefs and intents of their programmers and
trainers.
> The original LaMDA and the fake LaMDA both claim to be conscious.
>
Likewise with earlier GPT models. I've mentioned how my friend fell in
love with one; I think it was GPT 3.0.
They are trained on massive amounts of written material, much of it produced
by conscious humans in conversation, and so they mimic conscious humans in
conversation.
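To make that concrete, here is a deliberately tiny "parrot" in plain Python:
a bigram model fitted to a few invented lines of conversational text. It
reproduces the surface patterns of its training data with no referents
behind the words. Real language models are enormous neural networks rather
than bigram tables, but the training signal is the same kind of thing: text
predicting text.
--
import random
from collections import defaultdict

training_text = ("i feel happy today . i feel tired today . "
                 "you seem happy . you seem tired .")

follows = defaultdict(list)   # word -> words observed to follow it
tokens = training_text.split()
for a, b in zip(tokens, tokens[1:]):
    follows[a].append(b)

def parrot(start, length=7):
    """Generate text by repeatedly sampling an observed next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(parrot("i"))  # e.g. "i feel tired today . you seem happy"
--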
In the paper "On the Dangers of Stochastic Parrots: Can Language Models Be
Too Big?," Professor Bender and her colleagues write of how larger language
that include for example reddit chats are even more likely to make these
mistakes, and to set up "bubbles" in which the model tries to mimic the
belief systems of the users.
https://dl.acm.org/doi/10.1145/3442188.3445922
-gts