[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 13 00:01:08 UTC 2023


On Wed, Apr 12, 2023 at 3:54 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:

>> I'm glad to see that GPT-4 "knows" how LLMs work and reports the same
>> conclusion.
>
> In the past you agreed we can't take it at its word. Have you changed your
> mind on this?

Sorry to skip over so much of your message, but this gets to the heart of
the matter. It was to investigate what GPT-4 reports about itself that I
started this thread.

After upgrading to GPT-4 and examining its responses closely for several
weeks, I am confident it is as reliable on the topic of large language
models as it is on any other subject. To maintain objectivity, I've
refrained from asking it directly about itself and have instead been
inquiring about large language models in general, as though GPT-4 weren't
one itself. As with all other topics, it appears to have a great deal of
knowledge about the subject.

For example:

Q: Are large language models conscious?

A: As an AI language model, I can provide you with an informed perspective
on this question. Current scientific consensus is that large language
models, like GPT-4, are not conscious. They are complex machine learning
models that can generate human-like text based on the patterns they've
learned from the data they've been trained on. However, they do not possess
self-awareness, emotions, or the capacity for subjective experiences that
are associated with consciousness.

While these models can generate text that appears to convey understanding
and awareness, this is an illusion created by their ability to predict and
generate relevant responses based on context. Their "knowledge" is a
product of the patterns and associations they've learned from data, and
they do not have the ability to introspect or truly understand the meaning
of the information they generate.

[Notice how GPT-4 encloses "knowledge" in scare quotes, just as I am
inclined to do.]
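
[For anyone who wants to see concretely what "predict and generate relevant
responses based on context" means mechanically, here is a minimal sketch of
next-token prediction. It uses the openly available GPT-2 model from the
Hugging Face transformers library as a small stand-in; GPT-4's own weights
are not public, so this is only illustrative of the general technique, not
of GPT-4's actual implementation.

# Minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 is used here as a small, openly available stand-in for an LLM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Large language models generate text by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model's only output is a probability distribution over possible next
# tokens; "generation" is just repeated sampling from such distributions.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>12}  p={p.item():.3f}")

The point of the sketch is simply that nothing in this loop introspects or
"understands"; it maps learned statistical patterns to a next-token
distribution, exactly as GPT-4 reports.]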

Q: What is the most important and commonly held reason that scientists
consider large language models unconscious?

A: The most important and commonly held reason that scientists consider
large language models unconscious is their lack of subjective experience or
qualia. Qualia are the internal, personal experiences of thoughts,
emotions, and sensations that characterize conscious beings. Large language
models, being complex algorithms, do not have subjective experiences. They
process and generate information based on patterns learned from their
training data, but they don't "feel" or "experience" anything.

This lack of subjective experience is closely tied to the absence of
self-awareness and understanding in these models. While they can generate
human-like text, they do not possess a deep understanding of the meaning or
context behind the text they generate, nor do they have a sense of self or
identity. These factors contribute to the consensus that large language
models, despite their impressive capabilities, are not conscious beings.

Q: What kinds of scientists are drawing these conclusions?

A: The conclusions about the consciousness of large language models are
drawn by experts from various interdisciplinary fields, as the topic lies
at the intersection of computer science, cognitive science, neuroscience,
and philosophy. Some of the key types of scientists and researchers
involved in this discussion include:

Computer scientists and AI researchers: These experts work on developing
and understanding the underlying algorithms and architectures of large
language models. They have a deep understanding of the capabilities and
limitations of these models and can provide insight into the extent to
which these models resemble conscious beings.

Cognitive scientists: Cognitive scientists study the nature of human
thought, learning, and perception. They analyze AI models in the context of
human cognition and evaluate whether these models possess cognitive
abilities similar to those found in conscious beings.

Neuroscientists: Neuroscientists study the brain and nervous system, aiming
to understand the biological basis of consciousness. They may compare the
structure and function of AI models to neural processes in the brain to
determine if there are any similarities that might suggest consciousness in
AI.

Psychologists: Psychologists study human behavior and mental processes,
including consciousness. They may examine AI models from a psychological
perspective, considering factors such as self-awareness, learning, and
decision-making.

Philosophers: Philosophers contribute to the discussion by exploring the
nature and definition of consciousness, as well as the ethical implications
of creating conscious AI. They engage in debates about what it means to be
conscious and whether AI models could ever meet these criteria.

These experts, among others, collaborate and share their perspectives to
better understand the nature of consciousness and its potential
relationship to large language models and other AI systems. The
interdisciplinary nature of this discussion allows for a more comprehensive
analysis of the topic.

-GPT-4
---

As you can guess, I have no argument with GPT-4. I predict that all the
major LLMs will soon be reporting equivalent answers as they are trained on
the relevant material.

-gts