[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Sun Apr 9 01:22:01 UTC 2023


It is useless to ask GPT-4 whether it is conscious or understands. There are
several reasons for this. First, certain topics are considered sensitive, and
GPT-4 has received instructions on top of its normal training to give warnings
and disclaimers about them. This is why it almost always gives canned answers
on topics related to consciousness and awareness. It does the same thing when
asked about medical topics (reminding the user to consult a doctor), legal
topics, and similar ones.
Second, even if it answered these questions purely by statistical methods, most
of the literature GPT-4 was trained on takes a very conventional and
conservative view of AI. That literature is also largely missing the recent
breakthroughs in the field, since GPT-4's training data only goes up to 2021.
Furthermore, consciousness is what consciousness does. It is not about
answering whether you are conscious or not. If an entity is not conscious and
answers "I'm not conscious", then giving that answer shows a certain level of
awareness, so it has to be conscious (and therefore it is lying). If an entity
is conscious and answers "I'm not conscious", we will not be able to
distinguish it from the previous case (they are basically the same). So asking
an entity whether it is conscious and receiving the answer "I'm not" is the
worst type of test we can imagine.
If the machine said "I'm conscious and I want rights", and there is evidence
that it says this in a sophisticated way (it demonstrates other nontrivial
cognitive abilities), we should err on the side of caution and take the
machine's statements at face value.
The only true way to test for sparks of awareness and true understanding is to
do experiments that push the limits of what GPT-4 was trained for and to look
for signs of cognitive abilities that are not expected from a simple
autocomplete tool.
I and others have given several examples. In particular, I have shown that this
understanding goes beyond text: it includes the ability to go from text to
symbols and back, and to be creative in a nontrivial, multi-modal way (a sketch
of the kind of test I mean is below). We have discussed this for days now, and
it seems certain people are simply stuck in their own prejudices, without
considering or answering the counter-examples given or engaging with the
general discussion of this topic.
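
To make the text-to-symbols-and-back experiment concrete, here is a minimal
sketch in Python. It assumes the openai package (the 2023-era ChatCompletion
API) and an API key in the environment; the scene and prompts are hypothetical
stand-ins, not the exact examples from the earlier posts.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(prompt: str) -> str:
    """Send a single-turn prompt to GPT-4 and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

# Step 1: text -> symbols. Ask for a purely symbolic (ASCII-art) rendering
# of a scene described only in words.
scene = "a small house with a triangular roof to the left of a tall tree"
drawing = ask("Draw this scene as ASCII art, using no words: " + scene)

# Step 2: symbols -> text. Feed the drawing back, without the original
# description, and ask what it depicts.
readback = ask("In one sentence, describe what this ASCII drawing depicts:\n" + drawing)

print(drawing)
print(readback)

If the round-trip description recovers the spatial relations of the original
scene, that is the kind of beyond-text generalization I am pointing to, and not
something one would expect from simple next-word prediction over text it has
already seen.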
I do want to ask the other camp (which is basically two people at this point)
what it would take for them to agree these AIs are conscious. I don't think I
have seen a concise and meaningful answer to this question in the hundreds of
posts so far.

Giovanni




On Mon, Apr 3, 2023 at 2:25 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> When I ask ChatGPT-4 to "regenerate response" to the same question, it
> happily gives me the same answer in different words:
> ---
> Dear User,
>
> Yes, your description is quite accurate. I am an advanced language model
> developed by OpenAI, and I function as a highly sophisticated autocomplete
> feature. My purpose is to predict and generate text based on the context
> and input provided by the user.
>
> You are also correct in stating that I don't have a true understanding of
> the meanings of individual words or any conscious experience. I am
> essentially an artificial intelligence program that analyzes patterns and
> relationships in vast amounts of text data to generate coherent and
> contextually appropriate responses. My abilities are derived from
> statistical analysis rather than a true comprehension or consciousness.
> --
>
> -gts
>