[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Sun Apr 9 03:33:14 UTC 2023


This is my updated version of Brent's picture.
What it represents is the fact that in all three cases we end up with just
neural activation patterns. It does not matter whether they are neurons or
digital on-and-off switches in the bot's brain. If there are recursive loops
that communicate to the system its own state, then it is basically the same
experience. Red and green can be differentiated because they are different
patterns, and the red pattern is usually associated with a strawberry and
other objects that appear red. In the case of the green strawberry, the
system identifies the correct shape of the strawberry (shape is just another
activation pattern), but the strawberry is perceived as green. That is an
unusual association and may cause confusion and some difficulty in
recognizing the object, but it is not a big deal. The same goes for the bot:
an English word happens to be the label for the experience, but it could be
a musical tone, a sequence of numbers, or even the neural pattern itself.
The consciousness is really in the loop, in the system knowing about its own
state.

[image: Image2.png]
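
As a purely illustrative aside, here is a toy Python sketch of the
structural point (my own example, not a claim about GPT-4 or real neurons,
and certainly not a test for consciousness): the same internal activation
pattern can be reported through an arbitrary labeling scheme - an English
word, a tone, a number sequence, or the raw pattern itself - and the
self-report comes from a loop in which the system reads its own state.

# Toy illustration only: a stand-in "activation pattern" plus a loop that
# reads the system's own state and labels it. The labeling scheme is
# arbitrary; swapping it does not change the underlying pattern.

class ToyPerceiver:
    def __init__(self, label_scheme):
        # label_scheme maps an internal activation pattern to some token:
        # a word, a tone frequency, or the pattern itself.
        self.label_scheme = label_scheme
        self.state = None  # current activation pattern

    def perceive(self, stimulus):
        # Stand-in for whatever produces the activation pattern
        # (biological neurons or digital on-and-off switches).
        padded = stimulus.ljust(8)[:8]
        self.state = tuple((ord(c) + i) % 2 for i, c in enumerate(padded))
        return self.state

    def self_report(self):
        # The "recursive loop": the system reads its own state and labels it.
        return self.label_scheme(self.state)

# Three different labelings of the same internal pattern:
as_word = ToyPerceiver(lambda s: "red" if sum(s) % 2 else "green")
as_tone = ToyPerceiver(lambda s: 440 * (1 + sum(s) / 8))  # frequency in Hz
as_itself = ToyPerceiver(lambda s: s)                     # the raw pattern

for system in (as_word, as_tone, as_itself):
    system.perceive("strawberry")
    print(system.self_report())

The point of the toy is only that the internal pattern and the report of it
are separable; it says nothing, of course, about whether any such loop is
conscious.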





On Sat, Apr 8, 2023 at 7:07 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sat, Apr 8, 2023 at 7:23 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> It is useless to ask GPT-4 if it is conscious or understands. There are
>> several reasons for this statement. First, there are certain topics that
>> are considered sensitive and GPT-4 has received instructions on top of its
>> normal training to give warnings and disclaimers on these topics. This is
>> why it almost always gives canned answers on topics related to
>> consciousness and awareness. It does the same thing when asked about
>> medical topics (reminding users to consult a doctor), legal topics,
>> and similar ones.
>> Second, even if it simply used a statistical method to answer on these
>> topics, most of the literature that GPT-4 has access to takes a very
>> conventional and conservative view of AI.
>> In particular, it is missing the recent breakthroughs in AI, given that
>> GPT-4's training goes up only to 2021.
>> Furthermore, consciousness is what consciousness does. It is not about
>> answering whether you are conscious or not. If an entity is not conscious
>> and answers "I'm not conscious", then giving that answer shows a certain
>> level of self-awareness, so it has to be conscious after all (and therefore
>> it is lying). If an entity is conscious and answers "I'm not conscious",
>> then we will not be able to distinguish it from the previous case
>> (basically they are the same). So asking an entity whether it is conscious
>> and receiving the answer "I'm not" is the worst type of test we can imagine.
>> If the machine says "I'm conscious and I want rights", and there is
>> evidence that it does this in a sophisticated way (it demonstrates
>> other nontrivial cognitive abilities), we should use caution and take the
>> machine's statements at face value.
>> The only true way to test for sparks of awareness and true understanding
>> is to do experiments that push the limits of what GPT-4 was trained for and
>> look for signs of cognitive abilities that are not expected from a simple
>> autocorrection tool.
>> I and others have given several examples. In particular, I have shown that
>> this understanding goes beyond text: it includes the capability to go from
>> text to symbols and back, and the ability to be creative in nontrivial,
>> multi-modal ways. We have discussed this for days now, and it seems
>> certain people are simply stuck in their own prejudices, without
>> considering or answering the counter-examples given or the general
>> discussion of this topic.
>>
>
> We seem to be talking about two completely different things.  You are
> talking about general intelligence, while I am talking about the mechanism
> any intelligent system uses to represent information and compute with.
> (This has nothing to do with general intelligence, other than that some
> ways of representing information (directly on qualities, no dictionary
> required) are more efficient than others (abstract representations, which
> require additional dictionaries to know what the property represents).)
>
>
>> I do want to ask the other camp (which is basically 2 people at this
>> point) what would be required for them to agree that these AIs are
>> conscious. I don't think I have seen a concise and meaningful answer to
>> this question in the hundreds of posts so far.
>>
>
> Sorry you missed this.  Let me summarize.  There is general agreement that
> redness is not a quality of the strawberry; it is a property of
> our conscious knowledge of the strawberry - a quality of something in our
> head.  The question is: what is the nature of that redness property?
> Functionalists
> <https://canonizer.com/topic/88-Theories-of-Consciousness/18-Qualia-Emerge-from-Function>,
> like Stathis, predict that redness "supervenes" on some function.
> Dualists
> <https://canonizer.com/topic/88-Theories-of-Consciousness/19-Property-Dualism>
> predict that redness is a spiritual quality of something non-physical.
> Materialists
> <https://canonizer.com/topic/88-Theories-of-Consciousness/7-Qualia-are-Physical-Qualities>,
> like me, predict that redness is just a physical property of something
> in our brain.  I like to take glutamate as an example of a hypothetical
> possibility.  We predict that glutamate behaves the way it does, in a
> synapse, because of its redness quality.  If someone experiences redness
> without glutamate, that hypothesis would then be falsified.  Then you test
> for something else in the brain, until you find what it is that is reliably
> responsible for someone experiencing redness.  Functionalists like Stathis
> seem to predict that redness can "arise" from some function.  I predict
> that they will never find any function that will result in a redness
> experience, and that without glutamate, redness will not be possible.
> This is obviously a falsifiable claim.
>
> If you objectively observe a bat representing some of its echolocated
> knowledge with glutamate (your redness), not only will you know the bat is
> conscious, you will also know what that part of the bat's knowledge is like.
> All that will be possible once we have a dictionary that says it is
> glutamate that behaves the way it does because of its redness quality.
>
> So, objectively observing whether something is conscious or not has
> nothing to do with what it is saying, as you point out.  Once we have the
> required dictionary telling us which of all our abstract descriptions of
> stuff in the brain has which colorness qualities, it is simply a matter of
> objectively observing what we see in the brain, then using the dictionary
> to know not only whether it is conscious, but also what it is like.
>
> Again, consciousness isn't a 'hard problem'; it is just a color quality
> problem.
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Image2.png
Type: image/png
Size: 28745 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230408/628168ef/attachment.png>

