[ExI] GPT-4 on its inability to solve the symbol grounding problem
gsantostasi at gmail.com
Sun Apr 9 21:56:09 UTC 2023
Maybe you want to forward this to the group because it came just to me.
Yes, you are right in your interpretation of what I'm saying: greenness (the
perception of green) and redness (the perception of red) in an individual
are two activation patterns that are different (involving different neurons
in certain brain regions). They will be different in this sense.
*Would you be able to detect this difference by objectively observing
anything in their brains? If so, what would the difference be?*
They are different patterns, so yes, I would be able to tell that something
is different when red is perceived vs. green. The question then is: can I
tell that the person is seeing red or green?
YES, but only statistically, because brains are slightly different. If I
take a sample of people and show them red, each person's activation pattern
will be unique at the level of single neurons, but the general area of
activation, the types of neurons involved, and their approximate number
will be the same. I could draw the activation patterns on a map of the
brain for each person and see that there is a general overlap in one region
for red and in another region for green. By the way, this is exactly how
they can do crazy stuff like showing
pictures of different objects to people, measure fMRI signals (which
reflect brain activation patterns), and train an AI to associate those fMRI
patterns with the given image. I made a post several days ago about this,
but for whatever reason the administrators didn't approve it (not sure
why). But here it is:
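As an aside, the group-level decoding idea described above can be sketched with synthetic data. Everything below (the templates, the noise level, the nearest-centroid rule) is a hypothetical illustration of the statistical point, not the method used in any actual fMRI study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a subject's "activation pattern" is a vector of
# per-region activity. Individual patterns differ, but responses to red
# and green cluster around distinct group-level templates.
n_regions = 50
red_template = rng.normal(0, 1, n_regions)
green_template = rng.normal(0, 1, n_regions)

def simulate_subject(template, noise=0.5):
    """A subject's pattern = shared group template + individual variation."""
    return template + rng.normal(0, noise, n_regions)

# Training sample: patterns from many subjects viewing each color.
train_red = np.array([simulate_subject(red_template) for _ in range(20)])
train_green = np.array([simulate_subject(green_template) for _ in range(20)])

# Group-level "maps": the average activation pattern for each percept.
red_map = train_red.mean(axis=0)
green_map = train_green.mean(axis=0)

def classify(pattern):
    """Nearest-centroid decoding: which group map is this pattern closer to?"""
    d_red = np.linalg.norm(pattern - red_map)
    d_green = np.linalg.norm(pattern - green_map)
    return "red" if d_red < d_green else "green"

# Decode a new, previously unseen subject's pattern.
print(classify(simulate_subject(red_template)))
```

The decoding works only statistically: no two subjects share the same pattern, but the group-level overlap is enough to separate the two percepts.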
On Sun, Apr 9, 2023 at 1:44 PM Brent Allsop <brent.allsop at gmail.com> wrote:
> On Sat, Apr 8, 2023 at 9:33 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
>> This is my updated version of Brent's picture.
>> What it represents is the fact that, in all three cases, we have in the
>> end just neural activation patterns. It is not important whether they are
>> neurons or digital on-and-off switches in the bot's brain. If there are
>> recursive loops that communicate to the system its own state, then it is
>> basically the same experience. Red and green can be differentiated
>> because they are different patterns and are usually associated with a
>> strawberry and other objects that appear red. In the case of the green
>> strawberry, the system identifies the correct shape of the strawberry
>> (shape is another activation pattern), but it is perceived as green. It
>> is an unusual association and may cause confusion and difficulty in
>> recognizing the object, but no big deal. The same goes for the bot: an
>> English word is what is used to label the experience, but it could be a
>> musical tone, a sequence of numbers, or even the neural pattern itself.
>> The consciousness is really in the loop, the system knowing about its
>> own state.
>> [image: Image2.png]
> Having a hard time understanding what you are saying here. I use the term
> "red" as a label for something that has a physical property in that it
> reflects or emits 700 nm light. I use redness as a label for a quality of
> subjective experience. Given that I have these different-than-normal
> definitions of words, would I be correct in translating your language "Red
> and green can be differentiated because they are different patterns..." as
> "Redness and greenness [experience] can be differentiated..." in my
> So are you saying that redness is a particular "neural activation pattern"
> that has "recursive loops that communicate to the system its own state"?
> And that a greenness experience would be a slightly different neural
> activation pattern, with similar recursion?
> Let's say you were observing the brains of the first two, where one
> represents red knowledge with the other's greenness. Would you be able to
> detect this difference by objectively observing anything in their brains?
> If so, what would the difference be?
> The way you talk about "perceiving" redness, in the opposite of a direct
> way, suggests you are talking about something different from the model I'm
> attempting to describe.
More information about the extropy-chat