[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 14 21:12:15 UTC 2023


*They *do* have to do with the statistical properties of words and symbols
and the relations and patterns between them. The shapes of pears and apples
(and eyes etc) are describable and distinguishable in the language of
mathematics*
Yes, and that is what the majority of the list involved in this discussion
has been claiming over and over: that we can derive meaning from patterns
alone, and in particular by using logical and mathematical language.
I cannot imagine any experience or concept that is not made of relations.
If GPT-4 understands above, inside, below, and on top, if it understands
that it needs to use symmetry to place human eyes, that hair goes on top of
the head, that humans have arms that come out of a body, and so on, then
you can see that it possesses many internal representations of things. I
have a really hard time imagining how this could be derived from a simple
autocomplete operation. It is obvious these higher cognitive functions are
"emergent": they are there as a result of highly nonlinear interactions
that give rise to behavior that is more than the sum of the parts.
Right now we do not have the means to say "here, in these weights, is the
emergent behavior", because we have no clue how to do that or how to
interpret the weights of the system. But one way to understand the behavior
of a complex system is to perturb it and see how the change affects it.
For example, if you use ChatGPT alone you cannot get this level of correct
interpretation of the concepts I have mentioned before.
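
To make the perturbation idea concrete, here is a minimal sketch in Python.
The toy two-layer network, its sizes, and the noise scale are my own
illustrative assumptions and have nothing to do with GPT's actual
architecture; the point is only that you nudge the weights of a nonlinear
system and measure how much its behavior moves.

import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer nonlinear network standing in for "a complex system".
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))

def forward(x, A, B):
    h = np.tanh(x @ A)       # nonlinear hidden layer
    return np.tanh(h @ B)    # nonlinear output layer

x = rng.normal(size=(100, 16))    # a batch of probe inputs
baseline = forward(x, W1, W2)

# Perturb each weight matrix in turn and measure how far the outputs move.
for name in ("W1", "W2"):
    noise1 = rng.normal(scale=0.1, size=W1.shape) if name == "W1" else 0.0
    noise2 = rng.normal(scale=0.1, size=W2.shape) if name == "W2" else 0.0
    out = forward(x, W1 + noise1, W2 + noise2)
    print(name, "mean output change:", np.abs(out - baseline).mean())

The same logic is behind comparing ChatGPT with GPT-4: change something
about the system and watch what the behavior does.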
The difference between GPT-4 and ChatGPT is not really in the architecture
but in the data and in the number of parameters involved. As you would
expect from a highly nonlinear system, increasing the number of parameters
even a little creates more complex behavior. That is not what you get from
a simple statistical predictor, because after some time the stats converge,
and there is not much difference between predicting that something will
happen with probability 68.3% or 68.345%. The statistics of the words are
already known at that point; more training is not going to improve the
predictions. But it is evident that as we increase the parameters the
behavior of GPT improves dramatically, and that cannot be achieved with
better stats alone (given the convergence). This is a pretty good argument
that there is something else beyond the stats, even if stats are what was
used to train the system. This behavior is exactly the definition of
emergence.
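
For the convergence point, here is a small sketch. The 68.3% figure is just
the number used above (an assumption for illustration, not a measured
quantity): estimating a fixed next-word probability from more and more data
quickly stops changing the estimate in any meaningful way.

import numpy as np

rng = np.random.default_rng(0)
true_p = 0.683   # pretend this is the "real" probability of some next word

# Estimate that probability from growing amounts of data.
for n in (1_000, 100_000, 10_000_000):
    samples = rng.random(n) < true_p
    print(f"n = {n:>10,}   estimated p = {samples.mean():.5f}")

# Going from 100k to 10M samples barely moves the estimate: better
# statistics alone stop buying you anything, which is the sense in which
# the stats "converge" while the model's behavior keeps improving with scale.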

Giovanni




On Fri, Apr 14, 2023 at 1:46 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
> I’ll bet if you ask it to draw a perfect circle, it will draw one without
> ever having “seen” one. It should have learned how to draw one from the
> language about circles, including the mathematical language of circles.
> Is that really so amazing?
>
> -gts
>
>
> On Fri, Apr 14, 2023 at 2:17 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>>
>>
>> On Fri, Apr 14, 2023 at 1:54 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>
>>>
>>> I showed you the different pics GPT-4 can create given nonvisual
>>> training. How can it draw an apple and know how to distinguish it from a
>>> pear…
>>>
>> These tasks have nothing to do with the statistical properties of words
>>> given they are spatial tasks and go beyond verbal communication. How do you
>>> explain all this?
>>>
>>
>>
>> They *do* have to do with the statistical properties of words and symbols
>> and the relations and patterns between them. The shapes of pears and apples
>> (and eyes etc) are describable and distinguishable in the language of
>> mathematics.
>>
>> I agree it is amazing, but the “meaning” is something we assign to the
>> output.
>>
>> -gts
>>
>