[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Ben Zaiboc ben at zaiboc.net
Tue Apr 18 10:16:37 UTC 2023


On 18/04/2023 00:37, Brent Allsop wrote:
> I'm trying to get my head around this view that all there are is 
> relationships.
>
> My normal thinking is, there is the subject, I.  There is the object, 
> the ball.  Then there is the programmed relationship;  I throw.
> I, is a label for my body.  The ball is a round object that fits in my 
> hand.  And "throw" is a label for a set of programming that defines 
> the relationship (what am I going to do to the ball?)
> For me, it is the computational binding which contains all the diverse 
> sets of programmed, or meaningful relationships. For me, you still 
> need the objective, for the relationships to be meaningful.



This is how I'd put it in terms of the 'Internal Models' model that I've 
been talking about:

"there is the subject, I"

Which is an agent model of the agent doing the modelling (a 'self-model').


"There is the object"

Well, how do you know that? What is 'an object'? All we really have are 
incoming sensory signals. So we join them together, in accordance with 
regularities we notice, to create another model. We give this model a 
label, and it is what we are actually referring to when we talk about 
'an object'. We really mean our internal model, which we assume 
corresponds to something coherent in the world outside our heads 
(something we can have no absolute knowledge of, because we only have 
access to incoming sensory signals).

So I'd prefer to say 'There is the object model'.

So far, two internal models.
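
If it helps to make that concrete, here's a toy sketch in Python. It's 
purely illustrative; every name in it is my own invention, not a claim 
about how brains actually store anything:

    # A toy 'internal model': a labelled bundle of noticed regularities,
    # plus associations to other models.
    from dataclasses import dataclass, field

    @dataclass
    class Model:
        label: str                                  # e.g. 'I', 'the ball'
        features: set = field(default_factory=set)  # regularities noticed in incoming signals
        links: dict = field(default_factory=dict)   # associations to other models

    self_model = Model('I', features={'proprioception', 'agency'})
    ball_model = Model('the ball', features={'round', 'red', 'fits-in-hand'})

Neither of these is the ball, or me; they're just labelled bundles of 
signal-regularities.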


"Then there is the programmed relationship; I throw"

Again, how do we know we 'throw'?

Bearing in mind that all we have are incoming signals, which we can 
connect to outgoing signals (instructions to the motor cortex to perform 
actions), we have to rely on predictable patterns that can be produced. 
From these we can generate an 'action model' for throwing, which we can 
link to an object model for a ball. This involves at least three 
interconnected internal models: one for 'the ball', one for our body 
(or the relevant parts of it at the time), and one for 'throwing'. 
Incoming sensory data gives us information about the result of the 
action. And then we feel bad, because the result is closely associated 
with the 'you throw like a girl' conceptual model.
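
Continuing the toy sketch (again, every name is made up), the action 
model is just a third bundle, plus the links that bind the three 
together:

    # An 'action model' for throwing: a stand-in for a real motor
    # pattern, bound to the body model and the object model involved.
    body_model = Model('my body', features={'proprioception', 'right hand'})
    throw_model = Model('throw', features={'motor-pattern'})

    throw_model.links['actor'] = body_model
    throw_model.links['object'] = ball_model
    throw_model.links['predicts'] = 'ball recedes and lands'  # expected sensory result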


"I, is a label for my body"

I'd say 'my body' and 'I' are two different models. Closely associated, 
but not the same thing.


"The ball is a round object that fits in my hand"

'The ball' is an object model that can be associated in various ways 
with the hand portion of my body model.


So presumably 'computational binding' here means the associations these 
models make with one another under different circumstances (the links in 
the sketches above).


I think the key thing here is the concept that /we never deal directly 
with 'real-world things'/. In fact, this is impossible. Instead, we deal 
with models in our heads, using incoming sensory signals (and outgoing 
motor signals, with feedback loops) to create and manipulate the 
internal mental models.
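
A cartoon of that loop, continuing the same sketch (the confidence 
arithmetic is arbitrary; it's only there to show the shape of the 
process):

    # One pass of the loop: compare each model's prediction with what
    # the senses actually delivered, and weaken the models that got it
    # wrong. We never touch the world itself, only the signals.
    def update(models, incoming_signals):
        for m in models:
            predicted = m.links.get('predicts')
            if predicted is not None:
                hit = predicted in incoming_signals
                m.links['confidence'] = m.links.get('confidence', 1.0) * (1.2 if hit else 0.6)

    update([throw_model], incoming_signals={'ball recedes and lands'})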

When we say "the flower smells nice", it's shorthand for "my pleasure 
centres are being stimulated by olfactory signals closely associated 
with my internal model labelled 'the flower'".

The fact that we can only have 'second-hand' information via our senses, 
and not 'direct knowledge' of things in the world, explains why we are 
sometimes easily fooled. The smell actually came from an open packet of 
fruit pastilles that we didn't see, and the flower has no scent at all.

Or that bang we just heard, simultaneous with the sight of a pigeon 
landing on the lawn, is actually a bike backfiring, and not the sound of 
a really heavy pigeon, which is what we first thought. I suppose you 
could say that we have 'computationally bound' the auditory and visual 
signals together, but the result is soon recognised as absurd (we have 
no memories of such massively heavy pigeons, so the interpretation, or 
model, is so weak that it's easily outcompeted by other interpretations).
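
You could caricature that competition in a few lines (the numbers are 
invented; the point is only that prior experience does the deciding):

    # Two candidate bindings of the same bang-plus-pigeon evidence.
    candidates = {
        'massively heavy pigeon': {'fit': 0.9, 'prior': 0.0001},  # nothing in memory supports it
        'bike backfiring':        {'fit': 0.7, 'prior': 0.2},
    }
    best = max(candidates, key=lambda k: candidates[k]['fit'] * candidates[k]['prior'])
    print(best)  # 'bike backfiring' -- the heavy-pigeon model loses easily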

'Knowledge of real things', if such a thing were possible, would make 
these illusions impossible.

Ben

PS: when you say "computationally bound", it seems to me that you mean 
"associated". If that's correct, isn't it an easier, quicker and, more 
importantly, clearer term?