<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 18/04/2023 00:37, Brent Allsop wrote:<br>
<blockquote type="cite"
cite="mid:mailman.456.1681774647.847.extropy-chat@lists.extropy.org">
<div>I'm trying to get my head around this view that all there are
is relationships.</div>
<div><br>
</div>
<div>My normal thinking is, there is the subject, I. There is the
object, the ball. Then there is the programmed relationship; I
throw.</div>
<div>I, is a label for my body. The ball is a round object that
fits in my hand. And "throw" is a label for a set of
programming that defines the relationship (what am I going to do
to the ball?)</div>
<div>For me, it is the computational binding which contains all
the diverse sets of programmed, or meaningful relationships.
For me, you still need the objective, for the relationships to
be meaningful.</div>
</blockquote>
<br>
<br>
This is how I'd put it in terms of the 'Internal Models' model that
I've been talking about:<br>
<br>
"there is the subject, I"<br>
<br>
Which is an agent model of the agent doing the modelling (a
'self-model')<br>
<br>
<br>
"There is the object"<br>
<br>
Well, how do you know that? What is 'an object'? All we really have
are incoming sensory signals. So we join them together, in accordance
with regularities we notice, to create another model. We give this
model a label, and it is what we are actually referring to when we
talk about 'an object'. We really mean our internal model, which we
assume corresponds to something coherent in a world outside our heads
that we assume exists (and of which we can have no absolute
knowledge, because we only have access to incoming sensory
signals).<br>
<br>
So I'd prefer to say 'There is the object model'<br>
<br>
So far, two internal models.<br>
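<br>
If it helps to see the idea in code, here's a toy sketch of noticing
regularities in incoming signals and bundling them into a labelled
model (the names and the threshold are all invented for the example,
not a claim about how brains actually do it):<br>
<pre>
# Toy sketch only: grouping sensory features that reliably co-occur
# into a labelled 'object model'. Names and threshold are invented.

from collections import Counter
from itertools import combinations

def build_object_models(observations, min_cooccurrences=3):
    """observations: list of sets of feature names noticed together,
    e.g. {'round', 'red', 'fits-in-hand'}."""
    pair_counts = Counter()
    for obs in observations:
        for pair in combinations(sorted(obs), 2):
            pair_counts[pair] += 1
    # A regularity noticed often enough becomes a model with a label.
    return {f"{a}+{b}": {a, b}
            for (a, b), n in pair_counts.items()
            if n >= min_cooccurrences}

sightings = [{'round', 'red', 'fits-in-hand'}] * 4 + [{'round', 'green'}]
print(build_object_models(sightings))
# 'round+green' was only seen together once, so no model forms for it.
</pre>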
<br>
<br>
"Then there is the programmed relationship; I throw"<br>
<br>
Again, how do we know we 'throw'?<br>
<br>
Bearing in mind that all we have are incoming signals, which we can
connect to outgoing signals (instructions to the motor cortex to
perform actions), we have to rely on the predictable patterns that
can be produced, and can then generate an 'action model' for
throwing, which we can link to an object model for a ball. This
involves at least three interconnected internal models: one for 'the
ball', one for our body (or the relevant parts of it at the time) and
one for 'throwing'. Incoming sensory data gives us information about
the result of the action. And then we feel bad, because the result is
closely associated with the 'you throw like a girl' conceptual
model.<br>
<br>
<br>
"I, is a label for my body"<br>
<br>
I'd say 'my body' and 'I' are two different models. Closely
associated, but not the same thing.<br>
<br>
<br>
"The ball is a round object that fits in my hand"<br>
<br>
'The ball' is an object model that can be associated in various ways
with the hand portion of my body model.<br>
<br>
<br>
So presumably, here, 'computational binding' means the associations
these models make with one another under different circumstances.<br>
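<br>
Read that way, a toy sketch might look like this (the labels and
weights are invented purely for illustration):<br>
<pre>
# Toy sketch only: internal models as nodes, 'computational binding'
# read as weighted association links between them.

class Model:
    def __init__(self, label):
        self.label = label
        self.links = {}  # other Model -> association strength

    def associate(self, other, strength=1.0):
        # Binding here is just a symmetric, weighted link.
        self.links[other] = self.links.get(other, 0.0) + strength
        other.links[self] = other.links.get(self, 0.0) + strength

i = Model("I")                 # the self-model
body = Model("my body")        # closely associated with 'I', not the same
ball = Model("the ball")
throwing = Model("throwing")   # an action model

# 'I throw the ball' binds at least three models together:
i.associate(body, strength=2.0)
body.associate(ball)           # the ball fits the hand part of the body model
body.associate(throwing)
throwing.associate(ball)
</pre>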
<br>
<br>
I think the key thing here is the concept that <i>we never deal
directly with 'real-world things'</i>. In fact, this is impossible.
Instead, we deal with models in our heads, using incoming sensory
(and outgoing motor, with feedback loops) signals to create and
manipulate the internal mental models.<br>
<br>
When we say "the flower smells nice", it's shorthand for "my pleasure
centres are being stimulated by olfactory signals closely associated
with my internal model labelled 'the flower'".<br>
<br>
The fact that we can only have 'second hand' information via our
senses, and not 'direct knowledge' of things in the world, explains
why we are easily fooled sometimes. The smell actually came from an
open packet of fruit pastilles that we didn't see, and the flower
has no scent at all.<br>
<br>
Or that bang we just heard, simultaneously with the sight of a pigeon
landing on the lawn, is actually a bike backfiring, and not the sound
of a really heavy pigeon, which is what we first thought. I suppose
you could say that we have 'computationally bound' the auditory and
visual signals together, but the result is soon recognised as absurd
(because we have no memories of such massively heavy pigeons, the
interpretation, or model, is so weak that it's easily outcompeted by
other interpretations).<br>
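<br>
In code-ish terms, that competition might look something like this
(the candidates and scores are made up for the example):<br>
<pre>
# Toy sketch only: a binding ('heavy pigeon made the bang') competing
# with other interpretations, scored by how much support each has in
# memory.

candidates = {
    "a really heavy pigeon landed": 0.0,  # no memories of such pigeons
    "a bike backfired nearby":      0.9,  # common, so well supported
    "a door slammed":               0.4,
}

def interpret(scored):
    """The best-supported interpretation wins."""
    return max(scored, key=scored.get)

print(interpret(candidates))  # -> 'a bike backfired nearby'
</pre>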
<br>
'Knowledge of real things', if such a thing were possible, would
make these illusions impossible.<br>
<br>
Ben<br>
<br>
PS When you say "computationally bound", it seems to me you mean
"associated". If that's correct, isn't that an easier, quicker and,
more importantly, clearer term?<br>
<br>
</body>
</html>