[ExI] Symbol Grounding
ben at zaiboc.net
Sat Apr 29 11:04:41 UTC 2023
On 29/04/2023 10:29, Giovanni Santostasi wrote:
> Hi Ben,
> I see sorry I'm tired, lol. Yeah, it makes sense now and I understand
> what you tried to say that is basically what I try to say. The
> components is not what matters but the process. I see why I was
> confused to hear this sensible argument from Brent, lol.
Yes. The 'missing ingredient' is organisation. The process. Information.
Without this, you just have a pile of bricks, girders,
neurotransmitters, spike trains, etc., that can't, on their own, do or
mean anything.
As I was cut short by Gordon, who doesn't want to listen to anything but
his own ideas, I didn't continue my theme, but it was basically this:
Spike trains, even though I've been banging on about them, and despite
being the 'language of the brain' (or more like the 'alphabet of the
brain'), aren't the important thing. They are just a low-level component
that underlies the brain's communication with itself.
The important thing is the organisation of them into patterns of
information. Just as with human language, the individual letters don't
matter, the organisation of them into words, paragraphs, etc., does.
Which is why we have so many different alphabets. They are just the
lowest level of structure, and could be anything (this also underlies
the 'substrate indifference' argument, which should be obvious, really.
The high-level patterns of thought are indifferent to the basic
components that are used. Spike trains and neurotransmitters, magnetic
fields and plasma, electrons and logic gates, beer-cans and string. What
they are is irrelevant, as long as they work).
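To make that substrate-indifference point concrete, here is a toy sketch of my own (not from the post): the same high-level pattern, XOR, realised in three different 'substrates'. The function names and the particular substrates chosen are purely illustrative.

```python
# Toy illustration of substrate indifference: one high-level pattern
# (XOR), three different low-level realisations.

def xor_logic(a, b):
    # substrate 1: boolean logic gates
    return (a or b) and not (a and b)

def xor_arith(a, b):
    # substrate 2: integer arithmetic
    return (a + b) % 2 == 1

XOR_TABLE = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}

def xor_table(a, b):
    # substrate 3: a lookup table (could just as well be beer-cans and string)
    return XOR_TABLE[(a, b)]

# All three substrates produce the identical high-level pattern.
for a in (False, True):
    for b in (False, True):
        assert xor_logic(a, b) == xor_arith(a, b) == xor_table(a, b)
```

What the components are made of never enters into it; only the pattern they implement matters.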
I'm not directing this at Gordon, because I know he doesn't want to
listen, but I was going to point out that human language, human brain
language and computer language all use the same principles of having
low-level components that are organised into higher-level ones (in
several distinct tiers), to produce the patterns that we are interested
in. As far as the inner workings of our brains are concerned, patterns
of information are all there is. Where they originate is not only not
important, it's unknown. Just like word tokens in a large language model.
When you think about it, the whole 'grounding' issue is bogus. As I said
long ago now, it's all about associations in the brain (or what passes
for one, like a vast array of GPUs). We don't link the concept of
'horse' directly to any horse. It's all about the many many many
separate details gleaned from the outside world (whatever that consists
of, including a set of training data) and stitched into a set of
patterns that are associated with other patterns.
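Here is a minimal sketch of what I mean by 'patterns associated with other patterns' (my own toy example; the concepts and detail-patterns are made up for illustration). A concept is a node in a web of associations, with no direct link to any object in the world:

```python
# A concept as a bundle of associated detail-patterns, not a direct
# link to any actual horse.
associations = {
    "horse": {"four-legged", "mane", "gallop", "neigh", "rider"},
    "zebra": {"four-legged", "mane", "gallop", "stripes"},
    "car":   {"four-wheeled", "engine", "rider"},
}

def similarity(a, b):
    """Overlap of associated detail-patterns (Jaccard index)."""
    sa, sb = associations[a], associations[b]
    return len(sa & sb) / len(sa | sb)

# 'horse' sits closer to 'zebra' than to 'car' purely through shared
# associations -- no grounding in a real horse required.
print(similarity("horse", "zebra") > similarity("horse", "car"))  # True
```

Whether the detail-patterns came from eyes and ears or from a training set makes no difference to the web itself.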
I disproved, several years ago, Brent's naive idea of a specific
neurotransmitter being the actual experience of a specific colour. It's
very easy. Just count the number of neurotransmitters there are, then
count the number of colours that we can perceive. Just colours, don't
even worry about the millions upon millions of other experiences we're
capable of. The conclusion is inescapable. But, like Gordon, he simply
refuses to listen, and just continues to repeat the same old nonsense
(conceptually and literally).
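The counting argument as back-of-envelope arithmetic, using commonly cited approximate figures (my numbers, not exact ones; roughly a hundred neurotransmitters have been identified, and humans are usually estimated to discriminate at least a million colours):

```python
# One-neurotransmitter-per-colour cannot work: there are vastly more
# discriminable colours than transmitter types. Figures are rough,
# commonly cited estimates.
known_neurotransmitters = 100        # order of magnitude
distinguishable_colours = 1_000_000  # low-end estimate; often quoted higher

print(distinguishable_colours / known_neurotransmitters)  # 10000.0
```

A four-orders-of-magnitude shortfall, before even counting any other experiences.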