[ExI] Symbol Grounding

Brent Allsop brent.allsop at gmail.com
Mon May 1 21:32:35 UTC 2023


Hi Ben,

On Sat, Apr 29, 2023 at 5:05 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On 29/04/2023 10:29, Giovanni Santostasi wrote:
> > Hi Ben,
> > I see, sorry, I'm tired, lol. Yeah, it makes sense now and I understand
> > what you tried to say, which is basically what I try to say. The
> > components are not what matters but the process. I see why I was
> > confused to hear this sensible argument from Brent, lol.
> > Ok...
>
>
> Yes. The 'missing ingredient' is organisation. The process. Information.
> Without this, you just have a pile of bricks, girders,
> neurotransmitters, spike trains, etc., that can't, on their own, do or
> mean anything.
>
> As I was cut short by Gordon, who doesn't want to listen to anything but
> his own ideas, I didn't continue my theme, but it was basically this:
> Spike trains, even though I've been banging on about them, despite being
> the 'language of the brain' (or more like the 'alphabet of the brain')
> aren't the important thing. They are just a low-level component that
> underlies the brain's communication with itself.
>
> The important thing is the organisation of them into patterns of
> information. Just as with human language, the individual letters don't
> matter, the organisation of them into words, paragraphs, etc., does.
> Which is why we have so many different alphabets. They are just the
> lowest level of structure, and could be anything (this also underlies
> the 'substrate indifference' argument, which should be obvious, really.
> The high-level patterns of thought are indifferent to the basic
> components that are used. Spike trains and neurotransmitters, magnetic
> fields and plasma, electrons and logic gates, beer-cans and string. What
> they are is irrelevant, as long as they work).
>
> I'm not directing this at Gordon, because I know he doesn't want to
> listen, but I was going to point out that human language, human brain
> language and computer language all use the same principles of having
> low-level components that are organised into higher-level ones (in
> several distinct tiers), to produce the patterns that we are interested
> in. As far as the inner workings of our brains are concerned, patterns
> of information are all there is.


You guys seem forever to be interested only in, and always insisting on
changing the subject to, everything that has nothing to do with subjective
properties.  In my opinion, you need to get rid of all the complexity and
organization you are talking about here.  Get rid of all the recursion, or
"communication with itself", that Giovanni is always talking about.  Get
rid of ALL the intelligence; get rid of any subject (knowledge of a spirit
in the brain) being aware of the qualities in a first-person way; get rid
of the eyes and any perception system.  Stop talking about the neural
correlates of, or the causes of, consciousness.  Instead, just focus on the
qualities themselves, not on what causes them.  Stop assuming that
qualities arise from function.  Instead, accept the obvious: function runs
on top of properties, not the other way around.

In my opinion, this is how everyone is looking to figure out consciousness:
everyone thinks it needs to be something hard, and THIS is the reason
everyone is missing what is, in reality, quite simple.  Simple colorness
qualities (much of reality really has them) that can be computationally
bound into one composite qualitative experience, which does computation in
a way that is more powerful than the brute-force logic gates we use in
today's CPUs.

Just make a simple physical device.  All it is is two pixels of subjective
qualities.  One of them is a constant redness quality, and the other is
switching from redness to greenness.  The computationally bound system is
just dreaming of this simple composite subjective two-pixel experience: one
pixel of redness, computationally bound with another pixel that is changing
from redness to greenness.  There is no complexity; the system is
representing at most two bits of information: 11 then 10 then 11 then
10... repeated.  Um, I mean redness|redness then redness|greenness then
redness|redness then redness|greenness... repeated.  I would define the
second one to be conscious, and not the first.  Does anyone else agree that
something this simple fits under the definition of being phenomenally
conscious, or of being like something?
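For concreteness, the two-pixel device described above can be written down
as a trivial state sequence.  This is only an illustrative sketch: the
labels "redness" and "greenness" are just strings I've chosen, and nothing
in the code claims to produce subjective qualities; it only makes concrete
how little information the system represents.

```python
# Sketch of the hypothetical two-pixel device as a repeating state sequence.
# The quality names are labels only; the point is that the whole system
# carries at most two bits of information per step: 11, 10, 11, 10, ...

from itertools import cycle

# Pixel 1 holds a constant "redness"; pixel 2 alternates red/green.
states = cycle([("redness", "redness"), ("redness", "greenness")])

def as_bits(state):
    # Encode redness as 1 and greenness as 0, per the 11/10 notation above.
    return "".join("1" if q == "redness" else "0" for q in state)

sequence = [next(states) for _ in range(4)]
print([as_bits(s) for s in sequence])  # ['11', '10', '11', '10']
```

The abstract bit description and the quality description pick out the same
two-state cycle; the disagreement in the thread is only over whether the
bits or the qualities are the fundamental thing.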



> Where they originate is not only not
> important, it's unknown. Just like word tokens in a large language model.
>

I don't believe this.  Half of our subjective visual awareness is in one
hemisphere, and half in the other.  My understanding is that it is very
clear how this visual bubble world
<https://canonizer.com/videos/consciousness?chapter=the+world+in+your+head&format=360>
space is laid out in the visual cortex.  It is very clear that when a
particular region suffers damage, it is THAT region in the subjective
bubble world which becomes a blind spot.  Steven Lehar (who consulted on
the bubble world
<https://canonizer.com/videos/consciousness?chapter=the+world+in+your+head&format=360>
video) argues that the 3D model must be laid out in the brain very much
like the way we experience it, and that there are important computational
reasons why adjacent voxel elements of our subjective knowledge must be
adjacent to each other in the neural tissue.



> When you think about it, the whole 'grounding' issue is bogus. As I said
> long ago now, it's all about associations in the brain (or what passes
> for one, like a vast array of GPUs). We don't link the concept of
> 'horse' directly to any horse. It's all about the many many many
> separate details gleaned from the outside world (whatever that consists
> of, including a set of training data) and stitched into a set of
> patterns that are associated with other patterns.
>
> I disproved, several years ago, Brent's naive idea of a specific
> neurotransmitter being the actual experience of a specific colour. It's
> very easy. Just count the number of neurotransmitters there are, then
> count the number of colours that we can perceive. Just colours, don't
> even worry about the millions upon millions of other experiences we're
> capable of. The conclusion is inescapable. But, like Gordon, he simply
> refuses to listen, and just continues to repeat the same old nonsense
> (conceptually and literally).
>

Thank you for counting these up.  That is a good data point.  So, I chalk
this up as yet another piece of evidence that it needs to be more than
just neurotransmitters.  And, still, the point of glutamate is
falsifiability.  THAT is what this field is lacking, so easy falsifiability
is the most important reason I'm still using glutamate as a hypothetical
possibility: it is the easiest for anyone to understand, and to falsify.

The bottom line is, when we look at something, we have a
composite qualitative experience.  There must be something that is this
experience, and each of its qualities.  Redness may not be glutamate, but
it must be something in the brain which is objectively observable.
