[ExI] Symbol Grounding

Giovanni Santostasi gsantostasi at gmail.com
Mon May 1 21:52:02 UTC 2023


*Stop assuming that qualities arise from function.  Instead, accept the
obvious, that function runs on top of properties, not the other way around.*
Brent,
I tried to explain to you that there are no properties. This is true for
fundamental particles, and it is true for more complex phenomena such as
consciousness and redness.
Do an exercise: start with something simple you know, and tell me what a
property of that something simple is.
Go ahead. Don't hide behind stuff like redness that is not fully
understood. Go ahead and tell me something about stuff we know better.
I will start. I will pretend to be Brent.
Brent: Giovanni, what about the wetness of water? Is it not a property of water?
Giovanni: No, Brent, water is not wet. Besides, water has multiple states
(it can be a gas, or a solid). The sensation of wetness is due to the
interaction of water and our skin. What you feel as wetness is actually a
change in temperature that your body perceives when in contact with water,
blah blah.
Really, there is no single thing that science considers a property.
I have tried to explain this to you. Do you think I'm changing the topic?
No, this is perfectly the topic. You are looking for properties and I tell
you there are no such things. It is not changing the topic.
I even tried to tell you that this business of properties is how the Greek
philosophers thought about nature, and it turned out that idea was full of
shit. It didn't work as a way to explain how the universe works.
Why do you want to go back to that useless idea?

Giovanni






On Mon, May 1, 2023 at 2:38 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Ben,
>
> On Sat, Apr 29, 2023 at 5:05 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> On 29/04/2023 10:29, Giovanni Santostasi wrote:
>> > Hi Ben,
>> > I see, sorry, I'm tired, lol. Yeah, it makes sense now and I understand
>> > what you tried to say, which is basically what I try to say. The
>> > components are not what matters but the process. I see why I was
>> > confused to hear this sensible argument from Brent, lol.
>> > Ok...
>>
>>
>> Yes. The 'missing ingredient' is organisation. The process. Information.
>> Without this, you just have a pile of bricks, girders,
>> neurotransmitters, spike trains, etc., that can't, on their own, do or
>> mean anything.
>>
>> As I was cut short by Gordon, who doesn't want to listen to anything but
>> his own ideas, I didn't continue my theme, but it was basically this:
>> Spike trains, even though I've been banging on about them as
>> the 'language of the brain' (or more like the 'alphabet of the brain'),
>> aren't the important thing. They are just a low-level component that
>> underlies the brain's communication with itself.
>>
>> The important thing is the organisation of them into patterns of
>> information. Just as with human language, the individual letters don't
>> matter; the organisation of them into words, paragraphs, etc., does.
>> Which is why we have so many different alphabets. They are just the
>> lowest level of structure, and could be anything (this also underlies
>> the 'substrate indifference' argument, which should be obvious, really:
>> the high-level patterns of thought are indifferent to the basic
>> components that are used. Spike trains and neurotransmitters, magnetic
>> fields and plasma, electrons and logic gates, beer-cans and string. What
>> they are is irrelevant, as long as they work).
>>
>> I'm not directing this at Gordon, because I know he doesn't want to
>> listen, but I was going to point out that human language, human brain
>> language and computer language all use the same principles of having
>> low-level components that are organised into higher-level ones (in
>> several distinct tiers) to produce the patterns that we are interested
>> in. As far as the inner workings of our brains are concerned, patterns
>> of information are all there is.
>
>
> You guys seem to forever only be interested in, and always insisting on
> changing the subject to, everything that has nothing to do with subjective
> properties.  In my opinion, you need to get rid of all the complexity and
> organization you are talking about here.  Get rid of all the recursion, or
> "communication with itself" Giovanni is always talking about.  Get rid of
> ALL the intelligence, get rid of any subject (knowledge of a spirit in the
> brain) being aware of the qualities in a first person way, get rid of the
> eyes, and any perception system.  Stop talking about the neural correlates
> of, or the causes of consciousness.  And instead, just focus on the
> qualities, themselves, not what causes them.  Stop assuming that qualities
> arise from function.  Instead, accept the obvious, that function runs on
> top of properties, not the other way around.
>
> In my opinion, this is the problem with the way everyone is trying to
> figure out consciousness: everyone thinks it needs to be something hard,
> and THIS is the reason everyone is missing what is, in reality, quite
> simple.  Simple colorness qualities (much of reality really has them) that
> can be computationally bound into one composite qualitative experience
> that does computation in a way which is more powerful than the brute-force
> logic gates we use in today's CPUs.
>
> Just make a simple physical device.  All it is is two pixels of subjective
> qualities.  One of them is a constant redness quality, and the other is
> switching from redness to greenness.  The computationally bound system is
> just dreaming of this simple composite subjective two-pixel experience:
> one pixel of redness, computationally bound with another pixel that is
> changing from redness to greenness.  There is no complexity; the system is
> representing at most two bits of information:  11 then 10 then 11 then
> 10... repeated.  Um, I mean redness|redness then redness|greenness then
> redness|redness then redness|greenness... repeated.  I would define the
> second one to be conscious, and not the first one.  Does anyone else agree
> with something this simple fitting under the definition of being
> phenomenally conscious, or like something?
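[Editorial sketch: Brent's two-pixel device above, as a toy Python model. The function and variable names are illustrative inventions, not Brent's; the strings stand in for the qualities only as labels, and the sketch captures just the information content he describes, at most two bits per step.]

```python
# Toy model of the two-pixel device: one pixel held at a constant
# "redness" quality, the other alternating redness <-> greenness.
# The quality labels are just strings here; this models only the
# information content, not the qualities themselves.

def two_pixel_device(steps):
    """Yield the composite two-pixel state at each time step."""
    for t in range(steps):
        pixel_a = "redness"                                  # constant pixel
        pixel_b = "redness" if t % 2 == 0 else "greenness"   # alternating pixel
        yield (pixel_a, pixel_b)

def encode(state):
    """The abstract encoding Brent mentions: redness=1, greenness=0."""
    return "".join("1" if q == "redness" else "0" for q in state)

# The sequence is 11 then 10 then 11 then 10 ... repeated.
print([encode(s) for s in two_pixel_device(4)])  # ['11', '10', '11', '10']
```

The point of the sketch is how little structure there is: the device's whole behaviour fits in a two-symbol alternation, which is exactly why Brent argues the interesting part must be the qualities, not the information.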
>
>
>
>> Where they originate is not only not
>> important, it's unknown. Just like word tokens in a large language model.
>>
>
> I don't believe this.  Half of our subjective visual awareness is in one
> hemisphere, and half in the other.  My understanding is that it is very
> clear how this visual bubble world
> <https://canonizer.com/videos/consciousness?chapter=the+world+in+your+head&format=360>
> space is laid out in the visual cortex.  It is very clear that when a
> particular region suffers damage, it is THAT region in the subjective
> bubble world which becomes a blind spot.  Steven Lehar (who consulted on
> the bubble world
> <https://canonizer.com/videos/consciousness?chapter=the+world+in+your+head&format=360> video)
> argues that the 3D model must be laid out in the brain, very similar to the
> way we experience it, and there are important computational reasons why
> adjacent voxel elements of our subjective knowledge must be adjacent to
> each other in the neural tissue.
>
>
>
>> When you think about it, the whole 'grounding' issue is bogus. As I said
>> long ago now, it's all about associations in the brain (or what passes
>> for one, like a vast array of GPUs). We don't link the concept of
>> 'horse' directly to any horse. It's all about the many many many
>> separate details gleaned from the outside world (whatever that consists
>> of, including a set of training data) and stitched into a set of
>> patterns that are associated with other patterns.
>>
>> I disproved, several years ago, Brent's naive idea of a specific
>> neurotransmitter being the actual experience of a specific colour. It's
>> very easy. Just count the number of neurotransmitters there are, then
>> count the number of colours that we can perceive. Just colours, don't
>> even worry about the millions upon millions of other experiences we're
>> capable of. The conclusion is inescapable. But, like Gordon, he simply
>> refuses to listen, and just continues to repeat the same old nonsense
>> (conceptually and literally).
>>
>
> Thank you for counting these up.  That is a good data point.  So, I chalk
> this up to yet another piece of evidence that it needs to be more than
> just neurotransmitters.  And still, the point of glutamate is
> falsifiability.  THAT is what this field is lacking, so easy falsifiability
> is the most important reason I'm still using glutamate as a hypothetical
> possibility: it is easy for anyone to understand, and to falsify.
>
> The bottom line is, when we look at something, we have a
> composite qualitative experience.  There must be something that is this
> experience, and each of its qualities.  Redness may not be glutamate, but
> it must be something in the brain which is objectively observable.
>
>
>
>
>

