[ExI] Why stop at glutamate?

Jason Resch jasonresch at gmail.com
Mon Apr 10 23:46:29 UTC 2023


On Mon, Apr 10, 2023, 7:08 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Mon, Apr 10, 2023 at 11:11 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Sun, Apr 9, 2023 at 5:20 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> If this doesn't completely destroy anybody's illusion that a brain
>>> made of meat (and particular stuff like glutamate) is necessary, I don't
>>> know what else could. These people will always believe that meat brains
>>> are necessary because God made them so. No amount of science would
>>> convince them.
>>>
>>> 2) You can train an AI to recognize activation patterns in the brain and
>>> associate them with particular stimuli. This has been tried with words and
>>> even images, in both waking and dreaming states. Here is an example that
>>> should blow everybody's mind:
>>> https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2.full.pdf
>>> Again, from this study we can see that it doesn't matter how the pattern
>>> is generated, only that there is a pattern of activation. These patterns
>>> are unique for each individual, but statistically they are similar enough
>>> that, after training over many subjects, you can give a statistical
>>> estimate that the person is seeing or even thinking about something in
>>> particular. Again, IT WORKS, people!
>>>
>>
>> I consider this a knock-down argument against the functional role of
>> glutamate (or other molecules) in the sensation of red. These tests use
>> only blood-flow data, which is a proxy for neural activity. They are not
>> measuring ratios of specific neurotransmitters or molecules, or
>> introspecting the activity within the cell; the fMRI looks only at which
>> neurons are more vs. less active. And yet, from this data we can extract
>> images and colors. This proves that neural activity embodies this
>> information.
>>
>
> I guess I've failed to communicate something important about why we use
> glutamate.  The primary reason we use glutamate is precisely because of
> its ease of falsifiability.  I fully expect glutamate to be falsified as
> redness (someone will experience redness with no glutamate present); then
> something different from glutamate will be tried, and eventually something
> will be experimentally proven to be redness.  Easy and obvious
> falsifiability is what everyone is missing, so THAT is what I'm most
> attempting to communicate with the glutamate example.
>
> If you guys think there are knock-down arguments for why a redness
> quality is simply due to recursive network configurations (I am not yet
> convinced and am still predicting otherwise; see below.  Also, it's much
> easier to say 'glutamate' than whatever stuff you guys are talking about,
> which nobody is concisely stating and which I have trouble understanding),
> then please, every time I say 'glutamate', substitute anything you like,
> such as 'Recursive network model A' or any other yet-to-be-falsified
> theory.  And let's leave it up to the experimentalists to prove who is
> right, as good, humble, theoretical scientists should.
>
>
> P.S.
> At least that paper
> <https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2.full.pdf> you
> referenced has pictures (composed of real qualities), not just abstract
> text (which tells you nothing about qualities), since text alone would be
> completely meaningless, right?
> But why don't you guys ask the publishers of that paper how they came up
> with the qualities displayed in the images depicting what they are
> detecting?
> Here is a link to Jack Gallant's work
> <https://www.youtube.com/watch?v=6FsH7RK1S2E&t=1s>, done over a decade
> ago, of which all these modern examples are just derivative works, easily
> done with modern AI tools.
> When I saw Jack Gallant's work
> <https://www.youtube.com/watch?v=6FsH7RK1S2E&t=1s> back then, I knew he
> had a problem determining what qualities to display on his screens when
> depicting what he was detecting.  The fMRI provides only abstract,
> qualityless data, which is meaningless without a quality-grounded
> dictionary.  So I called him and asked him how he knew what qualities to
> display.  He immediately admitted they "false-colored" them (Jack
> Gallant's words).  They used the original color codes in the digital
> images they were showing to their subjects to determine what color to
> display.  In other words, they were grounding their colors in physical
> light, which is nothing like either the properties of a strawberry, which
> the light merely represents, or the very different properties of the
> conscious knowledge they are detecting and describing with qualityless
> abstract text.  As Giovanni admits, they correct for any changes in the
> physical properties or qualities they are detecting, so they can falsely
> map all those diverse sets of properties back to the same false-colored
> light, blinding themselves to any possible inverted qualities they may be
> detecting in all that diversity.
>
> By the way, I added this Japanese paper
> <https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2.full.pdf> as
> yet another example to the list of quality-blind papers here
> <https://canonizer.com/topic/603-Color-Exprnc-Observation-Issue/1-Agreement>,
> which includes Jack Gallant's work that uses only one falsely grounded
> abstract word for all things representing 'red'.
>
> If anyone finds a peer-reviewed paper that is not quality blind (other
> than mine
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>,
> which is about to be published), will you please let me know?  I will
> trust someone who believes and understands that qualities are necessarily
> real properties of real hallucinations in our brains.  I predict they are
> just the physical properties they are detecting, but which they are only
> abstractly describing and then false-coloring.
>


Brent,

I appreciate the added detail and correction. If the colors in the
reconstructed images are false colors, or are inferred by the AI from the
reconstructed image, then I retract my statement that this is a knock-down
argument against the molecular basis of color qualia. I still suspect color
information is encoded in the patterns of neural activity, but it may be at
a low enough level that the fMRI lacks the spatial resolution to detect it.
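
To make that claim more concrete, here is a minimal, purely illustrative
sketch in Python, run on simulated data. The voxel count, the three color
classes, and the scikit-learn logistic-regression decoder are all my own
assumptions for the sake of illustration, not anything taken from the paper:

# Minimal sketch: decoding a viewed color from simulated voxel activity.
# Assumptions (mine, not the paper's): 200 voxels, 3 color classes, and a
# distinct noisy activation pattern per class -- no molecular data at all.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_colors = 300, 200, 3

# Each color evokes its own characteristic spatial pattern of activity.
patterns = rng.normal(size=(n_colors, n_voxels))
labels = rng.integers(0, n_colors, size=n_trials)
activity = patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    activity, labels, test_size=0.3, random_state=0)

# The decoder sees only which voxels are more or less active per trial.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))

The toy decoder never sees any molecular information, only coarse activity
levels per voxel, which is the sense in which I suspect color information is
carried by the activity patterns themselves; whether real fMRI data has the
spatial resolution for this is exactly the open question above.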

Jason