[ExI] Why stop at glutamate?

Jason Resch jasonresch at gmail.com
Tue Apr 11 15:50:43 UTC 2023


On Tue, Apr 11, 2023, 11:30 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Tue, Apr 11, 2023 at 7:45 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, Apr 11, 2023, 9:20 AM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Tue, Apr 11, 2023 at 3:21 AM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Tue, Apr 11, 2023, 12:05 AM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>>> Other parts of the brain decode the meaning of the signals they
>>>>>> receive.
>>>>>>
>>>>>
>>>>> They decode it to WHAT?  Decoding from one code to another code, none
>>>>> of which is like anything,
>>>>>
>>>>
>>>> You are now theorizing that there is nothing it is like to be the
>>>> process that decodes a signal and reaches some state of having
>>>> determined which, from a broad array of possibilities, that signal
>>>> represents. That is what qualia are: discriminations within a
>>>> high-dimensional space.
>>>>
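
(To make "discriminations within a high-dimensional space" concrete, here
is a toy sketch in Python; the prototype vectors and the nearest-neighbor
rule are illustrative assumptions, not a claim about how brains do it:

    import numpy as np

    # Hypothetical "qualities" as points in a high-dimensional
    # signal space (1000 dimensions here, chosen arbitrarily).
    rng = np.random.default_rng(0)
    prototypes = {name: rng.normal(size=1000)
                  for name in ("red", "green", "blue", "yellow")}

    def discriminate(signal):
        # Pick the quality whose prototype the incoming signal is
        # closest to: a discrimination among many possibilities.
        return min(prototypes,
                   key=lambda n: np.linalg.norm(signal - prototypes[n]))

    # A noisy "red" signal is still discriminated as red.
    noisy = prototypes["red"] + rng.normal(scale=0.1, size=1000)
    print(discriminate(noisy))  # -> red

The point is only that "having determined which" is itself a state the
process reaches.)
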
>>>>> nor are they grounded, is not yet grounding anything.  It is still
>>>>> just a code with no grounded referent, so you can't truly decode them
>>>>> in any meaningful way.
>>>>>
>>>>>
>>>> What does it mean to ground something? Can you explain, in detail, how
>>>> you see grounding achieved?
>>>>
>>>
>>> It is all about what is required (experimentally) to get someone to
>>> experience, stand-alone, with no grounding dictionary required, "old
>>> guy's redness".  (The requirement for grounding, as in: "oh, THAT is
>>> what old guy's redness is like.")
>>>
>>
>> You would need to be the consciousness of the old guy's brain to ever
>> know that.
>>
>
> I've had this identical conversation with multiple other people, like
> John Clark.  Our response is canonized in the RQT camp statement
> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>.
> In summary, it's the difference between elemental qualities and
> composite qualities.  Of course, if you consider redness to be like the
> entire Mona Lisa, it is going to be much more difficult to communicate
> what all that is like, and you have to transmit all the pixels to
> accomplish that.  All that is required is elemental codes that are
> grounded in elemental properties, and to send that grounded code, for
> each pixel of the Mona Lisa, to that person.
> P.S. The person receiving the coded message could decode the codes
> representing the Mona Lisa with redness and greenness inverted, if they
> wanted.  I guess you would consider that to be the same painting?
>

No.

There is no objective image (i.e., imagining) of the Mona Lisa. There is
just some arrangement of atoms in the Louvre. Each person creates the image
anew in their head when they look at it, but there is no way of sharing or
comparing the experiences between any two individuals.
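
As a minimal sketch of why (hypothetical codes and palettes; nothing here
models a brain): the same stream of elemental codes decodes into two
internally consistent renderings that nothing in the codes themselves can
compare:

    # Hypothetical elemental codes for five pixels of an image.
    codes = ["R", "G", "G", "R", "B"]

    # Two decoders, each internally consistent, with red and green
    # swapped relative to one another.
    palette_a = {"R": "red",   "G": "green", "B": "blue"}
    palette_b = {"R": "green", "G": "red",   "B": "blue"}

    image_a = [palette_a[c] for c in codes]
    image_b = [palette_b[c] for c in codes]

    # Both renderings preserve the same structure (which pixels match
    # which), yet nothing in the code stream says whose "red" is whose.
    print(image_a)  # ['red', 'green', 'green', 'red', 'blue']
    print(image_b)  # ['green', 'red', 'red', 'green', 'blue']
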

If you think otherwise, could you explain how two people with different
brains could come to know how the other perceives?

I liken the problem to two AIs, each in its own virtual world, trying to
work out a common understanding of a unit of distance between them, while
having no common reference of length.
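
To put the analogy in code (made-up numbers; the 37.2 is an arbitrary
assumed scale factor): if each AI can only report ratios of lengths
measured in its own private units, any rescaling of one world is invisible
to the other, so no exchange of reports can fix a common unit:

    import numpy as np

    # Each AI measures distances only in its own private units.
    world_a = np.array([1.0, 2.0, 4.0])   # AI-A's measured lengths
    world_b = world_a * 37.2              # same structure, different unit

    def report(lengths):
        # All either AI can communicate are unit-free ratios.
        return lengths / lengths[0]

    # The reports are indistinguishable, so the unit stays ungrounded.
    print(np.allclose(report(world_a), report(world_b)))  # True
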

Jason