[ExI] Fwd: Chalmers

Brent Allsop brent.allsop at gmail.com
Thu Dec 19 19:42:29 UTC 2019


Oh, that’s our problem: you haven’t yet seen the descriptions of the “1.
Weak”, “2. Stronger”, and “3. Strongest” forms of effing the ineffable
explained in the “Objectively, We are Blind to Physical Qualities”
paper, referenced in “Representational Qualia Theory”.

Our physical knowledge of our right field of vision exists in our left
hemisphere, and vice versa for the left field of vision.  Steven Lehar
<https://canonizer.com/topic/81-Steven-Lehar/4> recommends thinking of it
as a “diorama” of knowledge, split between our brain hemispheres.  The
corpus callosum can “computationally bind” these two hemispheres into one
composite awareness of what we see.  Like “I think, therefore I am,” we
cannot doubt the reality and physical quality of both of these physical
hemispheres of knowledge.  We don’t perceive them, they are the final
result of perception, the physical knowledge we are directly aware of.  In
other words, the corpus callosum is performing the “3. Strongest” form of
effing the ineffable by enabling both your right and left hemisphere to be
directly aware of the physical knowledge in the other in one unified
conscious experience through computational binding.

The “3. Strongest” form of effing the ineffable was portrayed as a neural
ponytail in the Avatar movie
<https://www.youtube.com/watch?v=X0mAKz7eLRc&t=125s>.  With such a neural
ponytail, you could experience all of the experience, not just half.  If
your redness was like your partner’s greenness, you would be directly aware
of such physical facts with such a neural ponytail.

Also, I’m not the only one who has realized that such a neural ponytail
could falsify solipsism by enabling us to be directly aware of physical
knowledge someone else was experiencing (or that failing to achieve such
a neural ponytail could verify it).  See “A Modest Proposal for
Solving the Solipsism Problem”
in Scientific American.  Note: the reason it is only “modest” is because
McGinn is only using the “2. Stronger” form of effing the ineffable, not
the “3. Strongest”, where you are directly aware of another’s physical
knowledge.  V. S. Ramachandran was the first to propose this type of
computational binding and effing of the ineffable in his “3 laws of qualia”
paper in the 90s, where he proposed connecting two brains with a similar
“bundle of neurons.”  When I presented our paper to him he basically
admitted he didn’t realize its significance way back then.

On Thu, Dec 19, 2019 at 11:16 AM John Clark via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Dec 19, 2019 at 12:48 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>>  > You can certainly achieve the same functionality of voluntary choice
>> with an abstract system.  But if you define “voluntary”, like you do
>> consciousness: to be a system that makes decisions with a system
>> implemented directly on physical qualities, then you are right.  A
>> substrate independent computer cannot perform “voluntary” actions, per this
>> definition.
> I don't understand any of that. If you or I or a computer does X rather
> than Y there are only 2 possibilities:
> 1) You, I, and the AI did it for a reason, that is to say we did it because
> of cause and effect.
> 2) You, I, and the AI did it for *no* reason, that is to say we did it
> randomly.
> There are no other possibilities because everything either happens for a
> reason or it doesn't happen for a reason.
>  John K Clark
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat