<div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif">John,</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif">Oh that’s our problem: you haven’t
yet seen the descriptions of the “1. Weak”, “2. Stronger”, and “3. Strongest”
forms of effing the ineffable explained in this “<a href="https://docs.google.com/document/d/1uWUm3LzWVlY0ao5D9BFg4EQXGSopVDGPi-lVtCoJzzM/edit?usp=sharing" style="color:blue">Objectively,
We are Blind to Physical Qualities</a>” paper, referenced in “<a href="https://canonizer.com/topic/88-Representational-Qualia/6" style="color:blue">Representational
Qualia Theory</a>”.</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif">Our physical knowledge of our right
field of vision exists in our left hemisphere, and vice versa for the left
field of vision. <a href="https://canonizer.com/topic/81-Steven-Lehar/4" style="color:blue">Steven Lehar</a> recommends
thinking of it as a “diorama” of knowledge, split between our brain hemispheres. The corpus callosum can “computationally bind”
these two hemispheres into one composite awareness of what we see. As with “I think, therefore I am,” we cannot doubt
the reality and physical quality of both of these physical hemispheres of
knowledge. We don’t perceive them; they
are the final result of perception, the physical knowledge we are directly aware
of. In other words, the corpus callosum
is performing the “3. Strongest” form of effing the ineffable by enabling both your
right and left hemisphere to be directly aware of the physical knowledge in the
other in one unified conscious experience through computational binding.</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif">The “3. Strongest” form of effing
the ineffable was portrayed as a <a href="https://www.youtube.com/watch?v=X0mAKz7eLRc&t=125s" style="color:blue">neural ponytail
in the Avatar movie</a>. With such a
neural ponytail, you could experience all of the experience, not just half. If your redness were like your partner’s greenness,
you would be directly aware of that physical fact.</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif">Also, I’m not the only one who has
realized that such a neural ponytail could falsify solipsism by enabling us to
be directly aware of physical knowledge someone else was experiencing (or that failing to achieve such a neural
ponytail could verify it). See this:</span><span style="font-family:Arial,sans-serif;color:black"> “<a href="https://blogs.scientificamerican.com/cross-check/a-modest-proposal-for-solving-the-solipsism-problem/" style="color:blue">A
Modest Proposal for Solving the Solipsism Problem</a>” </span><span style="font-size:12pt;font-family:"Times New Roman",serif">in Scientific American.
Note: the reason it is only “modest” is that McGinn is only using the
“2. Stronger” form of effing the ineffable, not the “3. Strongest,” where you
are directly aware of another’s physical knowledge. V. S. Ramachandran was the first to propose
this type of computational binding and effing of the ineffable in his </span><span style="font-family:Arial,sans-serif;color:black">“</span><a href="https://pdfs.semanticscholar.org/efca/ea27e85a89170f55c563623b25de5090f18a.pdf" style="color:blue"><span style="font-size:12pt;font-family:"Times New Roman",serif">3 laws of qualia</span></a><span style="font-size:12pt;font-family:"Times New Roman",serif">”
paper, where he proposed connecting two
brains with a similar “bundle of neurons” in the 1990s. When I presented our paper to him, he
basically admitted he didn’t realize its significance back then.</span></p></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Dec 19, 2019 at 11:16 AM John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Thu, Dec 19, 2019 at 12:48 PM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:</span><br></div></div><div class="gmail_quote"><span style="font-family:"Times New Roman",serif;font-size:12pt"> </span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"> <span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span></span><span style="font-family:"Times New Roman",serif">You can certainly achieve the same functionality
of voluntary choice with an abstract system.
But if you define “voluntary”, like you do consciousness: to be a system
that makes decisions with a system implemented directly on physical qualities,
then you are right. A substrate
independent computer cannot perform “voluntary” actions, per this definition.</span></p></div></blockquote><div><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">I don't understand any of that. If you or I or a computer does X rather than Y there are only 2 possibilities:</font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">1) You I and the AI did it for a reason, that is to say we did it because of cause and effect. </font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">2) <span style="font-family:Arial,Helvetica,sans-serif">You I and the AI did it for <b>no</b> reason, that is to say we did it randomly.</span></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif"><font size="4"><br></font></span></div><div class="gmail_default"><font size="4">There are no other possibilities because everything either happens for a reason or it doesn't happen for a reason.</font></div><div class="gmail_default"><font size="4"><br></font></div><div class="gmail_default"><font size="4"> John K Clark </font></div>
</div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>