<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">I still have not heard from anyone (Will?) whether all they were doing was stimulating the ascending reticular formation, which has long been known to activate the cortex in a broad manner (and improve learning).</div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">bill w</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Feb 15, 2020 at 11:41 AM John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Sat, Feb 15, 2020 at 11:30 AM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:</span><br></div></div><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><div dir="auto"><i>I first realized it was important to distinguish between reality and knowledge of reality in an AI class as an undergraduate. We were trying to program a vision system for a robot that could manipulate blocks on a table.<span class="gmail_default" style="font-family:arial,helvetica,sans-serif"> </span>They were instructing us how to build 3D models of the blocks from 2D camera data. 
I thought this had to be the wrong way to do things, since I naively assumed we didn't need to do all that extra work because we were just "directly aware of the blocks on the table."</i></div></div></div></blockquote><div><br></div><div class="gmail_default"><font face="arial, helvetica, sans-serif"></font><font size="4">So the "extra work" proved not to be extra at all, not if the robot was to behave in the way you wanted it to.</font></div><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><div dir="auto"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>As Representational Qualia Theory defines it: "Consciousness as computationally bound elemental subjective qualities like redness and greenness."</i></div></div></div></blockquote><div><font size="4"><br></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">I still don't know what "<span style="font-family:Arial,Helvetica,sans-serif">computationally bound" means, but I do know that red and green do not have "</span><span style="font-family:Arial,Helvetica,sans-serif">elemental subjective qualities" any more than bigness or smallness does.</span></font></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><div dir="auto"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> <i></i></span><i>John, this predicts there are two types of seeing: the kind that is done by robots, our subconscious, and blindsight, where there is no conscious computational binding, and the conscious kind, where there is binding. 
</i></div></div></div></blockquote><div><font size="4"><br></font></div><div class="gmail_default"><font size="4"><font face="arial, helvetica, sans-serif"></font>If these two types of seeing end up producing the same behavior, then science can never distinguish between them, and Darwinian evolution could never have created the type that produces not only intelligence but consciousness too; and yet here I am, a conscious being. On the other hand, if the two types of seeing end up producing different behaviors, then any intelligent activity that would convince you that one of your fellow humans was conscious, and not sleeping or under anesthesia or dead, should, if you're being logical, convince you that a robot who behaved in the same intelligent way was just as conscious as the human.</font></div><div class="gmail_default"><br></div><font size="4">And as I keep emphasising, as a practical matter it's not really important whether humans think an AI is conscious, but i<span class="gmail_default" style="font-family:arial,helvetica,sans-serif">t</span> is important that the AI think humans are conscious. If the AI thinks we'<span class="gmail_default" style="font-family:arial,helvetica,sans-serif">re</span> conscious like it is, it might feel some empathy toward us, but if it thinks we're just primitive meat machines then the human race is in deep trouble.</font></div><div class="gmail_quote"><font size="4"><br></font></div><div class="gmail_quote"><font size="4"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"> John K Clark</span></font></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>