<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Tue, Dec 25, 2018 at 12:03 PM Brent Allsop <</span><a href="mailto:brent.allsop@gmail.com" target="_blank" style="font-family:Arial,Helvetica,sans-serif">brent.allsop@gmail.com</a><span style="font-family:Arial,Helvetica,sans-serif">> wrote:</span><br></div></div><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>How do you know
what it is like to be a bat,</span></p></div></blockquote><div><br></div><div class="gmail_default" style=""><font face="arial, helvetica, sans-serif"></font><font size="4">That is easily answered: you can't. To do that you'd have to turn into a bat, and even then you wouldn't know, because you wouldn't be you; you'd be a bat that didn't know what it's like to be a human.</font></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>what did Mary learn, when she experienced red for
the first time even though she knew, abstractly, everything about red, before
she experienced it for the first time? How
do you “eff the ineffable” and all that.
In my opinion, this is the only hard problem. </i></span></p></div></blockquote><div><br></div><div class="gmail_default" style=""><font face="arial, helvetica, sans-serif"></font><font size="4">And suppose I gave you answers to all these questions: why would you believe me? What sort of supporting evidence could I give that would make anyone say "yes, you must be correct"? I don't see how there could be anything. That's why I think the "easy" problem is far more profound than the hard one. If I have an idea and say that if matter and energy are arranged in a certain way it will behave intelligently, you can try it for yourself and see if it works. If it does then I'm right; if it doesn't then I'm wrong. </font><span style="font-size:large">It is impossible to do the same thing, or anything close to it, with consciousness, even in theory.</span></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span><i>what is required to bridge
the explanatory gap, is to discover which set of our abstract descriptions of physics
in the brain should be interpreted as a redness, and a greenness physical
quality, and so on. </i></span></p></div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">Once the "easy" problem is solved, you could explain why red and green objects cause an intelligent being to behave differently, and that's as good as it's ever going to be if it's a brute fact that consciousness is the way data feels when it is being processed. And I don't think a chain of iterated "why?" questions can continue forever; I think eventually you reach a fundamental level and the sequence terminates in a brute fact. Every event need not have a cause.</font><span style="font-family:Arial,Helvetica,sans-serif"> </span></div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Once an experimentalist
does this, we will then be able to “eff the ineffable” or bridge the explanatory
gap. </i></span></p></div></blockquote><div><br></div><font size="4">Even if the explanation the experimentalist<span class="gmail_default" style="font-family:arial,helvetica,sans-serif"> gives is correct, there is no way for him to prove it is correct, even to himself. Solving the so-called hard problem would be equivalent to proving that solipsism is untrue, and I see no way to ever do that, even theoretically. </span></font><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>In other words, the prediction
being made in the “Representational Qualia Theory” camp needs to be verified by
experimentalists, as the theory predicts is about to happen, before it will be a real solution to the qualitative hard
problem.</i></span></p></div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">If you could do that then you'd have proof the "easy" problem had been solved, not the hard one. </font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"><br></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">John K Clark</font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"><br></font></div><br></div><div><br></div><div> </div><span style="font-family:"Times New Roman",serif;font-size:12pt"><i> </i></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><br></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Dec 25, 2018 at 9:29 AM William Flynn Wallace <<a href="mailto:foozler83@gmail.com" target="_blank">foozler83@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">coming up with a theory of consciousness is easy</span><span class="gmail_default" style="color:rgb(34,34,34);font-size:large;font-family:arial,helvetica,sans-serif"> but</span><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"> coming up with a theory of intelligence is not. John Clark</span><br></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">Just what sort of theory do you want, John? Any abstract entity like intelligence, love, hate, creativity, has to be dragged down to operational definitions involving measurable things. 
For many years the operational definition of intelligence has been the scores on an intelligence test, and of course there are many different opinions as to what tests are appropriate, meaning in essence that people differ on just what intelligence is.</span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">The problem is that it is not anything. Oh, it is reducible in theory to actions in the brain - neurons and hormones and who knows what from the glia. So is love those actions as well, and every other thing you can think of. But people have generally resisted reductionism in this area. Me too, until someone can find a use for it.</span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">Look up the word 'nice' and you will find a trail of very different meanings. Just what meaning is correct? All of them - at least they were true at the time a particular use occurred.</span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">Intelligence is that way too - it is whatever we want to mean by the word. 
Most want to use it in a way that means one thing (usually determined by factor analysis). Some want to call it several things which may intercorrelate to some extent. The first idea usually wins out. </span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">Whatever it is, it is the most useful test in existence because it correlates with and thus predicts more things than any other test in existence.</span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">So - the best theory is the one which predicts more things in the 'real' world than any other, and the operational definition wins. And nobody is really happy with that. 
I can't understand it.</span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">bill w</span></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Dec 25, 2018 at 9:29 AM John Clark <<a href="mailto:johnkclark@gmail.com" target="_blank">johnkclark@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Fri, Dec 21, 2018 at 12:08 PM Brent Allsop <<a href="mailto:brent.allsop@gmail.com" target="_blank">brent.allsop@gmail.com</a>> wrote:</span><br></div></div><div class="gmail_quote"><div dir="ltr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span> <i>we've launched Canonizer 2.0.</i></div><div><i>My Partner Jim Bennett just put together this video:</i></div><div><br></div><div><a href="https://vimeo.com/307590745" target="_blank">https://vimeo.com/307590745</a></div></div></div></blockquote><div><br></div><font size="4">I notice that the third most popular topic on the Canonizer is "the hard problem" (beaten only by theories of consciousness and God). Apparently this too has something to do with consciousness, but it would seem to me the first order of business should be to state exactly what general sort of evidence would be sufficient to consider the problem solved. 
I think the evidence from biological evolution is overwhelming that if you'd solved the so-called "easy problem," which deals with intelligence, then you've come as close to solving the "hard problem" as anybody is ever going to get. </font><br></div><div class="gmail_quote"><br></div><font size="4">I also note there is no listing at all for "theories of intelligence" and I think I know why: coming up with a theory of consciousness is easy<span class="gmail_default" style="font-family:arial,helvetica,sans-serif"> but</span> coming up with a theory of intelligence is not. It takes years of study to become an expert in the field of AI, but anyone can talk about consciousness.</font><div><br><div><div><font face="arial, helvetica, sans-serif"><font size="4">However I think the </font></font><font size="4"><span style="font-family:Arial,Helvetica,sans-serif"> </span>Canonizer does a good job of specifying what "friendly AI" means; in fact it's the best definition of it I've seen:</font></div><div class="gmail_quote"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></span></div><div class="gmail_quote"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">"</font></span><font size="4"><i>It means that the entity isn't blind to our interests. Notice that I didn't say that the entity has our interests at heart, or that they are its highest priority goal. Those might require intelligence with a human shape. But an SI that was ignorant or uncaring of our interests could do us enormous damage without intending it.</i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">"</span><br></font></div><div class="gmail_quote"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></span></div><div class="gmail_quote"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"> John K Clark </font></span><br>
</div></div></div></div></blockquote></div></blockquote></div><br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">e</a><br>
</blockquote></div></div>
</div>