On Fri, Dec 21, 2018 at 12:08 PM Brent Allsop <brent.allsop@gmail.com> wrote:

> *we've launched Canonizer 2.0.*
> *My Partner Jim Bennett just put together this video:*
>
> https://vimeo.com/307590745

I notice that the third most popular topic on the Canonizer is "the hard problem" (beaten only by theories of consciousness and God). Apparently this too has something to do with consciousness, but it would seem to me the first order of business should be to state exactly what general sort of evidence would be sufficient to consider the problem solved. I think the evidence from biological evolution is overwhelming that if you've solved the so-called "easy problem", which deals with intelligence, then you've come as close to solving the "hard problem" as anybody is ever going to get.

I also note there is no listing at all for "theories of intelligence", and I think I know why: coming up with a theory of consciousness is easy, but coming up with a theory of intelligence is not. It takes years of study to become an expert in the field of AI, but anyone can talk about consciousness.

However, I think the Canonizer does a good job of specifying what "friendly AI" means; in fact, it's the best definition of it I've seen:

"*It means that the entity isn't blind to our interests. Notice that I didn't say that the entity has our interests at heart, or that they are its highest priority goal. Those might require intelligence with a human shape. But an SI that was ignorant or uncaring of our interests could do us enormous damage without intending it.*"

John K Clark