<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)">stathis wrote:  <span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">It may also be that trying to achieve true "understanding" is a red herring - behaving *as if* it understands is sufficient, and at bottom what humans do.</span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)"><br></span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">---------</span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">This is my nit to pick with philosophers.  They attack some concept, and if someone can come up with any counterexample, no matter how trivial or how absurd the hypothetical situation, they agree that they have to try again.</span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)"><br></span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">So they tend never to conclude anything.  
But we have to go on living, and take 'good enough for who/what it's for' data, apply it, and deal with the good and the bad effects.</span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)"><br></span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">In a sense, error variance, </span><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">a strong force, </span><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px">will always be with us.</span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)"><br></span></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><span style="font-size:19.2px;font-family:arial,sans-serif;color:rgb(34,34,34)">bill w</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 7, 2017 at 7:12 PM, Stathis Papaioannou <span dir="ltr"><<a href="mailto:stathisp@gmail.com" target="_blank">stathisp@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On 8 December 2017 at 11:57, William Flynn Wallace <span dir="ltr"><<a href="mailto:foozler83@gmail.com" target="_blank">foozler83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">I suspect that if we were 
to look at what philosophers say about it, they would tell us that they really did not know for sure what the word 'know' means.  Only the toad knows (Alice in Wonderland).</div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">We may never know how the unconscious works.  It is not meant (whatever that word means) to be conscious.  Duh.</div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">"It's as if they are doing this when they think."  This will be as close as we can get.  A model. </div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">So it may be that studying people's minds so that we can program computers to copy the way they work is not the best strategy to advance computer thinking.</div></div></blockquote><div><br></div></span><div>It may also be that trying to achieve true "understanding" is a red herring - behaving *as if* it understands is sufficient, and at bottom what humans do.</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">I suspect the Singularity is pretty far off.</div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:arial,helvetica,sans-serif;font-size:small;color:#000000">bill w </div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_1540640178336531622h5">On Thu, Dec 7, 2017 at 6:32 PM, Dave Sill <span dir="ltr"><<a href="mailto:sparge@gmail.com" target="_blank">sparge@gmail.com</a>></span> 
wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_1540640178336531622h5"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span>On Thu, Dec 7, 2017 at 5:30 PM, John Clark <span dir="ltr"><<a href="mailto:johnkclark@gmail.com" target="_blank">johnkclark@gmail.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><span class="m_1540640178336531622m_8346288387344392004m_-1248506918043351340gmail-"><div style="font-family:arial,helvetica,sans-serif"></div></span><div class="gmail_extra"><div class="gmail_quote"><span class="m_1540640178336531622m_8346288387344392004m_-1248506918043351340gmail-"><div> <div style="font-family:arial,helvetica,sans-serif;display:inline">​</div><span style="color:rgb(80,0,80);font-size:12.8px">On Thu, Dec 7, 2017 at 2:02 PM, Dylan Distasio </span><span dir="ltr" style="color:rgb(80,0,80);font-size:12.8px"><<a href="mailto:interzone@gmail.com" target="_blank">interzone@gmail.com</a>></span><span style="color:rgb(80,0,80);font-size:12.8px"><wbr> wrote:</span> </div></span></div></div></div></blockquote></span><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_1540640178336531622m_8346288387344392004m_-1248506918043351340gmail-"><div><div style="font-family:arial,helvetica,sans-serif;display:inline">> ​</div> Deep learning neural nets appear to bear little resemblance to how biological nervous systems actually work.</div><div><br></div></span><div><div style="font-family:arial,helvetica,sans-serif;display:inline">​As far as Chess​</div> <div style="font-family:arial,helvetica,sans-serif;display:inline">​Go and Shogi are concerned it works far better than ​</div>biological nervous 
systems<div style="font-family:arial,helvetica,sans-serif;display:inline">.</div> </div></div></div></div></blockquote><div><br></div></span><div>Yes, in simple, well-defined domains. Computers are incredibly fast at math but that doesn't mean they're math geniuses. I can't do billions of floating point operations per second, but I can explain to a child, in terms it will understand, what "addition" means. A CPU has no understanding of what it does. Likewise, AlphaGo has no understanding of the games it plays. It can't explain its strategy--it has none, it just "knows" what usually works--and even that's excessively anthropomorphic: it knows nothing; it just does what it was programmed to do.</div><div><br></div><div>It's a clever and useful technique, but it's a far cry from a general intelligence that can interact directly with a world where the rules aren't all known, communicate with other intelligent entities, evaluate novel situations, and solve complex problems.</div><span class="m_1540640178336531622m_8346288387344392004HOEnZb"><font color="#888888"><div><br></div><div>-Dave</div><div><br></div><div><br></div></font></span></div></div></div>
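Dave's point about behavior without understanding can be made concrete with a toy sketch. This is not how AlphaGo actually works (the names `train` and `answer` are invented for illustration): a program that merely memorizes examples behaves *as if* it understands addition on every case it has seen, yet it has no strategy to explain, and outside its experience it has nothing to fall back on.

```python
# A toy "learner" that memorizes input-output pairs. It behaves correctly
# on seen cases with no concept of what addition means.

def train(examples):
    """Memorize (a, b) -> result pairs; nothing is generalized."""
    return dict(examples)

def answer(model, a, b):
    """Look up the memorized response; there is no strategy to explain."""
    return model.get((a, b))  # None when the case was never seen

# "Training data": every single-digit addition, labeled correctly.
model = train({(a, b): a + b for a in range(10) for b in range(10)})

print(answer(model, 3, 4))   # behaves as if it understands: 7
print(answer(model, 40, 2))  # outside its experience: None
```

By any behavioral test restricted to single-digit sums, this lookup table is indistinguishable from something that "understands" addition, which is the crux of the disagreement above.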
<br></div></div><span>______________________________<wbr>_________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailm<wbr>an/listinfo.cgi/extropy-chat</a><br>
<br></span></blockquote></div><br></div>
<br></blockquote></span></div><span class="HOEnZb"><font color="#888888"><br><br clear="all"><div><br></div>-- <br><div class="m_1540640178336531622gmail_signature" data-smartmail="gmail_signature">Stathis Papaioannou</div>
</font></span></div></div>
<br></blockquote></div><br></div>