<div dir="ltr"><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">I'm neither a Singularitarian nor an AItheist. I think human-level AI is inevitable, if President Trump doesn't manage to wipe out the human race first :-). But I don't buy the notion that super intelligence is akin to a superpower, and don't think it's necessary for an AI to have consciousness, human-like emotions, or the ability to set its own goals, and without those there is no need to fear them. dave</span><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><font color="#888888"><br></font></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">If you want an AI to be superintelligent, why reference the neuron, Spike? Human brains are so fallible it's just silly. A person superintelligent about one thing is totally at a loss about many other things. I think brains must still be evolving, because as they are, they are cobbled together from whatever equipment was available, and they have functioned just well enough to get us to the present. You don't have to be a psychologist to see the irrationality, the emotional involvement, and the selfishness in the output of human brains. There are many brain functions that we could do entirely without. 
Start with all the cognitive errors we already know about.</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">OK, so what else can we do? Every decision we make is wrapped up in emotions. That alone does not make our decisions wrong or irrational, but often they are. Take the emotions out and see what we get. Of course, they are already out of the AIs we have now. So here is the question: do we really want an AI to function like a human brain? I say no. We are looking for something better, right? </span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">Since by definition we are not yet posthumans, how would we even know that an AI decision was superintelligent? I don't know enough about computer simulations to criticize them, but sooner or later we will have to put an AI's decisions to experimental test in the real world, not knowing what will happen. 
</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">In any case, I don't think that there is any magic in the neuron. It's in the connections. And let's not forget the role of glial cells, of which we are just barely aware (see The Other Brain by Douglas Fields). Oh yeah, and the role of the gut microbiome, whose functions we are also just barely aware of. Not to mention all the endocrine glands and their impact on brain function: raising and lowering hormone levels has profound effects on the functioning of the brain. Ditto food, sunspots (?), humidity and temperature, chemicals in the dust we breathe, pheromones, and drugs (I take over 20 pills of various sorts; who or what could figure out the results of that?). All told, an incredible number of variables, some of which we may not even know about at present, all interacting with one another, with our learning, and with our genes.</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">In short, we are many decades away from a good grasp of the brain, maybe 100 years. 
A super smart AI will likely not function at all like a human brain. No reason it should. (boy am I going to get flak on this one)</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">bill w</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 9, 2016 at 11:27 AM, Dave Sill <span dir="ltr"><<a href="mailto:sparge@gmail.com" target="_blank">sparge@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Mon, May 9, 2016 at 10:44 AM, spike <span dir="ltr"><<a href="mailto:spike66@att.net" target="_blank">spike66@att.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div link="#0563C1" vlink="#954F72" lang="EN-US"><div><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">Nothing particularly profound or insightful in this AI article, but it is good clean fun:</p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><a 
href="https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible?utm_source=Aeon+Newsletter&utm_campaign=6469cf0d50-Daily_Newsletter_9_May_20165_9_2016&utm_medium=email&utm_term=0_411a82e59d-6469cf0d50-68957125" target="_blank">https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible?utm_source=Aeon+Newsletter&utm_campaign=6469cf0d50-Daily_Newsletter_9_May_20165_9_2016&utm_medium=email&utm_term=0_411a82e59d-6469cf0d50-68957125</a></p></div></div></blockquote><div><br></div></span><div>Yeah, not bad. Mostly on the mark, IMO, but he says a few things that are just not rational. <br><br></div><span class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div link="#0563C1" vlink="#954F72" lang="EN-US"><div><p class="MsoNormal"><u></u><u></u></p><p class="MsoNormal">He reminds me a little of Roger Penrose’s take on the subject from a long time ago: he introduces two schools of thought, pokes fun at both while offering little or no evidence or support, then reveals he is pretty much a follower of one of the two: the Church of AI-theists.</p></div></div></blockquote><div><br></div></span><div>To be fair, he says both camps are wrong and the truth is probably somewhere in between. And I agree.<br> <br></div><span class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div link="#0563C1" vlink="#954F72" lang="EN-US">There are plenty of AI-theists, but nowhere have I ever seen a really good argument for why we can never simulate a neuron and a dendrite and synapses. Once we understand them well enough, we can write a sim of one. We already have sims of complicated systems, such as aircraft, nuclear plants and such. So why not a brain cell? And if so, why not two, and why not a connectome and why can we not simulate a brain? 
I have been pondering that question for over two decades and have still never found a good reason. That puts me in the Floridi-dismissed Church of the Singularitarians.</div></blockquote><div><br></div></span><div>Yeah, his "<span>True AI is not logically impossible, but it is utterly implausible" doesn't seem to be based on reality.<br><br></span></div><div><span>I'm neither a Singularitarian nor an AItheist. I think human-level AI is inevitable, if President Trump doesn't manage to wipe out the human race first :-). But I don't buy the notion that super intelligence is akin to a superpower, and don't think it's necessary for an AI to have consciousness, human-like emotions, or the ability to set its own goals, and without those there is no need to fear them.<span class="HOEnZb"><font color="#888888"><br><br></font></span></span></div><span class="HOEnZb"><font color="#888888"><div><span>-Dave<br></span></div></font></span></div></div></div>
<br>_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
<br></blockquote></div><br></div>
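<div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small">[Editor's note: spike's point that "once we understand them well enough, we can write a sim" of a neuron can be illustrated with a toy model. The sketch below simulates a leaky integrate-and-fire neuron, a drastically simplified textbook model; all parameter values are illustrative, not biologically calibrated, and this says nothing about whether such simplifications capture what real neurons, glia, and hormones do — which is exactly the point under debate above.]

```python
# Minimal sketch: a leaky integrate-and-fire neuron, the simplest
# standard spiking-neuron model. Illustrative parameters only.

def simulate_lif(i_input=1.5, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (ms) for a neuron driven by a constant input."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Euler step of the membrane equation:
        #   dv/dt = (-(v - v_rest) + i_input) / tau
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:           # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset             # reset membrane potential
    return spikes
```

With the default input the membrane potential repeatedly charges toward 1.5, crosses the threshold of 1.0, and resets, producing a regular spike train; with a weaker input (say 0.5) it never reaches threshold and the neuron stays silent.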