<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">Seemingly missing from all the intelligence definitions in your post is the ability to adapt to novel situations, which to me is really important. This must be really hard for an AI, since it cannot generalize from past problems (I am assuming that that's all it can do, which might be wrong). bill w</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 4, 2019 at 4:30 PM Dave Sill via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Wed, Sep 4, 2019 at 4:22 PM Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Quoting Dave Sill:<br>
<br>
"Risk factors in the Overhyping<br>
> of AI and Machine Learning."<br>
><br>
> Per Cerf's abstract: "Artificial Intelligence, especially Machine <br>
> Learning have become part of an overhyped mantra (that goes for <br>
> blockchain, too). It is vital that the expectations for these <br>
> technologies be tempered by deeper appreciation for their strengths <br>
> and weaknesses. ML is narrow and brittle, producing dramatic results <br>
> but also failing dramatically (see Generative Adversarial Networks).<br>
<br>
I agree that current ML is narrow and brittle. However, small <br>
biological brains are brittle as well. Think about how much difficulty <br>
glass windows give houseflies.<br>
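<br>
To make "brittle" concrete, here is a minimal sketch of the one-step <br>
FGSM attack (Goodfellow et al., 2014). It assumes any differentiable <br>
PyTorch classifier; `model`, `x`, and `y` are illustrative names, not <br>
code from anyone's actual system:<br>
<pre>
# Minimal FGSM sketch; every name here is illustrative.
# Assumes `model` is a differentiable PyTorch classifier, `x` a
# correctly classified input batch, and `y` its integer class labels.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    """Return x nudged by eps in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss at the (correctly labeled) input
    loss.backward()                      # gradient of the loss w.r.t. the input
    # The sign of the gradient gives a perturbation that is often
    # imperceptible to a human yet enough to flip the predicted label.
    return (x + eps * x.grad.sign()).detach()
</pre>
A fly can't reason its way around glass; a classifier, likewise, has no <br>
recourse when an input sits just off its training distribution.<br>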
<br>
But increased size is no barrier to failure. Humans fail dramatically <br>
also. Google "epic fail" for proof.<br>
<br>
I don't disagree with you; I just don't think anything you (and Cerf) <br>
have said detracts from the observation that artificial neural <br>
networks are approaching the functionality of biological ones at an <br>
encouraging rate.<br>
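<br>
For a sense of scale, a rough back-of-envelope: every figure below is <br>
an order-of-magnitude assumption, and treating one trainable parameter <br>
as a loose analog of one synapse is itself a big assumption:<br>
<pre>
# Back-of-envelope only; all figures are order-of-magnitude assumptions.
human_synapses = 1e14   # commonly cited estimate, ~10^14 to 10^15
big_2019_ann   = 1.5e9  # parameters in a large 2019-era network (e.g. GPT-2)

# Loose analogy: one trainable parameter ~ one synapse.
print(human_synapses / big_2019_ann)  # ~7e4, i.e. roughly five orders of magnitude
</pre>
So "approaching at an encouraging rate" and "still about five orders of <br>
magnitude short of human scale" can both be true at once.<br>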
<br>
What dramatic failure of GANs are you referring to?<br></blockquote><div><br></div><div>That's Cerf, not me. I don't know what he's referring to. I'll report back after the talk.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> Finally, ML is NOT Artificial General Intelligence."<br>
[snip]<br>
<br>
Is a housefly Natural General Intelligence?</blockquote><div><br></div><div>I suppose...but not very much of it. Nowhere near human level, which is what AGI generally refers to.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> How about one of your amygdalae in isolation?</blockquote><div><br></div><div>Not really.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> What exactly is general intelligence again?<br></blockquote><div><br></div><div>Let's start with the Wikipedia entry:</div><div><br></div><div> <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence" target="_blank">https://en.wikipedia.org/wiki/Artificial_general_intelligence</a></div><div><br></div><div><i>Artificial general intelligence (AGI) is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies. Some researchers refer to Artificial general intelligence as "strong AI",[1] "full AI"[2], "true AI" or as the ability of a machine to perform "general intelligent action";[3] others reserve "strong AI" for machines capable of experiencing consciousness.<br><br>Some references emphasize a distinction between strong AI and "applied AI"[4] (also called "narrow AI"[1] or "weak AI"[5]): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.<br><br>As of 2017, over forty organizations worldwide are doing active research on AGI.[6]<br></i></div><div><i><br></i></div><div><i>Requirements<br>Main article: Cognitive science<br>Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone.[7] However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:[8]<br><br>reason, use strategy, solve puzzles, and make judgments under uncertainty;<br>represent knowledge, including commonsense knowledge;<br>plan;<br>learn;<br>communicate in natural language;<br>and integrate all these skills towards common goals.</i></div><div><i><br>Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed.[9] This would include an ability to detect and respond to hazard.[10] Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in)[11] and autonomy.[12] Computer based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.</i><br></div><div><i><br></i></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> I've been saying the same thing, but I'm not a Turing Award winner.<br>
<br>
I certainly agree that ML is not "the Singularity", but what about it <br>
do you think is over-hyped? As far as I know, nobody is writing <br>
AlphaZero fan-mail just yet.<br></blockquote><div><br></div><div>Oh, you missed John's fan-email, "AlphaZero – The ‘Lucy’ of the Emerging AI Epoch"?</div><div><br></div><div>AI has been over-hyped for 50 years despite chronically under-delivering. Now that deep learning has had some success, the hype train has again left the station.</div><div><br></div><div>Don't get me wrong: this is good stuff, with plenty of practical applications. I just don't think we're on the verge of AGI yet.</div><div><br></div><div>-Dave</div></div></div>
</div>
</blockquote></div>