<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 18, 2020 at 2:11 PM Keith Henson via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> My work in evolutionary psychology makes me think that building AI
> based on human brains is an intolerably dangerous approach.
>
> Humans have psychological traits that are rarely invoked, such as
> capture-bonding, the traits related to going to war, and the perhaps
> related trait of being infested with religions.
>
> A human-based AI that understood a looming resource crisis could go to
> war with humans.

### What I am talking about in my post is still miles away from a general AI that could go to war with humans. I was discussing the reasons for the fragility of deep learning algorithms when processing real-life sensory inputs, which is surprising given their phenomenal performance in some circumscribed, artificial domains such as games (the kind of fragility I mean is easy to demonstrate; see the sketch at the end of this message).

I think it's perfectly OK to take inspiration from the human brain when developing sensory analysis and knowledge representation systems, since these devices don't act on their own and don't have much of a goal system.

As we get closer to recapitulating a whole mind in AI, we will need to be more circumspect, but that point is still some years away: nine years and five months, according to the prophecy I published decades ago.

Rafal
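
P.S. For readers who want the fragility point made concrete, here is a minimal sketch of the well-known fast gradient sign method, assuming a trained PyTorch image classifier; the function name, the eps value, and the choice of PyTorch are illustrative, not anything from the earlier post. A perturbation this small is invisible to a human observer, yet it routinely flips a deep network's prediction:

# Fast gradient sign method (FGSM): a tiny, human-imperceptible
# nudge to the input often flips a trained classifier's answer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.03):
    """Return a copy of x shifted by eps in the direction that
    most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Usage, assuming a trained `model`, inputs `x`, and true labels `y`:
#   x_adv = fgsm_perturb(model, x, y)
#   model(x_adv).argmax(1)  # often disagrees with model(x).argmax(1)

No comparable trick reliably fools human vision, which is part of why the gap between performance in games and performance on real sensory inputs is so striking.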