<html><head><style type=text/css><!--
--></style></head><body>A really intriguing paper shows that we still do not know as much about deep learning as we thought:<div><br></div><div><a href="http://www.i-programmer.info/news/105-artificial-intelligence/7352-the-flaw-lurking-in-every-deep-neural-net.html">http://www.i-programmer.info/news/105-artificial-intelligence/7352-the-flaw-lurking-in-every-deep-neural-net.html</a></div><div><a href="http://cs.nyu.edu/~zaremba/docs/understanding.pdf">http://cs.nyu.edu/~zaremba/docs/understanding.pdf</a></div><div><br></div><div>Whether this is just a minor technical issue to correct for (as the authors do, by retraining on the adversarial examples) or a profound insight into perceptual systems (perhaps this is true of all brains, ours included) remains to be seen.<div><br><br>Anders Sandberg,
Future of Humanity Institute
Faculty of Philosophy, University of Oxford</div></div></body></html>