Bill Hibbard <test@ssec.wisc.edu>, 20/9/2014 2:33 PM:
> http://www.ssec.wisc.edu/~billh/g/searle_comment.pdf

Interesting that somebody as smart as Searle misses this fairly obvious point.

I suspect his mistake is that he thinks consciousness is essential for intelligence, and hence, given his philosophical commitments, that there is no AI problem. But this is a risky way to argue that a risk is zero: even if one has good reasons to think one is right, one can still be wrong, especially about philosophy of mind and future technology.

Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University</div></body></html>