<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div><div>On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote:</div><blockquote type="cite"><div><font class="Apple-style-span" color="#000000"><br></font></div></blockquote><blockquote type="cite"><div>That kind of self-awareness is extremely interesting and is being addressed quite deliberately by some AGI researchers (e.g. myself). </div></blockquote><div><br></div>So I repeat my previous request: please tell us all about the wonderful AI program that you have written that does things even more intelligently than Watson.<br><blockquote type="cite"><div> </div></blockquote><blockquote type="cite"><div>But statistical adaptation is far removed from awareness of one's own problem-solving strategies. Kernel methods do not buy you models of cognition!<br></div></blockquote><div><br></div><div>To hell with awareness! Consciousness theories are the last refuge of the scoundrel. As there is no data they need to explain, consciousness theories are incredibly easy to come up with; any theory will do, and one is as good as another. If you really want to establish your gravitas as an AI researcher, then come up with an advancement in machine INTELLIGENCE one tenth of one percent as great as Watson.</div><div><br></div><div><blockquote type="cite">I confess I don't understand the need for the personal remarks.</blockquote><br></div><div>That irritation probably comes from the demeaning remarks you have made about people in the AI community who are supposed to be your colleagues, scientists who have done more than philosophize: they have actually written a program and accomplished something pretty damn remarkable. </div><div><br></div><div> John K Clark</div><div><br></div></div></body></html>