On Feb 2, 2011, at 11:56 AM, Richard Loosemore wrote:

> Every few months, it seems, there is another announcement about some project, which the press writes up as "Could it be that AI is on the brink of a breakthrough?". Can you imagine how indignant you would be if you saw those same stories being written 20 years ago?

Forget 20 years; just a little over 10 years ago I started hearing about a new thing called "Google" that was supposed to be a breakthrough in AI. It turned out those stories were big understatements, and Google has changed our world.

> I am trying to get enough funding to make what I consider to be real progress in the field, but doing that is almost impossible

I guess if venture capitalists were impressed with your idea, they were not *very* impressed, and very impressed is what they need to be before they start betting their own money on something.

> Meanwhile, if I had had the resources of the Watson project a decade ago, we might be talking with real (and safe) AGI systems right now.

Real, probably not; safe, definitely not. There is no way you can guarantee that something smarter than you will always do what you want.

 John K Clark