<div dir="ltr"><div><div><div><div><div>I think I understand the probabilistic tools you rely on. As you mentioned: a Bayesian approach, and assumptions over families of distributions.<br></div>The main thrust of the new machine learning theory is to prove <a href="http://en.wikipedia.org/wiki/Generalization_error">generalization bounds</a> that apply to <b>all</b> distributions.<br></div>Say you sampled 100 ravens and 60% of them are black. Can you say something about the unsampled ravens that will hold regardless of the ravens' color distribution? In other words, something true for any unknown underlying distribution?<br>The astonishing answer is yes. It is a consequence of a property called 'concentration of measure'. <a href="http://en.wikipedia.org/wiki/Margin_classifier">Here</a> is an example. Chebyshev's inequality can be seen as a weaker result of this kind (since it requires the second moment to be finite and known) that still gives you bounds applying to a wide family of distributions.<br></div>I see the present and the future of machine learning focusing on such approaches, designing learning algorithms that apply to any underlying distribution. From here, the road to a general-purpose AI is open. 
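To make the raven example concrete, here is a rough Python sketch (my own illustration, assuming the 100 ravens are sampled i.i.d.) comparing the distribution-free interval from Hoeffding's inequality with the weaker variance-based interval from Chebyshev's inequality:

```python
import math

# Hoeffding's inequality: for n i.i.d. samples of a [0,1]-bounded variable,
# P(|empirical mean - true mean| >= eps) <= 2 * exp(-2 * n * eps**2),
# for ANY underlying distribution.

def hoeffding_radius(n, delta):
    """Half-width eps such that the true mean lies within
    empirical_mean +/- eps with probability >= 1 - delta."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

n = 100          # ravens sampled
p_hat = 0.60     # fraction observed to be black
delta = 0.05     # allowed failure probability

eps = hoeffding_radius(n, delta)
print(f"With prob >= 95%, the true fraction of black ravens is in "
      f"[{p_hat - eps:.3f}, {p_hat + eps:.3f}]")

# Chebyshev: for a Bernoulli variable the variance is at most 1/4, so
# P(|error| >= eps) <= 1 / (4 * n * eps**2). Solving for eps at the same
# confidence level gives a wider (weaker) interval.
cheb_eps = math.sqrt(1.0 / (4.0 * n * delta))
print(f"Chebyshev interval at the same confidence: "
      f"[{p_hat - cheb_eps:.3f}, {p_hat + cheb_eps:.3f}]")
```

Here Hoeffding gives roughly 0.60 &plusmn; 0.136 while Chebyshev gives roughly 0.60 &plusmn; 0.224 — both hold for any underlying color distribution, but the concentration-of-measure bound is tighter.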
All, at least IMHO.<br></div><br></div>What is your opinion regarding those universal bounds?<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 28, 2014 at 12:01 PM, Anders Sandberg <span dir="ltr"><<a href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div><span><span title="ohadasor@gmail.com">Ohad Asor</span><span> <<a href="mailto:ohadasor@gmail.com" target="_blank">ohadasor@gmail.com</a>></span></span>, 28/9/2014 4:56 AM:<span class=""><br><blockquote style="margin:0 0 0 .8ex;border-left:2px blue solid;padding-left:1ex"><div>Hi all, great to be here :)</div></blockquote></span></div>Hi!<div><span class=""><br><div><blockquote style="margin:0 0 0 .8ex;border-left:2px blue solid;padding-left:1ex"><div>On Sun, Sep 28, 2014 at 12:58 AM, Anders Sandberg <<a href="mailto:anders@aleph.se" title="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>> wrote:</div><div><div><blockquote style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Decades of failure is obviously some evidence</blockquote></div><br></div><div>Why do you think so, sir?</div></blockquote></div><div><br></div></span><div>I was using it in a Bayesian sense: it is information that ought to change our probability estimates, but it might of course be weak evidence that just multiplies them by 0.999999 or something like that. </div><div><br></div><div>If one thinks that real AI research is only possible now because of computational advances or some relevant new insights, then decades of failure are very weak evidence. Just as decades of flying failure were not really good evidence against heavier-than-air flight, since most of those approaches lacked the necessary aerodynamic knowledge: it was only after that had been discovered that the Wright brothers had a chance. 
However, now the uncertainty resides in whether we think we know enough or not.</div><div><br></div><div>One neat way of reasoning about problems of unknown difficulty is to assume the amount of effort needed to succeed has a power-law distribution. Why? Because it is scale-free, so whatever way you measure effort you get the same distribution (also, there are some entropy maximization properties, I think). We also have priors which can be approximated as log-uniform. From this some useful things can be seen: the probability of success tends to grow in a strongly convex way as a function of resources spent; neglected domains can be extra profitable to investigate even when our priors say they are difficult; and we can estimate the expected benefit given a certain resource spending and our current knowledge. See </div><div><a href="http://www.fhi.ox.ac.uk/how-to-treat-problems-of-unknown-difficulty/" target="_blank">http://www.fhi.ox.ac.uk/how-to-treat-problems-of-unknown-difficulty/</a></div><div>for a start - Owen has a lot of neat results I hope he puts up soon. </div><span class=""><div><br><br>Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University</div></span></div></div><br>_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
<br></blockquote></div><br></div>
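The power-law reasoning in Anders' message can be sketched numerically. The snippet below is my own toy illustration, not Owen's actual model: it assumes the effort required to solve the problem follows a Pareto (power-law) distribution with an assumed tail index, and computes the probability of success as a function of the resources spent.

```python
# Toy sketch of a problem of unknown difficulty (my own construction, not
# the FHI model): the effort X required to succeed is Pareto-distributed
# with scale x_min and tail index alpha, so
#   P(success with budget t) = P(X <= t) = 1 - (x_min / t)**alpha  for t >= x_min.

def p_success(budget, x_min=1.0, alpha=0.5):
    """Probability of success given a resource budget, under a Pareto prior
    on the required effort (x_min and alpha are illustrative choices)."""
    if budget < x_min:
        return 0.0
    return 1.0 - (x_min / budget) ** alpha

for budget in [1, 10, 100, 1000]:
    print(f"budget={budget:5d}  P(success)={p_success(budget):.3f}")

# With a power-law tail, each tenfold increase in budget multiplies the
# residual failure probability by a constant factor (here 10**-0.5 ~ 0.316):
# success is never guaranteed at any finite budget, but extra resources
# keep paying off at every scale.
```

The scale-free property Anders mentions shows up directly: rescaling the unit of effort only changes `x_min`, not the shape of the curve.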