Awesome bounds. Need to try to keep up more with the field. But...

Ohad Asor <ohadasor@gmail.com>, 29/9/2014 3:07 AM:

> The main crux of the new machine learning theory is to prove generalization
> bounds (http://en.wikipedia.org/wiki/Generalization_error) that apply to
> *all* distributions. Say you sampled 100 ravens and 60% of them are black.
> Can you tell something about the unsampled ravens that will hold regardless
> of the ravens' color distribution? Or, in other words, be true for any
> unknown underlying distribution?

That is a tall order. While I appreciate the impressive advances in machine learning theory, they rest on a slightly risky assumption: that probability theory holds.

The problem is that if the outcome space is not well defined, the entire edifice built on the Kolmogorov axioms crashes. In most models and examples we use, the outcome space is well defined: ravens have colours. But what if I show you a raven whose colour is *fish*? (Or colourless green?) The problem here is of course a category mistake, and we do not allow those in our examples. Unfortunately, reality doesn't care: in Taleb's urn example a demon manipulates an urn containing 10 white and 10 black balls. What is the probability of drawing a black ball now? The demon may have added a white one. Or a black. Or a red. Or a frog.

The interesting thing here is that humans take this in stride: when the possibility of a red ball is mentioned, they immediately update their outcome space, and then do it again when the frog shows up. We need clever ways of reasoning rationally about nasty kinds of uncertainty, and most people can only do it weakly (which is why insurance people are both shocked by the example and can easily produce lists of disasters they have experienced just like it). But this is a key frontier for truly general-purpose machine learning, beyond just generalizing across all distributions: being able to handle epistemic crises.

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University
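
For concreteness, here is a minimal sketch of the kind of distribution-free guarantee Ohad's raven question points at, using Hoeffding's inequality rather than anything specific to his theory (the function name and numbers are illustrative only):

import math

def hoeffding_interval(p_hat, n, delta=0.05):
    # Distribution-free confidence interval from Hoeffding's inequality:
    # with probability >= 1 - delta over an i.i.d. sample of size n,
    # |p_hat - p| <= sqrt(ln(2/delta) / (2n)) for ANY true proportion p.
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# Ohad's example: 100 sampled ravens, 60% of them observed to be black.
lo, hi = hoeffding_interval(p_hat=0.60, n=100, delta=0.05)
print("With >= 95%% confidence the true black fraction is in [%.3f, %.3f]" % (lo, hi))
# -> roughly [0.464, 0.736], whatever the underlying colour distribution is

The interval holds for every possible colour distribution, but only because the outcome space ("black or not black") and the i.i.d. sampling assumption were fixed in advance, which is precisely where the demon gets in.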