<div dir="ltr">Newcomb's paradox seems to me to stop being a paradox if you accept the actual terms of the proposition.<div><br></div><div>In real life, any apparently supernaturally effective predictor is going to engender a healthy degree of skepticism in anyone academically sophisticated enough to even know what Newcomb's paradox is. Atheism is the dogma of the day, and Omega, as defined is, not to put too fine a point on it, at least a small -"g" god. And that's something a lot of people are never going to accept, on a deep visceral level.</div><div><br></div><div>But if you can accept the proposition as delivered - that yes, Omega /can/ predict you perfectly, and yes, Omega really /is/ /that/ good, the entire paradox vanishes. </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 24, 2023 at 11:56 AM Adrian Tymes via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Feb 23, 2023 at 8:56 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Wed, Feb 22, 2023 at 8:00 PM Giovanni Santostasi via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Jason,<div>The Newcomb paradox is mildly interesting. But the perceived depthness of it is all in the word game that AGAIN philosophers are so good at. I'm so glad I'm a physicist and not a philosopher (we are better philosophers than philosophers but we stopped calling ourselves that given the bad name philosophers gave to philosophy). The false depth of this so-called paradox comes from a sophistry that is the special case of the predictor being infallible. In that case all kinds of paradoxes come up and "deep" conversations about free will, time machines and so on ensue. <br></div></div></blockquote><div><br></div><div>I agree there is no real paradox here, but what is interesting is that it shows a conflict between two commonly used decision theories: one based on empiricism and the other based on expected-value. Note that perfect prediction is not required for this conflict to arise, this happens even for imperfect predictors, say a psychologist who is 75% accurate:</div><div>Empiricist thinker: Those who take only one box walk away 75% of the time with a million dollars. Those who take both 75% of the time walk away with $1,000 and 25% of the time walk away with $1,001,000. So I am better off trying my luck with one box, as so many others before me did that and made out well.</div><div>Expected-value thinker: The guess of my behavior has already been made by the psychologist. Box A is already empty or not. I will increase my expected value by $1,000 if I take box B. I am better off taking both boxes. </div><div>On an analysis it seems the empiricist tends to do better. So is expected-value thinking wrong? 
>> If so, where is its error?

> What these miss is the information that led the predictor to make its prediction in the first place: does the predictor think you are the kind of thinker who'll take both boxes, or the kind who'll only take one?

> Reputation and demonstrated behavior presumably feed into the predictor's decision. Whether you get that million is decided by the predictor's action, which will have been decided in part by your actions before you even get to this decision point.

> I have been in quite a few analogous real-world situations, where I was offered a chance at small, short-term gains but the true reward depended on whether the offerer thought I would grab them or not - specifically, on the offerer trusting that I would not. As with the Prisoner's Dilemma, the true answer (at least for real-world situations) only emerges if one considers that the game repeats indefinitely (until one dies, which is usually not predictable enough to base strategies on), and thus how one's choices this time impact future plays.

> On the other hand, if it is definitely guaranteed that your choice now will not impact any other events you care about, including that you will never play this game again and that your actions now won't affect any future predictions that you care about, then the expected-value thinker is correct, and the empiricist is wrong for thinking that other plays of the game are relevant (those preconditions make them irrelevant, at least to you). That said, it is all too easy to believe that those conditions apply when they do not.
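For anyone who wants to check Jason's 75% figures above, here is a minimal Python sketch - the simulation and its names are illustrative, not from the thread - assuming box A is the opaque million-dollar box and box B is the transparent $1,000 box:

    import random

    def play(one_box, accuracy=0.75):
        """Payoff for one round against a predictor that guesses correctly with probability `accuracy`."""
        predicted_one_box = one_box if random.random() < accuracy else not one_box
        box_a = 1_000_000 if predicted_one_box else 0   # opaque box, filled only if one-boxing was predicted
        box_b = 1_000                                    # transparent box, always $1,000
        return box_a if one_box else box_a + box_b

    trials = 100_000
    for choice, label in ((True, "one-boxer (empiricist)"), (False, "two-boxer (expected-value)")):
        average = sum(play(choice) for _ in range(trials)) / trials
        print(label, "average payoff:", round(average))

Over many trials the one-boxer averages roughly $750,000 per game and the two-boxer roughly $251,000 (0.75 x $1,000 + 0.25 x $1,001,000), which is why the empiricist tends to do better even though, in any single already-decided game, taking both boxes adds $1,000.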