[ExI] Newcomb's Paradox

Darin Sunley dsunley at gmail.com
Fri Feb 24 19:15:30 UTC 2023


Newcomb's paradox seems to me to stop being a paradox if you accept the
actual terms of the proposition.

In real life, any apparently supernaturally effective predictor is going to
engender a healthy degree of skepticism in anyone academically
sophisticated enough to even know what Newcomb's paradox is. Atheism is the
dogma of the day, and Omega, as defined, is, not to put too fine a point on
it, at least a small-"g" god. And that's something a lot of people are
never going to accept, on a deep visceral level.

But if you can accept the proposition as delivered - that yes, Omega /can/
predict you perfectly, and yes, Omega really /is/ /that/ good - the entire
paradox vanishes.

On Fri, Feb 24, 2023 at 11:56 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Feb 23, 2023 at 8:56 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Feb 22, 2023 at 8:00 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Jason,
>>> The Newcomb paradox is mildly interesting. But the perceived depthness
>>> of it is all in the word game that AGAIN philosophers are so good at. I'm
>>> so glad I'm a physicist and not a philosopher (we are better philosophers
>>> than philosophers but we stopped calling ourselves that given the bad name
>>> philosophers gave to philosophy). The false depth of this so-called paradox
>>> comes from a sophistry that is the special case of the predictor being
>>> infallible. In that case all kinds of paradoxes come up and "deep"
>>> conversations about free will, time machines and so on ensue.
>>>
>>
>> I agree there is no real paradox here, but what is interesting is that
>> it shows a conflict between two commonly used decision theories: one
>> based on empiricism and the other on expected value. Note that perfect
>> prediction is not required for this conflict to arise; it happens even
>> with imperfect predictors, say a psychologist who is 75% accurate:
>>
>> Empiricist thinker: Those who take only one box walk away 75% of the
>> time with a million dollars. Those who take both boxes walk away 75% of
>> the time with $1,000 and 25% of the time with $1,001,000. So I am
>> better off trying my luck with one box, as so many others before me did
>> that and made out well.
>>
>> Expected-value thinker: The guess about my behavior has already been
>> made by the psychologist. Box A is already empty or not. I will
>> increase my take by $1,000 if I also grab box B. So I am better off
>> taking both boxes.
>>
>> On analysis, it seems the empiricist tends to do better. So is
>> expected-value thinking wrong? If so, where is its error?
>>
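
A quick sanity check on those numbers, as a minimal sketch in Python (the
$1,000/$1,000,000 payoffs and the 75% accuracy are just the figures quoted
above; the function name is mine):

    # Expected value of each strategy against a fallible predictor.
    # Payoffs and accuracy come from the example above.
    def expected_value(one_box, accuracy=0.75):
        small, big = 1_000, 1_000_000
        if one_box:
            # Box A holds the million only if one-boxing was predicted.
            return accuracy * big
        # Two-boxers always get box B; box A pays out only on a mispredict.
        return small + (1 - accuracy) * big

    print(expected_value(one_box=True))   # 750000.0
    print(expected_value(one_box=False))  # 251000.0

And at accuracy 1.0, the perfect-Omega case, the comparison becomes
$1,000,000 versus $1,000, which is why granting the premise makes the
"paradox" evaporate.
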
>
> What these miss is the information that led the predictor to make its
> prediction in the first place: does the predictor think you are the kind of
> thinker who'll take both boxes or the kind who'll only take one?
>
> Reputation and demonstrated behavior presumably feed into the predictor's
> decision.  Whether you get that million is determined by the predictor's
> action, which will have been shaped in part by your actions before you
> even reach this decision point.
>
> I have been in quite a few analogous real-world situations, where I was
> offered a chance at small, short-term gains but the true reward depended
> on whether the offerer thought I would grab them or not - specifically,
> that the offerer trusted that I would not.  As with the Prisoner's
> Dilemma, the true answer (at least for real-world situations) only
> emerges if one considers that the game repeats indefinitely (until one
> dies, which is usually not predictable enough to base strategies on),
> and thus how one's choices this time impact future plays.
>
> On the other hand, if it is definitely guaranteed that your choice now
> will not impact any other events you care about, including that you will
> never play this game again and your actions now won't affect any future
> predictions that you care about, then the expected-value thinker is
> correct, and the empiricist is wrong for thinking that other plays of the
> game are relevant (those preconditions make them irrelevant, at least to
> you).  That said, it is all too easy to believe that those conditions apply
> when they do not.
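
To make the repeated-game point concrete, here is a minimal sketch under
assumed rules (the majority-of-past-choices predictor, the 200-round
horizon, and the payoffs reused from above are illustrative choices, not
anything specified in the thread):

    import random

    # Toy iterated Newcomb: the predictor expects you to repeat your most
    # frequent past choice, a stand-in for reputation and demonstrated
    # behavior feeding into its decision.
    def play(strategy, rounds=200, seed=0):
        rng = random.Random(seed)
        history, total = [], 0
        for _ in range(rounds):
            # Majority of past choices; a coin flip when there is no history.
            predicted_one_box = (sum(history) > len(history) / 2
                                 if history else rng.random() < 0.5)
            box_a = 1_000_000 if predicted_one_box else 0
            one_box = strategy(history)
            total += box_a if one_box else box_a + 1_000
            history.append(one_box)
        return total

    print(play(lambda h: True))   # always one-box: ~rounds * $1,000,000
    print(play(lambda h: False))  # always two-box: ~rounds * $1,000

Under those rules the habitual one-boxer ends up far ahead. Cut reputation
out of the loop (say, a predictor that only coin-flips) and two-boxing
becomes the better move each round, which is exactly the caveat in the
paragraph above.
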