[extropy-chat] Superrationality
Lee Corbin
lcorbin at tsoft.com
Sat May 20 06:28:09 UTC 2006
Russell wrote
> ...ultimate case where the prediction is made by running an
> exact copy of you through an exact copy of the test; this is
> equivalent to increasing the similarity of the partners in
> the one-shot PD, to the ultimate case where the other player
> is your mirror image. Once you do this, there _is_ a causal
> link between your decision and the outcome, and it again
> becomes rational to cooperate.
Actually, there need be no *causal* link; as I said in my earlier
post, a correlation is sufficient. A small point, but...
In an adjacent post Russell also wrote
> Very simply. I offer to give my word that I will take only one
> box, in return for the forecaster's word that the prize money
> will be there. On stage, the rational course of action is then
> for me to take only one box, since my word is much more important
> to me than $1000. The forecaster's prediction record is supported,
> and I get the prize.
Yes, but then if we want to reason outside the boxes (as it were),
then one may wish to take only one box in Newcomb's Paradox in order
to show that one is a nice guy, or that one is not greedy, or some
other irrelevant consideration.
The monetary payoffs in Newcomb's Paradox are designed to thwart
such motivations, which distract from the key issue. (And, by the
way, there is only one correct answer: you take just the one box,
any other course of action being either foolish, or not in accord
with the hypotheses, or relevant only to a slightly different but
entirely uninteresting parallel puzzle.)
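The point that the monetary payoffs are designed to dominate side motivations can be made numerically. A minimal sketch, assuming the standard payoffs usually quoted for the puzzle ($1,000 in the visible box, $1,000,000 in the opaque box) and a predictor accuracy p; these figures are not stated in the post above and are assumptions for illustration:

```python
# Expected value of one-boxing vs. two-boxing in Newcomb's Paradox,
# under the conventional (assumed) payoffs and a predictor of accuracy p.
VISIBLE = 1_000        # the transparent box always holds $1,000
OPAQUE = 1_000_000     # the opaque box holds $1,000,000 iff one-boxing was predicted

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the opaque box.
    return p * OPAQUE

def ev_two_box(p):
    # With probability p the predictor foresaw two-boxing, so the opaque box is empty;
    # with probability 1 - p it erred and the opaque box is full anyway.
    return VISIBLE + (1 - p) * OPAQUE

for p in (0.99, 0.9, 0.5):
    print(p, ev_one_box(p), ev_two_box(p), ev_one_box(p) > ev_two_box(p))
```

With these numbers, one-boxing wins for any accuracy above about 0.5005, so even a modestly reliable predictor swamps considerations like appearing "nice" or "not greedy".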
> And if you think about it, that's just how we handle a lot of
> PD-type situations in real life.
The PD-type situations we encounter in real life do not have all our
incentives tabulated in the one-shot PD's payoff matrix. If one is
entirely *rational*, and one's values are entirely in accordance with
the entries in the payoff matrix, then whether in real life or not, one
of course defects (except in the peculiar circumstances that one is
playing against one's duplicate or mirror image, etc.).
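That defection is forced by the matrix alone can be checked directly. A minimal sketch using the conventional illustrative payoffs (T=5 > R=3 > P=1 > S=0; the exact numbers are assumptions, not from the post):

```python
# One-shot Prisoner's Dilemma with conventional illustrative payoffs.
# Key is (my move, opponent's move); value is my payoff.
PAYOFF = {
    ('C', 'C'): 3,  # R: reward for mutual cooperation
    ('C', 'D'): 0,  # S: sucker's payoff
    ('D', 'C'): 5,  # T: temptation to defect
    ('D', 'D'): 1,  # P: punishment for mutual defection
}

def best_reply(opponent):
    # My payoff-maximizing move against a fixed opponent move.
    return max('CD', key=lambda me: PAYOFF[(me, opponent)])

# Defection is the best reply to either move: it strictly dominates.
print(best_reply('C'), best_reply('D'))  # D D
```

Since 'D' is the best reply whatever the opponent does, a player whose values coincide exactly with these entries defects; only a correlation with the opponent (duplicate, mirror image) changes the analysis.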
In real life, on the contrary, we are not entirely rational, nor should
we be, insofar as "rational" means self-interested. Most of us have an
innate tendency to be altruistic (as is carefully explained in "The Origins
of Virtue" and other sources, as surely you know). For the 96% of us who
are not psychopaths, these other rewards must be added to the payoff
matrix in order for it to retain verisimilitude.
When the boxes in the "Cooperate" row are thus incremented, one does
cooperate, and indeed it then becomes quite rational to do so.
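A minimal sketch of that incrementing, using the same illustrative payoffs as before and an altruism bonus b whose size is an assumption for illustration:

```python
# Base one-shot PD payoffs (illustrative, not from the post).
BASE = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def adjusted(me, opp, b):
    # Add an intrinsic (altruistic) reward b to my payoff whenever I
    # cooperate, i.e. increment the "Cooperate" row of the matrix.
    return BASE[(me, opp)] + (b if me == 'C' else 0)

def best_reply(opp, b):
    # My payoff-maximizing move against a fixed opponent move.
    return max('CD', key=lambda me: adjusted(me, opp, b))

# With b = 3: vs C, cooperating yields 3+3=6 > 5; vs D, 0+3=3 > 1.
print(best_reply('C', 3), best_reply('D', 3))  # C C
```

Once the bonus exceeds the temptation gap (here, anything above 2), cooperation strictly dominates, so cooperating becomes the rational move in exactly the sense the post describes.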
Lee