[extropy-chat] Superrationality

Christopher Healey CHealey at unicom-inc.com
Thu May 18 16:44:41 UTC 2006


> Lee Corbin wrote:
>
> My reasoning: if you know the other person
> is going to cooperate, then according to the 
> table, you must defect.  Likewise, if you know
> that the person is going to defect, then you
> must defect. (Failure to do so simply means 
> that you aren't reading the payoff table, or 
> don't know what it means.) Only in the case 
> that you don't know what the person will do-
> --and, most importantly, there is reason to 
> believe that his behavior is correlated with 
> yours---can you logically cooperate.

Lee,

Then might we say that superrationality is prescriptive rather than
decisive?  In other words, it doesn't tell you what the rational
response is to a fixed scenario, but rather what alterations to that
scenario (which may lie within your sphere of influence) could shift
the result toward some positive-sum outcome?
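
To make that baseline concrete, here is a minimal sketch (in Python) of
the dominance argument Lee describes.  The payoff numbers are the
standard textbook values (T=5, R=3, P=1, S=0), my own assumption rather
than whatever table he had in front of him:

    # Standard one-shot Prisoner's Dilemma payoffs; the specific
    # numbers are assumed, not taken from Lee's table.
    PAYOFF = {  # (my_move, their_move) -> my payoff
        ('C', 'C'): 3, ('C', 'D'): 0,
        ('D', 'C'): 5, ('D', 'D'): 1,
    }

    # Whatever the other player does, defecting pays strictly more.
    for their_move in ('C', 'D'):
        assert PAYOFF[('D', their_move)] > PAYOFF[('C', their_move)]

    # But if their behavior is perfectly correlated with mine (they end
    # up playing whatever I play), the comparison flips and cooperation
    # wins, which is the one case Lee allows.
    def payoff_if_mirrored(my_move):
        return PAYOFF[(my_move, my_move)]

    assert payoff_if_mirrored('C') > payoff_if_mirrored('D')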

I agree that in the case of the iterated PD, with side-channel
communication unavailable, the outcome is really cut-and-dried, but as
far as informing real-world decisions goes it leaves a lot to be
desired.  Any communication between agents before or during the PD
could conceivably alter the dynamics (in effect, allowing the
establishment of a side-channel protocol based on primary-channel
patterning).

Also, might it be possible for previously isolated intelligent agents,
constrained to only the primary channel of the PD (assuming they
possess *any* assumptions in common about the world), to negotiate a
side-channel protocol through their choices in response to another
agent's actions over a large number of iterations?  Perhaps this only
makes sense where the exact number of iterations is unknown, but is
known to be large.
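
As a toy illustration of that idea: suppose the one assumption the
agents share is a short "handshake" pattern of moves.  Each agent
transmits the pattern over the primary channel and only commits to
cooperation once it has seen the same pattern back.  The pattern, and
the fact that both agents know it, are hypothetical assumptions of
mine, not anything from the standard PD setup:

    # Agents signal over the primary channel by opening with a fixed
    # move pattern; recognition of the pattern unlocks cooperation.
    HANDSHAKE = ['C', 'D', 'C']  # arbitrary shared convention

    def next_move(my_history, their_history):
        t = len(my_history)
        if t < len(HANDSHAKE):
            return HANDSHAKE[t]          # still transmitting the signal
        if their_history[:len(HANDSHAKE)] == HANDSHAKE:
            return 'C'                   # handshake recognized: cooperate
        return 'D'                       # no common protocol: defect

    # Two such agents pay a small signaling cost up front, then settle
    # into mutual cooperation for the rest of the run.
    a_hist, b_hist = [], []
    for _ in range(10):
        a, b = next_move(a_hist, b_hist), next_move(b_hist, a_hist)
        a_hist.append(a)
        b_hist.append(b)
    print(list(zip(a_hist, b_hist)))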

My intended direction with this is that if there is any side-channel
communication, then the agents involved could construct an
accountability mechanism.  It might work this way: the agents
voluntarily bind themselves to cooperate, effectively ceding freedom of
action; thereafter, if and when one of them defects against a colluder,
that agent is irrevocably exposed to a greater single-case loss (group
ostracism) than any possible single-case gain.
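
A rough sketch of what that mechanism might look like (the names and
structure here are purely illustrative, my own invention):

    # Members of the pact are tracked, and any member who defects
    # against a fellow member is irrevocably ostracized, i.e. excluded
    # from all future cooperative rounds with the group.
    class Pact:
        def __init__(self):
            self.members = set()
            self.ostracized = set()

        def join(self, agent):
            self.members.add(agent)

        def record_round(self, agent, partner, agent_move):
            # Defecting against a colluder triggers ostracism.
            if (agent in self.members and partner in self.members
                    and agent_move == 'D'):
                self.ostracized.add(agent)

        def may_cooperate_with(self, agent):
            # The group withholds all future positive-sum interactions.
            return agent in self.members and agent not in self.ostracized

    pact = Pact()
    for name in ('alice', 'bob', 'carol'):
        pact.join(name)
    pact.record_round('bob', 'alice', 'D')   # bob defects against alice
    print(pact.may_cooperate_with('bob'))    # False: bob is ostracized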

In effect, they would be forgoing all (or many) future positive-sum
outcomes as the result of a single action.  This would seem to be
particularly salient as the number of agents increases, because the
magnitude of the potential penalty escalates much faster than the gain
(a loss compounded over I iterations and many partners vs. a
single-iteration gain).
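
A back-of-the-envelope version, again using assumed textbook payoffs,
just to show how quickly the forgone surplus swamps the one-time gain
as the group size N and remaining horizon I grow:

    # Compare the one-time gain from defecting against the cooperative
    # surplus forgone after ostracism.  Payoff values are assumed.
    T, R, P = 5, 3, 1        # temptation, mutual cooperation, mutual defection
    one_time_gain = T - R    # what a single defection nets over cooperating

    def ostracism_loss(n_agents, remaining_iterations):
        # Surplus forgone: (R - P) per partner per iteration, against
        # every other member of the group, for the rest of the run.
        return (n_agents - 1) * remaining_iterations * (R - P)

    for n in (2, 5, 20):
        for i in (1, 10, 100):
            print(f"N={n:>2}, I={i:>3}: loss {ostracism_loss(n, i):>5} "
                  f"vs gain {one_time_gain}")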

Just some thoughts...

-Chris



