pharos at gmail.com
Thu Dec 27 09:57:56 UTC 2012
On 12/27/12, Anders Sandberg wrote:
> Aumann agreement theorem and Bayesian rationality. A perfect Bayesian
> agent who receives new information will do at least as well as it
> would have done given only its prior state, since either the
> information is useful or it is not, in which case it is ignored[*].
> Rational agents with the same priors who share even a small amount of
> information will also come to agree completely with each other:
> http://wiki.lesswrong.com/wiki/Aumann's_agreement_theorem
> So our advice for unilateralist curse situations is:
> 1) If possible, talk and set up a joint decision.
> 2) If talking is not possible, calculate how a rational agent should
> have solved the situation (including uncertainty about the other
> agents and their abilities) and act like that.
> 3) If that is too complex and you can just randomly select a single
> agent to act, do that (yes, sometimes the rational choice is to flip
> a coin to decide whether to act even when you think the action is
> good).
> 4) If that cannot be done either, try to defer to a group consensus
> (real or imaginary) about this type of action rather than striking
> out unilaterally.
> I find it intellectually enjoyable to see that our paper leads to a
> conclusion I intuitively do not like: deferring to consensus rather than
> striking out gloriously individually.
This sounds too academic. ;)
When human groups discuss, the discussion quickly turns into politics.
And politics has very little to do with finding the 'best' solution.
Humans often have to decide on things where most of the group know
very little about the science of the subject, understand little about
what the consequences might be, and are being lied to about the costs
and benefits that will be incurred, both immediate and long-term.
The decision-making process often collapses down to a compromise
between what is actually possible without too many protests, what
benefits the majority of the group (and inflicts damage on non-group
members), and what makes good publicity. Politics is not like science,
where you solve the equations and only one correct answer is possible.
More information about the extropy-chat mailing list