anders at aleph.se
Thu Dec 27 09:22:44 UTC 2012
On 2012-12-27 08:10, Daniel Shown wrote:
> "The best solution would be to have all people involved get together and
> pool their knowledge, making a joint decision"
> How certain of this are you?
The Aumann agreement theorem and Bayesian rationality. A perfect Bayesian
agent who receives new information will act at least as well as it did
before, since the information is either useful or it is not, in which
case it is ignored[*]. Rational agents with the same priors who share
even a small amount of information will also come to agree completely
with each other: http://wiki.lesswrong.com/wiki/Aumann's_agreement_theorem
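As a toy illustration of both claims (not the theorem itself; the setup and names are my own), here is a conjugate Gaussian update: extra information never increases the posterior variance, and two agents with the same prior who condition on the same data end up with identical posteriors:

```python
# Conjugate normal-normal Bayesian update: prior N(mu, 1/lam) on an unknown
# value, observation x with known noise precision lam_obs.
def update(mu, lam, x, lam_obs):
    lam_post = lam + lam_obs                       # precisions add
    mu_post = (lam * mu + lam_obs * x) / lam_post  # precision-weighted mean
    return mu_post, lam_post

# Two agents share the same prior N(0, 1).
agent_a = (0.0, 1.0)
agent_b = (0.0, 1.0)

# Both condition on the same observation x = 2.0 (noise variance 1).
agent_a = update(*agent_a, x=2.0, lam_obs=1.0)
agent_b = update(*agent_b, x=2.0, lam_obs=1.0)

print(agent_a == agent_b)    # same prior + same data -> same posterior: True
print(agent_a)               # (1.0, 2.0): posterior mean 1.0, precision 2.0
print(1 / agent_a[1] < 1.0)  # posterior variance shrank below the prior's: True
```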
In *practice* this is less effective. Perfect Bayesian agents are
computationally too expensive for human minds to emulate, and we know
from experience that even well-meaning, fairly rational people do
disagree with each other. But if the agents are not willfully
incompetent they can still do significantly better than naive agents
(or even fairly clever rational-but-isolated agents).
For example, suppose the true value of the action is a Gaussian-distributed
random number, and each agent gets a noisy signal (the value
plus Gaussian noise of the same variance). If at least one agent decides
to act, they all receive the true value as reward; otherwise they get zero.
In the omniscient case, where they somehow magically see through the noise,
they would act exactly when the value is positive, but in reality they will
slip occasionally and get less. We want to reduce this performance loss.
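This setup is easy to simulate. A minimal sketch, with my own parameter choices (unit variance for both the value and the noise):

```python
import random

random.seed(0)
N = 200_000                     # trials
omniscient = naive = 0.0        # accumulated rewards
for _ in range(N):
    value = random.gauss(0, 1)            # true value of the action
    signal = value + random.gauss(0, 1)   # the agent's noisy estimate
    if value > 0:
        omniscient += value               # acts exactly when it should
    if signal > 0:
        naive += value                    # occasionally acts on noise

print(f"omniscient reward/trial:  {omniscient / N:.3f}")  # ~0.399 = 1/sqrt(2*pi)
print(f"naive agent reward/trial: {naive / N:.3f}")       # ~0.282, a loss of ~0.12
```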
Roughly speaking, a single agent's expected performance loss is about
half that of a group of 5 naive agents (in such a group there is almost
always some agent that mistakes a negative value for a positive one),
and larger naive groups do even worse, of course.
Using a Bayesian threshold calculation (no communication
involved) halves the losses again, if all agents in the group use it. If
they take a majority vote (just signalling yeas and nays) the losses are
halved once more. If they share their noisy estimates of the true value
and do a maximum likelihood estimation (which in this case is a
simple mean), they get a slight improvement over majority voting, but it
is not huge. It seems they cannot improve their performance beyond
this, since there simply is no more data to process. But by now they are
surprisingly close to the omniscient case performance.
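This progression can be checked numerically. Here is a Monte Carlo sketch, with my own parameter choices (5 agents, unit variances for value and noise, and a simple grid search for the private threshold):

```python
import random

random.seed(1)
N_AGENTS, TRIALS = 5, 200_000
values = [random.gauss(0, 1) for _ in range(TRIALS)]
signals = [[v + random.gauss(0, 1) for _ in range(N_AGENTS)] for v in values]
best = sum(v for v in values if v > 0) / TRIALS   # omniscient reward per trial

def loss(act):
    """Expected shortfall of a group decision rule vs. the omniscient rule."""
    reward = sum(v for v, s in zip(values, signals) if act(s)) / TRIALS
    return best - reward

naive    = loss(lambda s: any(x > 0 for x in s))        # anyone may act alone
majority = loss(lambda s: sum(x > 0 for x in s) >= 3)   # vote on the signs
mean_ml  = loss(lambda s: sum(s) > 0)                   # pool the raw estimates
# No-communication fix: every agent privately raises its action threshold t.
threshold = min(loss(lambda s, t=t: any(x > t for x in s))
                for t in [i / 10 for i in range(21)])

print(f"naive group loss:      {naive:.3f}")       # the curse in full force
print(f"raised-threshold loss: {threshold:.3f}")
print(f"majority-vote loss:    {majority:.3f}")
print(f"shared-mean loss:      {mean_ml:.3f}")     # closest to omniscient
```

With these parameters each step in the chain (raised thresholds, then voting, then pooling the estimates) cuts the loss further, matching the ordering described above.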
So our advice for unilateralist curse situations is:
1) If possible, talk and set up a joint decision.
2) If talking is not possible, calculate how a rational agent would
solve the situation (including uncertainty about the other agents and
their abilities) and act like that.
3) If that is too complex and you can just randomly select a single
agent to act, do that (yes, sometimes the rational choice is to flip a
coin to decide whether to act, even when you think the action is good).
4) If that cannot be done either, try to defer to a group consensus
(real or imaginary) about this type of action rather than striking out
unilaterally.
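Point 3) can be illustrated numerically: pre-committing to a single randomly chosen decider (in effect, a lottery over who gets to act) already beats letting everyone act unilaterally. A minimal sketch with my own parameters (5 agents, unit variances):

```python
import random

random.seed(2)
TRIALS, N_AGENTS = 200_000, 5
unilateral = lottery = omniscient = 0.0
for _ in range(TRIALS):
    value = random.gauss(0, 1)
    signals = [value + random.gauss(0, 1) for _ in range(N_AGENTS)]
    if value > 0:
        omniscient += value                # the unreachable ideal
    if any(s > 0 for s in signals):        # everyone free to act alone
        unilateral += value
    if random.choice(signals) > 0:         # one randomly chosen decider
        lottery += value

print(f"loss, all unilateral:  {(omniscient - unilateral) / TRIALS:.3f}")  # ~0.19
print(f"loss, random decider:  {(omniscient - lottery) / TRIALS:.3f}")     # ~0.12
```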
I find it intellectually enjoyable to see that our paper leads to a
conclusion I intuitively do not like: deferring to consensus rather than
striking out gloriously individually.
[*] Yes, there are things like anti-predictable sequences and negatively
useful information, but a full Bayesian agent is immune to them. Which
is why such agents don't exist in practical reality.
Future of Humanity Institute