[extropy-chat] Re: Overconfidence and meta-rationality

Robin Hanson rhanson at gmu.edu
Mon Mar 21 14:52:03 UTC 2005


At 01:25 AM 3/20/2005, Eliezer S. Yudkowsky wrote:
>Therefore, if two Bayesianitarian *altruists* find that they disagree, and 
>they have no better algorithm to resolve their disagreement, they should 
>immediately average together their probability estimates. ... But:
>1)  I can't just change my beliefs any time I please.  ...
>2)  Evolution is still correct regardless ...
>3) ... afterward I'll still know, deep down, whatever my lips say...
>4)  I have other beliefs about biology that would be inconsistent ...
>Thus, if you want to claim a mathematical result about an expected 
>individual benefit (let alone optimality!) for rationalists deliberately 
>*trying* to agree with each other, I think you need to specify what 
>algorithm they should follow to agreement - Aumann agent? Bayesianitarian 
>altruist?  In the absence of any specification of how rationalists try to 
>agree with each other, I don't see how you could prove this would be an 
>expected individual improvement.

I find great use in the concept of rationality 
criteria/constraints/rules.  Consider the example of the claim that your 
beliefs should satisfy P(A) = 1 - P(not A).  This is a constraint on 
rational beliefs, and one should arguably strive to have one's beliefs 
satisfy this constraint, all else equal.  This is not to say one must pay 
any cost to achieve this result.  Rather, noticing that you have failed to 
satisfy this constraint is a strong clue that you should consider modifying 
your beliefs to eliminate this failure.
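
For concreteness, here is a toy Python sketch (the function name is made 
up for illustration) of what noticing such a failure amounts to - a 
check, not a recipe for what your new beliefs should be:

# Toy illustration: checking the coherence constraint
# P(A) = 1 - P(not A) for a pair of stated beliefs.

def violates_complement_rule(p_a, p_not_a, tol=1e-9):
    """Return True if the pair of probabilities breaks P(A) = 1 - P(not A)."""
    return abs(p_a + p_not_a - 1.0) > tol

# Believing both outcomes at 70% fails the constraint, which is a
# clue to reconsider - the check does not say how to reconsider.
print(violates_complement_rule(0.7, 0.7))   # True  -> incoherent
print(violates_complement_rule(0.7, 0.3))   # False -> coherent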

Now you could argue against P(A) = 1 - P(not A) as you did above.  You 
sincerely believe that both the Republicans and the Democrats have a 70% 
chance of winning the next presidential election, and it just wouldn't be 
sincere to simply change your beliefs - deep down you would know what you 
really believed.  And you could change your beliefs to satisfy the 
constraint by setting P(Republicans) = 99.99% and P(Democrats) = 0.01%, but 
that would be worse, wouldn't it?  So unless someone gives you a complete 
feasible algorithm for choosing your exact beliefs in every situation, and 
proves to you that this is the exact optimal way to choose all beliefs, 
well, you don't see any point to this P(A) = 1 - P(not A) rule.
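
To make the arithmetic concrete, here is a toy sketch (an illustration, 
not part of the argument above): both repairs below satisfy the 
constraint, which is exactly why the constraint alone cannot tell you 
which beliefs to hold.

# Toy sketch: two ways to repair the incoherent 70%/70% pair.
# Both satisfy P(A) = 1 - P(not A); the constraint does not say
# which, if either, is the reasonable repair.

p_rep, p_dem = 0.7, 0.7                     # incoherent: sums to 1.4

# Repair 1: rescale proportionally so the pair sums to 1.
total = p_rep + p_dem
rescaled = (p_rep / total, p_dem / total)   # (0.5, 0.5)

# Repair 2: the extreme assignment from the example above.
extreme = (0.9999, 0.0001)

for pair in (rescaled, extreme):
    assert abs(sum(pair) - 1.0) < 1e-9      # both pairs are coherent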

The disagreement results are similar to P(A) = 1 - P(not A) in that they 
point out a problem without giving you an exact procedure to fix the 
problem.  I am *not* proposing that whenever you discover you disagree you 
should simply change your beliefs to the average of the two beliefs.  I am 
saying that whatever procedures you use, if you discover that you do have 
persistent disagreements, then that is a strong clue that something is 
seriously wrong with at least one of you.  And you need to be very wary of 
too quickly concluding that it must of course be the other guy.
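
As a toy sketch of that diagnostic reading (names made up, and again 
*not* a proposal to average): persistent disagreement is treated only as 
a warning flag, not a procedure for picking new beliefs.

# Toy sketch: persistent disagreement as a clue, not a fix.

def disagreement_flag(p_mine, p_yours, tol=0.05):
    """Return True if two estimates of the same claim remain far apart
    after discussion - a clue that something is wrong with at least one
    of the estimators, with no verdict on which one."""
    return abs(p_mine - p_yours) > tol

print(disagreement_flag(0.9, 0.2))    # True  -> strong clue of a problem
print(disagreement_flag(0.55, 0.52))  # False -> no persistent disagreement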



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323  




