[extropy-chat] Re: Overconfidence and meta-rationality
Robin Hanson
rhanson at gmu.edu
Wed Mar 16 01:19:43 UTC 2005
At 12:57 AM 3/13/2005, Eliezer S. Yudkowsky wrote:
>>Eliezer, you are just writing far too much for me to comment on all of it.
>
>Yes. I know. You don't have to comment on all of it. I just thought I
>should say all of it before you wrote your book, rather than afterward. I
>don't think that this issue is simple
I probably won't even get started on the book until this summer, and it
will probably take me at least a year to write it. So no particular rush
here. I do thank you for engaging me on the topic, and helping me to think
about it. And I agree that it is not at all simple.
>If I had to select out two points as most important, they would be:
>1) Just because perfect Bayesians, or even certain formally imperfect
>Bayesians that are still not like humans, *will* always agree, it does not
>follow that a human rationalist can obtain a higher Bayesian score (truth
>value), or the maximal humanly feasible score, by deliberately *trying* to
>agree more with other humans, even other human rationalists.
>2) Just because, if everyone agreed to do X without further argument or
>modification (where X is not agreeing to disagree), the average Bayesian
>score would increase relative to its current position, it does not follow
>that X is the *optimal* strategy.
These points are stated very weakly, basically just inviting me to *prove*
my claims with mathematical precision. I may yet rise to that challenge
when I get back into this more deeply.
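In the meantime, here is a minimal sketch of the log ("Bayesian") score at
issue, in Python, with the probabilities invented purely for illustration.
It shows the flavor of point 2: when an event turns out true, two agents
who split the difference raise their *average* score, even though the
better-calibrated agent is individually worse off.

import math

# Log ("Bayesian") score: the log of the probability assigned to the
# realized outcome.  Higher (closer to zero) is better, and honest
# posteriors maximize it in expectation.
def log_score(p_assigned_to_outcome):
    return math.log(p_assigned_to_outcome)

# Two agents assign probabilities to an event that turns out true.
p_a, p_b = 0.9, 0.5

# "Splitting the difference" to agree:
p_avg = (p_a + p_b) / 2

avg_before = (log_score(p_a) + log_score(p_b)) / 2   # about -0.399
avg_after = log_score(p_avg)                         # about -0.357
print(avg_after > avg_before)              # True: average score rises
print(log_score(p_avg) < log_score(p_a))   # True: agent A does worse

The average rises because the log is concave; nothing in this sketch says
splitting the difference is the *best* move available to either agent,
which is exactly the optimality question.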
>>>I know of no *formal* extension of Aumann's Agreement Theorem such that
>>>its premises are plausibly applicable to humans.
>>Then see: "For Bayesian Wannabes, Are Disagreements Not About
>>Information?" <http://hanson.gmu.edu/disagree.pdf>, Theory and Decision
>>54(2):105-123, March 2003 <http://www.kluweronline.com/issn/0040-5833/>.
>
>These Bayesian Wannabes are still unrealistically skilled rationalists; no
>human is a Bayesian Wannabe as so defined. BWs do not self-deceive. They
>approximate their estimates of deterministic computations via guesses
>whose error they treat as random variables.
>I remark on the wisdom of Jaynes who points out that 'randomness' exists
>in the map rather than the territory; random variables are variables of
>which we are ignorant. I remark on the wisdom of Pearl, who points out
>that when our map sums up many tiny details we can't afford to compute, it
>is advantageous to retain the Markov property, ... If the errors in BWs'
>computations are uncorrelated random errors, the BWs are, in effect,
>simple measuring instruments, and they can treat each other as such,
>combining their two measurements to obtain a third, more reliable measurement.
But Bayesian Wannabes *can* self-deceive. "Random variable" is a standard
term in statistics; it just means any state function. A
real-valued random variable, which I use in that paper, is just a function
that assigns a real number to each state. I made no assumptions about
independence or Markov properties. Surely you believe that your error can
be described with a state function.
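To be concrete, here is a minimal sketch of that usage, in Python, with a
three-state world invented for illustration: a real-valued random variable
is just a map from states to reals, with no independence or Markov
structure presumed anywhere. The last function shows the special
"measuring instruments" combination from the quoted passage, which *does*
require uncorrelated, unbiased errors; my paper assumes nothing of the
sort.

STATES = ["rain", "sun", "snow"]
prior = {"rain": 0.3, "sun": 0.5, "snow": 0.2}   # common prior over states

# A real-valued random variable: just a function from states to reals.
temperature = {"rain": 10.0, "sun": 25.0, "snow": -3.0}

# An agent's computation error is also a state function; nothing here
# forces it to be independent of anything else in the world.
error = {"rain": 0.4, "sun": -0.1, "snow": 2.0}

def expectation(rv, p):
    return sum(p[s] * rv[s] for s in p)

print(expectation(temperature, prior))   # 14.9
print(expectation(error, prior))         # 0.47

# The "measuring instruments" special case: combining two unbiased
# estimates by precision weighting.  Valid only when the two errors
# are uncorrelated, an assumption my result does not make.
def combine(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)

print(combine(14.5, 4.0, 15.5, 1.0))     # (15.3, 0.8)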
>>>>His [Aumann's] results are robust because they are based on the simple
>>>>idea that when seeking to estimate the truth, you should realize you
>>>>might be wrong; others may well know things that you do not.
>>>I disagree; this is *not* what Aumann's results are based on.
>>>Aumann's results are based on the underlying idea that if other entities
>>>behave in a way understandable to you, then their observable behaviors
>>>are relevant Bayesian evidence to you. This includes the behavior of
>>>assigning probabilities according to understandable Bayesian cognition.
>>The paper I cite above is not based on having a specific model of the
>>other's behavior.
>
>The paper you cite above does not yield a constructive method of agreement
>without additional assumptions. But then the paper does not prove
>agreement *given* a set of assumptions. As far as I can tell, the paper
>says that Bayesian Wannabes who agree to disagree about state-independent
>computations and who treat their computation error as a state-independent
>"random" variable - presumably meaning, a variable of whose exact value
>they are to some degree ignorant - must agree to disagree about a
>state-independent random variable. ... So in that sense, the paper proves
>a non-constructive result that is unlike the usual class of Aumann
>Agreement theorems. Unless I'm missing something?
I do think you are misreading the paper. *Given* that such agents are
unwilling to disagree about topics where information is irrelevant, *then*
such agents cannot disagree about *any* topic, which is another way of
saying that they agree.
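For contrast with that result, here is a toy sketch of the usual
*constructive* route to agreement, the Geanakoplos-Polemarchakis dialogue
in which agents repeatedly announce posteriors until they match. The
nine-state world, uniform prior, and partitions below are invented for
illustration; my Bayesian Wannabe result does not rely on this kind of
machinery.

from fractions import Fraction

OMEGA = set(range(9))                    # states 0..8, uniform common prior
E = {0, 2, 4, 6, 8}                      # the event being estimated
P1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # agent 1's information partition
P2 = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]   # agent 2's information partition

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(info):
    # Uniform prior, so the posterior of E is a simple ratio.
    return Fraction(len(E & info), len(info))

def refine(partition, announce):
    # Split each cell by the announced value: an agent keeps only the
    # states consistent with what the other agent just said.
    out = []
    for c in partition:
        groups = {}
        for w in c:
            groups.setdefault(announce(w), set()).add(w)
        out.extend(groups.values())
    return out

def dialogue(w, max_rounds=10):
    p1, p2 = [set(c) for c in P1], [set(c) for c in P2]
    for _ in range(max_rounds):
        q1, q2 = posterior(cell(p1, w)), posterior(cell(p2, w))
        print(f"agent 1 says {q1}, agent 2 says {q2}")
        if q1 == q2:
            return q1
        p2 = refine(p2, lambda v: posterior(cell(p1, v)))
        p1 = refine(p1, lambda v: posterior(cell(p2, v)))

dialogue(1)   # starts at 2/3 vs 1/3, converges to agreement at 0

Note that the agents converge to a common posterior neither of them
started with, driven only by the announcements themselves.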
More some other day.
Robin Hanson rhanson at gmu.edu http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323