[extropy-chat] Re: Overconfidence and meta-rationality
Dustin Wish with INDCO Networks
dwish at indco.net
Mon Mar 14 13:59:11 UTC 2005
Allow me a chance to add to this topic. First, the "faith" placed in
programmed beliefs is largely determined by environmental factors. If as a
child you are taught that others are stupid and you are smart, then you will
be predisposed to treating those you deal with as morons: not because you
are smarter than they are, but because you were told that you are. That
seems to me the basis of your argument: what you are taught is right.
-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Robin Hanson
Sent: Saturday, March 12, 2005 11:54 AM
To: World Transhumanist Association Discussion List; sl4 at sl4.org; ExI chat
list
Subject: [extropy-chat] Re: Overconfidence and meta-rationality
Eliezer, you are just writing far too much for me to comment on all of
it. If you give me an indication of what your key points are, I will try
to respond to those points. For now, I will just make a few comments on
specific claims.
At 06:40 PM 3/9/2005, Eliezer S. Yudkowsky wrote:
>The modesty argument uses Aumann's Agreement Theorem and AAT's extensions
>as plugins, but the modesty argument itself is not formal from start to
>finish. I know of no *formal* extension of Aumann's Agreement Theorem
>such that its premises are plausibly applicable to humans.
Then see: "For Bayesian Wannabes, Are Disagreements Not About Information?"
Theory and Decision 54(2):105-123, March 2003.
<http://hanson.gmu.edu/disagree.pdf>
<http://www.kluweronline.com/issn/0040-5833/>
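To make concrete what these theorems formalize, here is a toy Python sketch
(my own construction, with made-up states and partitions, not an example
from the cited paper): two agents share a common prior, each has an
information partition, and wherever their posteriors for an event are
common knowledge, those posteriors are forced to coincide.

    from fractions import Fraction

    # Toy Aumann setup: nine equally likely states, a common prior,
    # and one information partition per agent.  All numbers hypothetical.
    states = range(9)
    prior = {w: Fraction(1, 9) for w in states}
    part_a = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # what A can distinguish
    part_b = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]   # what B can distinguish
    event = {0, 4, 8}

    def cell(partition, w):
        # the information cell an agent occupies at state w
        return next(c for c in partition if w in c)

    def post(info):
        # P(event | info) under the common prior
        return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

    # The meet: the coarsest information the agents share, i.e. what is
    # common knowledge.  Merge A-cells that are linked through a B-cell.
    meet = [set(c) for c in part_a]
    for c in part_b:
        hit = [m for m in meet if m & c]
        for m in hit:
            meet.remove(m)
        meet.append(set.union(*hit))

    # Aumann's point: if both agents' posteriors are constant across a
    # meet cell (i.e. common knowledge there), they must be equal.
    for m in meet:
        pa = {post(cell(part_a, w)) for w in m}
        pb = {post(cell(part_b, w)) for w in m}
        if len(pa) == 1 and len(pb) == 1:
            assert pa == pb
            print(sorted(m), "shared posterior:", pa.pop())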
>you say: "If people mostly disagree because they systematically violate
>the rationality standards that they profess, and hold up for others, then
>we will say that their disagreements are dishonest." (I would disagree
>with your terminology; they might be dishonest *or* they might be
>self-deceived. ...
I was taking self-deception to be a kind of dishonesty.
>... if Aumann's Agreement Theorem is wrong (goes wrong reliably in the
>long run, not just failing 1 time out of 100 when the consensus belief is
>99% probability) then we can readily compare the premises of AAT against
>the dynamics of the agents, their updating, their prior knowledge, etc.,
>and track down the mistaken assumption that caused AAT (or the extension
>of AAT) to fail to match physical reality. ...
This actually seems to me rather hard, since people's priors are difficult
to observe.
>... You attribute the great number of extensions of AAT to the following
>underlying reason: "His [Aumann's] results are robust because they are
>based on the simple idea that when seeking to estimate the truth, you
>should realize you might be wrong; others may well know things that you do
>not."
>I disagree; this is *not* what Aumann's results are based on.
>Aumann's results are based on the underlying idea that if other entities
>behave in a way understandable to you, then their observable behaviors are
>relevant Bayesian evidence to you. This includes the behavior of
>assigning probabilities according to understandable Bayesian cognition.
The paper I cite above is not based on having a specific model of the
other's behavior.
>So A and B are *not* compromising between their previous positions; their
>consensus probability assignment is *not* a linear weighting of their
>previous assignments.
Yes, of course, who ever said it was?
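A concrete toy case (my own numbers, for illustration) shows why it cannot
be: with conditionally independent evidence, the consensus lands *outside*
the interval spanned by the two individual estimates.

    # Assumed setup: binary hypothesis H with prior 1/2, and two
    # conditionally independent signals, each pointing the right way
    # with probability 0.8.  Both signals happen to favor H.
    prior = 0.5
    acc = 0.8

    # Each agent alone, having seen one favorable signal:
    single = acc * prior / (acc * prior + (1 - acc) * (1 - prior))
    print(single)        # 0.8

    # The consensus that conditions on both signals:
    num = acc ** 2 * prior
    pooled = num / (num + (1 - acc) ** 2 * (1 - prior))
    print(pooled)        # ~0.941, more extreme than either agent's 0.8

    # No weighted average of 0.8 and 0.8 can reach 0.941: pooling
    # evidence is not averaging positions.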
>... If this were AAT, rather than a human conversation, then as Fred and I
>exchanged probability assignments our actual knowledge of the moon would
>steadily increase; our models would concentrate into an ever-smaller set
>of possible worlds. So in this sense the dynamics of the modesty argument
>are most unlike the dynamics of Aumann's Agreement Theorem, from which the
>modesty argument seeks to derive its force. AAT drives down entropy
>(sorta); the modesty argument doesn't. This is a BIG difference.
AAT is *not* about dynamics at all. It may take a certain dynamic to reach
the state where AAT applies, but this paper of mine applies at any point
during any conversation:
"Disagreement Is Unpredictable." Economics Letters 77(3):365-369,
November 2002. <http://hanson.gmu.edu/unpredict.pdf>
<http://www.sciencedirect.com/science/journal/01651765>
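To give a flavor of what "applies at any point" means, here is a toy Monte
Carlo sketch (my own Gaussian example, not the model in the paper): when
the other agent's information will include everything you know, your
expectation of their estimate equals your own estimate, so you cannot
predict even the direction of any disagreement.

    import random

    random.seed(0)

    # Assumed toy model: theta ~ N(0,1); A sees sA = theta + noise (var va);
    # B will see both sA and a second signal sB = theta + noise (var vb).
    va, vb = 1.0, 1.0
    sA = 0.7                       # one particular realization of A's signal
    muA = sA / (1 + va)            # A's posterior mean for theta
    varA = va / (1 + va)           # A's posterior variance

    # From A's point of view, simulate what B will come to believe:
    diffs = []
    for _ in range(200_000):
        theta = random.gauss(muA, varA ** 0.5)   # draw from A's posterior
        sB = theta + random.gauss(0, vb ** 0.5)
        # B's posterior mean: precision-weighted mix of the N(0,1) prior
        # and both signals.
        muB = (sA / va + sB / vb) / (1 + 1 / va + 1 / vb)
        diffs.append(muB - muA)

    print(sum(diffs) / len(diffs))   # ~0: A expects no net disagreement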
>The AATs I know are constructive; they don't just prove that agents will
>agree as they acquire common knowledge, they describe *exactly how* agents
>arrive at agreement.
Again, see my Theory and Decision paper cited above.
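For the flavor of that construction, here is a toy sketch in the spirit of
the Geanakoplos-Polemarchakis exchange process (my own states, partitions,
and event, not an example from either cited paper): the agents alternately
announce posteriors, each announcement publicly rules out states, and the
announcements converge to a common value.

    from fractions import Fraction

    # Hypothetical setup: nine equally likely states, an event, and one
    # information partition per agent.
    states = set(range(1, 10))
    prior = {w: Fraction(1, 9) for w in states}
    part_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
    part_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
    event = {1, 4, 9}
    true_state = 1

    def cell(partition, w):
        return next(c for c in partition if w in c)

    def post(info):
        # P(event | info) under the common prior
        return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

    public = set(states)   # states consistent with all announcements so far
    for rnd in range(1, 4):
        for name, part in (("A", part_a), ("B", part_b)):
            q = post(cell(part, true_state) & public)
            print(f"round {rnd}: {name} announces P(event) = {q}")
            # The announcement reveals which information cells could have
            # produced it, shrinking the public set of possible states.
            public = {w for w in public
                      if post(cell(part, w) & public) == q}

    # The announcements run 1/3, 1/2, 1/3, 1/3 and then hold at 1/3:
    # exact agreement, reached by an explicit step-by-step exchange.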
>>... people uphold rationality standards that prefer logical consistency...
>
>Is the Way to have beliefs that are consistent among themselves? This is
>not the Way, though it is often mistaken for the Way by logicians and
>philosophers. ...
Preferring consistency, all else equal, is not the same as requiring
it. Surely you also prefer it all else equal.
>... agree that when two humans disagree and have common knowledge of each
>other's opinion ... *at least one* human must be doing something wrong. ...
>One possible underlying fact of the matter might be that one person is
>right and the other person is wrong and that is all there ever was to it.
This is *not* all there is to it. There is also the crucial question of
what exactly one of them did wrong.
>Trying to estimate your own rationality or meta-rationality involves
>severe theoretical problems ... "Beliefs" ... are not ontological parts of
>our universe, ... if you know the purely abstract fact that the other
>entity is a Bayesian reasoner (implements a causal process with a certain
>Bayesian structure),... how do you integrate it? If there's a
>mathematical solution it ought to be constructive. Second, attaching this
>kind of *abstract* confidence to the output of a cognitive system runs
>into formal problems.
I think you exaggerate the difficulties. Again see the above papers.
>It seems to me that you have sometimes argued that I should foreshorten my
>chain of reasoning, saying, "But why argue and defend yourself, and give
>yourself a chance to deceive yourself? Why not just accept the modesty
>argument? Just stop fighting, dammit!" ...
I would not put my advice that way. I'd say that whatever your reasoning,
you should realize that if you disagree, that has certain general
implications you should note.
>It happens every time a scientific illiterate argues with a scientific
>literate about natural selection. ... How does the scientific literate
>guess that he is in the right, when he ... is also aware of studies of
>human ... biases toward self-overestimation of relative competence? ... I
>try to estimate my rationality in detail, instead of using unchanged my
>mean estimate for the rationality of an average human. And maybe an
>average person who tries to do that will fail pathetically. Doesn't mean
>*I'll* fail, cuz, let's face it, I'm a better-than-average
>rationalist. ... If you, Robin Hanson, go about saying that you have no
>way of knowing that you know more about rationality than a typical
>undergraduate philosophy student because you *might* be deceiving
>yourself, then you have argued yourself into believing the patently
>ridiculous, making your estimate correct
You claim to look in detail, but in this conversation, on this key point,
you have been content simply to cite the existence of a few extreme
examples, though you write volumes on various digressions. This is what I
meant when I said that you don't seem very interested in formal analysis.
Maybe there are some extreme situations where it is "obvious" that one side
is right and the other is a fool. This possibility does not justify your
just disagreeing as you always have. The question is what reliable clues
you have to justify disagreement in your typical practice. When you decide
that your beliefs are better than theirs, what reasoning are you going
through at the meta-level? Yes, you have specific arguments on the
specific topic, but so do they - why exactly is your process for producing
an estimate more likely to be accurate than their process?
In the above you put great weight on literacy/education, presuming that
when two people disagree the much more educated person is more likely to be
correct. Setting aside the awkward fact of not actually having hard data
to support this, do you ever disagree with people who have a lot more
literacy/education than you? If so, what indicators are you using there,
and what evidence is there to support them?
A formal Bayesian analysis of such an indicator would be to construct a
likelihood and a prior, find some data, and then do the math. It is not
enough to just float the possibility that various indicators are useful.
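In sketch form (toy Python, with invented likelihood numbers standing in
for the data we do not actually have):

    # Hypothesis: party X is the one who is correct in this dispute.
    prior_correct = 0.5

    # Likelihoods the data would have to pin down (numbers invented):
    # how often the correct party turns out to be the more educated one.
    p_edu_given_correct = 0.65
    p_edu_given_wrong = 0.45

    # Observation: X is the more educated party.  Bayes' rule:
    num = p_edu_given_correct * prior_correct
    posterior = num / (num + p_edu_given_wrong * (1 - prior_correct))
    print(posterior)   # ~0.59: a modest update, and only if the
                       # assumed likelihoods actually held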
Robin Hanson rhanson at gmu.edu http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323