[extropy-chat] Re: Overconfidence and meta-rationality
Russell Wallace
russell.wallace at gmail.com
Tue Mar 22 00:58:21 UTC 2005
On Mon, 21 Mar 2005 12:40:24 -0800 (PST), "Hal Finney" <hal at finney.org> wrote:
> What about the thought experiment I offered to Eliezer: suppose you had
> a duplicate made in a magical duplicating machine, and then the two
> of you went off and studied some question for a year, independently.
> Perhaps it's the one you disagreed upon with Eliezer. When you return,
> you each simultaneously announce your estimates of the probability for
> the matter in question.
Now, that does largely eliminate the other factors I mentioned, so
that the difference is indeed likely to be primarily in the
domain-specific knowledge each copy has access to.
Do you really mean "probability"? I suspect you don't - that you're
just following the habit some people have of saying "90% probability"
when they mean "I think so, though I'm not sure, but I actually have no
basis whatsoever for assigning a number to it". The Cyc project
started off doing this, but abandoned it, for good reason - it
obscures information. For example, it obscures the difference between
"I strongly believe X is probably true" and "I weakly believe X is
true".
But let's suppose I say what I would actually say, something like "I
think X is false, though I'm not certain" and my copy says "I think X
is true, but I'm not certain."
My reaction here will depend on what X is - in particular, if it
refers to the possibility of accomplishing something, then I will
reason as follows: "I believe X can't be done, my copy thinks it can.
What would make him think that? I'd say it's because he's figured out,
or been told about, a way to do it! Off the top of my head I can't
think of anything else that would change his mind like that. Alright,
then, I now believe it can be done."
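In probabilistic terms the asymmetry sits in the likelihoods: a copy
who has found, or been told about, a method is far more likely to say
"it can be done" than a copy who has merely talked himself into it.
A rough sketch, with all three numbers invented for illustration:

    prior_doable = 0.3      # my prior that X can be done at all
    p_says_if_doable = 0.8  # he finds/hears of a method, given one exists
    p_says_if_not = 0.05    # he wrongly convinces himself, given none

    # Bayes' rule: P(doable | copy says "it can be done")
    joint_doable = prior_doable * p_says_if_doable
    joint_not = (1 - prior_doable) * p_says_if_not
    print(joint_doable / (joint_doable + joint_not))  # ~0.87

His report alone moves me from 0.3 to about 0.87.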
So we won't converge on a middle ground - we'll both end up believing
X is true, but my conviction will be weaker than my copy's, because
it's based on more indirect reasoning.
> What do you think about the claim that in this situation, it is
> impossible, if you both view each other as honest, rational truth-seekers,
> for you two to "agree to disagree"? That you will both eventually
> converge on a common estimate of the probability, even if you are not
> allowed to explain the reasoning and information that led you to your
> opinions? Mere knowledge of the disagreement is enough to eliminate it,
> according to the theorem.
I don't agree that there is any general rule to that effect, but
you've presented a case where the prediction ends up being partly
correct, for reasons specific to the situation.
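For what it's worth, the mechanism behind the theorem can be run as a
toy simulation (partitions, event and prior all invented for
illustration): two agents share a uniform prior over nine states,
each learns only which cell of his own partition the true state lies
in, and they repeatedly announce posteriors, each announcement
publicly ruling out the states in which it would not have been made.

    from fractions import Fraction

    states = set(range(1, 10))
    part_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # what I can distinguish
    part_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]  # what my copy can
    event = {3, 4}                              # the proposition X
    truth = 1                                   # the actual state

    def cell(partition, w):
        return next(c for c in partition if w in c)

    def posterior(partition, public, w):
        # P(event | own partition cell, intersected with what the
        # announcements so far have made public)
        info = cell(partition, w) & public
        return Fraction(len(info & event), len(info))

    public = set(states)
    while True:
        pa = posterior(part_a, public, truth)
        pb = posterior(part_b, public, truth)
        print(pa, pb)
        if pa == pb:
            break
        # Keep only the states in which both announcements would
        # have been made; the public information refines.
        public = {w for w in public
                  if posterior(part_a, public, w) == pa
                  and posterior(part_b, public, w) == pb}

The run goes 1/3 vs 1/2, then 1/3 vs 1/2 again, then 1/3 vs 1/3: the
estimates do converge, but on one side rather than on a middle
ground, which fits what I said above.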
- Russell