[extropy-chat] singularity conference at stanford
Robin Hanson
rhanson at gmu.edu
Sat May 20 15:41:44 UTC 2006
[I've been buried grading finals, and my DSL has been down at home.
So I'm a bit late in replying to a bunch of these messages. RH]
On 5/15/2006, Hal Finney wrote:
>our human tendency to be over-confident in our predictions.... It's
>reasonable and appropriate to ... try to ... expand
>our estimates of what is possible and likely. Nick mentioned this,
>and then Max did as well, and I think Eliezer may have touched on it.
>That's fine, but then they asked a question of the panelists, when do
>you think human-level AI will be achieved? Kurzweil gave his answer,
>2029, John Smart said 2070, and a few others answered as well, but Nick
>and Max demurred on the grounds of this result on overconfidence.
>There are two problems with this reasoning. The first is that
>it is technically incorrect: when you recalibrate because of human
>overconfidence, you should expand your range, the confidence interval,
>but not your mean, the center of the range. And the mean is what the
>panelists were being asked to provide.
Overconfidence is usually more fundamentally about your own ability
relative to the ability of others. So it makes more sense to correct it
by moving your whole distribution closer to the distribution of others;
this could change your mean as well as your variance.
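To make the difference concrete, here is a little numerical sketch (the
numbers and the simple mixture model are mine, purely for illustration):
widening your own interval leaves your mean where it was, while shifting
your whole distribution toward the others' moves the mean and the variance
together.

    # Sketch: two ways to correct for overconfidence (illustrative numbers only).
    # "My" forecast and the "others'" view are modeled as normal distributions.
    my_mean, my_sd = 2029.0, 5.0           # e.g. one panelist's AI-timeline guess
    others_mean, others_sd = 2070.0, 25.0  # the rest of the group's view

    # Correction 1: only widen the confidence interval.
    # The mean (the number the panelists were asked for) does not change.
    widened_sd = my_sd * 3.0
    print("widened interval:", my_mean, "+/-", 1.96 * widened_sd)

    # Correction 2: move the whole distribution toward the others'.
    # A simple mixture with weight w on the others' view changes both
    # the mean and the variance.
    w = 0.5
    shifted_mean = (1 - w) * my_mean + w * others_mean
    shifted_var = ((1 - w) * my_sd**2 + w * others_sd**2
                   + (1 - w) * w * (my_mean - others_mean) ** 2)  # mixture variance
    print("shifted estimate:", shifted_mean, "+/-", 1.96 * shifted_var ** 0.5)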
>But more fundamentally, ... Individually we
>tend to be highly overconfident. But, collectively, our estimates
>are often extremely good. Surowiecki describes classic examples like
>guessing the weight of a pig, or the number of jelly beans in a jar. ...
>... when people refuse to make guesses because
>they have recalibrated themselves into a mental fog, they are no longer
>contributing to the social welfare. ...
>Of course, the speakers were not exactly a model of diversity, and
>probably an even better estimate could have come from the audience. ...
>The lesson I take. ... is this ... Your private estimations should be as
>accurate as possible, taking into consideration known biases and
>attempting to compensate for them. .... But, at the same time, your
>public communications should perhaps .... not necessarily reflect
>all these internal mental adjustments and calibrations.
>... People should not be afraid to guess just because they fear
>being overconfident. Acting as if we don't know that we are overconfident
>may actually be a socially more responsible way to behave.
Yes, in order to promote information aggregation in the group, it is essential
that people state their opinions. But it is not at all essential, and may
even be harmful, for people to provide overconfident opinions. There are
two key questions:
1) whether observers can solve the inverse problem, to infer the information
that people have from the opinions that they express, and
2) whether enough people *want* to solve this inverse problem accurately,
"enough" relative to the information institution they are contributing to.
Knowing that a person is perfectly rational, or that a person completely
ignores the info of others, can both support effective inference in principle.
In practice I'm not sure what sort of over (or under) confidence would make
it easier to solve the inverse problem. But it seems clear that the second
problem is best solved by people who want to be as accurate as possible.
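As a toy illustration of that inverse problem (the model and numbers here
are entirely my own assumptions, not anything said at the conference):
suppose each speaker's stated opinion blends a commonly known prior with a
private signal, using a known weight. An observer who knows that reporting
rule can invert each stated opinion to recover the signal behind it and
then pool the signals; if speakers instead shade their reports in unknown
ways, the inversion breaks down.

    # Toy sketch of the "inverse problem" (assumed reporting rule, not from the post):
    # each speaker i states r_i = (1 - k) * prior + k * signal_i, with prior and k
    # commonly known.  An observer can invert each report to recover signal_i and
    # pool the signals directly, rather than averaging the shaded reports.
    prior = 2100.0                        # commonly known prior estimate (hypothetical)
    k = 0.4                               # known weight each speaker puts on their own signal
    reports = [2029.0, 2070.0, 2060.0]    # opinions as actually stated

    # Invert the assumed reporting rule to recover each private signal.
    signals = [(r - (1 - k) * prior) / k for r in reports]

    # Pool the recovered signals (simple average here).
    pooled = sum(signals) / len(signals)
    print("recovered signals:", signals)
    print("pooled estimate from signals:", pooled)
    print("naive average of reports:", sum(reports) / len(reports))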
Regarding this particular conference (which I did not attend), other potential
sources of overconfidence seem much more important to me. It was a conference
created by people with an extreme view, in order for such extremists
to interact.
The organizers appear confident that they know who is expert in this area,
as there was no call for proposals or open competition for participation.
And in judging such expertise for this academic conference they seem to
put relatively little value on traditional academic credentials in the related
academic areas. (Since the subject is the social consequences of artificial
intelligence, the relevant academic credentials would be in A.I. or social
science.) Relative to such potential biases, a few people not offering a
forecast number seems pretty minor.
Robin Hanson rhanson at gmu.edu http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323