[extropy-chat] singularity conference at stanford

Hal Finney hal at finney.org
Mon May 15 06:00:49 UTC 2006


I enjoyed the conference as well, and it was great to have a chance
to meet and chat with several list members afterwards.  Here are some
sites which liveblogged the conference and give a good summary of the
various presentations:

http://crnano.typepad.com/crnblog/activity_updates/index.html
http://blogs.zdnet.com/BTL/index.php?p=3029
http://www.downtheavenue.com/conference_highlights/index.html

I was especially impressed with Eliezer's talk.  I'd never seen him
speak before, and he did a great job.  His writing tends to be dense
and somewhat opaque to me, but as a speaker he was terrific.  He also
had an excellent balance between his slides and what he was saying; not
merely reading the slides, but not ignoring them either - emphasizing
the main points, stating things in slightly different language, so that
both senses were bringing in complementary information.

There were a lot of issues raised worth discussing, but I want to make
a somewhat "meta" comment regarding our methodologies for predicting
the future.  Surpisingly but encouragingly, a topic that popped up a
few times in the talks was our human tendency to be over-confident in
our predictions.

Nick Bostrom quoted a study which asked people to guess at factual
answers by giving a range of possibilities; a range that was supposed
to represent a 98% confidence interval.  Their range should be wide
enough that they'd expect the answer to be outside that range only 2%
of the time.  In fact, the answers were outside the range more like 40%
of the time.
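
To give a feel for what that kind of calibration test looks like (the
ranges and answers below are invented for the sketch, not taken from
the study Nick cited), you can score a batch of stated 98% intervals
against the true answers and see how often they miss:

    # Rough sketch of scoring stated 98% confidence intervals.
    # The (low, high, true_value) triples are invented for illustration.

    def miss_rate(guesses):
        misses = sum(1 for low, high, truth in guesses
                     if not (low <= truth <= high))
        return misses / len(guesses)

    sample = [
        (10, 50, 72),        # true answer falls outside the stated range
        (1900, 1950, 1912),
        (5, 20, 35),         # outside again
        (100, 400, 250),
        (0.5, 2.0, 1.2),
    ]

    print("nominal miss rate: 0.02")
    print("observed miss rate:", miss_rate(sample))

A well-calibrated guesser's observed miss rate would come out near the
nominal 2%; the point of the study was how far above that people land.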

We've discussed this general phenomenon of overconfidence on the list
before.  It's reasonable and appropriate to draw a lesson from this
and to try to recalibrate our estimations, to recognize that we are
probably being too narrow in our thinking and that we need to expand
our estimates of what is possible and likely.  Nick mentioned this,
and then Max did as well, and I think Eliezer may have touched on it.

That's fine, but then the panelists were asked a question: when do
you think human-level AI will be achieved?  Kurzweil gave his answer,
2029, John Smart said 2070, and a few others answered as well, but Nick
and Max demurred, citing this very result on overconfidence.

There are two problems with this reasoning.  The first is that
it is technically incorrect: when you recalibrate because of human
overconfidence, you should widen your range, the confidence interval,
but leave its center, your best point estimate, where it is.  And that
point estimate is what the panelists were being asked to provide.
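
Just to make the arithmetic concrete (the years here are arbitrary, my
own illustration rather than anything said at the conference), widening
an interval by some factor stretches the endpoints but leaves the
central guess untouched:

    # Sketch: widening an overconfident interval leaves its center alone.
    # The years are arbitrary, chosen only to show the arithmetic.

    def recalibrate(low, high, widen_factor):
        center = (low + high) / 2.0
        half_width = (high - low) / 2.0 * widen_factor
        return center - half_width, center + half_width, center

    print(recalibrate(2025, 2045, widen_factor=3.0))
    # -> (2005.0, 2065.0, 2035.0): a much wider range, the same central guess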

But more fundamentally, according to a book I've read recently, The Wisdom
of Crowds (a phrase Kurzweil used quite a bit) by James Surowiecki, there
is something of a paradox in human estimation ability.  Individually we
tend to be highly overconfident.  But, collectively, our estimates
are often extremely good.  Surowiecki describes classic examples like
guessing the weight of a pig, or the number of jelly beans in a jar.
Collecting guesses from a crowd and averaging them, the result is usually
right to within a few percent.  Often the crowd's result is closer than
any individual guess.
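
As a toy illustration of why the averaging works (my own sketch, not
Surowiecki's data, and it assumes the individual errors are independent
and roughly cancel), you can simulate a jelly-bean-style guessing game:

    # Toy version of the jelly-bean experiment: many noisy individual
    # guesses, averaged.  Invented numbers, purely for illustration.
    import random

    random.seed(0)
    true_count = 1000
    # 200 guessers, each typically off by a couple of hundred either way
    guesses = [random.gauss(true_count, 250) for _ in range(200)]

    crowd_estimate = sum(guesses) / len(guesses)
    typical_error = sum(abs(g - true_count) for g in guesses) / len(guesses)

    print("crowd estimate:", round(crowd_estimate))
    print("crowd error:", round(abs(crowd_estimate - true_count)))
    print("typical individual error:", round(typical_error))

With independent errors the crowd's error shrinks roughly with the
square root of the number of guessers, which is why the average lands
so much closer than the typical individual.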

Surowiecki sees this phenomenon as being behind the success of such
institutions as futures markets, including idea futures.  The reason
these institutions work is because they are successful at aggregating
information from a diverse set of participants.  Surowiecki emphasizes the
importance of diversity of viewpoints and describes a number of studies
showing that, for example, ethnically diverse juries do a better job.
He also describes several traps that can arise, such as a copycat effect
where people are polled publicly and sequentially for their guesses,
causing later participants to amend their mental estimates to fall into
line with the emerging consensus.  Markets are sometimes vulnerable to
this but at least the financial incentive is always there to encourage
honesty.

The bottom line is that the wisdom of crowds is one of the best guides
we have to the future, and so when people refuse to make guesses because
they have recalibrated themselves into a mental fog, they are no longer
contributing to the social welfare.  It's much better, when being polled
like this, for people to try to cut through the uncertainty and find that
"50%" point where they feel they are as likely to be too low as too high.
If they can do that, and avoid being influenced by the guesses of those
who speak before them, and if the group is reasonably diverse, you can
get about as good an estimate as you're going to get.  I would have liked
to have received that estimate, and would have found it one of the most
valuable pieces of information I took away that day.

Of course, the speakers were not exactly a model of diversity, and
probably an even better estimate could have come from the audience.
I wish they had been polled as well.  You could have everybody stand up,
then ask them to remain standing if they think human-level AI will occur
before 2100, then before 2070, 2050, 2030, and so on.  At some point
roughly half the audience will have sat down, and the date you have
reached when that happens is your estimate.
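
That stand-up procedure is really just a way of reading off the median
of the audience's answers.  The equivalent calculation, with made-up
answers, is trivial:

    # The stand-up poll amounts to finding the median answer.
    # The audience answers below are invented for illustration.

    def median_year(answers):
        ordered = sorted(answers)
        return ordered[len(ordered) // 2]

    audience = [2029, 2035, 2040, 2050, 2060, 2070, 2100, 2150]
    print("about half the audience sits down around:", median_year(audience))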

The larger community would provide far more diversity and a
correspondingly improved estimate.  I would love to see a student
project ask 100 people walking into a supermarket: in what year do
you think computers will be as smart as people, so that you could have
a conversation on the internet with a computer and not be able to tell
whether you were talking to a person or a machine?  Averaging those
results would
produce some interesting data, especially if we were also able to compare
with similar results from various communities with different levels
of expertise.  I think the crowd would be surprisingly accurate on a
question like that.  Everyone interacts with computers, to some degree,
and people probably have some idea of how quickly they are changing.

The lesson I take from reading a variety of sources about the strengths
and weaknesses of human reasoning abilities is this.  You have to be
somewhat of two minds in dealing with uncertainty.  Your private
estimations should be as accurate as possible, taking into consideration
known biases and attempting to compensate for them.  That's what Nick
and Max were doing.  But, at the same time, your public communications
should perhaps be more traditional and should not necessarily reflect
all these internal mental adjustments and calibrations.

A classic example is argumentation.  We've debated at length the
surprising economic result that rational people should not disagree.
But it may be that, even though they don't disagree about the facts,
they should still argue as if they did disagree.  Argumentation brings
out issues and directions of analysis that might not appear if they
just exchanged the minimal amount of data necessary to reach agreement.
It is a rich form of communication which can produce higher quality
agreement than would occur otherwise.

And the same thing applies to guesses.  Even if you have recalibrated
your mental confidence interval to the point where you expect almost
anything to happen, it still may make sense to make a prediction based
on the best guess that you can come up with.  You don't necessarily want
to say, oh, well, humans are so overconfident, our guesses are much less
likely to be true than we think.  It may be correct, but it's unhelpful.
Even though humans are individually overconfident, collectively their
guesses are, and will remain until we get AI, the best guide we have to
the future.  People should not be afraid to guess just because they fear
being overconfident.  Acting as if we don't know that we are overconfident
may actually be a socially more responsible way to behave.

Hal


