[extropy-chat] Inside Vs. Outside Forecasts
Eliezer S. Yudkowsky
sentience at pobox.com
Wed Oct 12 21:07:00 UTC 2005
Robin Hanson wrote:
>
> But don't let the best be the enemy of the good. The inside view is so
> often bad that even a crude outside view is typically a big
> improvement. That is the idea of the surprising usefulness of simple
> linear models. Yes, the world isn't linear, but simple linear models
> often beat the hell out of "sophisticated" human inside-view intuition.
Don't let the worse be the argument for the bad. I wasn't saying that
you should use inside views instead. I was proposing that you were just
screwed. Robyn Dawes makes a similar point in his chapter on the
robustness of linear models: when the best linear model predicts only 4%
of the variance (and, of course, sophisticated human intuition doesn't
predict anything at all), it may be that the phenomenon in question is
just not very predictable. What makes people think they *can* predict
student performance ten years out on the basis of a five-minute
interview? What makes people think they can predict the arrival time of AI?
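
To be concrete about what "predicts only 4% of the variance" means,
here is a minimal sketch in Python with numpy (the dataset below is
invented for illustration, not Dawes's data): fit the best linear
model and compute R-squared, the fraction of variance it explains.

import numpy as np

# Invented illustration: interview scores vs. performance years later.
# Only the R-squared computation is the point; the numbers are made up.
rng = np.random.default_rng(0)
interview = rng.normal(size=200)                       # predictor
performance = 0.2 * interview + rng.normal(size=200)   # mostly noise

# Best-fit linear model: performance ~ a * interview + b
a, b = np.polyfit(interview, performance, 1)
predicted = a * interview + b

# R-squared: fraction of variance the linear model explains.
residual = performance - predicted
r_squared = 1 - residual.var() / performance.var()
print(f"variance explained: {r_squared:.1%}")   # small: a few percent

Even the best-fitting line explains almost none of the variance; the
bottleneck is the predictability of the phenomenon, not the
sophistication of the model.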
Yes, people seek excuses to reject outside views (typically bearing bad
news) in favor of inside views (typically optimistic). But the fact
that outside views work in the experimental tests is tied to the
existence of an obvious reference class, which makes the outside view
fast, cheap, and simple. Now it would be shooting off your own foot to
believe that
the inside view somehow works better on hard problems than on easy ones
- naturally it will work worse. But, of course, the outside view will
also tend to work worse as the problem becomes harder - as the
reference class becomes less obvious, the analogies become more distant
and complex, and people start to argue about the reference class and
introduce motivated cognition into the arguments. With enough
opportunities for motivated cognition, why should selected arguments
from analogies work any better than inside views, even dressed up in
statistics? What evidence is there that this sort of thing will work?
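
To make that worry concrete, a toy sketch (Python; every number below
is invented) of how the "fast, cheap, simple" outside view turns into
a fight over the reference class:

import numpy as np

# Hypothetical reference class: durations (in months) of past projects
# judged similar to ours. The outside view is just the base rate:
projects_like_ours = np.array([7, 9, 12, 14, 18, 25, 40])
print(np.median(projects_like_ours))    # 14.0 - one forecast

# With no obvious reference class, the forecast is hostage to which
# class you argue for - motivated cognition enters at this line:
projects_we_prefer = np.array([3, 5, 6, 8, 9])
print(np.median(projects_we_prefer))    # 6.0 - a very different answer

The statistics are trivial; the work, and the opportunity for abuse,
is all in choosing the class.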
Linear models are great stuff but neither linear models nor human
intuitive judgment will predict next week's lottery numbers. What makes
you think you can do better than chance? Why are you not simply screwed?
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence