[extropy-chat] Inside Vs. Outside Forecasts

Robin Hanson rhanson at gmu.edu
Wed Oct 12 20:36:20 UTC 2005

At 02:02 PM 10/12/2005, Eliezer S. Yudkowsky wrote:
>>>It seems to me that there are *no* past situations similar enough 
>>>to permit an outside view.  For an outside view to work, you need 
>>>data that verges on i.i.d. - independent and identically 
>>>distributed.  ... There *is* no outside view of when AI will arrive.
>>The article says nothing about needing data verging on i.i.d., and 
>>for good reason.  There are lots of ways to make useful comparisons 
>>with other cases without needing such a strong constraint.  ... 
>>Consider the class of situations in which someone predicted an 
>>event that they said was so different from existing events that 
>>nothing else was similar to it.  We could collect data on this and 
>>look at what fraction of the time the predicted event actually 
>>happened, and how far into the future it did happen.
>This is exactly what I think you can't do.  If the reference class 
>isn't immediately obvious, you can't make one up and call that an 
>'outside view'.  Developing a heuristics-and-biases curriculum for 
>high school is not something that shares a few arguable similarities 
>with other attempts to develop high-school curricula.  The reference 
>class was obvious - that's why the outside view worked.  I'm not 
>sure there's any point in trying to list out what makes a reference 
>class "obvious"; ... If you've already formulated your question, 
>much less already have an answer in mind, and *then* you go around 
>formulating "a class of situations" and collecting instances, your 
>intended result will bias what you think is a relevant instance, 
>and, in fact, what you believe to be a relevant reference 
>class.  "The class of situations in which someone predicted an event 
>that they said was so different from existing events that nothing 
>else was similar to it" is, already, an obviously biased reference 
>class where you had a pretty good idea in mind of what sort of data 
>you might collect at the time you chose the reference class.  Why 
>not the class of predictions about technology?  Why not the class of 
>predictions made by people whose name ends in 'N'?  ... That's not 
>an outside view.  Outside views are when you can pick up a 
>statistical paper or turn to a domain expert and find out what 
>happened the last dozen times someone tried pretty much the same 
>thing.  Outside views are fast and cheap and introduce few 
>opportunities to make errors.  Selecting analogies to support your 
>argument is a whole different ballgame, even if you dress it up in 
>statistical lingerie.

The paper we are discussing does not seem to take such a restrictive 
view of "outside view", and it cautions against making too many 
excuses not to use outside views:

>Our analysis implies that the adoption of an outside view, in which 
>the problem at hand is treated as an instance of a broader category, 
>will generally reduce the optimistic bias and may facilitate the 
>application of a consistent risk policy. This happens as a matter of 
>course in problems of forecasting or decision that the organization 
>recognizes as obviously recurrent or repetitive. However, we have 
>suggested that people are strongly biased in favor of the inside 
>view, and that they will normally treat significant decision 
>problems as unique even when information that could support an 
>outside view is available. The adoption of an outside view in such 
>cases violates strong intuitions about the relevance of information. 
>Indeed, the deliberate neglect of the features that make the current 
>problem unique can appear irresponsible. A deliberate effort will 
>therefore be required to foster the optimal use of outside and 
>inside views in forecasting, and the maintenance of globally 
>consistent risk attitudes in distributed decision systems.

I say you can do statistical analysis on *any* dataset.  For any set 
of items with any set of descriptors you can put together a model 
class, assign a prior, and turn the Bayes rule crank.  Of course some 
model classes may work better than others.  And yes, if you throw 
away some items or descriptors or model terms *because* you have some 
idea of which direction including them would change the results, you 
may bias your results.
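To make the crank-turning concrete, here is a minimal sketch, not from the original post, of the simplest possible version: treat the reference class as a tally of past predictions that did or did not come true, put a Beta prior on the base rate, and read off the posterior.  The counts (3 of 20) and the function name are hypothetical.

```python
# A minimal sketch of "turning the Bayes rule crank" on a reference class.
# Model class: each past prediction in the class came true independently
# with some unknown base rate p.  Prior: Beta(alpha, beta) on p.
# The Beta prior is conjugate to this model, so the posterior is
# Beta(alpha + k, beta + n - k) after observing k successes in n trials.

def posterior_base_rate(k, n, alpha=1.0, beta=1.0):
    """Posterior mean of the base rate under a Beta(alpha, beta) prior,
    after k of n past predictions in the reference class came true."""
    return (alpha + k) / (alpha + beta + n)

# Hypothetical tally: 3 of 20 past predictions of "unprecedented" events
# actually came to pass.  With a uniform Beta(1, 1) prior:
print(posterior_base_rate(3, 20))  # (1 + 3) / (2 + 20) ≈ 0.18
```

Richer descriptors (domain, forecaster, horizon) would mean a bigger model class and more machinery, but the logic is the same: pick a model class, assign a prior, condition on the data.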

But don't let the best be the enemy of the good.  The inside view is 
so often bad that even a crude outside view is typically a big 
improvement.  That is the idea behind the surprising usefulness of simple 
linear models.  Yes, the world isn't linear, but simple linear models 
often beat the hell out of "sophisticated" human inside-view intuition.
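A crude illustration of how simple such a model can be (the numbers here are made up, not data from any study): an ordinary least-squares line fit in closed form, with no sophistication at all.

```python
# A toy simple linear model: ordinary least squares for y = a + b*x,
# computed in closed form from the sample means and deviations.

def fit_line(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical data: noisy outcomes roughly linear in one predictor.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)
print(round(b, 2))  # slope close to 2
```

The point of the simple-linear-models literature is that even a model this crude, with coefficients fit once and applied uniformly, often outperforms case-by-case expert judgment.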

Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323  
