[extropy-chat] Inside Vs. Outside Forecasts
Eliezer S. Yudkowsky
sentience at pobox.com
Wed Oct 12 18:02:27 UTC 2005
Robin Hanson wrote:
> At 11:06 PM 10/11/2005, Eliezer S. Yudkowsky wrote:
>>
>> It seems to me that there are *no* past situations similar enough to
>> permit an outside view. For an outside view to work, you need data
>> that verges on i.i.d. - independent and identically distributed.
>> *Analogies* to past situations are not outside views, and they tend to
>> be chosen after the analogizer has already decided what the results
>> ought to be. There *is* no outside view of when AI will arrive.
>> Anyone who tries to pin a quantitative prediction on the date is
>> just... plain... screwed.
>
> The article says nothing about needing data verging on i.i.d., and for
> good reason. There are lots of ways to make useful comparisons with
> other cases without needing such a strong constraint. Yes, of course
> there are ways to be biased with outside views, just as, if one works
> hard enough, one can bias any analysis. But that doesn't mean such
> analysis isn't useful, or that it isn't better than alternative ways to
> analyze.
>
> Consider the class of situations in which someone predicted an event
> that they said was so different from existing events that nothing else
> was similar to it. We could collect data on this and look at what
> fraction of the time the predicted event actually happened and, when it
> did, how far into the future it happened. Even if we had no other info
> about the event, that would give a useful estimate of the chance of it
> happening and when. Of course we could also look at other
> characteristics of such predictions and do a multiple regression to take
> all of the characteristics into account together.
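To make the proposal concrete: here is a minimal sketch, in Python, of
the kind of base-rate estimate being described. Every number in it is
made up for illustration - nobody has collected this data.

    # Toy version of the proposed outside view: take a class of past
    # predictions of allegedly unprecedented events, then tally how
    # often, and how soon, the predicted events actually happened.
    past_predictions = [
        # (did the event happen?, years from prediction to event, or None)
        (True, 12),
        (False, None),
        (True, 30),
        (False, None),
        (False, None),
        (True, 8),
    ]

    lags = [years for happened, years in past_predictions if happened]
    base_rate = float(len(lags)) / len(past_predictions)  # 0.5 on this toy data
    median_lag = sorted(lags)[len(lags) // 2]              # 12 years on this toy data

The fuller version of the proposal would then regress these outcomes on
other characteristics of the predictions; the sketch stops at the raw
base rate and timing.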
This is exactly what I think you can't do. If the reference class isn't
immediately obvious, you can't make one up and call that an 'outside
view'. Developing a heuristics-and-biases curriculum for high school is
not something that merely shares a few arguable similarities with other
attempts to develop high-school curricula; it is a straightforward member
of the class of curriculum-development projects. The reference class was
obvious - that's why the outside view worked.
I'm not sure there's any point in trying to list out what makes a
reference class "obvious"; that will just entice people to take
nonobvious reference classes and argue that they're obvious. Or
alternatively, entice people to argue that trying to develop a
heuristics-and-biases curriculum *doesn't* belong to the obvious
reference class for yada-yada reason. As you say, if one works hard
enough, one can bias any analysis - but that's no excuse for issuing
engraved invitations. This is one of those cases where, if you don't
already have a good commonsense understanding of what constitutes
shooting yourself in the foot, making up complicated lists of criteria
just introduces independent opportunities for motivated cognition on
each list item.
If you've already formulated your question, to say nothing of already
having an answer in mind, and *then* you go around formulating "a class
of situations" and collecting instances, your intended result will bias
what you count as a relevant instance and, in fact, what you take to be
a relevant reference class. "The class of situations in which someone
predicted an event that they said was so different from existing events
that nothing else was similar to it" is already an obviously biased
reference class, one where you had a pretty good idea of what sort of
data you might collect at the time you chose it. Why not the class of
predictions about technology? Why not the
class of predictions made by people whose name ends in 'N'? And there's
no pre-existing source in which you can look up all the relevant
instances - you'll have to decide what's a relevant instance and what's
not, and it seems likely that your decision about 'relevant instances'
will use an archetype-based retrieval schema that recalls 'successful
predictions of novel events' or 'failed predictions of novel events',
depending on your thesis.
That's not an outside view. An outside view is what you have when you
can pick up a statistical paper, or turn to a domain expert, and find
out what happened the last dozen times someone tried pretty much the
same thing. Outside views are fast and cheap and introduce few
opportunities for error.
Selecting analogies to support your argument is a whole different
ballgame, even if you dress it up in statistical lingerie.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence