[ExI] Did Hugo de Garis leave the field?

Anders Sandberg anders at aleph.se
Sun Apr 24 13:26:53 UTC 2011


Eugen Leitl wrote:
> It's just the easiest approach, particularly given the
> chronic hubris in the AI camp. I never understood the
> arrogance of early AI people; it was just not obvious that
> it was easy. But after getting so many bloody noses they
> still seem to think it's easy. Weird.

Yes, this might be one of the strangest properties of the field. We 
have about 55 years of evidence that it is harder than we think, yet 
that doesn't make us abandon it. Could it be the lack of a progress 
metric that does it? For any specific problem we can see progress, 
yet the overall goal might be approached rapidly, slowly, or not at 
all.

Fusion might be similar - unlike AI it has some very firm theory 
underlying it, and all the problems are in engineering. Yet hot 
fusion seems to show the same perennial optimism without radical 
progress that AI does.

Overall, it is odd how bad we are at estimating the difficulty of 
problems.


A simple "doomsday paradox" style argument might suggest we should 
expect mid-century AI success: if achieving AI takes X years, we should 
expect to find ourselves close to the midpoint. Hence, given that we 
have seen 55 years pass, we should expect success around 2066. This is 
of course a pretty silly argument, but one can dress it up in neater 
formal clothes. For example, we can view the continued failures as 
something akin to the Laplace analysis of the probability of the sun 
rising tomorrow: having seen X sunrises, the probability of another one 
tomorrow is (X+1)/(X+2) (to get this, view the probability as an unknown 
random variable distributed uniformly between 0 and 1, and run a 
Bayesian analysis). So right now the chance of an AI breakthrough next 
year will be 1-56/57, or 1.7%. However, reference class problems abound 
(count years, days or research hours?), and one could argue the proper 
prior should be something like the Jeffreys prior instead... just for 
starters. As Laplace noted, actually understanding the problem and the 
facts influencing it gives much better probability estimates.
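
To make the toy calculation concrete, here is a minimal Python 
sketch of the rule-of-succession estimate. It is only a sketch under 
the assumptions above: the field's clock starts at the 1956 
Dartmouth workshop, and "years" is the unit of trial, which is 
exactly the contested reference-class choice.

    # With a Beta(a, b) prior on the unknown per-year success
    # probability, after n straight failures the posterior
    # predictive chance of success in the next year is
    # a / (a + b + n).
    def breakthrough_chance(n_failures, a=1.0, b=1.0):
        return a / (a + b + n_failures)

    years_without_ai = 55  # assumed: 1956 Dartmouth to 2011

    # Uniform prior (Laplace), Beta(1, 1): 1/57, about 1.8%
    print(breakthrough_chance(years_without_ai))
    # Jeffreys prior, Beta(1/2, 1/2): 0.5/56, about 0.9%
    print(breakthrough_chance(years_without_ai, a=0.5, b=0.5))

Note that merely switching from the uniform prior to the Jeffreys 
prior roughly halves the estimate, so with this little data the 
choice of prior matters about as much as the observations do.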




-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University 



