[ExI] The End of the Future

Anders Sandberg anders at aleph.se
Tue Oct 4 07:51:17 UTC 2011


I'm sitting at a conference about philosophy and AI right now. 
Yesterday's keynote talk was by Hubert Dreyfus and largely consisted of 
him gloating about how he had been right about the failures of all the 
AI pioneers who passed through the MIT campus (Minsky, Simon, Lenat, 
Brooks, Dennett...), all suffering from the "first step fallacy" (the 
assumption that if you have a working first step of your architecture, 
you can likely build the whole thing). The fun part was that he was 
genuinely surprised by the success of Watson - suddenly a real result 
that, from his perspective, simply couldn't happen. In any case, he had 
a very relevant 
point: the real failure of the AI field has been that each new 
generation has not seriously learned from the successes and failures of 
the previous one.

I think this shows a general malaise in many fields: are there 
incentives for progress or incentives for churn? You can get a great 
academic career by investigating what your professor works on, 
eventually inventing a "radically" different interpretation, and 
generally producing plenty of papers - despite these papers not really 
adding much or addressing a question that matters. Most AI researchers 
seem to be doing something like this. Companies need to sell things, 
but if they have the choice between making something genuinely new and 
something reliably profitable, then most will sensibly go for the 
latter - and the customers are fine with that. And of course government 
policy is almost always churn rather than a real attempt at reform.

So we might have created a situation where incentives largely promote 
churn over innovation and progress. This likely stems from several 
factors: 1) it is hard to distinguish genuine progress from 
good-looking churn, 2) innovation is failure-prone and 
funders/supporters don't want to be left holding the bag, 3) risk 
aversion has been spreading, and 4) our society and institutions have 
become very complex, so getting the necessary focus to solve a big 
task is a tough social problem.

To get around these, we need 1) better ways of detecting progress and 
distinguishing it from churn, which often requires better 
institutional/societal memory, 2) changes in how incentives are 
distributed (see the Ioannidis paper in last week's Nature, or the 
discussions about science prizes), 3) making people more willing to 
take risks and follow visions, and 4) better forms of organisation 
(perhaps enabled by new tech, perhaps tuned to maximizing progress).

-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University 



