[ExI] Robust and beneficial AI

Anders Sandberg anders at aleph.se
Tue Jan 13 12:17:29 UTC 2015


Giulio Prisco <giulio at gmail.com>, 13/1/2015 1:02 PM:
I see that many good friends and respected researchers signed the open
letter. I didn’t sign it (yet), because I think that important
progress in AI, including the development of smarter-than-human AI and
superintelligence, can only emerge from free, spontaneous and
unconstrained research. I don’t disagree with the open letter or the
research priorities document, but setting common priorities is not the
aspect of AI research that I find most interesting at this moment - I
prefer to let a thousand flowers bloom.



In most domains, the importance of research topics has a power-law tail: the most important thing is often several times more important than the second most important, and so on. In some domains, most of the value of the entire field is concentrated in the single biggest item. So prioritizing is itself quite important: if your list is off, you can miss much of the field's value by pursuing the less valuable targets.
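
A quick back-of-the-envelope sketch in Python (assuming, purely for illustration, a Zipf-like power law with exponent 1.5 over 100 topics - both numbers are arbitrary):

    # Toy power-law illustration: how much of a field's total value
    # sits in its top-ranked topics, if importance ~ 1/rank^alpha.
    def zipf_values(n, alpha):
        vals = [1.0 / k ** alpha for k in range(1, n + 1)]
        total = sum(vals)
        return [v / total for v in vals]  # fractions of total field value

    vals = zipf_values(100, 1.5)  # 100 hypothetical topics, arbitrary exponent
    print("share in top topic: %.2f" % vals[0])
    print("share in top five:  %.2f" % sum(vals[:5]))

With these particular numbers the single top topic holds roughly 40% of the field's value and the top five roughly 70%, which is why getting the head of the list right matters so much.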


Many domains empirically do seem to have very haphazard priorities. In such cases free exploration is good, because there is at least some chance that somebody works on the important thing. The alternative, everybody following one random priority list, tends to lead to worse outcomes. But if you can improve the priority-setting, the value of the list goes *way* up! If the priority list actually is somewhat correlated with value, then pure random search is a bad idea. We may still want some random exploration to hedge against model error and uncertainty in the priority list (it might still be wrong about something really important).
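
To make the trade-off concrete, a small simulation sketch (again with made-up numbers; Gaussian noise on the value estimates stands in for model error in the priority list):

    import random

    # Value of each topic: the same toy power law as above.
    def true_values(n, alpha=1.5):
        return [1.0 / k ** alpha for k in range(1, n + 1)]

    def compare(n=100, noise=0.5, trials=10000):
        vals = true_values(n)
        random_total, ranked_total = 0.0, 0.0
        for _ in range(trials):
            random_total += random.choice(vals)   # free exploration
            # A noisy priority list: estimated value = true value + noise.
            noisy = [(v + random.gauss(0, noise), v) for v in vals]
            ranked_total += max(noisy)[1]         # follow the list's top pick
        return random_total / trials, ranked_total / trials

    r, p = compare()
    print("random search:       %.3f" % r)
    print("noisy priority list: %.3f" % p)

Even a quite noisy list picks far better than chance here; the residual chance of the list being badly wrong is the argument for keeping some free exploration in the mix.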


The FLI idea is not so much to establish one true list of What Must Be Done In AI as to try to reorder the priorities of people in the field based on some actual thinking about first-, second-, and higher-order issues. The fact that safety was for a long time not even regarded as a research priority at all should tell us something about how bad the priorities used to be.

(There are some theorems about the value of metacognition suggesting that we should rationally spend effort worth up to about half of the difference in value between the top and second-best alternative; for most fields this is *a lot* more than is currently done. A few strategy meetings and reports here and there, some semi-philosophical papers by an emeritus professor, a discussion among funding bodies - that is the typical approach. But if those theorems apply, we ought to spend *billions* on setting research priorities better.)
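
To spell out the arithmetic of that rule (my reading of it, not a formal statement of the theorems): if the best alternative is worth V1 and the second best V2, it is worth spending up to about (V1 - V2)/2 on improving the choice itself.

    def metacognition_budget(values):
        # Spend up to half the value gap between best and second-best options.
        best, second = sorted(values, reverse=True)[:2]
        return (best - second) / 2.0

    # Hypothetical field: top direction worth 10 units, runner-up worth 4.
    print(metacognition_budget([10, 4, 2, 1]))  # -> 3.0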


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University

