[ExI] Robust and beneficial AI

Giulio Prisco giulio at gmail.com
Tue Jan 13 11:58:38 UTC 2015


I see that many good friends and respected researchers signed the open
letter. I didn’t sign it (yet), because I think that important
progress in AI, including the development of smarter-than-human AI and
superintelligence, can only emerge from free, spontaneous and
unconstrained research. I don't disagree with the open letter or the
research priorities document, but setting common priorities is not the
aspect of AI research that I find most interesting at this moment - I
prefer to let a thousand flowers bloom.

On Mon, Jan 12, 2015 at 11:05 PM, Anders Sandberg <anders at aleph.se> wrote:
> Some of what I did during the holidays:
> http://futureoflife.org/misc/open_letter
> http://futureoflife.org/static/data/documents/research_priorities.pdf
>
> Getting AI to be safe and "fit for purpose" (whether driving or being a
> companion species) is slowly becoming mainstream. This list can in true
> hipster fashion claim to have debated it long before it became cool.
>
>
>
> Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford
> University
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>



