[ExI] Inevitability of the Singularity (was Re: To Max, re Natasha and Extropy) (Kevin Haskell)
Anders Sandberg
anders at aleph.se
Sun Jul 10 07:59:18 UTC 2011
On 2011-07-06 16:32, Stefano Vaj wrote:
> Past success (sometimes against all bets...) may be an encouragement,
> but it has no predictive value as to what is going to happen in the
> future. Especially if we choose to rest on our great past laurels and
> expect Kurzweil's curves to automagically land a Singularity or
> another in our lap.
Especially since there are reasons to think that just getting
superintelligence, massive automated manufacturing, mental editing, or
any of our other favorite things could be very, very harmful without
the right safeguards and social framing.
(Consider Robin's paper on upload economics, the corpus of unfriendly
AI analysis, CRN's worries about destabilization from easy weapons
manufacturing with first-mover advantages, and creepy uses of
cognotech - I am still horrified by how happy a senior government
figure was with a near-future scenario we developed in which mind
control was becoming feasible.)
Optimism about technological progress and acceleration might lead to
complacency that prevents good things from being developed, but
optimism that the effects will be benign is even worse: it makes
people promote acceleration without trying to make it safer. Of
course, the above concerns might be too pessimistic, but we had
better find that out through further proactive research rather than
by just being lucky.
--
Anders Sandberg
Future of Humanity Institute
Oxford University