[ExI] Isn't Bostrom seriously bordering on the reactionary?

Anders Sandberg anders at aleph.se
Thu Jun 16 08:54:40 UTC 2011


Giulio Prisco wrote:
> But the problem is that some repented ex-transhumanists seem _only_
> interested in this, and they discuss it in a way that frankly makes
> them hardly distinguishable from luddites. After reading Nick's paper
> I can easily imagine him supporting a strict worldwide ban on emerging
> tech in pure precautionary-principle zealot, nanny-state bureaucrat
> style.
>
> And this is, in my opinion, the biggest existential risk and the most
> plausible. I believe our species is doomed if we don't fast forward to
> our next evolutionary phase.
>   

Maybe. And maybe we are doomed if we do not advance very cautiously 
towards that phase. Now, how can we get the necessary information to 
make that choice? The problem with luddites and a lot of transhumanists 
is that they think there is no need for (or point in) getting that 
information.

We at FHI are quite interested in and work on the positive side of 
transhumanism - the potential for cognitive enhancement (individual and 
collective), brain emulation, life extension and so on. But the risks of 
transhuman technologies are *extremely* understudied, and as I argued in 
my earlier post, we have some pretty solid arguments for why xrisks 
should be taken very seriously. The problem here is that most "normal" 
academics will not understand or care about these technologies, their 
potential, or their existential risks. If we do not do the job, who will?

Case in point: for decades the early AI pioneers honestly thought they 
were merely a decade away from human-level AI. How many papers looked at 
the safety issues? Essentially zero. Thinking about AI risk only emerged 
in the late 1990s, and largely from the transhumanist community. The big 
irony is that the risks had been discussed in fiction since the 1920s, 
but were completely ignored because that was just fiction. Only the 
transhumanist community took the possibility of AI seriously *and* was 
willing to think more deeply about the consequences than unemployment*.

[ * An interesting exception: I.J. Good's paper "Speculations concerning 
the first ultraintelligent machine", which defined the intelligence 
explosion, contains the lines: "Thus the first ultraintelligent machine 
is the *last* invention that man need ever make, provided that the 
machine is docile enough to tell us how to keep it under control. It is 
curious that this point is made so seldom outside of science fiction. It 
is sometimes worthwhile to take science fiction seriously." But this 
paper had little influence in this respect until recently. ]

Personally I do think that technological stagnation and attempts to 
control many technologies are major threats to our survival and 
wellbeing. But that cannot be defended by merely asserting it - it needs 
to be investigated, analysed and tested. Furthermore, there does not 
appear to be any a priori reason to think that all technologies are alike. 
It might be very rational to slow some down (if it is possible) while 
trying to speed others up. For example, at present most of our 
conclusions suggest that an uploading-driven singularity is more 
survivable than an AI-driven singularity.



-- 
Anders Sandberg,
Future of Humanity Institute 
James Martin 21st Century School 
Philosophy Faculty 
Oxford University 



