[ExI] Inevitability of the Singularity (was Re: To Max, re Natasha and Extropy (Kevin Haskell))

Stefano Vaj stefano.vaj at gmail.com
Tue Jul 12 11:59:44 UTC 2011


On 10 July 2011 09:59, Anders Sandberg <anders at aleph.se> wrote:

> On 2011-07-06 16:32, Stefano Vaj wrote:
>
>> Past success (sometimes against all bets...) may be an encouragement,
>> but has no predictive value as to what is going to happen in the
>> future. Especially if we choose to rest on our great past laurels and
>> expect Kurzweil's curves to automagically land a Singularity or
>> another in our lap.
>>
>
> Especially since there are reasons to think that just getting
> superintelligence, massive automated manufacturing, mental editing or other
> of our favorite things could be very, very harmful without the right
> safeguards and social framing.
>

There are two ways to see that.

The first is based on the tacit assumption that we should see things from a
"humankind" point of view. As discussed on other occasions, this is not
obvious, and above all the very concept hardly bears closer inspection. Even
ignoring that, it remains the case that our current chance of dying of old
age, starvation, aggression, accidents or pathologies is pretty close to
100%, so it is unclear by what metric such a risk would be lower or more
acceptable than the one involved in betting on the (certainly not entirely
predictable or controllable) chances that radical breakthroughs might offer
us.

As to the second, if we accept that no center of interest exists that can be
defined as "humankind", then what actually risks being very harmful about
"superintelligence, massive automated manufacturing, mental editing or other
of our favorite things" is the fact of... not having them, in comparison
with competing communities and entities that do.

As Gregory Stock remarks, the problem is usually not with defective,
dangerous or failing technologies, but with technologies that simply *work*,
and thus lend a significant competitive edge to the societies controlling
them. This was true of the Neolithic revolution, and it seems to have become
clear enough to contemporary entities such as the Islamic Republic of Iran,
which in principle should incline towards traditionalism rather than
pursuing a nuclear programme.

One may of course choose to try to evade such a literal or metaphorical
"arms race" through more or less plausible global governance mechanisms,
etc., fighting for the control and repression of technological development.

As the thresholds for their implementation progressively lower, however,
this inevitably appears to lead to increasingly Brave-New-Worldish
scenarios, in which the very "humanist" values of the proponents are
defeated anyway, only in an ever more stagnant and decadent landscape, one
merely waiting for the first natural event on a large enough scale to wipe
it out.

I am certainly not advocating ignoring safety issues, implementing
everything that can be implemented as soon as possible, or a total
deregulation of all kinds of tech-related activity. I am only pointing out
that the precautionary principle is in the long term a sure recipe for
extinction and "dehumanisation", while for the alternative, even for
developments involving some alleged x-risks, well, that remains to be seen.

--
Stefano Vaj