[ExI] Yes, the Singularity is the greatest threat to humanity

Stefano Vaj stefano.vaj at gmail.com
Mon Jan 17 11:43:35 UTC 2011


On 17 January 2011 01:17, Anders Sandberg <anders at aleph.se> wrote:

> There are far more elegant ways of ensuring friendliness than assuming
> Kantianism to be right or fixed axioms. Basically, you try to get the
> motivation system to not only treat you well from the start but also be
> motivated to evolve towards better forms of well-treating (for a more
> stringent treatment, see Nick's upcoming book on intelligence explosions).
> Unfortunately, as Nick, Randall *and* Eliezer all argued today (talks will
> be put online on the FHI web ASAP) getting this friendliness to work is
> *amazingly* hard. Those talks managed to *reduce* my already pessimistic
> estimate of the ease of implementing friendliness and increase my estimate
> of the risk posed by a superintelligence.
>

I am still persuaded that the crux of the matter remains a less superficial
consideration of concepts such as "intelligence" or "friendliness". I
suspect that at any level of computing power, "motivation" would emerge only
if a deliberate effort were made to emulate human (or at least biological)
evolutionary artifacts such as a sense of identity, a survival instinct,
etc. That would certainly be interesting, albeit probably much less crucial
to such systems' performance and flexibility than one may think.

This in turn means that AGIs in that sense will be, for all practical
purposes, *uploaded humans*, whether modelled on actual individuals or on a
patchwork thereof, and neither more nor less "friendly" than their models
would be or would evolve to be.

Now, both stupid and "intelligent" computers can obviously be dangerous. If
we postulate that intelligent ones would be more dangerous because of their
ability to exhibit "motivations", we should keep in mind that the same
feature could just as easily be supplied, fyborg-style and indistinguishably
from the outside, by a silicon system of equivalent power plus a
carbon-based human being at a keyboard.

Are we really in the business of transhumanism in order to advocate the
enforcement of global, public control over technological progress in
information technology, aimed at slowing down its already glacial pace? I
think there are already more than enough people who are only too happy to
preach the adoption of such measures...

-- 
Stefano Vaj