[extropy-chat] Social Implications of Nanotech

Eliezer S. Yudkowsky sentience at pobox.com
Sat Nov 15 18:49:41 UTC 2003


Robin Hanson wrote:

> At 09:45 PM 11/10/2003 +0100, Eugen Leitl wrote:
>> 
>> Sure, Singularity won't have much in the way of specific social 
>> implications. We'll get superhuman AI; it kills/transforms the entire
>> local ecosystem by side effect or malice aforethought, completely
>> remodels the solar system, and transforms the entire universe in its
>> lightcone into something we currently can't imagine -- and that's
>> assuming no major new physics. Business as usual, in other words.
> 
> It is crucial to try to distinguish the various causes of things that
> might happen in the future, so we can intelligently ask what would
> happen if some of these causes are realized and others are not.  Would
> the mildest versions of nanotech really, by themselves, induce
> superhuman AI?  It is not obvious.

Well, if you want my own take on the probabilities, you can find it 
(tongue-in-cheek) at:

http://sl4.org/bin/wiki.pl?GurpsFriendlyAI

It is not *necessary* that the mildest versions of nanotech *immediately* 
induce hostile SAI.  It depends on who has access to nanocomputers, who is 
playing with fire, how skilled they are, their degree of unpreparedness 
and incaution, and the algorithms they choose.  There may be some close 
scares that convince people to play it more cautiously, or the first failure may 
be the last.  I don't know.  It depends on social factors I can't see, 
things I can't predict such as choice of algorithm, some quantities that 
are way the hell beyond my ability to calculate using my current 
knowledge, and so on.  Personally I would bet on simple Moore's Law 
inducing hostile SAI before the mildest versions of nanotech show up.  The 
probability of hostile SAI will increase over time with widening access to 
nanocomputers, algorithmic improvements, advances in cognitive science, and 
improvements in the nanocomputers themselves; major scares may decrease the 
probability somewhat, but even so, I would guess, it would only be a 
question of time.
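
As a toy illustration of the "only a question of time" point (a minimal 
sketch with made-up hazard rates, not a prediction): even a small but 
rising per-year chance of someone stumbling into hostile SAI compounds 
toward near-certainty as the years pass.

    # Toy model only: the per-year hazard rates below are assumptions
    # chosen for illustration, not estimates.
    def cumulative_probability(hazards):
        """hazards[i] = assumed chance of hostile SAI in year i, given it
        has not happened yet; returns the cumulative chance by each year."""
        p_no_event = 1.0
        cumulative = []
        for h in hazards:
            p_no_event *= (1.0 - h)
            cumulative.append(1.0 - p_no_event)
        return cumulative

    # Hazard rising from 0.5% to ~5% per year over 30 years (made up).
    hazards = [0.005 + 0.0015 * year for year in range(30)]
    for year, p in enumerate(cumulative_probability(hazards), start=1):
        if year % 10 == 0:
            print("year %2d: cumulative probability ~%.2f" % (year, p))

A major scare corresponds to lowering the hazard rate for a while, which 
delays the curve but does not stop it from climbing unless the rate is 
driven all the way to zero.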

>> Sure, a couple of centuries' worth of past progress rolled into a
>> month, or a couple of days, accelerating up to a rate of 3 kYears of
>> progress within 24 hours.
> 
> The mildest versions of nanotech don't seem capable of inducing such
> rapid change.  Again, the point is to try to be as clear as possible
> about what assumptions lead to what conclusions.

The precondition for the end of the world is a large amount of computing 
power and one smart fool with access to that technology.  Smarter fools 
require less computing power.  Other variables in nanotechnology 
development, such as manufacturing times, fabrication costs, and economic 
adoption speeds, affect the timing but not the outcome.
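
If it helps to see the shape of that claim, here is a toy threshold model 
(all numbers assumed, for illustration only): treat "the end of the world" 
as the first year in which some actor's available computing power exceeds 
what their level of skill requires.  Slowing the hardware curve or raising 
costs moves the crossing year; it does not remove the crossing.

    # Toy threshold model: all numbers are assumptions for illustration.
    def first_crossing_year(start_compute, doubling_time_years, required_compute):
        """First year in which available compute exceeds the requirement,
        assuming smooth Moore's-Law-style exponential growth."""
        year = 0
        compute = start_compute
        while compute < required_compute:
            year += 1
            compute = start_compute * 2.0 ** (year / doubling_time_years)
        return year

    print(first_crossing_year(1.0, 1.5, 10**6))  # ordinary fool: ~year 30
    print(first_crossing_year(1.0, 1.5, 10**3))  # smarter fool: ~year 15
    print(first_crossing_year(1.0, 3.0, 10**6))  # slower hardware: ~year 60, not never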

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



