[extropy-chat] Social Implications of Nanotech

Robin Hanson rhanson at gmu.edu
Tue Nov 11 14:26:59 UTC 2003


At 09:45 PM 11/10/2003 +0100, Eugen Leitl wrote:
> > An easy simple opinion to have is that nanotech won't have much in the way
> > of specific social implications.  In this view, manufacturing will slowly
>
>Sure, Singularity won't have much in the way of specific social implications.
>We'll get superhuman AI, it kills/transforms the entire local ecosystem by
>side effect or malice aforethought, completely remodels the solar system, and
>transforms the entire universe in its lightcone into something we currently
>can't imagine -- and that's assuming no major new physics. Business as usual,
>in other words.

It is crucial to try to distinguish the various causes of things that might
happen in the future, so we can intelligently ask what would happen if some
of these causes are realized and others are not.  Would the mildest versions
of nanotech really, by themselves, induce superhuman AI?  It is not obvious.

>Sure, a couple of centuries' worth of hitherto progress rolled into a
>month, or a couple of days, accelerating up to a rate of 3 kYears of
>progress within 24 hours.

The mildest versions of nanotech don't seem capable of inducing such rapid
change.  Again, the point is to try to be as clear as possible about what
assumptions lead to what conclusions.

>You'll notice Diamond Age doesn't allow end users to design. Even architects
>are supervised closely (but easy enough to subvert). ...

As I said before, I want to separate hypotheses about the technical abilities
from hypotheses about regulation.  My reference to Diamond Age was about the
technical abilities, not the regulations described there.

>Molecular manufacturing that is useless for its own production doesn't
>happen. It just gets eaten alive by the other kind.

That assumes that the other kind exists, and is fast/effective enough.

> > 2.  A big question is by what factor general manufacturing devices are
> > less efficient than specialized manufacturing devices, ...
>
>How does the current economy handle production of production means, including
>persons? At an exponential rate? Superhuman persons? I'm not sure current
>economic theory would be a good predictor here.

Current economic theory isn't as tied as you might think to the state of our
current economy.  I think that, if used carefully, it is capable of predicting
what consequences follow from what assumptions.  So I want to get clear on the
assumptions.

>What does it take to produce a good meta-designer? A robust morphogenetic
>code, an evolutionary system, a good nanoscale simulator, and lots of
>computronium to run the above. As embarrassingly parallel as they come.
>And that's about the only metainvention you need to make.

And how damn hard is it to have "lots" of "robust" "good" items like these?
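To make concrete what is being claimed, the loop described above might be sketched as follows.  The fitness function here is a trivial hypothetical stand-in for the "good nanoscale simulator", and the whole thing is a toy illustration, not a real design system; my question is precisely about how hard the robust, good versions of these ingredients are to get.

```python
# Toy sketch of the "evolutionary system + simulator" loop described
# above.  The fitness function is a hypothetical stand-in for a
# nanoscale simulator; everything here is illustrative only.
import random

def fitness(design):
    # Hypothetical stand-in for scoring a design in a simulator:
    # designs score best when every gene is near 0.5.
    return -sum((x - 0.5) ** 2 for x in design)

def evolve(pop_size=50, genes=8, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Each fitness evaluation is independent of all the others: this
        # is the "embarrassingly parallel" part, since they could run on
        # separate machines with no coordination.
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        # Keep the best half, refill with mutated copies of parents.
        pop = parents + [
            [g + rng.gauss(0, 0.05) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

The loop itself is trivial; everything interesting is hidden inside the fitness function and the encoding of designs, which is where my question about "robust" and "good" bites.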

> > It seems to me to make the most sense to choose some sort of baseline
> > scenario, then analyze each substantial change by itself, and only after
> > try to combine these scenarios.  ...
>
>If you look at two subcritical chunks of plutonium, you will not arrive at
>the correct conclusion of what a supercritical assembly of them will bring
>you, unless you factor in an adequate level of theory.

Agreed.  But you'd need that same level of theory to predict what a critical
chunk will do.
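The point can be put in model terms: once you have the theory, a single equation (here, a per-generation neutron multiplication factor k) covers both the subcritical and the supercritical regime.  A minimal sketch, with illustrative rather than physical numbers:

```python
# Toy illustration of the criticality analogy: the same simple theory
# (a per-generation multiplication factor k) predicts both regimes.
# The k values and starting population are illustrative, not physical.

def neutron_population(k, generations, start=1000.0):
    """Neutron count after the given number of generations."""
    return start * k ** generations

# Subcritical (k < 1): the chain reaction dies out.
subcritical = neutron_population(k=0.95, generations=100)

# Supercritical (k > 1): the same model predicts exponential growth.
supercritical = neutron_population(k=1.05, generations=100)
```

Observing only the subcritical case tells you little unless you already have the model; but the same model, once in hand, is what predicts both.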

> > >Could we have an "open source" economy, where people work on whatever
> > >interests them, or create designs that satisfy their own needs, then
> > >make the fruits of their labor available to everyone for free?
> >
> > The form of the market for designs seems a separate issue.  ...
>
>What's the market for gcc? autoconf? GNU/Linux? I notice this particular
>market has been eating several Large Companies alive, yet has not yet found
>adequate treatment in classical economics circles.

I meant that open source counts as a market form.  We have some insight into
it, and some open questions remain, as with all market forms.

> > Sure, if pigs can fly, the sky is going to look different.  But we need to
> > do the analysis step by step, and not jump too quickly to big conclusions.
>
>If your premises are bogus, your conclusions are food for the abovementioned
>airborne porcines.
>I'm not sure whether you're looking for publishable papers, or trying to map
>the more radical implications of comparatively simple technologies like
>molecular nanofacturing (completely ignoring the AI issue for time being).
>If you're realistic, your peers will reject your papers.
>If you're conservative, your fellow transhumanists will point and laugh.
>I'm not sure there's a sweet spot between those two bonfires.

I've just said I want to do analysis step by step, carefully making distinctions
and identifying assumptions, in the best spirit of academic study.  I worry about
any conclusions drawn by people who think such an approach is too conservative
or unrealistic.





Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 



