[extropy-chat] Singularity Blues

Edmund Schaefer edmund.schaefer at gmail.com
Wed Apr 6 16:52:52 UTC 2005


On Apr 6, 2005 12:21 PM, Adrian Tymes <wingcat at pacbell.net> wrote:
> --- Edmund Schaefer <edmund.schaefer at gmail.com> wrote:
> > Whether
> > we can "ride the wave" of advanced technology is only important if
> > we're imagining that lots of people are going to gain access to it at
> > once, which we can't safely assume.
> 
> Yet it seems the safest outcome, the one in which the greatest number
> of people (used loosely: including uploads, new AIs, et al.) are likely
> to survive and prosper.  So, at least from the selfish perspective of
> someone in the present who would like to create a future in which this
> self can survive, that seems the option to steer towards.  (Of course
> we can't safely assume it; if we could, no such steering would be
> required.)

Giving everyone advanced nanotechnology is about as safe as giving
everyone nuclear strike capabilities. Less safe, really. Why is equal
distribution of advanced technology desirable, and how do you plan to
make it safe?

> > All the power goes to the
> > superintelligences that invent the stuff, and that's what we should
> > prepare for.
> 
> Debatable, but in any case, setting things up so as to create as many
> superintelligences as possible would seem likely to avoid the many
> problems that have been pointed out with a single intelligence
> dominating everything.

Why is domination by multiple superintelligences any better? What
specific problems would this fix?


