[extropy-chat] Re: riots in France

Dirk Bruere dirk.bruere at gmail.com
Mon Nov 14 23:01:10 UTC 2005


On 11/14/05, Jef Allbright <jef at jefallbright.net> wrote:
>
> On 11/14/05, Dirk Bruere <dirk.bruere at gmail.com> wrote:
> >
> >
> > On 11/14/05, Samantha Atkins <sjatkins at mac.com> wrote:
> > > With material plenty do you think this is likely? But wait, I thoroughly
> > > believe in the right to obtain and bear arms. So we may disagree on
> >
> > Material plenty simply means that the fighting will be over power,
> > religion and ideology.
> >
> > > which kinds of things are a problem. A nano-factory cannot produce
> > > anything it doesn't have a blueprint for. That is one level of control.
> >
> > How much of a blueprint does a gene machine require to synthesise a gene?
> >
> > > Nanofactories could come with certain built-in restrictions giving
> > > another level of control. The problems could also be addressed by
> > > something like the broadcast model proposed by Ralph Merkle
> > > (http://www.zyvex.com/nanotech/selfRepJBIS.html).
> > >
> > > Generally speaking I am more interested in empowering people and in
> > > fighting abuses they actually do commit than in keeping them harmless by
> > > decreasing their abilities and access.
> >
> > I think that such factories will be common, and that restrictions on their
> > use will be just as effective as DRM is in music today.
>
> Dirk expressed the kinds of dangers I had in mind, but the subsequent
> discussion seems to have been about control of threats (and its
> ultimate ineffectiveness) rather than the accelerating growth of
> wisdom I had in mind.
>
> I see technological risk accelerating at a rate faster than the
> development of individual human intelligence (which gives us much of
> our built-in sense of morality), and faster than cultural intelligence
> (from which we get moral guidance based on societal beliefs), but
> maybe--just maybe--not faster than technologically based amplification
> of human values that exploits accelerating instrumental knowledge to
> implement effective decision-making, which, as I've explained elsewhere
> in more detail, is a more encompassing concept of morality.


I too think it will be a close-run race between PostHuman society and
extinction (or, at best, a massive dieback and a new dark age).
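
For concreteness, a minimal sketch (Python, standard library only, every
name invented for illustration) of the "built-in restrictions" idea quoted
above: a fabricator that will only build blueprints whose hash appears on a
centrally signed whitelist. This is not Merkle's actual broadcast
architecture, and it carries the DRM weakness built in, since the
verification key and the check both live inside the device.

# Hypothetical sketch of a blueprint whitelist as "one level of control".
# All names are invented for illustration.
import hashlib
import hmac

# Shared secret standing in for a real signature scheme. It has to live
# inside the device, which is exactly the DRM weakness noted above.
BROADCAST_KEY = b"central-authority-shared-secret"


def sign_whitelist(blueprint_hashes: list[str]) -> tuple[list[str], str]:
    """Central authority 'broadcasts' the approved hashes plus an HMAC tag."""
    payload = "\n".join(sorted(blueprint_hashes)).encode()
    tag = hmac.new(BROADCAST_KEY, payload, hashlib.sha256).hexdigest()
    return blueprint_hashes, tag


class RestrictedFabricator:
    """Refuses any blueprint that is not on the verified whitelist."""

    def __init__(self, whitelist: list[str], tag: str):
        payload = "\n".join(sorted(whitelist)).encode()
        expected = hmac.new(BROADCAST_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("whitelist failed signature check")
        self.approved = set(whitelist)

    def fabricate(self, blueprint: bytes) -> str:
        digest = hashlib.sha256(blueprint).hexdigest()
        if digest not in self.approved:
            return "REFUSED: blueprint not on the approved list"
        return "fabricating object " + digest[:8]


if __name__ == "__main__":
    chair = b"blueprint: chair"
    other = b"blueprint: something the authority never approved"
    whitelist, tag = sign_whitelist([hashlib.sha256(chair).hexdigest()])
    fab = RestrictedFabricator(whitelist, tag)
    print(fab.fabricate(chair))  # allowed
    print(fab.fabricate(other))  # refused, until someone patches the firmware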

Dirk

