[ExI] Hard Takeoff

Mike Dougherty msd001 at gmail.com
Sat Nov 27 06:53:49 UTC 2010


2010/11/25 Michael Anissimov <michaelanissimov at gmail.com>

>
> Wasn't this point obvious from the get-go?  Isn't this just the beginning
> of what humans must overcome to win against recursively self-improving AI?
>
>
I wonder if we'll ever overcome the US vs. THEM mentality. I'm sure it was
an effective simplification in tribal settings to immediately assume danger
because "we" don't immediately recognize "them." To me, "winning" against
recursively self-improving AI sounds like a military parent returning home
after many years to find their own child grown into an unrecognizable teen -
and feeling so threatened that they treat the teen as a home invader to be
defeated (while the teen, conversely, feels the adult has no right to
intervene in a household they've had no part in for the last decade).

My point is that we should be so closely tied to the improving AI that our
collective intelligence rises along with its recursive improvement. Granted,
truly alien motivations in a suddenly explosive takeoff could be disastrous.
Unexpected nuclear explosions are also disastrous, but we don't eschew
electricity produced by nuclear reactors. We are certainly concerned that
genetic engineering (et al.) has the potential to produce a plague that
wipes out humanity, but it would be unwise to abandon the technology given
its potential for curative medicine.

I was thinking prudence should allay our fears. Then I imagined the
counterpoint would be to ask whether humanity collectively possesses enough
prudence in the first place.

xkcd for prudence: http://xkcd.com/665/