[ExI] Hard Takeoff

Natasha Vita-More natasha at natasha.cc
Sun Nov 14 19:05:02 UTC 2010


Hi Michael, great to hear from you.
 
I looked at your link and have to say that your analysis looks very, very
much like my Primo Posthuman supposition for the future of brain, mind,
and intelligence as related to AI and the Singularity.  My references are
quite similar to yours: Kurzweil, Voss, Goertzel, and Yudkowsky, but I also
include Vinge, from my interview with him in the mid-1990s.
 
Best,
Natasha
 
 Natasha Vita-More <http://www.natasha.cc/>
 

  _____  

From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael
Anissimov
Sent: Sunday, November 14, 2010 11:59 AM
To: ExI chat list
Subject: Re: [ExI] Hard Takeoff


Here's a list I put together a long time ago: 

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Say I meet someone like Natasha or Stefano, but I know they haven't been
exposed to any of the arguments for an abrupt Singularity -- someone newer
to the whole thing.  I mention the idea of an abrupt Singularity, and they
react by saying that it's simply secular monotheism.  Then I present each of
the items on that AI Advantage list, one by one.  Each time a new item is
presented, there is no reaction from the listener.  It's as if each
additional piece of information just isn't getting integrated.

The idea of a mind that can copy itself directly is a really huge deal.  A
mind that can copy itself directly is more different from us than we are
from most other animals.  We're talking about an area of mindspace way
outside what we're familiar with.

The AI Advantage list matters to any AI-driven Singularity.  You may say
that it will take us centuries to get to AGI and that these arguments
therefore don't matter, but if you think that, you should say so explicitly.
The arguments about whether AGI is achievable by a certain date and whether
AGI would quickly lead to a hard takeoff are separate arguments -- as if I
need to say it.

What I find is that people don't like the *connotations* of AI and are much
more concerned with the possibility of THEM PERSONALLY sparking the
Singularity through intelligence enhancement, so they underestimate the
probability of the former simply because they never care to look into it
very deeply.  There is also a cultural dynamic in transhumanism whereby
interest in hard-takeoff AGI is considered "SIAI-like" and implies that one
must be culturally associated with SIAI.

-- 
michael.anissimov at singinst.org
Singularity Institute
Media Director
