[ExI] Hard Takeoff

spike spike66 at att.net
Fri Nov 19 06:26:39 UTC 2010


...On Behalf Of Mike Dougherty
Subject: Re: [ExI] Hard Takeoff

On Tue, Nov 16, 2010 at 5:31 PM, spike <spike66 at att.net> wrote:
>>>> We know the path to artificial intelligence is littered with the 
>>>> corpses of those who have gone before.  The path beyond artificial 
>>>> intelligence may one day be littered with the corpses of our dreams, 
>>>> of our visions, of ourselves.
>
>>>Gee Spike, isn't it difficult to paint a sunny day with only black paint?
>
>> Mike, we must recognize both the danger and the promise of AGI.  We might
>> get one chance to do it exactly right, but only one.

>Agreed.  I just hope vigilance doesn't necessarily have to be so gloomy.  :)
...
>I'm just joking (mostly) - as your words made me both laugh and recoil...

Good, then I did it right.  Laughing and recoiling is a good combination.
Mike, emergent AGI is one of the very few areas where I get dead serious.  I
once thought nanotech gray goo was the biggest threat to humanity, but after
Ralph Merkle's talk at Nerdfest, I came to realize to my own satisfaction
that humanity would not master nanotech.  Rather, a greater-than-human AGI
would do it.

The real risks are not those on which the world seems so focused.  The
commies fizzled out without a bang, global warming isn't coming for us,
Malthusian population growth isn't going to kill us, and radicalized
Presbyterians are not coming, certainly not in the time scale we have left
before this singularity is likely to show up.  I am convinced to my own
satisfaction that the most likely scenario is that an AGI somehow comes into
existence, then does what lifeforms do: reproduces to its capacity by
converting all available metals to copies of itself, with the term metal
meaning every element to the left of the far right-hand column of the
periodic chart.  It may or may not upload us.  It may or may not attempt to
preserve us in our current form.

Our current efforts might influence the AGI, but we have no way to prove it.
Backing away from the AGI development effort is not really an option, or
rather not a good one, for without an AGI, time will take us all anyway.  I
give us a century, two centuries as a one-sigma case.

Mike, given that paradigm, are my previous comments understandable?

spike
