[ExI] singularity summit on foxnews

hkhenson hkhenson at rogers.com
Fri Sep 14 00:44:57 UTC 2007


At 02:28 PM 9/13/2007, Stefano wrote:
>On 9/11/07, Brent Allsop <brent.allsop at comcast.net> wrote:
> > As a lesser issue, it is still my opinion that you are making a big
> > mistake with what I believe to be mistaken and irrational fear mongering
> > about "unfriendly AI" that is hurting the Transhumanist and strong
> > AI movements.
>
>I still have to read something clearly stating to whom exactly an
>"unfriendly AI" would be unfriendly and why -

Consider the plant kudzu.  Kudzu isn't hostile to other plants; it 
simply crowds them out.  An AI could be unfriendly in that same way, 
or in ways even more alien to us than a plant's.  It might be 
possible to *design* AIs with motivations similar to ours, even ones 
that were subservient.  I explored this in a story originally posted 
in draft to the sl4 list, where it received no comment, positive or 
negative.  A somewhat updated version is 
here:  http://www.terasemjournals.org/GN0202/henson.html if anyone 
here wants to do something that list did not.

>but above all why you or
>I should care, especially if we were to be (physically?) dead anyway
>before the coming of such an AI.

You can't count on that, not unless you take steps to die very 
soon.  It is very likely that someone alive now will still be alive 
at the point AIs reach takeoff.  The problem with AIs thinning out 
the world's excess population is that it's hard to imagine a 
situation where unfriendly AIs wouldn't make a clean sweep.

>It is not that I think that those questions are unanswerable or that
>it would be impossible to find arguments to this effect, I simply
>think they should be made explicit and opened to debate.

The assumption on this list used to be that people intended to live 
a very long time, so there was no problem in the future they were 
unconcerned about.  (A lot, if not most, of the early Extropians 
were signed up for cryonic suspension in the event they needed it.)

Keith Henson 



