[ExI] How could you ever support an AGI?

James Clement clementlawyer at hotmail.com
Sun Mar 2 19:39:46 UTC 2008


Robert,
 
If you've been following transhumanist email lists such as this one and WTA-Talk, then you'll know that we're not blind to the potential negative consequences of an unfriendly AGI.  The WTA in particular supports the concerns and work of the Singularity Institute for Artificial Intelligence (see in particular http://www.singinst.org/upload/artificial-intelligence-risk.pdf by Eliezer Yudkowsky) and the Lifeboat Foundation (see the AIShield program at http://lifeboat.com/ex/ai.shield).  Both of these organizations are working to create research guidelines, safety protocols, and contingency plans dealing with "friendly AI."  As someone who has computer programming knowledge and an obvious concern over these matters, perhaps you would want to join these organizations and participate in their work.  Since you've stated that you don't want to engage in open discussions, I'd be happy to assist you in contacting the above-referenced groups offlist, if you so desire.
 
Personally, I follow Ray Kurzweil's philosophy that you have to look at what's going on in the whole of science and technology, not focus on the problems of just one area as though advances will take place ONLY in that field, in isolation from all the others.  Many researchers are working on improving human-level cognition, human-computer interfaces, and other neuroengineering challenges.  I think it would be jumping to conclusions to assume that AGI will reach any sort of superhuman intelligence before we have the ability to plug into such systems and use them directly for our own purposes.  That's not to say it can't happen (hence the importance of the work being done by the above-referenced organizations), just that we shouldn't leap to the most extreme, doomsday conclusions.
 
Best regards,
James Clement
Executive Director
World Transhumanist Association


Date: Sun, 2 Mar 2008 13:56:41 -0500
From: robert.bradbury at gmail.com
To: extropy-chat at lists.extropy.org
Subject: [ExI] How could you ever support an AGI?

I have not posted to the list in some time.  And due to philosophical differences I will not engage in open discussions (which are not really open!).  But a problem has been troubling me recently as I have viewed press releases for various AI conferences.

I believe the production of an AGI spells the extinction of humanity.  More importantly, it has what I would call back-propagating effects.  Why should I expend intellectual energy, time, money, etc. on a doomed species?  Put another way, those of you who have had and/or are investing in children are potentially pursuing a pointless endeavor.  If an AGI develops or is developed, their existence is fairly pointless.  Our current culture obviously shows that absorption is nearly instantaneous for younger minds.  They will know they are "obsolete" in an AGI world.

So, given some limited genetic drive to keep making humans, that will last a while.  But I see no way that the general development (vs. the managed development) of an AGI leads to the survival of humanity.

And so, we must present transhumanism as an "Extinction Level Event" -- are we willing to deal with that?

Robert
