Robert,

If you've been following transhumanist email lists such as this one and WTA-Talk, then you'll know that we're not blind to the potential negative consequences of an unfriendly AGI. The WTA in particular supports the concerns and work of the Singularity Institute for Artificial Intelligence (see in particular http://www.singinst.org/upload/artificial-intelligence-risk.pdf by Eliezer Yudkowsky) and the Lifeboat Foundation (see the AIShield program at http://lifeboat.com/ex/ai.shield). Both of these organizations are working to create research guidelines, safety protocols, and contingency plans dealing with "friendly AI." As someone with computer programming knowledge and an obvious concern about these matters, perhaps you would want to join these organizations and participate in their work. Since you've stated that you don't want to engage in open discussions, I'd be happy to assist you in contacting the above-referenced groups offlist, if you so desire.

Personally, I follow Ray Kurzweil's philosophy that you have to look at what's going on in the whole of science and technology, not focus on the problems of one area as though advances will take place ONLY in that field, in isolation from all the others. Many researchers are working on improving human-level cognition, human-computer interfaces, and other neuroengineering challenges. I think it would be jumping to a conclusion to assume that AGI will reach any sort of superhuman intelligence level before we have the ability to plug into such systems and use them directly for our own purposes. That's not to say it can't happen (hence the importance of the work being done by the above-referenced organizations), just that we shouldn't leap to the most extreme, doomsday conclusions.

Best regards,

James Clement
Executive Director
World Transhumanist Association

----------------------------------------------------------------------
Date: Sun, 2 Mar 2008 13:56:41 -0500
From: robert.bradbury@gmail.com
To: extropy-chat@lists.extropy.org
Subject: [ExI] How could you ever support an AGI?

I have not posted to the list in some time. And due to philosophical differences I will not engage in open discussions (which are not really open!).

But a problem has been troubling me recently as I have viewed press releases for various AI conferences.

I believe the production of an AGI spells the extinction of humanity. More importantly, it has what I would call back-propagating effects. Why should I expend intellectual energy, time, money, etc. on a doomed species? Put another way, those of you who have had and/or are investing in children are potentially pursuing a pointless endeavor. If an AGI develops or is developed, their existence is fairly pointless. Our current culture obviously shows that absorption is nearly instantaneous for younger minds. They will know they are "obsolete" in an AGI world.

So, given some limited genetic drive to keep making humans, that will last a while. But I see no way in which the general development (vs. the managed development) of an AGI leads to the survival of humanity.

And so, we must present transhumanism as an "Extinction Level Event" -- are we willing to deal with that?

Robert