Part of what I wrote on another list today (a thread that started with some thoughts on a self-improving AGI theorem prover) may clarify my position further:

"Today we have no real idea how to build a working AGI. We have theories, some of which seem plausible, or at least not obviously flawed, to people who have studied such things - or at least to a few of the qualified people. But we don't have anything working that looks remotely close. We don't have any software system that self-improves in really interesting ways, except perhaps within some very, very constrained genetic programming domains. Nada. We don't have anything that can pick concepts out of the literature at large and integrate them.

To think that a machine that is a glorified theorem prover is going to spontaneously extrapolate all the ways it might get better, spontaneously sprout perfect general concept extraction and learning algorithms, spontaneously come to understand software and hardware in depth, develop a will to be better that outweighs every other consideration whatsoever (yet without ever questioning that will/goal in the course of its self-improvement), and somehow get or be given all the resources needed to eventually convert everything into a highly efficient computational matrix is utterly and completely bizarre when you think about it. It is far more unlikely than gray goo.

Can we stop wasting very valuable time and brains on protecting against the most unlikely of possibilities and get on with actually increasing intelligence on this poor besotted rock?"

This is similar to what Russell says, I guess, except that I am far less pessimistic than he is about the possibility of strong AI in the near term. However, I believe we can hardly control the consequences of its arrival for human well-being at all. So why am I so interested in AI? Because I believe that without a drastic increase in intelligence on this planet we have little hope at all of surviving to see the 22nd century. Hopefully much of that intelligence can come from Intelligence Augmentation of humans and groups of humans. But that has its own dangers.

So yes, unfriendly AI is a concern, but I consider lack of sufficient intelligence (and yes, wisdom) to be a far greater and more pressing concern.

- samantha


On May 27, 2007, at 12:58 PM, Brent Allsop wrote:

> Russell,
>
> Thanks for adding your camp and support, it looks great! Your 1-hour share
> of Canonizer LLC has been recorded.
> Notice that this now anti-"Friendly AI" topic has moved up from the 10th
> most supported topic to #4.
>
> Here is a link to the topic as it is currently proposed:
>
> http://test.canonizer.com/change_as_of.asp?destination=http://test.canonizer.com/topic.asp?topic_num=16&as_of=review/
>
>
> Samantha,
>
> Thanks for proposing yet another good position statement that looks to me
> like another good point. Since you disagree with a notion in the super
> statement - that "Concern Is Mistaken" - we should probably restructure
> things to better accommodate your camp. You don't want to support a sub
> camp of a camp containing a notion you disagree with and thereby imply
> support of the (in your POV) mistaken notion.
>
> Could we say that one of the points of contention seems to have to do with
> the motivation of future AI? Russell, you believe that for the foreseeable
> future tools will just do what we program them to do, and that they will
> not be motivated, right? Whereas Samantha and I apparently believe tools
> will be increasingly motivated, and so share this difference in our
> beliefs with you?
>
> Samantha's and my differences have to do with the morality (or
> friendliness) of such motivations, right? I believe that, like everything
> else, the morality of this motivation will keep improving, so it need be
> of no real concern. Samantha differs, believing the morality may not
> improve along with everything else, so it could indeed be a real concern -
> right, Samantha? Yet since Samantha believes that if such were the case
> there would be nothing we could do about it, it isn't worth any effort,
> hence she is in agreement with Russell and me about the lack of worth of
> any effort toward creating a friendly AI at this time?
>
> So does anyone think a restructuring like the following would not be a big
> improvement? Or can anyone propose a better structure?
>
> 1. No benefit for effort on Friendly AI
>    1.1. AI will be motivated
>         1.1.1. Everything, including moral motivation, will increase. (Brent)
>         1.1.2. We can't do anything about it, so don't waste effort on it. (Samantha)
>    1.2. AI will not be motivated (Russell)
>
> Any other POV out there we're still missing before we make a big structure
> change like this?
>
> Thanks for your effort, folks!
> This is really helping me realize and understand the important issues in a
> much more productive way.
>
> Brent Allsop
>
>
> Samantha Atkins wrote:
>
>> On May 27, 2007, at 12:59 AM, Samantha Atkins wrote:
>>
>>> On May 26, 2007, at 2:30 PM, Brent Allsop wrote:
>>>
>>>> My original statement had this:
>>>>
>>>> Name: Such concern is mistaken
>>>> One Line: Concern over unfriendly AI is a big mistake.
>>>>
>>>> If you agree, Russell, then I propose we use these for the super camp
>>>> that will contain both of our camps, and have the first version of the
>>>> text be something simple like:
>>>>
>>>> Text: We believe the notion of Friendly or Unfriendly AI to be silly
>>>> for different reasons described in subordinate camps.
>>>
>>> Subgroup: No effective control.
>>>
>>> The issue of whether an AI is friendly or not is, when anthropomorphizing
>>> of the AI is set aside, an issue of whether the AI is strongly harmful to
>>> our [true] interests or strongly beneficial. It is a very real and
>>> reasonable concern. However, it is exceedingly unlikely, for a very
>>> advanced non-human intelligence, that we can exert much leverage at all
>>> over its future decisions or the effects of those decisions on us. Hence
>>> the issue, while obviously of interest to us, cannot really be resolved.
>>> Concern itself, however, is not a "big mistake". The mistake is believing
>>> we can actually appreciably guarantee a "friendly" outcome.
>>
>> There is also some danger that over-concern with ensuring "Friendly" AI,
>> when we in fact cannot do so, slows down or prevents us from achieving
>> strong AI at all. If a great influx of substantially greater-than-human
>> intelligence is required for our survival, then postponing AI could itself
>> be a real existential risk, or a failure to avert other existential risks.
>>
>> - s