<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
<br>
Russell,<br>
<br>
Thanks for adding your camp and support, it looks great! Your 1-hour
share of Canonizer LLC has been recorded. Notice
that this now anti-"Friendly AI" topic has moved up from the 10th
most supported topic to #4.<br>
<br>
Here is a link to the topic as it is currently proposed:<br>
<br>
<a class="moz-txt-link-freetext"
href="http://test.canonizer.com/change_as_of.asp?destination=http://test.canonizer.com/topic.asp?topic_num=16&as_of=review/">http://test.canonizer.com/change_as_of.asp?destination=http://test.canonizer.com/topic.asp?topic_num=16&as_of=review/</a><br>
<br>
<br>
<br>
Samantha,<br>
<br>
Thanks for proposing yet another position statement that looks to
me like another good point. Since you disagree with a notion in the
super statement - that "Concern Is Mistaken" - we should probably
restructure things to better accommodate your camp. You don't want to
support a sub camp of a camp containing a notion you disagree with and
thereby imply support of the (from your POV) mistaken notion.<br>
<br>
Could we say that one of the points of contention has to do with the
motivation of future AI? Russell, you believe that for the foreseeable
future tools will just do what we program them to do, and they will not
be motivated, right? Whereas apparently Samantha and I believe tools
will be increasingly motivated, and so both of us differ from you on
this point?<br>
<br>
Samantha's and my differences have to do with the morality (or
friendliness) of such motivations, right? I believe that, like
everything else, the morality of this motivation will keep improving,
so it need be of no real concern. Samantha differs, believing the
morality may not improve along with everything else, so it could
indeed be a real concern - right, Samantha? Yet, since Samantha
believes that if such were the case there would be nothing we could do
about it, it isn't worth any effort, and hence she agrees with Russell
and me that effort towards creating a friendly AI at this time is not
worthwhile?<br>
<br>
So does anyone think a restructuring like the following would not be a
big improvement? Or can anyone propose a better structure?<br>
<br>
<ol>
<li>No benefit for effort on Friendly AI
<ol>
<li>AI will be motivated
<ol>
<li>Everything, including moral motivation, will
increase. (Brent)</li>
<li>We can't do anything about it, so don't waste
effort on it. (Samantha)</li>
</ol>
</li>
<li>AI will not be motivated (Russell)</li>
</ol>
</li>
</ol>
<br>
Is there any other POV out there that we're still missing before we
make a big structure change like this?<br>
<br>
Thanks for your effort, folks! This is really helping me realize and
understand the important issues in a much more productive way.<br>
<br>
Brent Allsop<br>
<br>
<br>
<br>
<br>
<br>
Samantha Atkins wrote:
<blockquote cite="midC8A03AB1-3A08-4963-B16C-2559E749BB01@mac.com"
type="cite"><br>
<div>
<div>On May 27, 2007, at 12:59 AM, Samantha Atkins wrote:</div>
<br class="Apple-interchange-newline">
<blockquote type="cite"><br>
<div>
<div>On May 26, 2007, at 2:30 PM, Brent Allsop wrote:</div>
<blockquote type="cite"> My original statement had this:<br>
<br>
Name: <b>Such concern is mistaken</b><br>
One Line: <b>Concern over unfriendly AI is a big mistake.</b><br>
<br>
If you agree, Russell, then I propose we use these for the super camp
that will contain both of our camps and have the first version of the
text be something simple like:<br>
<br>
Text: <b>We believe the notion of Friendly or Unfriendly AI to be
silly for different reasons described in subordinate camps.</b><br>
<br>
</blockquote>
<div><br class="khtml-block-placeholder">
</div>
<div>Subgroup: No effective control.</div>
<div><br class="khtml-block-placeholder">
</div>
<div>The issue of whether an AI is friendly or not is, when removed
from anthropomorphism of the AI, an issue of whether the AI is strongly
harmful to our [true] interests or strongly beneficial. It is a very
real and reasonable concern. However, it is exceedingly unlikely for
a very advanced non-human intelligence that we can exert much leverage
at all over its future decisions or the effects of those decisions on
us. Hence the issue, while obviously of interest to us, cannot really
be resolved. Concern itself however is not a "big mistake". The
mistake is believing we can actually appreciably guarantee a "friendly"
outcome.</div>
</div>
<br>
</blockquote>
<br>
</div>
<div>There is also some danger that over-concern with ensuring
"Friendly" AI, when we in fact cannot do so, slows down or prevents us
from achieving strong AI at all. If the great influx of substantially
>human intelligence is required for our survival then postponing AI
could itself be a real existential risk or failure to avert other
existential risks.</div>
<div><br class="khtml-block-placeholder">
</div>
<div>- s</div>
<br>
</blockquote>
<br>
</body>
</html>