<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
<br>
Moral Experts,<br>
<br>
(Opening note: For those who don't enjoy the religious /
moral / Mormonesque rhetoric below the way I do, I hope you can
simply translate it on the fly into something more to your liking. ;)<br>
<br>
This is a very exciting topic, and I think a morally critical
one. If we do the wrong thing, or fail to do the right thing well,
I think we all agree the costs could be extreme. Morality
has to do with knowing what is right and what is wrong, does it
not? I desperately want to "Choose The Right" (CTR), as Mormons
like to say. But I feel I need much more help to become morally
capable, especially in this area. It is especially
hard for me to understand, fully grasp, and remember ways of
thinking that are very different from my own. The endless
back-and-forth of "yes it is, no it isn't" is certainly doing me
no good.<br>
<br>
This is a much more complex issue than I've ever fully
thought through, and I appreciate the help from people on both sides
of it. I may be the only one, but I would find it very
valuable and educational to have concise descriptions of all the
best arguments and issues, a quantitative ranking by the
experts of their importance, and a quantitative measure
of who, and how many people, are in each camp. In other words, I
think the best way for all of us to approach this problem is to
have a concise, quantitative, and constantly improving
representation of the most important issues, according to the
experts on all sides, so we can all be better educated (with good
references) about what the most important arguments are, why they
matter, and which experts, and how many, are in each camp - updated
going forward, as ever more scientific data and ever improving
reasoning come in.<br>
<br>
We've started one survey topic on the general issue of the
importance of friendly AI (see: <a class="moz-txt-link-freetext" href="http://canonizer.com/topic.asp/16">http://canonizer.com/topic.asp/16</a> ),
which so far shows a fairly even distribution of experts on both
sides. But this is obviously just a start on what is required so
that all of us can be better educated about the most important
issues and arguments.<br>
<br>
Through this discussion, I've realized that a critical subcomponent
of the various ways of thinking about this issue is one's working
hypothesis about the possibility of a rapid, isolated, hidden, or
remote 'hard takeoff'. I'm betting that the more strongly one
holds an isolated hard takeoff as a real possibility in one's
working hypothesis, the more likely one is to fear, or want
to be cautious about, AI, and vice versa. So I think it will be very
educational for everyone to more rigorously and concisely develop
and measure the most important reasons on both sides of this
particular sub-issue.<br>
<br>
Toward this end, I'd like to create several new related survey
topics to get a more detailed map of what the experts believe in
this space. First would be a survey topic on the possibility of
any kind of isolated, rapid hard takeoff. We could then create two
related topics to capture, concisely state, and quantitatively rank
the importance and persuasive value of the various arguments
relative to each other: one ranking reasons why an isolated hard
takeoff might be possible, and another ranking reasons why it might
not be likely.<br>
<br>
This way, the experts on both sides of the issue could
collaboratively develop the best and most concise description of
each argument, and help rank which are the most convincing
for everyone and why. (It would be interesting to see whether the
rankings for each side changed when surveying those in the pro camp
versus those in the con camp, and so on.)<br>
<br>
As these pro and con argument-ranking topics developed, the
members of the pro and con camps could reference these arguments,
develop concise descriptions of why the pro or con arguments are
more convincing to them than the others, and explain why they are
in their particular camp, or why they currently use a particular
pro or con theory as their working hypothesis. And of course, it
would be very interesting to see whether anyone jumps camps once
things get more developed, or when new scientific results, or
catastrophes, come in, and so on.<br>
<br>
Would anyone else find this kind of moral expert survey information
helpful in their effort to make the best possible decisions and
judgments on such important issues? Does anyone have better or
additional ways to develop or structure a survey of the critically
important information that everyone interested in this topic needs
to know about?<br>
<br>
I'm going to continue developing this survey along these lines,
using what I've heard others say so far here, but there are surely
better ways to go about this that others can help find or point
out. Obviously, the more diversity the better, so I would love
any other ideas, input, or help with this process.<br>
<br>
Looking forward to any and all feedback, pro or con. It would be
great to at least get a more comprehensive survey of who is in
these camps, starting with the improvement of this one:
<a class="moz-txt-link-freetext" href="http://canonizer.com/topic.asp/16">http://canonizer.com/topic.asp/16</a> .<br>
<br>
And I also hope we can some day achieve perfect justice. Those who
are wrong are arguably doing great damage compared to the heroes
who are right - the ones helping us all to become morally
better. It seems to me that to achieve perfect justice, the
mistaken or wicked ones will have to make restitution to the
heroes for the damage they continue to do, for as long as they
continue to be wrong (to sin?). The more rigorously we track all
this, the sooner we can achieve better justice, right?<br>
<br>
The more help I get, from all sides, the more capable I'll be of
being in the right camp sooner, the more capable I'll be of
helping others do the same, the less restitution I'll have to
make for being mistaken longer, and the more reward we will all
reap, sooner, in a more just and perfect heaven.<br>
<br>
Brent Allsop<br>
<br>
On 11/15/2010 7:33 PM, Michael Anissimov wrote:
<blockquote
cite="mid:AANLkTikOUAxG-z9fNyZSByM3HOofTR6BJBpLuGS=4BCK@mail.gmail.com"
type="cite">Hi John,<br>
<br>
<div class="gmail_quote">On Sun, Nov 14, 2010 at 9:27 PM, John
Grigg <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:possiblepaths2050@gmail.com">possiblepaths2050@gmail.com</a>></span>
wrote:
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt
0.8ex; border-left: 1px solid rgb(204, 204, 204);
padding-left: 1ex;">
<div class="im">
<br>
</div>
I agree that self-improving AGI with access to advanced
manufacturing<br>
and research facilities would probably be able to bootstrap
itself at<br>
an exponential rate, rather than the speed at which humans
created it<br>
in the first place. But the "classic scenario" where this
happens<br>
within minutes, hours or even days and months seems very
doubtful in<br>
my view.<br>
<br>
Am I missing something here?</blockquote>
<div><br>
</div>
<div>MNT and merely human-equivalent AI that can copy itself but
not qualitatively enhance its intelligence beyond the human
level is enough for a hard takeoff within a few weeks, most
likely, if you take the assumptions in the Phoenix nanofactory
paper. </div>
<div><br>
</div>
<div>Add in the possibility of qualitative intelligence
enhancement and you get somewhere even faster. </div>
<div><br>
</div>
<div>Neocortex expanded in size by a factor of only about 4 from
chimps to produce human intelligence. The basic underlying
design is much the same. Imagine if expanding neocortex by a
similar factor again led to a similar qualitative increase in
intelligence. If that were so, then even a thousand AIs with
so-expanded brains and a sophisticated manufacturing base
would be like a group of 1000 humans with assault rifles and
helicopters in a world of six billion chimps. If that were
the case, then the Phoenix nanofactory + human-level AI-based
estimate might be excessively conservative. </div>
</div>
<br>
-- <br>
<a moz-do-not-send="true"
href="mailto:michael.anissimov@singinst.org" target="_blank">michael.anissimov@singinst.org</a><br>
<span style="font-family: arial,sans-serif; font-size: 13px;
border-collapse: collapse;">
<div>Singularity Institute<br>
</div>
<div>Media Director</div>
</span><br>
<pre wrap="">
<fieldset class="mimeAttachmentHeader"></fieldset>
_______________________________________________
extropy-chat mailing list
<a class="moz-txt-link-abbreviated" href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>
<a class="moz-txt-link-freetext" href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a>
</pre>
</blockquote>
<br>
</body>
</html>