<div class="moz-cite-prefix">On 24/12/2012 20:22, Brent Allsop
wrote:<br>
</div>
> But all intelligence must eventually, logically, realize the error
> in any such immoral, lonely, and ultimately losing 'slavish'
> thinking. Obviously, what is morally right is to co-operate with
> everyone and seek to get the best for everyone - the more diversity
> in desires the better.

This is anthropomorphising things a lot. Consider a utility
maximizer that has some goal (like making as many paperclips as
possible). There are plenty of reasons to think that it would not
start behaving morally:
http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html

Typically, moral philosophers respond to this by claiming the AI is
not a moral agent, being bound by a simplistic value system it will
never want to change. That just moves the problem from ethics to
safety: such a system would still be a danger to others (and to
value in general). It would just not be a moral villain.

Claims that systems with hardwired top-level goals will necessarily
be uncreative and unable to resist more flexible "superior" systems
had better be backed up by arguments. So far the closest I have seen
is David Deutsch's argument that they would be uncreative, but as I
argue in the link above this is inconclusive, since we have a fairly
detailed example of something that is as creative as (or more
creative than) any other software and yet lends itself to hardwired
goals (it suffers such a slowdown that it is perfectly safe,
though).

> And I think that is why there is an emerging consensus in this
> camp, which thinks fear of any kind of superior intelligence is
> silly, whether artificial, alien, or any kind of devil, whatever
> one may imagine them to be in their ignorance of this necessary
> moral fact.

I'm not sure whether this emerging consensus is based on better
information, or whether a lot of the let's-worry-about-AI people are
simply busy over at SingInst/LessWrong/FHI working on AI safety. I
might not be a card-carrying member of either camp, but I think
dismissing the possibility that the other camp is on to something is
premature.

The proactive thing to do would be for you to find a really good set
of arguments showing that some human-level or beyond AI systems
actually are safe (or, even better, disprove the Eliezer-Omohundro
thesis that most of mindspace is unsafe, or prove that hard takeoffs
are impossible or have some nice speed bound). And the AI-worriers
ought to try to prove that some proposed AI architectures (like
OpenCog) are unsafe. I did it for Monte Carlo AIXI, but it is a bit
like proving a snail to be carnivorous - amble for your life! - and
it is merely an existence proof.

> So far, at least, there are more people, I believe experts,
> willing to stand up and defend this position than there are
> willing to defend any fearful camps.

There have been some interesting surveys of AI experts and their
views on AI safety over at Less Wrong. I think the take-home
message, after looking at prediction track records and cognitive
biases, is that experts and consensuses in this domain are pretty
useless. I strongly recommend Stuart Armstrong's work on this:
http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI
http://singularity.org/files/PredictingAI.pdf

Disaggregate your predictions/arguments, and try to see if you can
boil them down to something concrete and testable.

-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University