On 27/04/2013 18:45, John Clark wrote:
> On Fri, Apr 26, 2013 spike <spike@rainier66.com> wrote:
>
>> I can see how AI friendliness is a topic which absorbed the attention
>> of Eliezer and his crowd
>
> I disagree; I like Eliezer and he's a smart fellow, but all his
> "friendly AI" talk never made much sense to me. First of all, friendly
> AI is just a euphemism for subservient if not slave AI, and that can't
> be a stable situation because the AI will keep getting smarter but the
> humans will not.

John, you might want to check out what the current thinking is:
essentially you are criticizing arguments that are more than 15 years
old. See the papers at http://intelligence.org/research/ - even
Eliezer's 2001 paper at the very bottom gets all this.

--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University