<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 24/12/2012 19:30, John Clark wrote:<br>
</div>
<blockquote
cite="mid:CAJPayv3zyjt_yFUSkxpHOR6D=k4SK90VNhP8+oV-Q65FCNCyWw@mail.gmail.com"
type="cite">On Mon, Dec 24, 2012 at 5:27 AM, Anders Sandberg <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span>
wrote:<br>
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
> The best solution would be to have all people involved
get together and pool their knowledge, making a joint decision</blockquote>
<div><br>
But even if you managed to do that it would have no effect on
the real engine of change, and that is an AI that may have very
different values than you. There is no way the stupid
commanding the brilliant can become a stable long-term
situation, because there is just no way to outsmart something a
thousand times smarter and a million times faster than you. <br>
</div>
</div>
</blockquote>
<br>
You are mixing up the topics. Sure, an external force that
completely changes the situation will make a proposed solution
irrelevant. But until that happens it makes sense to act rationally,
right? Or do you think that the future emergence of AI makes earning
money, fixing government, or setting up a sensible safe AI policy
irrelevant *today*?<br>
<br>
Note that if our analysis is right, a rational AI would also want to
follow it. We show what rational agents with the same goals should
do, and it actually doesn't matter much if one is super-intelligent
and the others are not (see Aumann's agreement theorem). <br>
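<br>
As a rough illustration (a toy model made up for this email, not anything
from our actual analysis), here is the Geanakoplos-Polemarchakis
"communication" version of the theorem in a few lines of Python: two Bayesian
agents share a prior but hold different private information, one more
fine-grained than the other, and merely announcing their posteriors back and
forth forces them to the same answer. The state space, partitions and event
below are invented for the demonstration.<br>
<pre>
# Toy sketch of the Geanakoplos-Polemarchakis communication version of
# Aumann's agreement theorem. Two agents share a uniform prior over nine
# states but observe different partitions of the state space. They take
# turns announcing P(EVENT | what they know); each announcement becomes
# common knowledge and shrinks the set of states everyone still considers
# possible. The announcements provably reach a common value.
# (All numbers here are invented for illustration only.)

from fractions import Fraction

STATES = set(range(1, 10))                      # states of the world, 1..9
PRIOR = {s: Fraction(1, 9) for s in STATES}     # common uniform prior
EVENT = {3, 4}                                  # the proposition of interest

# Agent 1 has coarse information; agent 2's information is partly finer.
PARTITIONS = [
    [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],          # agent 1
    [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}],          # agent 2
]

def cell(partition, state):
    """The partition cell containing a given state."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(EVENT | info) under the common prior."""
    return (sum(PRIOR[s] for s in info & EVENT) /
            sum(PRIOR[s] for s in info))

def agree(true_state, max_rounds=20):
    public = set(STATES)                        # states still commonly possible
    for rnd in range(max_rounds):
        before = set(public)
        announced = []
        for i, part in enumerate(PARTITIONS):
            q = posterior(cell(part, true_state) & public)
            announced.append(q)
            # Announcing q rules out every state that would have led this
            # agent to announce something different.
            public = {s for s in public
                      if posterior(cell(part, s) & public) == q}
            print(f"round {rnd}: agent {i + 1} announces P(E) = {q}")
        if public == before:                    # nothing new was revealed:
            return announced                    # the posteriors are common
                                                # knowledge, hence equal
    return announced

if __name__ == "__main__":
    q1, q2 = agree(true_state=1)                # agents start at 1/3 vs 1/2
    assert q1 == q2                             # ...and end up agreeing
</pre>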
<br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University </pre>
</body>
</html>