[ExI] Unilateralist

Anders Sandberg anders at aleph.se
Mon Dec 24 22:20:09 UTC 2012


On 24/12/2012 19:30, John Clark wrote:
> On Mon, Dec 24, 2012 at 5:27 AM, Anders Sandberg <anders at aleph.se> 
> wrote:
>
>     > The best solution would be to have all people involved get
>     together and pool their knowledge, making a joint decision
>
>
> But even if you managed to do that it would have no effect on the real 
> engine of change, and that is an AI that may have very different values 
> than you. There is no way the stupid commanding the brilliant can 
> become a stable long term situation because there is just no way to 
> outsmart something a thousand times smarter and a million times faster 
> than you.

You are mixing up the topics. Sure, an external force that completely 
changes the situation will make a proposed solution irrelevant. But 
until that happens it makes sense to act rationally, right? Or do you 
think that the future emergence of AI makes earning money, fixing 
government, or setting up a sensible safe AI policy irrelevant *today*?

Note that if our analysis is right, a rational AI would also want to 
follow it. We show what rational agents with the same goals should do, 
and it doesn't matter much if one of them is super-intelligent and the 
others are not (see Aumann's agreement theorem).
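
To make the agreement point concrete, here is a small toy simulation 
(my own illustrative sketch, not something from the paper or this 
thread; the worlds, partitions and event below are arbitrary choices): 
two Bayesian agents share a common prior, each holds different private 
information, and they repeatedly announce their posterior for an event. 
Each announcement is public evidence that refines the other's 
information, and the announced posteriors converge, in the spirit of 
Aumann's theorem and the Geanakoplos and Polemarchakis "we can't 
disagree forever" dynamics.

from fractions import Fraction

# Toy illustration of Aumann-style agreement (an invented example):
# two Bayesian agents with a common prior repeatedly announce their
# posteriors for an event; each announcement refines the other's
# information, and the announced posteriors converge.

states = set(range(1, 10))                    # nine possible worlds
prior = {w: Fraction(1, 9) for w in states}   # common uniform prior
event = {3, 4}                                # event both agents estimate
true_world = 1

# Private information: each agent only knows which cell of its own
# partition the true world lies in.
partition_1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
partition_2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def cell_of(partition, w):
    return next(c for c in partition if w in c)

def posterior(info_set):
    """P(event | info_set) under the common prior."""
    return (sum(prior[w] for w in info_set & event)
            / sum(prior[w] for w in info_set))

def announcement(partition):
    """Publicly announcing a posterior reveals which of the speaker's
    cells are consistent with it; group the cells by announced value."""
    groups = {}
    for c in partition:
        groups.setdefault(posterior(c), set()).update(c)
    return list(groups.values())

def refine(partition, public_info):
    """Refine a partition with the public content of an announcement."""
    return [c & g for c in partition for g in public_info if c & g]

p1, p2 = partition_1, partition_2
for n in range(10):
    post1 = posterior(cell_of(p1, true_world))
    post2 = posterior(cell_of(p2, true_world))
    print(f"round {n}: agent 1 says {post1}, agent 2 says {post2}")
    if post1 == post2:
        break
    ann1, ann2 = announcement(p1), announcement(p2)
    p1 = refine(refine(p1, ann1), ann2)
    p2 = refine(refine(p2, ann1), ann2)

In this run the posteriors start at 1/3 and 1/2 and coincide after two 
rounds of announcements. Nothing in the dynamics depends on how clever 
either agent is, only on honest exchange under a common prior, which is 
the point above.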


-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University


