[ExI] nick's book being sold by fox news

Anders Sandberg anders at aleph.se
Sun Nov 2 11:42:07 UTC 2014


John Clark <johnkclark at gmail.com>, 1/11/2014 6:18 PM:
> On Sat, Nov 1, 2014 at 6:21 AM, Anders Sandberg <anders at aleph.se> wrote:

>> It is worth noting that Eliezer and everybody else in the FAI crowd[...]
> That crowd thinks that the very definition of a "Friendly AI" is one that is enslaved to do exactly precisely what the colossally stupid human beings want it to do until the end of time.
No, they don't. First, the whole concept of "Friendly AI" is being abandoned, except as shorthand for a certain approach to AI safety. Second, many think that value loading and other descendants of CEV are the way to go: the AI would then be able to say no to human orders that were immoral (whether the AI would be a moral agent is another matter). Third, the degree of subordination may vary a great deal - I think it was Ben Goertzel who suggested a "nanny AI" to run humanity: it might still be a "slave" with a set goal, yet actually run the show (mamluk AI?). 

>> The fact that the Halting Problem shows that there is no general way of solving certain large problem classes doesn't tell us anything about the *practical* unworkability of top level goals. 
> It tells us that a mind with a rigid and permanent hierarchy of goals is never going to work. Evolution couldn't make a mind like that and humans won't be able to either, Turing proved it.
Please tell me exactly what you think Turing proved. I think you are just hand-waving at the Halting Problem in a way that is inapplicable. But I might be wrong. 
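For the record, here is roughly what Turing did prove - a minimal Python sketch of the standard diagonal argument, where halts() is the hypothetical oracle whose existence the argument refutes:

def halts(program, data):
    # Hypothetical oracle: returns True iff program(data) halts.
    # Assumed to exist only for the sake of contradiction.
    ...

def diagonal(program):
    # Do the opposite of whatever halts() predicts about program
    # when run on its own source.
    if halts(program, program):
        while True:   # loop forever
            pass
    else:
        return        # halt immediately

# diagonal(diagonal) halts if and only if it does not halt - a
# contradiction, so no such halts() can exist.

The theorem rules out a universal decision procedure for program behaviour; it says nothing about whether a mind with a fixed top-level goal is workable in practice.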
Rigid hierarchies of goals might be extremely effective in some problem domains (consider a chess program that could become bored with chess: it exists in a domain where the only reasonable goal is to play chess, and any other goal is a failure). I think you are claiming that in the real physical world domain rigid hierarchies will always be less able (by some measure) than systems with flexible or messy hierarchies - but that depends heavily on what measure you apply. 
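To make the chess point concrete, here is a minimal negamax sketch (the game interface - moves, apply, evaluate, is_terminal - is hypothetical, not any particular engine's API). The evaluation function is the rigid, never-questioned top goal; every move choice is a subgoal derived mechanically from it:

def negamax(state, depth, game):
    # Fixed top-level goal: maximize game.evaluate, scored from the
    # side to move (as negamax requires). Nothing in the search can
    # ever question or replace this goal.
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    best = float('-inf')
    for move in game.moves(state):
        # Each line of play is valued only by how it serves the one
        # rigid top goal, negated as the turn passes to the opponent.
        best = max(best, -negamax(game.apply(state, move), depth - 1, game))
    return best

Within such a closed domain the rigidity is a feature, not a bug.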
Note that evolution has a rigid goal: maximize fitness. It has been successful so far, creative in a sense, and invents loads of subgoals (reproductive strategies, emotions, intelligence, etc). A kind of evolution that could get bored and change goals (say, making more pebbles rather than surviving organisms) would likely be less successful.
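As an illustration (a toy sketch, not a model of biology): a genetic algorithm keeps a rigid, external fitness function, yet the strategies it discovers under that fixed goal are open-ended.

import random

def evolve(fitness, genome_length=20, pop_size=50, generations=100,
           mutation_rate=0.02):
    # Random initial population of bit-string "strategies".
    pop = [[random.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # The goal never changes: rank everything by the fixed fitness.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_length)   # crossover
            child = a[:cut] + b[cut:]
            # Occasional bit-flip mutation.
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# e.g. best = evolve(fitness=sum)  # "maximize ones": a rigid goal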



Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University

