[ExI] Future of Humanity Institute at Oxford University £1 million grant for AI
anders at aleph.se
Fri Jul 3 16:57:31 UTC 2015
From: Giulio Prisco <giulio at gmail.com>
They should have sent a couple of hundred bucks my way, and I would
have advised them to leave the rest of the money in the bank.
Superintelligent AIs will do what they want to do. That's the
definition of intelligence, super or not. Trying to program or enforce
behaviors or values in a super-smart AI is like telling your smart and
rebellious kids to stay home and study instead of going out and having
fun. Same thing, and same result.
But the current approach to AI safety is like never talking with the kids about morals, emotions or societal conventions, nor giving them any feedback on what they do except instrumental success ("Great work in forcing open the gun cabinet!"). What we aim to do is more like figuring out what kind of upbringing is less likely to produce school shootings, sociopathy or unhappy career choices.
Also, there are the lesser AIs to be concerned about. You want to make sure they can interpret our intentions, laws or norms in ways that actually work. Superintelligent entities may be smart enough to be safe even when merely "smart" agents are very unsafe (but see the whole analysis of why emergent AI values are not guaranteed to stay close to ours, or to anything sane). Inceptionist pictures are a pretty good example of what happens when we let AI preferences run free.
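The failure mode behind those Inceptionist images can be sketched in miniature (this toy example is mine, not from the post, and all names in it are made up): an optimizer given a proxy objective with no constraints happily runs it to extremes, while the same optimizer on the objective the designer actually intended stops where it should.

```python
# Toy illustration of unconstrained proxy optimization, in the spirit of
# "Inceptionist" images (which maximize a network's activations and
# produce unintended artifacts). All functions here are hypothetical.

def proxy_score(x):
    # Proxy reward: agrees with the intended goal near x = 0,
    # but keeps rewarding ever-larger x with no bound.
    return x

def intended_score(x):
    # The designer's actual goal: peaked at x = 1, falls off beyond it.
    return x * (2 - x)

def hill_climb(score, x=0.0, step=0.1, iters=100):
    # Naive hill climbing: move in whichever direction raises the score.
    for _ in range(iters):
        if score(x + step) > score(x):
            x += step
        elif score(x - step) > score(x):
            x -= step
    return x

x_proxy = hill_climb(proxy_score)    # runs off toward ever-larger x
x_true = hill_climb(intended_score)  # settles near the real optimum, x = 1
```

The two objectives agree near the starting point, which is exactly why the divergence is easy to miss until the optimizer has pushed far outside the regime the designer had in mind.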
Anders Sandberg, Future of Humanity Institute, Philosophy Faculty, Oxford University