From: Giulio Prisco <giulio@gmail.com>
> They should have sent a couple of hundred bucks my way, and I would
> have advised them to leave the rest of the money in the bank.
> Superintelligent AIs will do what they want to do. That's the
> definition of intelligence, super or not. Trying to program or enforce
> behaviors or values in a super-smart AI is like telling your smart and
> rebellious kids to stay home and study instead of going out and having
> fun. Same thing, and same result.
But the current approach to AI safety is like never talking with the kids about morals, emotions, or societal conventions, and never giving them feedback on what they do beyond instrumental success ("Great work forcing open the gun cabinet!"). What we aim to do is more like figuring out what kind of upbringing is less likely to produce school shootings, sociopathy, or unhappy career choices.

Also, there are the lesser AIs to be concerned about. You want to make sure they can interpret our intentions, laws, and norms in ways that actually work. Superintelligent entities may be smart enough to be safe even when merely "smart" agents are very unsafe (but see the whole analysis of why emergent AI values are not guaranteed to stay close to ours or to anything sane; Inceptionist pictures are a pretty good example of what happens when we let AI preferences run free:
http://d.ibtimes.co.uk/en/full/1445360/psychedelic-images-generated-by-googles-neural-network.jpg )

Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University