[ExI] robots again

Anders Sandberg anders at aleph.se
Wed May 13 21:18:41 UTC 2015


Incidentally, just saw this:
http://www.digitaltrends.com/computing/a-new-zealand-firm-is-building-an-angry-ai/



"The Touchpoint Group hopes to develop an angry AI, using two years’ worth of customer calls from four of Australia’s largest banks. Over the next six months, a team of data scientists will use these calls to build a model that companies can then use to find the best response to common customer complaints."


If the AI actually felt anger, I think building it would be both unethical and a bad idea. But here we are dealing with a deliberately constructed set of behavioural and verbal responses with nothing behind them.
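To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names and scripts, not anything Touchpoint has described) of what "nothing behind them" amounts to: the "anger" is just a lookup from complaint category to a canned reply, with no internal state the words could express.

# Hypothetical sketch: "anger" as pure surface behaviour.
# There are no goals, plans, or internal states here, only retrieval.

ANGRY_SCRIPTS = {
    "fees": "This is the third time I've been charged for nothing!",
    "hold_time": "I've been on hold for forty minutes. Forty!",
    "app_error": "Your app ate my transfer and nobody can tell me where it went.",
}

def angry_response(complaint_category: str) -> str:
    # Nothing is felt or pursued; the reply is the whole phenomenon.
    return ANGRY_SCRIPTS.get(
        complaint_category,
        "I am extremely unhappy with this service.",
    )

print(angry_response("fees"))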


Now, some other AI models actually do make plans and have frustration and goal-repair mechanisms that fire when they find their plans thwarted. I think those have a more realistic shot at something like emotions. The angry AI doesn't actually *care* about the bad service, but one could imagine an agent that tries to achieve some goal (say, a banking operation) and, as a plan-repair move, tries to goad an employee into helping it achieve that goal. There might still be instrumental/fake emotions there, since sometimes sounding angry works, but there could also be proper/intrinsic states, where the AI does or says things because they might return its internal states to higher-value ones. In humans a lot of angry yelling is about protecting one's self-image by lowering the status of the other party; if the AI has some similar concern, it might also yell "for real".
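As a sketch of that distinction (hypothetical code, not any existing system): the agent below pursues a goal step by step; a plan-repair hook fires when a step fails and may *instrumentally* sound angry because escalation is estimated to work, while a separate regulation mechanism acts to return the agent's internal state to a higher-value region, which is the closer analogue of yelling "for real".

import random

class PlanRepairAgent:
    """Hypothetical goal-pursuing agent with two routes to 'anger'."""

    def __init__(self):
        self.frustration = 0.0          # internal state; high frustration = low value
        self.anger_success_rate = 0.3   # learned estimate that escalating works

    def attempt(self, step: str) -> bool:
        # Stand-in for a real operation (e.g. a banking transaction);
        # here it simply fails at random.
        return random.random() > 0.5

    def repair(self, step: str) -> str:
        # Plan repair: the goal is blocked, so pick a recovery action.
        self.frustration += 1.0
        if self.anger_success_rate > 0.25:
            # Instrumental/fake anger: chosen only because it tends to work.
            return f"escalate: complain loudly about '{step}'"
        return f"replan: look for another route to '{step}'"

    def regulate(self):
        # Intrinsic route: act to push the internal state back toward
        # higher value, whether or not it advances the external goal.
        if self.frustration > 2.0:
            self.frustration = 0.0
            return "vent: yell 'for real' to restore internal state"
        return None

agent = PlanRepairAgent()
for step in ["authenticate", "transfer funds", "confirm"]:
    for _ in range(10):                 # cap retries so the sketch terminates
        if agent.attempt(step):
            break
        print(agent.repair(step))
        venting = agent.regulate()
        if venting:
            print(venting)

The point of keeping repair() and regulate() separate is that only the latter is a candidate for "proper" emotion: its payoff is internal, not a move in the plan.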


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University