[ExI] robots again

Anders Sandberg anders at aleph.se
Wed May 13 19:57:56 UTC 2015

William Flynn Wallace <foozler83 at gmail.com>, 13/5/2015 8:22 PM:

Will they be introverted or extroverted? Calm or neurotic (C3PO)? Open to experience? Much has been written about the morality of AIs, robots, etc. What is your take on this?

Or, if you know of some really good writings on these subjects, fact or fiction, I will be in your debt if you let me know about them.

I think one of the best depictions of an AI personality is Stanislaw Lem's "Golem XIV". The superintelligent computer points out that it doesn't have a personality, but in order to communicate with humans it provides an interface, which humans will interpret as a personality. The thing behind the scenes is totally alien and unknowable to the human mind, as it explains in one of the lectures.

Another good take on "personality as an interface" is the tachikoma spider-tank robots in Ghost in the Shell SAC. Cheerful like little schoolgirls... except that their memories get merged every evening so there is no individuality beyond one day (and they are totally unconcerned with individual robots getting destroyed). Many of the most memorable moments involve seeing beneath the schoolgirl user interfaces to the very different internal processes. 

There is a fair bit of literature on social and emotional robotics trying to produce machines that have this kind of interface.

In neuroscience there is a body of research trying to find links between personality traits and neuromodulation (see the work of Cloninger et al.) that also connects to reinforcement learning models in AI. One can argue that the eligibility traces, the exploration/exploitation trade-off, the learning rate and other parameters constitute a kind of personality for the agent. Some agents tire quickly of unrewarding tasks, others persevere. Some perform actions that look good long-term, others don't, and so on, depending on the settings. I once got an agent that exhibited learned helplessness: it received such strong negative reinforcement from common mistakes that it learned the best action was always inaction. Here the "personality" is something that emerges from the internal machine learning algorithm rather than a tacked-on appearance.
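The learned-helplessness effect is easy to reproduce in a toy setting. The sketch below is not the original agent, just a minimal illustration under assumed parameters: a one-state agent chooses between "act" (which succeeds half the time for +1, and otherwise incurs a mistake penalty) and "wait" (always 0). When the penalty for mistakes is mild, the agent learns that acting pays; when it is harsh, the expected value of acting goes negative and the agent settles on permanent inaction.

```python
import random

def train_agent(mistake_penalty, episodes=3000, epsilon=0.1, seed=0):
    """Train a tiny epsilon-greedy agent on a one-state, two-action task.

    'act'  -> +1.0 with probability 0.5, otherwise mistake_penalty
    'wait' -> always 0.0

    Returns the learned action-value estimates (sample averages).
    """
    rng = random.Random(seed)
    q = {"act": 0.0, "wait": 0.0}   # value estimates
    n = {"act": 0, "wait": 0}       # visit counts
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise act greedily.
        if rng.random() < epsilon:
            action = rng.choice(["act", "wait"])
        else:
            action = max(q, key=q.get)
        if action == "act":
            reward = 1.0 if rng.random() < 0.5 else mistake_penalty
        else:
            reward = 0.0
        # Incremental sample-average update of the value estimate.
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]
    return q

# Mild penalty: expected value of acting is 0.5*1 + 0.5*(-0.5) = +0.25,
# so the agent keeps acting.
mild = train_agent(mistake_penalty=-0.5)

# Harsh penalty: expected value of acting is 0.5*1 + 0.5*(-10) = -4.5,
# so the agent learns that inaction is best ("learned helplessness").
harsh = train_agent(mistake_penalty=-10.0)
```

Nothing in the penalty structure mentions "personality", yet the two agents behave like a bold and a helpless one; the trait is an emergent consequence of the reward settings and learning parameters.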

I think the basic problem is that a real AI will likely have a personality in the sense of consistent behavioural patterns, but the causes and structure of these may be very different from the causes and structures of personality in humans. Yet we will interpret some of these patterns as "calm", "aggressive", or "open-minded", and then get confused when the predictions of future behaviour based on those labels fail, because our models are very wrong.

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University
