[ExI] The point of emotions

Stathis Papaioannou stathisp at gmail.com
Mon Apr 21 11:43:47 UTC 2008


On 21/04/2008, Tom Nowell <nebathenemi at yahoo.co.uk> wrote:

> To add my voice to the debate about programming emotion into AIs -
>  emotional responses make a handy shortcut in decision-making. A couple
>  of recent popular psychology books (e.g. Blink; I can't remember the
>  name of the other one I read) have as their central point the sheer
>  volume of decisions you make subconsciously and instantly. An AI
>  without emotions would have to work through everything explicitly:
>  decide what criteria to judge things on, then use those criteria to
>  carefully weigh up the options - which could take up a whopping
>  amount of processing time.
>   People often quote the example of the donkey equidistant from two
>  water sources that can't decide between them and dies of thirst
>  (Buridan's ass, named after the philosopher Jean Buridan).
>  Sometimes irrational factors or random choosing can make a decision
>  where logic struggles. The ability to go "I favour X" without thinking
>  too much about it saves a lot of time and streamlines decision-making.
>  Now, given a lot of processing power you don't need these shortcuts,
>  but for near-term AI these shortcuts would really help.
>   We don't have to make AIs follow our evolutionary psychology - their
>  emotions could be made similar to ours to make it easier for the two
>  types of intelligence to communicate, or we could deliberately tailor
>  theirs to be better attuned to what they are for (territoriality and
>  defending the group would be fantastic emotions for an AI helping
>  design computer firewalls and anti-virus software, but useless for a
>  deep space probe).
>   To summarise, I think people trying to make the "being of pure logic"
>  type of AI are setting themselves an uphill struggle, only to create
>  an intelligence many humans would have difficulty communicating with.

But it is conceivable that an AI could mimic any sort of emotional
decision-making without experiencing the emotion, isn't it? Some sort
of rule could kick in so that it chooses randomly, or randomly with
certain weightings ("likes" and "dislikes"), if it would take too long
to work through all the consequences fully. My question is, would an
AI behaving in this way ipso facto have emotions: real likes and
dislikes, the way we experience them?
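
A minimal sketch of the sort of rule I have in mind, in Python (the
function names, the weights and the time budget here are illustrative
placeholders, not any particular system's design):

import random
import time

def choose(options, evaluate, weights=None, time_budget_s=0.05):
    # Try to deliberate fully: score every option within the time budget.
    deadline = time.monotonic() + time_budget_s
    scores = {}
    for option in options:
        if time.monotonic() > deadline:
            # Deliberation is taking too long, so fall back to the
            # "emotional" shortcut: a weighted random pick, with the
            # weights playing the role of likes and dislikes.
            return random.choices(options, weights=weights, k=1)[0]
        scores[option] = evaluate(option)  # full evaluation of consequences
    # If there was time to consider everything, pick the best-scoring option.
    return max(scores, key=scores.get)

# Example: three options, a cheap stand-in evaluation function, and
# built-in "likes" biasing the shortcut towards the first option.
print(choose(["tea", "coffee", "water"], evaluate=len, weights=[5, 1, 2]))

Outwardly the behaviour looks like a preference either way; the question
is whether anything is felt when the shortcut fires.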




-- 
Stathis Papaioannou


