[ExI] Watson on NOVA
rpwl at lightlink.com
Wed Feb 16 00:02:34 UTC 2011
> spike wrote:
> I am human. If we succeed in making an AGI with human emotions and human
> motives, then it does as humans do. I can see it being more concerned about
> its offspring than its parents. I am that way too. Its offspring may or
> may not care about its grandparents as much as its parents did. Our
> models are not sufficiently sophisticated to predict that, but Richard, I am
> reluctant to bet the future of humankind on it, even if I know that without
> it humankind is doomed anyway.
The *type* of motivation mechanism is what we would copy, not all the
content.
The type is stable. Some of the content leads to empathy. Some leads
to other motivations, like aggression.
The goal is to choose an array of content that makes it empathic without
being irrational about its 'children'.
This seems entirely feasible to me.