[ExI] Empathic AGI
Stefano Vaj
stefano.vaj at gmail.com
Sat Feb 12 15:20:04 UTC 2011
2011/2/9 John Clark <jonkc at bellsouth.net>:
> On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote:
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible
>
> Two difficulties with that:
> 1) The Darwinian process is more like history than mathematics: it is not
> repeatable, and very small changes in initial conditions could lead to huge
> differences in output.
Of course. Being a human being, an oak, a rabbit, or an amoeba are all
plausible Darwinian strategies. But if one wants something where
"aggression", "empathy", "selfishness", etc. have a meaning different
from the one that may apply to a car or to a spreadsheet, any of them
would be both necessary and sufficient, I guess.
> 2) Human-level empathy is aimed at Human-level beings, the further from that
> level the less empathy we have. We have less empathy for a cow than a person
> and less for an insect than a cow. As the AI's intelligence gets larger its
> empathy for us will get smaller although its empathy for its own kind might
> be enormous.
Yes. Or not. Human empathy is a fuzzy label for complex adaptive or
"spandrel" behaviours which do not necessarily have anything to do
with "similarity".
For instance, gender differences in our species are substantial
enough, but of course you have much more empathy on average for your
opposite-gender offspring than you may have for a human individual of
your own gender with no obvious genetic link to your lineage, and/or
belonging to a hostile tribe.
I suspect that an emulation of a human being may well decide, and
"feel", that it belongs to a cross-species group (say, the men *and*
the androids of country X or of religion Y), or perhaps imagine
something along the lines of "proletarian AGIs all over the world,
unite!". As long as they are "intelligent" in the very anthropomorphic
sense discussed here, there would be little new in this respect. In
fact, they would by definition be programmed as much as we are to make
such choices.
Other no-matter-how-intelligent entities which are neither evolved
nor explicitly programmed to emulate evolved organisms have, of
course, no reason to exhibit self-preservation, empathy, aggression or
altruism drives in any sociobiological sense.
--
Stefano Vaj