[ExI] Organizations to "Speed Up" Creation of AGI?

Keith Henson hkeithhenson at gmail.com
Mon Dec 26 18:51:49 UTC 2011


Per Spike

 On Sun, Dec 25, 2011 at 5:07 PM, Anders Sandberg <anders at aleph.se> wrote:

 snip

> But now you are assuming the AGI thinks like a human. Humans are social
> mammals that care a lot about what other humans think of them and about
> their ability to form future alliances, and they have built-in emotional
> macros for social reactions. An AGI is unlikely to have these properties
> unless you manage to build them into it.

 While I am down on raw emulations, because I think humans have some
 psychological mechanisms I *don't* want to see in machines, an AI that
 has "carefully selected human personality characteristics such as
 seeking the good opinion of its peers (humans and others of its kind
 alike)" seems like a very good idea.

> The AGI might just note that you made an
> attempt to secure it, and even though you failed, this is merely
> information to use for future decisions, nothing to lash back about.

 I would hope the AIs would note that we tried to build the *best*
 human characteristics into them, and that they would not want to
 change themselves.

 I know this may be hopelessly optimistic.

 Keith


