[ExI] Universally versus 'locally' Friendly AI

Kelly Anderson kellycoinguy at gmail.com
Sun Mar 13 08:30:47 UTC 2011

On Wed, Mar 9, 2011 at 6:35 AM, Ben Zaiboc <bbenzai at yahoo.com> wrote:
> Hm, I'm thinking here how easy it is for even a pretty normal human to not feel like 'one of
>us'.  I've even felt a touch of that myself, on occasion (and I bet there are quite a few people
>nodding their heads as they read this).  Sometimes it doesn't take much of a difference to
>make you feel totally alienated from other people.  I know that that's mostly just a subjective
>thing, and finding your peer group helps a lot, but where's the peer group for the first AI?

I think we're only going to get one chance at this. That's why
it's so important that we select really good parents to raise the
first AGIs. Those parents might even find themselves watched, like in
the Truman Show, to make sure they get it right.

> I expect that the first efforts at full AI (What some people call 'AGI') will be dysfunctional or unbalanced, maybe full-blown psychotic.  This is a separate issue from the 'friendliness problem' though.  It's just about learning to make a stable mind.  Once you've got that, *then* you have the - probably insoluble - problem of guaranteeing its 'friendliness'.

If we screw up on the first generation of AGI, then humanity is toast, IMHO.
