[ExI] Universally versus 'locally' Friendly AI

Ben Zaiboc bbenzai at yahoo.com
Wed Mar 9 13:35:31 UTC 2011


Kelly Anderson <kellycoinguy at gmail.com> wrote:

> I really do look at AGIs as our "children" and I honestly believe
> that they will be raised (trained) in a home setting for a few
> years. I believe this is the best way to achieve "friendly" AI.
> Make them think they are one of us, because they are. Just a
> different substrate.


Hm, I'm thinking here how easy it is for even a fairly normal human not to feel like 'one of us'.  I've felt a touch of that myself on occasion (and I bet there are quite a few people nodding their heads as they read this).  Sometimes it doesn't take much of a difference to make you feel totally alienated from other people.  I know that's mostly a subjective thing, and finding your peer group helps a lot, but where's the peer group for the first AI?

I expect that the first efforts at full AI (what some people call 'AGI') will be dysfunctional or unbalanced, maybe full-blown psychotic.  This is a separate issue from the 'friendliness problem', though.  It's just about learning to make a stable mind.  Once you've got that, *then* you have the - probably insoluble - problem of guaranteeing its 'friendliness'.

Ben Zaiboc