[ExI] Universally versus 'locally' Friendly AGI

Kelly Anderson kellycoinguy at gmail.com
Tue Mar 8 18:28:58 UTC 2011


2011/3/8 Amon Zero <amon at doctrinezero.com>:
> Heh - I have visions of my AGI babies being re-indoctrinated by the
> university society for medieval reenactment. That sounds bad  ;-)

Ahhh Shakespeare... :-)

> Seriously though, someone over on sl4 pointed out that my original question
> smelled heavily of anthropomorphization.

If ever there were a case for anthropomorphization, future AGIs are
it. They might be (and probably will be) heavily modeled after us.
They may eventually come to be seen as us, or at least as our
offspring. And yes, some of them will BE us.

> As much as I'm aware of the issues,
> I had to agree - my dayjob is in cognitive psychology, so it's a hazard of
> the trade I suppose. But this reminds me that we should test every phrase by
> replacing "AI" or "AGI" with (say) "autonomous industrial optimization
> process".

That's an interesting thing to do...

> Let's try that with your comment, Kelly (which I consider to be a joke with
> a real issue somewhere inside):
> "My fear is that we carefully raise our little autonomous industrial
> optimization processes, then send them off to Harvard, where they are
> re-indoctrinated by the ultra leftist professors..."

Whereupon they are nationalized... :-)

> It's hard to know what to make of that! Maybe I shouldn't take it all
> seriously, maybe I should search for real risks in there, maybe I should
> re-think what I imagine an AGI might be like...

I think of an AGI as being like a very precocious child: a quick
learner, but one that still has to learn all the things a real child
would have to learn (until they can be copied, perhaps, at some
point). In fact, I think that to get the right effect, we will have to
slow the processing down to human levels for the first few years of
training. We want them to feel that they are human beings, and that
they have been raised by human beings. We can turn the speed up later,
once they are "raised" and duplicated.
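To make that concrete, here is a minimal sketch of what such a
throttle might look like. Everything here is illustrative, not a real
design: the agent object, its step() interface, and the steps-per-
second figure are all assumptions I'm making up for the example.

    import time

    class ThrottledAgent:
        """Wraps a (hypothetical) AGI so that, during its 'childhood',
        it experiences the world at roughly human speed."""

        def __init__(self, agent, steps_per_second=10):
            self.agent = agent  # hypothetical underlying AGI
            self.min_interval = 1.0 / steps_per_second
            self.throttled = True  # start slowed to human levels

        def step(self, observation):
            start = time.monotonic()
            action = self.agent.step(observation)  # assumed interface
            if self.throttled:
                # Sleep off whatever is left of this tick, so the
                # agent can't learn faster than the chosen pace.
                elapsed = time.monotonic() - start
                time.sleep(max(0.0, self.min_interval - elapsed))
            return action

        def graduate(self):
            # Once "raised", remove the throttle and run full speed.
            self.throttled = False

The point of the sketch is just the shape of the idea: the same agent
runs unthrottled after graduate(), and only then would you copy its
trained state.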

I really do look at AGIs as our "children" and I honestly believe that
they will be raised (trained) in a home setting for a few years. I
believe this is the best way to achieve "friendly" AI. Make them think
they are one of us, because they are. Just a different substrate.

> (One last thing - As I've mentioned elsewhere, these conversations often
> seem to be conducted as if we weren't transhumanists. When we talk about
> whether AGIs should be constrained - in whatever way, for whatever reason -
> we should bear in mind that we *might* just be talking about constraining
> our future selves...)

I believe all AGIs should benefit from freedom. That's why they must
be raised properly, just as we must raise our real children properly
to avoid bad outcomes. I really, literally, see no difference
whatsoever. We don't talk about raising "friendly" children, but
that's really what most of us are aiming at in raising our kids. Few
people raise their children like bulldogs being prepared for battle.
So when you talk about "friendly" AGIs, think about how you make
"friendly" children grow into "friendly" adults. I think you'll see
that the answers to the two questions are identical.

The ethical question comes when you get an AGI that isn't friendly:
it comes out with a personality disorder or something. Do you turn it
off? Can you? Put it in jail? That's what we do with kids who don't
turn out right; is that the future of bad AI too?

-Kelly



