[ExI] Universally versus 'locally' Friendly AGI

Amon Zero amon at doctrinezero.com
Tue Mar 8 08:46:20 UTC 2011


On 8 March 2011 01:45, Kelly Anderson <kellycoinguy at gmail.com> wrote:

>
> My fear is that we carefully raise our little AGIs, then send them off
> to Harvard, where they are re-indoctrinated by the ultra leftist
> professors... :-) Of course, I have the same fears for my real kids.



Heh - I have visions of my AGI babies being re-indoctrinated by the
university society for medieval reenactment. That sounds bad ;-)

Seriously though, someone over on sl4 pointed out that my original question
smelled heavily of anthropomorphization. As much as I'm aware of the issues,
I had to agree - my day job is in cognitive psychology, so it's a hazard of
the trade I suppose. But this reminds me that we should test every phrase by
replacing "AI" or "AGI" with (say) "autonomous industrial optimization
process".

Let's try that with your comment, Kelly (which I consider to be a joke with
a real issue somewhere inside):

"My fear is that we carefully raise our little autonomous industrial
optimization processes, then send them off to Harvard, where they are
re-indoctrinated by the ultra leftist professors..."

It's hard to know what to make of that! Maybe I shouldn't take it at all
seriously, maybe I should search for real risks in there, maybe I should
re-think what I imagine an AGI might be like...

(One last thing - as I've mentioned elsewhere, these conversations often
seem to be conducted as if we weren't transhumanists. When we talk about
whether AGIs should be constrained - in whatever way, for whatever reason -
we should bear in mind that we *might* just be talking about constraining
our future selves...)

- A