[ExI] Why care about AI friendliness? (was Re: singularity summit on foxnews)

Mike Dougherty msd001 at gmail.com
Thu Sep 13 23:04:28 UTC 2007

On 9/13/07, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> I still have to read something clearly stating to whom exactly an
> "unfriendly AI" would be unfriendly and why - but above all why you or
> I should care, especially if we were to be (physically?) dead anyway
> before the coming of such an AI.

  Perhaps what makes unfriendly AI so scary is that it would be so
different from us.  Even the most extreme religionist (take your pick)
is still human; despite their difference of ideology they are more
like us than not.  Algorithmic AI has no particular reason to share
our evolutionary bias for human interaction.  This form of AI could
potentially trigger a xenophobic response that even little green men
(or big-eyed gray aliens) fail to elicit - at least physical bodies
could conceivably have a weakness that 50+ years of sci-fi has taught
us to hope we can exploit.  Consider how intimidating the PC is for the
average consumer.  Now imagine the scenario where 'the computer'
really does have its own motivations in addition to complete control
over your identity.  HAL (from 2001:A Space Odyssey) wasn't even
really "unfriendly" so much as "conflicted" - and look how that
turned out for Dave Bowman.  :)

  If you really do feel that you will be dead before AI is born, then
you probably shouldn't worry about it.  Even if that's true, should
you expect your children to care?  If your reference to _physical_
death implies some kind of uploaded transcendence of your physical
body, then you have even more to consider of AI - since it will
probably take some serious intelligence to run your software in a
machine.  Most likely, if you can run on a machine, there is already a
non-human agent facilitating your experiences.  You would need to be
emulated, while the AI 'hypervisor' would be running natively.  Even if the
AI is not subjective master over your emulated self (or your
children/whatever), you must think about the control systems for
increasingly complex real-time processes - unless human brains get a
huge boost in capability, we will continue relying on machines to make
increasingly important decisions and to inform us of the outcomes so we
can maintain the illusion of control.  It doesn't take a truly evil AI
to make a nightmare scenario out of the future; the best intentions of
a humanity unprepared for parenthood can do as much damage.  The
point of raising a "friendly" AI is to hedge our bet that, once it
exceeds our ability to control it, there is a good chance it will
continue to perform to our benefit.  'Makes me think of the expression, "Be
nice to your kids, they're the ones who choose your retirement home."
