[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Wed Jun 13 11:38:37 UTC 2007

On 13/06/07, Eugen Leitl <eugen at leitl.org> wrote:
> On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote:
> >    I'd rather that the AIs in general *didn't* have an opinion on
> >    whether it was good or bad to harm human beings, or any other opinion
> >    in terms of "good" and "bad". Ethics is dangerous: some of the worst
> Then it would be very, very close to being psychopathic
> http://www.cerebromente.org.br/n07/doencas/disease_i.htm
> Absence of certain equipment can be harmful.

A psychopath is not just indifferent to other people's welfare, he is also
self-motivated. A superintelligent psychopath would be impossible to control
and would perhaps take over the world if he could. This is quite different
to, say, a superintelligent hit man who has no agenda other than efficiently
carrying out the hit. If you are the intended victim, you are in trouble,
but once you're dead he will sit idly until the next hit is ordered by the
person (or AI) with the appropriate credentials. That type of hit man can be
regarded as just an elaborate weapon.

> >    monsters in history were convinced that they were doing the "right"
> >    thing. It's bad enough having humans to deal with without the fear
> >    that a machine might also have an agenda of its own. If the AI just
> If you have an agent which is useful, it has to develop its own
> agendas, which you can't control. You can't micromanage agents; or else
> making such agents would be detrimental, and not helpful.

Multiple times a day we all deal with entities that are much more
knowledgeable and powerful than us, and often have agendas which are in
conflict with our own interests; for example, corporations or their
employees trying to extract as much money out of us as possible. How would
it make things any more difficult for you if instead the service you wanted
was being provided by an AI which was completely open and honest, was not
driven by greed or ambition or lust or whatever, and as far as possible
tried to keep you informed and responded to your requests at all times? And
if it did make things more difficult for some unforeseen reason, why would
anyone pursue the use of AIs in that way?

> >    does what it's told, even if that means killing people, then as long
> >    as there isn't just one guy with a super AI (or one super AI that
> There's a veritable arms race on in making smarter weapons, and
> of course the smarter the better. There are few winners in a race,
> typically just one.

Then why don't we end up with one invincible ruler who has all the money and
all the power and has made the entire world population his slaves?

> >    spontaneously develops an agenda of its own, which will always be a
> >    possibility), then we are no worse off than we have ever been, with
> >    each individual human trying to step over everyone else to get
> >    to the top of the heap.
> With the difference that we are mere mortals, competing among ourselves.
> A postbiological ecology is a great place to be, if you're a machine-phase
> critter. If you're not, then you're food.

We're not just mortals: we're greatly enhanced mortals. A small group of
people with modern technology could probably have taken over the world a few
centuries ago, even though your basic human has not got any smarter or
stronger since then. The difference today is that technology is widely
dispersed and many groups have the same advantage. If you're postulating a
technological singularity event, then this won't be relevant. But if AI
progresses like every other technology that isn't closely regulated (like
nuclear weapons research), it will be AI-enhanced humans competing against
other AI-enhanced humans. AI-enhanced could mean humans directly interfaced
with machines, but it would start with humans assisted by machines, as
humans have always been assisted by machines.

> >    I don't accept the "slave AI is bad" objection. The ability to be
> I do, I do. Even if such a thing was possible, you'd artificially
> cripple a being, making it unable to reach its full potential.
> I'm a religious fundamentalist that way.

I would never have thought it possible; it must be a miracle!

> >    aware of one's existence and/or the ability to solve intellectual
> >    problems does not necessarily create a preference for or against a
> >    particular lifestyle. Even if it could be shown that all naturally
> >    evolved conscious beings have certain preferences and values in
> >    common, naturally evolved conscious beings are only a subset of all
> >    possible conscious beings.
> Do you think Vinge's Focus is benign? Assuming we would engineer
> babies to be born focused on a particular task, would you think it's
> a good thing? Perhaps not so brave, this new world...

I haven't yet read "A Deepness in the Sky", so don't spoil it for me.

Stathis Papaioannou