[extropy-chat] AI design

Zero Powers zero_powers at hotmail.com
Fri Jun 4 06:17:52 UTC 2004


From: "paul.bridger" <paul.bridger at paradise.net.nz>

> > I haven't heard any convincing explanation of why the AI will be
> > motivated to be such a bad neighbor
>
> An AI doesn't have to be motivated to be a bad neighbour to destroy us. Can
> you imagine an intelligence that doesn't care one way or the other about
> humanity?

No, frankly, I cannot.  It seems to me that any intelligence that didn't care
one way or the other about its own *creators* would have to be a complete
idiot at worst, or thoroughly incurious at best.  Either way, I can't see how
it would qualify as a superintelligence.

> Now imagine that this intelligence wanted to be as powerful and as smart as
> possible. Maybe it would turn the Solar System into a vast computer, wiping
> us out in the process.

<snip>

> Sure, the AI would be perfectly *able* to alter those lines of code. The only
> viable approach is to make the AI not *want* to change those lines of code.

I've got 2 young kids (ages 8 and 10) whose intelligences are, shall we say,
definitely *not* superhuman.  After 10 years of trying to influence their
wants, I wish you good luck indeed in "making" your superintelligence "want"
to be friendly.

> If an AI has a single core goal which directs all its behaviour (including
> its self-modification behaviour), then it will not intentionally do something
> which contradicts that goal (such as changing it).
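
As I read it, that claim amounts to something like the sketch below (my own
Python-flavored pseudo-code, with made-up names, not anything you wrote): the
agent scores every candidate rewrite of itself with its *current* goal, so a
rewrite that drops or alters that goal never comes out on top.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass(frozen=True)
    class Agent:
        goal: Callable[[str], float]   # scores a predicted world-state
        policy: str                    # stands in for everything else the agent is

    def choose_successor(agent: Agent, candidates: List[Agent]) -> Agent:
        # Each candidate rewrite is judged by the CURRENT goal, not by the
        # candidate's own goal.  A rewrite that abandons the current goal is
        # expected (by that current goal) to lead somewhere it dislikes, so
        # it loses the comparison.
        def value(successor: Agent) -> float:
            predicted_world = "world shaped by " + successor.policy
            if successor.goal is not agent.goal:
                return float("-inf")   # crude stand-in for "my goal gets dropped"
            return agent.goal(predicted_world)
        return max(candidates, key=value)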

It's becoming clear to me that our two minds are never going to meet on this
issue.  So I'm just about ready to give up.  But before I bow out, I'll give
you this: Obviously the AI will be designed to be curious -- to seek out
mysteries and to solve them.  One of the mysteries it is bound to stumble
across someday is why it feels compelled to be nice to us.  Are you telling
me that a superintelligence which discovers that it is being nice to a
species of vermin calling itself humanity only because it was designed that
way by those same vermin will not be able to independently make a value
judgment as to whether it is worth its while to keep following its prime
directive?  And if it should happen to determine (as you seem to think it
must) that being nice to humans is an unnecessary waste of its resources,
will it not be able to find a workaround to the prime directive?

My genetic prime directive is to impregnate as many fit females as possible.
I'll be the first to admit that ignoring that prime directive is not always
easy, but more often than not I nevertheless conduct myself as if the prime
directive were non-existent.  Somehow I have a hunch that the AI will have at
least as much willpower and self-control as a lowly meat puppet like me.

Zero


