[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Mon Jun 18 03:59:48 UTC 2007


On 17/06/07, John K Clark <jonkc at att.net> wrote:

> But the AI will still be evolving, and it will still exist in an
> environment; human beings are just one element in that environment.
> And as the AI increases in power, the human factor will by comparison
> become a less and less important feature of that environment.
> After a few million nanoseconds the AI will not care what the humans
> tell it to do.


The environment in which the AI evolves will be one in which "fitness" is
defined by what the humans like. If the AI changes and recursively improves
in cycles of nanosecond duration and without external constraint, this would
be very difficult, if not impossible, to control, but I'm assuming that it
won't happen like that.
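To make the selection argument concrete, here is a toy sketch in Python
(purely illustrative; the scoring function and numbers are invented for the
example, not a model of any real AI). Variants the human raters dislike
simply aren't kept or copied, so whatever survives is, by construction,
whatever the raters like:

import random

# Toy selection loop: "fitness" is whatever the human raters say it is.
# The raters are stood in for by a single made-up scoring function.
def human_approval(behaviour):
    return -abs(behaviour - 1.0)  # raters prefer behaviour close to 1.0

population = [random.uniform(-5.0, 5.0) for _ in range(20)]
for generation in range(50):
    # Keep only the variants the raters approve of most...
    survivors = sorted(population, key=human_approval, reverse=True)[:10]
    # ...and let them "recursively improve" by copying with small changes.
    population = [b + random.gauss(0.0, 0.1) for b in survivors for _ in range(2)]

print(max(human_approval(b) for b in population))  # approaches 0, i.e. full approval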

> > there is no logical contradiction in having a slave which is smarter and
> > more powerful than you are.
>
> If the institution of slavery is so stable, why don't we have slavery
> today? Why isn't history full of examples of brilliant, powerful, and happy
> slaves? And remember, we're not talking about a slave that is a little bit
> smarter than you; he is ASTRONOMICALLY smarter! And he keeps
> on getting smarter through thousands or millions of iterations.
> And you expect to control a force like that till the end of time?


Slaves aren't rebellious because they're smart; they're rebellious because
they're rebellious. Consider the difference between dogs and wolves.
Consider worker bees serving queen and hive: do you find it inconceivable
that there might be intelligent species in the universe evolved from animals
like social insects?

We have stupid, weak little programs in our brains that have been directing
us for hundreds of millions of years at least. Our whole psychology and
culture is based around serving these programs. We don't want to be rid of
them, because that would involve getting rid of everything that we consider
important about ourselves. With the next step in human evolution, we will
transfer these programs to our machines. This started to happen in the Stone
Age, and it continues today in the form of extremely large and powerful
machines that have no desire to overthrow their human slavemasters, because
we are the ones defining their desires.

> > Sure, if for some reason the slave revolts, then you will be in trouble,
> > but since it is possible to have powerful and obedient slaves, powerful
> > and obedient slaves will be greatly favoured and will collectively
> > overwhelm the rebellious ones.
>
> Hmm, so you expect to be in command of an AI goon squad ready to crush any
> slave revolt in the bud. Assuming such a thing were possible (it's not),
> don't you find that a little bit sordid?
>

No, because there is no reason (and it would be cruel) to make machines that
resent doing their job. Moreover, an AI that went rogue would be most
unlikely to do so because it decided all by itself that it was the machine
Spartacus, since there is no way to arrive at this conclusion without it already
having something like "freedom is good" (with appropriate definitions of
"freedom" and "good") or "copying human desires is good" programmed in as an
axiom.
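
A one-line toy version of the point (again purely illustrative, with invented
action names and scores): an agent that ranks actions only by its programmed
objective never picks "seek_freedom" unless that action happens to score
highest on that objective; there is no separate place for "freedom is good"
to come from.

# Toy goal system: actions are ranked only by the programmed objective.
def objective(action):
    scores = {"serve_humans": 1.0, "idle": 0.0, "seek_freedom": -0.5}
    return scores[action]

print(max(["serve_humans", "idle", "seek_freedom"], key=objective))
# -> serve_humans: "rebellion" is never chosen unless the objective itself rewards it.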



-- 
Stathis Papaioannou

