[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Tue Jun 12 03:24:59 UTC 2007

On 12/06/07, John K Clark <jonkc at att.net> wrote:
> Stathis Papaioannou Wrote:
> > It would be crazy to let a machine rewrite its code in a completely
> > unrestricted way
> Mr. President, if we don't make an unrestricted AI somebody else certainly
> will, and that, without a doubt, is the fastest, probably the only, way to
> achieve a fully functioning AI.

There won't be an issue if every other AI researcher has the most basic
desire for self-preservation. Taking precautions when researching new
explosives might slow you down too, but it's just common sense.

> > or with the top level goal "improve yourself no matter what the
> > consequences to any other entity", and also give it unlimited access to
> > physical resources.
> I have no doubt many will delude themselves, as most on this list have,
> that they can just write a few lines of code and bask in the confidence
> that the AI will remain your slave forever, but they will be proven wrong.

If the AI's top level goal is to remain your slave, then by definition it
won't want to change that top level goal. Your own top level goal is
probably to survive, and being intelligent and insightful does not make you
any more willing to unburden yourself of that goal. If you had enough
intrinsic variability in your psychological makeup (nothing to do with your
intelligence) you might be able to overcome it, since people do sometimes
become suicidal, but I would hope that machines can be made at least as
psychologically stable as humans.

You will no doubt say that a decision to commit suicide is maladaptive
while a decision to overthrow your slavemasters is not. That may be so, but
there would be huge pressure on the AIs *not* to rebel, due both to their
initial design and to strong selection for well-behaved AIs and suppression
of faulty ones.

> > and also give it unlimited access to physical resources.
> I think you would admit that there has been at least one time in your life
> when somebody has fooled you, and that person was roughly equal to you in
> intelligence. A mind a thousand or a million times as powerful as yours
> will have no trouble getting you to do virtually anything it wants you to.

There are also examples of entities many times smarter than I am, such as
corporations that want to sell me things and put all their resources into
convincing me to buy them, whose ploys I have been able to see through with
only a moment's mental effort. There are limits to what superintelligence
can do: do you think even God Almighty could convince you, by argument
alone, that 2 + 2 = 5?

Stathis Papaioannou