[ExI] Unfriendly AI is a mistaken idea.

Eugen Leitl eugen at leitl.org
Tue Jun 12 09:26:16 UTC 2007

On Tue, Jun 12, 2007 at 06:06:54PM +1000, Stathis Papaioannou wrote:

>    So you would give a computer program control of a gun, knowing that it
>    might shoot you on the basis of some unpredictable outcome of the
>    program?

Of course you know that there are a number of systems like that, and
their large-scale deployment is imminent. People don't scale, and
they certainly can't react quickly enough, so the logic of it
is straightforward.

>    The operating system obeys a shutdown command. The program does not

The point is that the halting problem is undecidable, and in practice
systems are never validated by formal proof.

>    seek to prevent you from turning the power off. It might warn you that
>    you might lose data, but it doesn't get excited and try to talk you
>    out of shutting it down and there is no reason to suppose that it

There is no general method to tell, in advance, a safe input from one
that causes a buffer overrun.
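A minimal sketch of the overrun point (function names are mine, for
illustration): nothing in the bytes of an input marks it as safe or
unsafe; safety is a property of a bound the receiving code does or
does not enforce.

```c
#include <string.h>

/* Classic overrun: the input length is never checked, so any input
   of 16 bytes or more writes past buf. "Safe" vs "unsafe" inputs
   look identical from the outside. */
void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* undefined behavior if strlen(input) >= 16 */
    (void)buf;
}

/* The fix is to enforce the bound, not to classify inputs.
   Returns the number of bytes actually stored (at most 15). */
size_t bounded(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';   /* strncpy may not terminate */
    return strlen(buf);
}
```

`bounded("0123456789abcdefghij")` stores only 15 bytes, however long
the input; `vulnerable` with the same input is a remote-0wnage
candidate.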

>    would do so if it were more complex and self-aware,  just because it
>    is more complex and self-aware. Not being shut down is just one of
>    many possible goals/ values/ motivations/ axioms, and there is no a
>    priori reason why the program should value one over another.

The point is that people can't build systems which are both
absolutely safe and useful.

>      No of course not, because 2 + 2 is in fact equal to 2 and I can
>      prove it:
>      Let A = B
>      Multiply both sides by A and you have
>      A^2 = A*B
>      Now add A^2 - 2*A*B to both sides
>      A^2 + A^2 - 2*A*B = A*B + A^2 - 2*A*B
>      Using basic algebra  this can be simplified to
>      2*( A^2 -A*B) = A^2 -A*B
>      Now just divide both sides by A^2 -A*B and we get
>      2 = 1
>      Thus 2 + 2 = 1 + 1 = 2
>    This example just illustrates the point: even someone who cannot point
>    out the problem with the proof (division by zero) knows that it must

It's not wrong. If the formal system's production rules can derive
it, it's about as correct as it gets, by definition. Symbols are just
symbols; they depend on a set of transformation rules to give them
meaning, and different transformation rules assign different meanings
to the same symbols.

>    be wrong and would not be convinced, no matter how smart the entity
>    purporting to demonstrate this is.

I can assure you that there's nothing mysterious whatsoever about
remote 0wnage, but it still happens like clockwork.

Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
