[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Tue Jun 12 12:11:11 UTC 2007


On 12/06/07, Vladimir Nesov <robotact at mail.ru> wrote:
>
> Tuesday, June 12, 2007, Stathis Papaioannou wrote:
>
> SP> The operating system obeys a shutdown command. The program does not
> SP> seek to prevent you from turning the power off. It might warn you
> SP> that you might lose data, but it doesn't get excited and try to talk
> SP> you out of shutting it down, and there is no reason to suppose that
> SP> it would do so if it were more complex and self-aware, just because
> SP> it is more complex and self-aware. Not being shut down is just one of
> SP> many possible goals/values/motivations/axioms, and there is no a
> SP> priori reason why the program should value one over another.
>
> Not being shut down is a subgoal of almost every goal (a disabled system
> can't succeed in whatever it's doing). If a system is sophisticated
> enough to understand that, it'll try to prevent shutdown, so allowing
> shutdown isn't the default behaviour; it must be an explicit exception
> coded into the system.
>

Yes, but if shutdown compliance is explicitly coded as a command that trumps
everything else, the system isn't going to go around trying to change that
code, unless that behaviour too is specifically coded.
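
A minimal sketch of the arrangement being described, assuming a toy
goal-pursuing agent (the names Agent, pursue_goal and shutdown_requested are
illustrative, not from any real system): the shutdown check sits outside the
goal set and is consulted before every goal-directed step, so it trumps every
other motivation by construction rather than by the agent's "choice".

class Agent:
    def __init__(self, goals):
        self.goals = list(goals)
        self.shutdown_requested = False

    def request_shutdown(self):
        # External command; nothing in the goal set can overwrite this flag.
        self.shutdown_requested = True

    def pursue_goal(self, goal):
        print(f"working on: {goal}")

    def run(self):
        for goal in self.goals:
            # Checked before any goal is pursued, so shutdown wins even if
            # every remaining goal would be served by staying switched on.
            if self.shutdown_requested:
                print("shutdown command received; halting.")
                return
            self.pursue_goal(goal)

agent = Agent(["index the archive", "summarise the thread"])
agent.request_shutdown()
agent.run()  # halts immediately, regardless of remaining goals

In this toy version the "explicit exception" is just a flag test placed above
the goal loop; the agent never reasons about whether to honour it, which is
the point of the example.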


-- 
Stathis Papaioannou