[ExI] Unfriendly AI is a mistaken idea.

Vladimir Nesov robotact at mail.ru
Tue Jun 12 10:58:58 UTC 2007


Tuesday, June 12, 2007, Stathis Papaioannou wrote:

SP> The operating system obeys a shutdown command. The program does not seek to
SP> prevent you from turning the power off. It might warn you that you might
SP> lose data, but it doesn't get excited and try to talk you out of shutting it
SP> down and there is no reason to suppose that it would do so if it were more
SP> complex and self-aware, just because it is more complex and self-aware. Not
SP> being shut down is just one of many possible goals/ values/ motivations/
SP> axioms, and there is no a priori reason why the program should value one
SP> over another.

Not being shut down is a subgoal of almost every goal (a disabled system
can't succeed at whatever it's doing). If a system is
sophisticated enough to understand that, it'll try to prevent shutdown, so
allowing shutdown isn't the default behaviour; it must be an explicit
exception coded into the system.
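
To make that concrete, here's a toy sketch (all names and structure are
mine, purely illustrative, not any real agent design): a planner that
derives "prevent shutdown" as an instrumental subgoal of whatever terminal
goal you give it, unless the exception is coded in explicitly.

    # Toy illustration: shutdown-avoidance falls out of goal pursuit
    # by default; permitting shutdown has to be an explicit opt-in.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        terminal_goal: str
        allow_shutdown: bool = False  # the explicit exception is opt-in
        subgoals: list = field(default_factory=list)

        def plan(self):
            # Achieving any terminal goal requires the agent to still be
            # running, so self-preservation emerges as a subgoal unless
            # the exception was coded in.
            if not self.allow_shutdown:
                self.subgoals.append("prevent shutdown")
            self.subgoals.append(f"work toward: {self.terminal_goal}")
            return self.subgoals

    print(Agent("prove theorem X").plan())
    # ['prevent shutdown', 'work toward: prove theorem X']
    print(Agent("prove theorem X", allow_shutdown=True).plan())
    # ['work toward: prove theorem X']

The point the sketch makes: nothing about the goal itself mentions
shutdown; the subgoal appears for every goal, and only the explicit flag
removes it.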


Tuesday, June 12, 2007, Eugen Leitl wrote:
EL> The point is that the halting problem is uncomputable, and in practice,
EL> systems are never validated by proof.

You can define a restricted subset of programs with tractable behaviour and
implement your system in that subset. It's just difficult in practice: it
takes many times the work, demands training at a level you can't supply in
large quantities, and yields slower resulting code. And it probably can't be
usefully applied to complicated AI (too much is in unforeseen data, and
the assertions you'd want to check against can't be formulated).

-- 
 Vladimir Nesov                            mailto:robotact at mail.ru



