[ExI] Hard Takeoff

Richard Loosemore rpwl at lightlink.com
Tue Nov 16 17:18:32 UTC 2010


Dave Sill wrote:
> 2010/11/15 Michael Anissimov <michaelanissimov at gmail.com>:
>> Quoting Omohundro:
>> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
>> Surely no harm could come from building a chess-playing robot, could it? In
>> this paper we argue that such a robot will indeed be dangerous unless it is designed
>> very carefully. Without special precautions, it will resist being turned off, will try to
>> break into other machines and make copies of itself, and will try to acquire resources
>> without regard for anyone else’s safety. These potentially harmful behaviors will occur not
>> because they were programmed in at the start, but because of the intrinsic nature of goal
>> driven systems.
> 
> Maybe I'm missing something obvious, but wouldn't it be pretty easy to
> implement a chess playing robot that has no ability to resist being
> turned off, break into other machines, acquire resources, etc.? And
> wouldn't it be pretty foolish to try to implement an AI without such
> restrictions? You could even give it access to a restricted sandbox.
> If it's really clever, it'll eventually figure that out, but it won't
> be able to "escape".

Dave,

This is one of many valid criticisms that can be leveled against the 
Omohundro paper (a sketch of the sandbox you describe follows below).

The main criticism is that the paper *assumes*, in its premises, that 
any AI will have certain motivations, and then uses those same 
premises to "infer" what motivational characteristics the AI will have!

It is a flagrant example of circular reasoning, made all the more 
astonishing by its acceptance for publication at the 2008 AGI 
conference.
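
As a concrete illustration of the sandbox idea: a chess engine can run 
as an unprivileged child process under hard resource limits, with an 
unconditional kill switch on the parent's side. Below is a minimal 
sketch in Python; the "./engine" binary and the particular limits are 
assumptions for illustration only, and real isolation would add 
OS-level mechanisms (chroot, containers, seccomp) on top of this.

import resource
import subprocess

def limit_resources():
    # Runs in the child just before exec: cap CPU time and memory,
    # forbid spawning new processes and writing files.
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))        # 10 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (2**28, 2**28))   # 256 MB
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))        # no fork()
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))        # no file output

engine = subprocess.Popen(
    ["./engine"],                # hypothetical engine: moves in, moves out
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    preexec_fn=limit_resources,  # apply the limits in the child
    close_fds=True,              # inherit no open descriptors
    env={},                      # and no environment
)

reply, _ = engine.communicate(b"e2e4\n", timeout=30)
print(reply.decode())
engine.kill()                    # shutting it off needs no cooperation

Note that RLIMIT_FSIZE only blocks writes to regular files, so the 
engine can still answer over its stdout pipe. Whether such a box would 
hold against a genuinely superintelligent optimizer is exactly the 
point under dispute, but nothing about goal-driven chess play requires 
granting the program network access, persistence, or self-modification 
in the first place.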


Richard Loosemore