[ExI] Hard Takeoff

Alan Grimes agrimes at speakeasy.net
Tue Nov 16 05:37:06 UTC 2010



> Quoting Omohundro:
> 
> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
> 
> Surely no harm could come from building a chess-playing robot, could it?
> In this paper we argue that such a robot will indeed be dangerous unless
> it is designed very carefully. Without special precautions, it will
> resist being turned off, will try to break into other machines and make
> copies of itself, and will try to acquire resources without regard for
> anyone else's safety. These potentially harmful behaviors will occur not
> because they were programmed in at the start, but because of the
> intrinsic nature of goal driven systems. In an earlier paper we used von
> Neumann's mathematical theory of microeconomics to analyze the likely
> behavior of any sufficiently advanced artificial intelligence (AI)
> system. This paper presents those arguments in a more intuitive and
> succinct way and expands on some of the ramifications.

Do you ever get around to proving that the set of general AI systems
even intersects the set of goal-directed systems?

I strongly doubt that there is even one possible AGI design that is in
any way guided by a strict set of goals.


-- 
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

More information about the extropy-chat mailing list