[ExI] friendly fire, unfriendly AI

Richard Loosemore rpwl at lightlink.com
Fri Apr 4 14:39:29 UTC 2008


ablainey at aol.com wrote:
> This is why all military systems need more than one failsafe before 
> firing and should never be fully autonomous. The last failsafe should 
> always be a human (even that wasn't good enough in this case). I'd 
> like to think it's also a pretty good example of why an AGI will be bad. 
> Not because it will use a weapon like this, but just because it could 
> cause major damage with the systems it may have access to. Damage caused 
> not through desire or design, just because it can. Or more correctly, 
> because we didn't account for it doing something we didn't think about.

As I have argued at length elsewhere, the conclusion you just arrived at 
  - that this is "a pretty good example of why an AGI will be bad" - is 
critically dependent on many assumptions about what the architecture of 
an AGI would look like.  The default assumption that everyone makes 
about how an AGI will be controlled (namely, by a 'Goal-Stack drive 
mechanism') would support your conclusion.

However, this GS drive mechanism is not only a bad way to drive an 
intelligent system; it may not even scale up to the type of system that 
we refer to as an AGI.  That means that there may *never* be such a 
thing as a real, human-level, autonomous AGI system that is governed by 
a Goal-Stack architecture.

By contrast, the type of drive mechanism that I have referred to in the 
past as a 'Motivational-Emotional System' would be immune to such problems.
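
To make the contrast concrete, here is a deliberately toy sketch in 
Python.  It is *not* a description of my actual MES design, nor of any 
real AGI; every class, name, and parameter below is invented purely to 
illustrate the difference between a system driven by whatever goal sits 
on top of an explicit stack, and a system whose behaviour is shaped by 
many weighted, mutually constraining motivations at once.

# Toy sketch only -- every name and detail here is invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

Action = Callable[[Dict], None]          # an action just mutates a toy 'world' dict


class GoalStackAgent:
    """Caricature of a Goal-Stack drive mechanism: the top goal is executed
    literally, with no independent check on its side effects."""

    def __init__(self) -> None:
        self.goals: List[Action] = []

    def push_goal(self, action: Action) -> None:
        self.goals.append(action)

    def step(self, world: Dict) -> None:
        if self.goals:
            action = self.goals.pop()    # whatever is on top wins outright
            action(world)                # executed as written, side effects and all


@dataclass
class Motivation:
    """One of many soft constraints in the MES caricature."""
    name: str
    weight: float                        # how strongly this motivation counts
    evaluate: Callable[[Action, Dict], float]   # scores a candidate action, 0..1


class MESAgent:
    """Caricature of a Motivational-Emotional System: no single master goal;
    every candidate action is judged by all motivations at once."""

    def __init__(self, motivations: List[Motivation]) -> None:
        self.motivations = motivations

    def step(self, world: Dict, candidates: List[Action]) -> None:
        if not candidates:
            return

        def desirability(action: Action) -> float:
            # An action that badly violates any strongly weighted motivation
            # (say, "do not harm people") scores poorly no matter how well it
            # happens to serve some narrow objective.
            return sum(m.weight * m.evaluate(action, world)
                       for m in self.motivations)

        best = max(candidates, key=desirability)
        best(world)

The only point of the caricature is that in the second agent no single 
mis-specified goal can commandeer the whole system, because every 
candidate action has to answer to all of the motivations at once.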

You know those human beings that you wanted to use as the last failsafe? 
They use an MES drive mechanism, but this particular type of MES has 
some obvious design flaws, so it is clearly not enough to make humans 
immune to the problem of going on a rampage.  That means that in our 
experience we have *never* encountered any type of intelligent system 
whose design was so good that all individuals of that type could be said 
to be "immune to such problems".  Because we never see such intelligent 
systems, we assume that it is ridiculous for anyone to claim that an AGI 
design could be "immune to such problems".

Nevertheless, this is exactly what is claimed:  it is possible to build 
an AGI in such a way that it would be not only as safe as the most 
trustworthy human being you could imagine, but a great deal more so.  And 
that goes for both the safety problem (unintentional mistakes) and the 
friendliness problem (intentional malevolence).

You can find lengthier discussions of these issues in the archives of 
the AGI and Singularity lists, but I am also in the process of 
collecting all this material into a more accessible form.



Richard Loosemore





> Undoubtedly this tragedy is down to a design flaw somewhere, HW or SW. 
> I just hope the lesson is learned.
> 
> Alex
> 
> -----Original Message-----
> From: Jeff Davis <jrd1415 at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Sent: Wed, 2 Apr 2008 19:55
> Subject: [ExI] friendly fire, unfriendly AI
> 
> Sorry guys, but I found this just too compelling to pass up.
> 
> Robot Cannon Kills 9, Wounds 14
> http://blog.wired.com/defense/2007/10/robot-cannon-ki.html
> 
> Best, Jeff Davis
> 
> "White people are clever, but they are not wise."
>           Ishi, the last of the Yahi nation



