[ExI] friendly fire, unfriendly AI

John Grigg possiblepaths2050 at gmail.com
Fri Apr 4 14:35:02 UTC 2008


Stathis Papaioannou wrote:
Humans are also subject to HW and SW flaws. In the final analysis, you
have to take a risk in trusting someone or something.
>>>

But at our current, fairly low level of technology in this area, I think
we are better off for now trusting humans rather than fully autonomous
machines.  I realize a "healthy balancing act" is probably what military
planners are hoping to develop.

Alex wrote:
This is why all military systems need more than one failsafe before firing
and should never be fully autonomous. The last failsafe should always
be a human (even that wasn't good enough in this case).
>>>

I believe the time will come (within several decades or less) when fully
autonomous and very lethal weapons systems (especially flying attack
drones, undersea attack drones, ground robots, etc.) will be a fairly
common sight on the battlefield.  A problem with at least some failsafes
is that a techno-savvy enemy could hack into them and turn your own
weapons against you.
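For what it's worth, the "chain of failsafes with a human last" idea Alex
describes could be sketched roughly like this (a toy Python sketch; all
the names and checks here are hypothetical, purely for illustration):

    # Hypothetical sketch: every automated failsafe must pass, and even
    # then nothing fires without explicit human confirmation.

    def target_is_valid(target):
        # Hypothetical automated check: is the contact flagged hostile?
        return target.get("hostile", False)

    def within_rules_of_engagement(target):
        # Hypothetical automated check; defaults to "unsafe" if the
        # civilian-proximity data is missing.
        return not target.get("near_civilians", True)

    def human_confirms(target):
        # The last failsafe is always a person, never the machine.
        answer = input("Authorize engagement of %s? (yes/no) " % target["id"])
        return answer.strip().lower() == "yes"

    def authorize_fire(target):
        automated_checks = [target_is_valid, within_rules_of_engagement]
        if not all(check(target) for check in automated_checks):
            return False
        return human_confirms(target)

    if __name__ == "__main__":
        contact = {"id": "track-07", "hostile": True, "near_civilians": False}
        print("FIRE" if authorize_fire(contact) else "HOLD")

Of course, as noted above, any such gate is only as trustworthy as the
code and the communications link it runs over.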

I found it rather interesting to learn that iRobot, maker of the popular
"Roomba" robot vacuum, is building a very nasty-looking military robot.
I wonder if they will build a home security version... lol

http://blog.wired.com/defense/2007/10/roomba-maker-un.html

John Grigg

