[ExI] [Bulk] Re: Fwd: Re: AI risks
J. Andrew Rogers
andrew at jarbox.org
Tue Sep 8 05:16:52 UTC 2015
> On Sep 6, 2015, at 9:17 PM, spike <spike66 at att.net> wrote:
> I wouldn’t be surprised if someone makes a guided autonomous weapon from a self-driving car in the near-term future, perhaps the next decade. The explosive could be detonated by phone, and the caller could be informed of the location of the guided weapon using ordinary GPS. The mobile explosive could be sent out at the leisure of the sender, at 0100 when there is little or no traffic and few witnesses. They could be sent hundreds of miles away. This one might be tough to solve.
There is one important aspect you are overlooking: in the world you describe, this action leaves an enormous direct and ambient data trail at every step of the way, one that can ultimately be traced back to individuals.
It is not just a forensic matter either. One of the known failures of rule-based analytics is that every Black Swan is unique in some important way, so you will almost never detect the next Black Swan until it arrives. Clever people observed that instead of modeling Black Swans so that you can detect them, you can model "boring normal" and flag anomalous deviations from boring normal for a deeper dive. This involves vastly more data than a traditional Black Swan search, since you are observing the whole universe of behavior rather than a few signatures, but it has also proven to be *much* more effective at finding interesting things you did not know you should be looking for, which is often the case with Black Swans.
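The "model boring normal" idea can be sketched as a toy baseline model: learn the statistics of routine behavior, then score new observations by how far they deviate. The features, data, and threshold below are illustrative assumptions, not anything from the thread:

```python
# Toy sketch of anomaly detection via a "boring normal" baseline:
# learn per-feature mean/stdev from routine observations, then flag
# large deviations instead of enumerating Black Swan signatures.
from statistics import mean, stdev

def fit_baseline(observations):
    """Learn per-feature (mean, stdev) from normal data."""
    features = list(zip(*observations))
    return [(mean(f), stdev(f)) for f in features]

def anomaly_score(baseline, observation):
    """Largest per-feature z-score: distance from 'boring normal'."""
    return max(abs(x - m) / s for (m, s), x in zip(baseline, observation))

# Hypothetical routine behavior: (trips per day, avg trip distance in km)
normal = [(3, 12.0), (2, 9.5), (4, 14.0), (3, 11.0), (2, 10.5), (3, 13.0)]
baseline = fit_baseline(normal)

# A single 300 km trip stands out immediately, with no prior
# signature for this specific scenario ever having been written.
print(anomaly_score(baseline, (1, 300.0)) > 5.0)  # flagged as anomalous
```

The point of the sketch is that the detector never needed a model of the attack itself, only a model of what normal looks like.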
This is the context in which the risk should be evaluated. While the robot assassin scenario is plausible, I would expect successful execution to be relatively rare in an environment where all of the individual behaviors are observable.