[ExI] Terrorist? Who can tell?

Lee Corbin lcorbin at rawbw.com
Sun Aug 24 04:07:02 UTC 2008


Harvey writes (forgive me for chopping up your email and replying to
various parts, perhaps out of order):

> Someone comes up with some face-recognition program, or 
> terrorist detection algorithm, or threat estimation theory, that they claim 
> is 99% accurate.  It recognizes the terrorists 99% of the time, and only 
> gets a false-positive on a non-terrorist 1% of the time.  Sounds great.  It 
> gets implemented.  Then it fails miserably in the field.
> 
> Why?
> 
> Because for every terrorist going through an airport, there are probably a 
> million non-terrorists.  That means:
> - 1 real terrorist gets identified (because it's 99% accurate)
> - 10,000 non-terrorists get identified (because it's 1% false-positive)
> ... so your system only works 1/10,000th of the time.  When it identifies a 
> person as a terrorist, the odds are 10,000-to-1 that they're innocent.  This 
> terrorist detection system won't actually work in the field.
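(For the record, the arithmetic Harvey describes is just Bayes' theorem.
A quick sketch in Python, using the figures quoted above -- one terrorist
per million travellers, 99% detection, 1% false-positive rate:)

```python
# Bayes' theorem applied to the numbers quoted above (a sketch;
# the base rate of 1 terrorist per 1,000,000 travellers is the
# assumption made in the quoted post).
base_rate = 1 / 1_000_000   # P(terrorist)
sensitivity = 0.99          # P(flagged | terrorist)
false_positive = 0.01       # P(flagged | innocent)

# P(terrorist | flagged)
numerator = sensitivity * base_rate
denominator = numerator + false_positive * (1 - base_rate)
posterior = numerator / denominator

print(f"P(terrorist | flagged) = {posterior:.6f}")  # roughly 1 in 10,000
```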

Well, to keep the issues straight: are you suggesting that if a
known criminal X (of whatever race) passes through an airport,
then today's face-recognition programs are pretty much useless?
(Whereas human spotters, I presume, are not at all useless.)

Or, contrariwise, do you mean that terrorist-spotting software
(which, for argument's sake, say is quite effective at
distinguishing Middle-Eastern young men from Indian young men)
is useless because of these statistical facts you adduce?

And (on the same point as this last paragraph) do terrorist-spotting
humans at, say, the Tel Aviv airport---using every clue they can
---get way too many false positives to be of any use?


Also, let me weaken that entirely separate claim a little bit into
two questions, A and B:

A: "Are you telling me that a row of six or more recently convicted
terrorist bombers could not be distinguished at the ninety-percent
level of confidence from a numerically similar row of Londoners
picked at random?"  Surely you agree that I'm right about *that*,
but I grant that this was not the correct meaning to take from my
missive, and you did jump to the correct meaning, namely

B: "are you telling me that a row of six or more recently convicted
terrorist bombers could not be distinguished at the ninety-percent
level of confidence from a numerically similar row of Londoners
of completely matching age and sex?"

Best regards,
Lee

----- Original Message ----- 
Sent: Saturday, August 23, 2008 8:07 PM
Subject: Re: [ExI] Terrorist? Who can tell?

> "Lee Corbin" <lcorbin at rawbw.com> wrote,
>> Why is there
>> no mention whatsoever of *probabilities*?  Or are you trying
>> to tell me that a row of recently convicted terrorist bombers
>> would not in fact stand out compared to a random sample
>> of people from London?  That a six year old would be unable
>> to tell which group was which?
> 
> Yes, that is precisely right.  Probabilities don't work as well as you would 
> expect, due to Bayesian statistics.  I run into this in the security field 
> all the time.  Someone comes up with some face-recognition program, or 
> terrorist detection algorithm, or threat estimation theory, that they claim 
> is 99% accurate.  It recognizes the terrorists 99% of the time, and only 
> gets a false-positive on a non-terrorist 1% of the time.  Sounds great.  It 
> gets implemented.  Then it fails miserably in the field.
> 
> Why?
> 
> Because for every terrorist going through an airport, there are probably a 
> million non-terrorists.  That means:
> - 1 real terrorist gets identified (because it's 99% accurate)
> - 10,000 non-terrorists get identified (because it's 1% false-positive)
> ... so your system only works 1/10,000th of the time.  When it identifies a 
> person as a terrorist, the odds are 10,000-to-1 that they're innocent.  This 
> terrorist detection system won't actually work in the field.
> 
> Consider:
> - 99% accurate, 1% false-positive --> 10,000:1 falsely accusing the innocent
> - 99.9% accurate, 0.1% false-positive --> 1,000:1 falsely accusing the
> innocent
> - 99.99% accurate, 0.01% false-positive --> 100:1 falsely accusing the
> innocent
> - 99.999% accurate, 0.001% false-positive --> 10:1 falsely accusing the
> innocent
> - 99.9999% accurate, 0.0001% false-positive --> 1:1 falsely accusing the
> innocent (50/50 chance of working)
> - 99.99999% accurate, 0.00001% false-positive --> 1:10 falsely accusing the
> innocent (better than an even chance of working)
> 
> You would need a system that is 99.99999% accurate with only a 0.00001%
> false-positive rate for it actually to catch more terrorists than innocent
> people.  Nothing is that perfect with that low an error rate.  No
> "probabilities" dealing with random human personalities are that precise.
> Random human variation acts as noise that obscures what you are trying to 
> measure.  It simply doesn't work.
> 
> --
> Harvey Newstrom <www.HarveyNewstrom.com>
> CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP
> 
>
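[Harvey's scaling table above can be checked with a short script -- a
sketch, not part of the original exchange; it assumes, as the post does,
one terrorist per million travellers and a detection rate equal to the
stated accuracy:]

```python
# Reproduce the scaling table quoted above (a sketch; assumes a base
# rate of 1 terrorist per 1,000,000 travellers, and that the detection
# rate equals 1 minus the false-positive rate, as in the post).
innocents = 1_000_000
for fp_rate in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7):
    caught = (1 - fp_rate) * 1              # expected terrorists flagged
    falsely_flagged = fp_rate * innocents   # expected innocents flagged
    print(f"false-positive {fp_rate:.5%}: "
          f"{falsely_flagged / caught:,.1f} innocents flagged per terrorist")
```

The printed ratios match the table: a 1% false-positive rate flags roughly
10,000 innocents for each terrorist, and only at a 0.00001% false-positive
rate does the system flag fewer innocents than terrorists.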
