[ExI] computers now outperform humans at facial recognition

Anders Sandberg anders at aleph.se
Mon Jun 2 21:51:57 UTC 2014


BillK <pharos at gmail.com>, 1/6/2014 9:22 PM:

And, of course, as sure as night follows day...... 
 
<http://mobile.nytimes.com/2014/06/01/us/nsa-collecting-millions-of-faces-from-web-images.html> 
Quote: 
The National Security Agency is harvesting huge numbers of images of 
people from communications that it intercepts through its global 
surveillance operations for use in sophisticated facial recognition 
programs, according to top-secret documents. 
Of course. They have been doing it for a while now. 
However, facial recognition has different uses. Face verification, looking for one particular person in a database of images (or many camera feeds), is very different from face recognition, identifying who is who across the whole set. 
The first is in principle doable: with a 98.52% chance of correct identification, in a thousand images where the person shows up in ten, you should expect nearly ten out of ten hits. The false positive rate, given the ROC curve in the paper, is about 1%, so you would also get about 10 false positives (1% of the roughly 990 non-target images). This is manageable in this example, since some other characteristic, or a human, could separate them. For a lot of pictures things get worse: if the target appears in a fraction f of N pictures there will be 0.9852fN correct hits, but about 0.01N false positives. If f is smaller than one in a hundred the false positives will outnumber the true positives, potentially by a huge factor (just imagine Facebook: N = 300 million pictures per day). 
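The verification arithmetic above can be sketched in a few lines. This is a minimal illustration of the numbers in the text; the function name and the use of the quoted rates (98.52% true positive, 1% false positive) as defaults are my own:

```python
def expected_hits(n_images, target_fraction, tpr=0.9852, fpr=0.01):
    """Expected true and false positives, and the resulting precision,
    when scanning n_images of which a fraction target_fraction
    actually contain the target."""
    true_pos = tpr * target_fraction * n_images
    false_pos = fpr * (1 - target_fraction) * n_images
    precision = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, precision

# The example in the text: 1000 images, target in 10 of them (f = 0.01).
tp, fp, prec = expected_hits(1000, 0.01)
# tp is about 9.85, fp about 9.9: roughly as many false alarms as real hits,
# so precision is already below 50% even at this favourable f.
```

For f well below 1% the precision collapses, which is the base-rate problem the paragraph describes.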
The second case, face recognition, is worse: now you have to repeat this for every person in the set. In 1.48% of the cases there will be no match, and in about 1% a false positive, as person A is identified as B. So in the end there will be roughly 2.48% errors in the identification: about 25 of those 1000 pictures will be wrongly assigned. In general, recognition is also far harder with large probe sets; looking for person A alone gives higher accuracy than distinguishing everyone from A to Z.
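The identification error estimate works out as follows (again a sketch of the arithmetic in the text, with my own function name; the per-image error is assumed to be the miss rate plus the false-match rate, as the paragraph does):

```python
def misidentified(n_images, miss_rate=0.0148, false_match_rate=0.01):
    """Expected number of wrongly assigned images when identifying
    everyone in n_images, assuming errors simply add up."""
    error_rate = miss_rate + false_match_rate  # 2.48% in the text's example
    return error_rate * n_images

# 1000 pictures: about 25 wrongly assigned, matching the figure above.
errors = misidentified(1000)
```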
This will not stop the NSA, Facebook or anybody else from trying. In many applications a few false positives are not a big deal: advertisers can handle noisy data. However, sending SWAT teams to every place where the Most Wanted du Jour appears is problematic. The same goes for false negatives: no problem for the advertiser, a big problem when trying to enter your high-security Lair of Doom. The real solution is data fusion: combine the images with gait analysis, keyboard rhythm, stylometrics, voiceprints and whatever sensors you have, do a Bayesian estimate, and you have something fairly robust. I fully expect the NSA to do the 21st century version of Stasi archival: try to get as much data as possible, since one day it will all be possible to weigh into a probability map. Shame about those errors that cause false positives even in such systems...
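The Bayesian fusion step can be sketched as a naive combination of independent evidence. Everything here is my own illustration, not any agency's actual method: the sensors are assumed independent, and the likelihood ratios (how much more likely each observation is if the candidate really is the target) are made-up numbers.

```python
def fuse(prior, likelihood_ratios):
    """Combine independent pieces of evidence in odds form.
    Each likelihood ratio is P(evidence | target) / P(evidence | not target)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # independence assumption: evidence multiplies the odds
    return odds / (1 + odds)  # back to a probability

# Hypothetical: a face match (LR 5), gait (LR 8) and voiceprint (LR 20)
# against a 1-in-10,000 prior of being the target.
posterior = fuse(1e-4, [5, 8, 20])
# posterior is about 0.074: even strongly agreeing sensors leave
# sizeable doubt when the base rate is low.
```

This is the point of the closing remark: fusion helps a great deal, but at mass-surveillance base rates the false positives do not vanish.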
Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University

