"The number of system-generated false positives was excessive," concluded the report, recently obtained under freedom of information regulations.Articles like this only make sense with reference to the expectations of the people that write them and users of the technology.
The airport had installed two separate facial recognition systems at its security checkpoints. However, they failed to detect volunteers posing as terrorists 96 times during the three months the trial ran, despite successfully picking them up 153 times.
Logan Airport was where 10 of the 19 terrorists involved in the 11 September terror attacks on New York boarded their flights.
Catching a bad guy in 153 out of 249 chances (a 61.4% success rate) is obviously worth something, especially if the chance of catching that bad guy was 0% before the addition of the new technology. [Note: this analysis only makes sense if it is assumed that the subjects involved are on a watch list.]
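For the record, the arithmetic behind those percentages is straightforward; here is a quick check in Python, using only the counts reported in the article:

```python
# Figures reported in the article: 153 successful detections, 96 misses.
hits, misses = 153, 96
attempts = hits + misses                            # 249 total attempts

print(f"Detection rate: {hits / attempts:.1%}")     # 61.4%
print(f"Failure rate:   {misses / attempts:.1%}")   # 38.6%, i.e. roughly 39%
```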
The article also mentions the September 11 hijackers, raising the question: would the attacks have been possible if 11 or 12 of the 19 attackers had been detected on the day?
Maybe, maybe not. Security protocols, not technology, would determine the answer. But a smart security protocol might say that if six terrorists are caught entering the same airport within a couple of hours of each other, certain measures should be taken.
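To put a rough number on that protocol argument: if each subject independently stood the trial's 61.4% chance of being flagged, then of the ten hijackers who boarded at Logan, about six would have been flagged on average, and the odds of flagging at least six are roughly two in three. A back-of-the-envelope sketch (the independence assumption is ours, and a simplification of real-world conditions):

```python
from math import comb

p = 153 / 249          # per-subject detection rate from the trial (about 61.4%)
n = 10                 # hijackers who boarded at Logan

# Expected number detected, assuming each detection is an independent event.
expected = n * p       # about 6.1

# Probability of catching at least k of the n subjects (binomial model).
def p_at_least(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"Expected detections: {expected:.1f}")
print(f"P(at least 6 of 10 caught): {p_at_least(6, n, p):.0%}")  # about 67%
```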
False positives and low accuracy are real concerns to be addressed by improved performance of facial recognition systems, but the two trade off against each other. SecurLinx can provide a system that rarely produces false positives but fails to alert the user to a larger proportion of accurate matches, or a system that catches more matches but generates more false positives. We work with end users to help them determine their own "sweet spot" for the kinds of matches to which they want to be alerted. Facial recognition in surveillance applications isn't like fingerprint biometrics.
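In practice, that "sweet spot" comes down to where the operator sets the match-score threshold. A schematic sketch of the tradeoff (the scores below are invented for illustration, not real system output):

```python
# Hypothetical (similarity_score, is_true_match) pairs from a watch-list check.
scores = [(0.95, True), (0.88, True), (0.81, False), (0.74, True),
          (0.69, False), (0.62, True), (0.55, False), (0.41, False)]

def rates(threshold):
    """Count alert outcomes at a given match-score threshold."""
    false_pos = sum(1 for s, match in scores if s >= threshold and not match)
    missed    = sum(1 for s, match in scores if s < threshold and match)
    return false_pos, missed

# A strict threshold means few false alarms but more missed matches;
# a lenient one catches more matches at the cost of more false alarms.
for t in (0.9, 0.7, 0.5):
    fp, fn = rates(t)
    print(f"threshold {t}: {fp} false positives, {fn} missed matches")
```

Running it shows the same system producing zero false positives with three misses at the strict setting, and zero misses with three false positives at the lenient one. Neither setting is "right" in the abstract; it depends on what the end user wants to be alerted to.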
The key to getting these things right for the customer, and to delivering a real return on investment, is good communication about system capabilities, good training, and application of the technology at the appropriate job-function level.
If the system was sold as a bulletproof terrorist finder, a 39% failure rate is a flop. If it was sold as offering a 61% chance of preventing disaster, isn't that worth something?
See also: Biometrics & ID infrastructure: Perfect is the enemy of good