Facial recognition tech isn't working quite as well as the agencies deploying it have hoped, but failure after failure hasn't stopped them from rolling out the tech just the same. I guess the only way to improve this "product" is to keep testing it on live subjects in the hope that someday it will actually deliver on advertised accuracy.
The DHS is shoving it into airports -- putting both international and domestic travelers at risk of being deemed terrorists by tech that just isn't quite there yet. In the UK -- the Land of Cameras -- facial recognition tech is simply seen as the logical next step in the nation's sprawling web o' surveillance. And Amazon is hoping US law enforcement wants to make facial rec tech as big a market for it as cloud services and online sales.
Thanks to its pervasiveness across the pond, the UK is where we're getting most of our data on the tech's successes. Well... we haven't seen many successes. But we are getting the data. And the data indicates a growing threat -- not to the UK public from terrorists or criminals, but to the UK public from its own government.
London cops have been slammed for using unmarked vans to test controversial and inaccurate automated facial recognition technology on Christmas shoppers.
The Metropolitan Police are deploying the tech today and tomorrow in three of the UK capital's tourist hotspots: Soho, Piccadilly Circus, and Leicester Square.
The tech is basically a police force on steroids -- capable of demanding ID from tens of thousands of people per minute. Big Brother Watch says the Met's tech can scan 300 faces per second, running them against hot lists of criminal suspects. The difference is that no one's approaching citizens to demand they identify themselves. The software does all the legwork, and citizens have only one way to opt out: stay home.
Given these results, staying home might just be the best bet.
In May, a Freedom of Information request from Big Brother Watch showed the Met's facial recog had a 98 per cent false positive rate.
The group has now said that a subsequent request found that 100 per cent of the so-called matches since May have been incorrect.
A recent report from Cardiff University questioned the technology's abilities in low light and crowds – which doesn't bode well for a trial in some of the busiest streets in London just days before the winter solstice.
The tech isn't cheap, but even if it were, it still wouldn't provide any return on investment. To be fair, the software isn't misidentifying people hundreds of times a second. In the great majority of scans, nothing is returned at all. The public records response shows the Met Police racked up five false positives during their June 28th deployment. This led to one stop of a misidentified individual.
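To put those numbers in perspective, here's a rough back-of-the-envelope sketch in Python. The match and false-positive counts come from the public records response; the total scan count is hypothetical (the Met hasn't released one), pegged to Big Brother Watch's 300-faces-per-second figure for a single minute of operation.

```python
# Reported figures from the June 28th deployment, plus one hypothetical:
# the Met has not published how many faces it scanned in total.
matches = 5                # alerts raised by the system (reported)
false_positives = 5        # all five were misidentifications (reported)
scans = 300 * 60           # HYPOTHETICAL: one minute of scanning at ~300 faces/sec

false_match_rate = false_positives / matches   # how often an alert is wrong
per_scan_error = false_positives / scans       # how often any given scan misfires

print(f"Share of matches that were false: {false_match_rate:.0%}")   # 100%
print(f"Share of scans that raised a bad alert: {per_scan_error:.3%}")
```

The point the arithmetic makes: a per-scan error rate that rounds to zero can still mean every alert the system produces is wrong -- which is exactly what Big Brother Watch's follow-up request found.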
But even if the number of failures is small compared to the number of faces scanned, the problem is far from minimal. A number of unknowns make this tech a questionable solution for its stated purpose. We have no idea how many hot list criminals were scanned and not matched. We don't know how many scans the police performed in total. We don't know how many of those scans are retained, or what the government does with all the biometric data it's collecting. About all we can tell is that the deployment led to zero arrests and one stop instigated by a false positive. That may be OK for a test run (it isn't), but it doesn't bode well for the full-scale deployment the Met Police have planned.
The public doesn't get to opt out of this pervasive scanning. Worse, it doesn't even get to opt in. There's no public discussion period for cop tech even though, in the case of mass scanning systems, the public is by far the largest stakeholder. Instead, the public is left to fend for itself as law enforcement agencies deploy additional surveillance methods -- not against targeted suspects, but against the populace as a whole. This makes the number of failures unacceptable, even if the number is a very small percentage of the whole.