Facial Recognition Surveillance Doesn't Necessarily Make You Safer

Police officers are always in search of the next tool to transform investigations, and as this week's revelations that ICE is searching driver's license databases make clear, many think facial recognition could be that tool. But as a former homicide detective, I’ve seen firsthand that rushing to rely on technology without understanding its limits or having robust policies governing its use can be counterproductive, and even dangerous.

If we don’t recognize facial recognition’s problems and implement policies to mitigate them, the technology will sweep innocent individuals into investigations, undercut police-community relations, and waste law enforcement resources.

Almost any new technology comes with hyperbolic descriptions of what it can do. For police, this can result in investigators mismanaging leads or working backwards to fill in a narrative that fits what their seemingly impartial and authoritative tool tells them, rather than following evidence from the ground up. We’ve already seen this: in one case, police ignored proper procedures for photo arrays and instead asked an eyewitness, “Is this the guy?” while showing a single photo obtained from a facial recognition search. This sort of tunnel vision is one of the main drivers of false accusations and wrongful convictions.

Facial recognition technology has plenty of problems. One of the most concerning is that it’s often inaccurate. Numerous studies, including one coauthored by an FBI expert, have found this is especially true in identifying and classifying women and people with darker skin tones. This endangers already over-policed communities, increasing the risk that people of color will improperly become investigative targets. Failure to deal with this problem could also expose police departments to discrimination lawsuits.

Facial recognition’s problems are exacerbated when law enforcement uses unreliable methodologies. Even a well-designed facial recognition system can produce unreliable data if its settings permit it to return low-confidence data. Gizmodo revealed earlier this year that Amazon—a major seller of facial recognition technology to law enforcement—had been advising police to set up facial recognition systems to return matches based on low confidence levels, even though the company’s own public statements say doing so is unreliable.

And some police departments conduct facial recognition scans based on fringe techniques like plugging in police sketches and celebrity lookalikes, or using a computer to generate facial features. These techniques are extremely unreliable: Facial recognition systems don’t “see” faces the way people do, and precise mapping of the actual location and dimensions of specific facial features is the only way to get reliable results.

Too often, the use of new police technologies is shaped by a “CSI Effect,” where exaggerated TV portrayals of forensic tools lead to real-world assumptions of infallibility. Take the example of an information systems analyst at one department adding “an unnecessary purple ‘scanning’ animation whenever a deputy uploaded a photo—a touch he said was inspired by cop shows like ‘CSI.’” More troublingly, this belief in TV-tech forensics can lead to cutting corners: The same analyst also created a facial recognition program for field use that “dropped the search-confidence percentages and designed the system to return five results, every time,” meaning results would come back as top possible matches even if they were unreliable.
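To make that risk concrete, here is a minimal, hypothetical sketch of the difference between filtering matches by a confidence threshold and always returning the top five candidates. The function names, scores, and threshold are invented for illustration; this is not any vendor's actual system.

```python
# Hypothetical sketch: neither function is any vendor's real API;
# names, scores, and the 0.95 threshold are invented for illustration.

def matches_above_threshold(scores, threshold=0.95):
    """Return only candidates whose similarity clears a high-confidence bar."""
    return [(name, score) for name, score in scores if score >= threshold]

def top_five_every_time(scores):
    """Return the five highest-scoring candidates, regardless of confidence."""
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:5]

# Suppose the true suspect isn't in the photo gallery, so every score is low:
scores = [("A", 0.41), ("B", 0.38), ("C", 0.36),
          ("D", 0.33), ("E", 0.30), ("F", 0.22)]

print(matches_above_threshold(scores))  # [] -- correctly reports no match
print(top_five_every_time(scores))      # five names anyway, all unreliable
```

When no genuine match exists, the thresholded search correctly comes back empty, while the "five results, every time" design hands investigators a list of candidates who look like leads but aren't.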

Vendors of facial recognition technology tend to tout its abilities, but they don’t talk much about its uncertainties and limits. And putting these tools in the hands of all officers, rather than limiting access to trained specialists who examine results independently of other aspects of an investigation, can cut out a vital layer of skepticism and interpretive nuance.

Of course, I’ve seen firsthand that technology can transform investigative work for the better. Tools like DNA analysis, GPS tracking, and video recording allow detectives to solve cases that would have eluded them decades ago.

Facial recognition—used wisely—could certainly join this list by helping police confirm or discover the identity of a suspect. But it’s important, as with any new tool, to recognize what it can and can’t do, and to deploy it appropriately within those bounds. And law enforcement must recognize that facial recognition raises unique policy and legal questions.

That’s why placing limits on facial recognition will actually help investigations. I recently joined a group of law enforcement, privacy, technology, and civil rights experts to suggest reforms that guard against facial recognition’s risks. Common-sense policies, such as clarifying the weight investigators should give facial recognition evidence, clearly defining when and how it can be used, requiring independent human review of automated results, and even requiring a warrant in some circumstances, will help ensure investigators aren’t chasing unreliable tangents. They will also help assuage the public’s reasonable fear that misidentification could harm them.

Law enforcement would be better off taking these problems head-on and putting clear limits on facial recognition now, rather than kicking the can down the road and encountering the serious harms that will inevitably flow from unrestricted use of this technology.