

Facial Recognition Technology: Strong Limits Are Necessary to Protect Public Safety & Civil Liberties

(Illustration: CJ Ostrosky / POGO)

Thank you for the opportunity to testify before the Presidential Commission on Law Enforcement and Administration of Justice’s Technology Working Group on the issue of law enforcement use of facial recognition technology.

I am Jake Laperruque, senior counsel for The Constitution Project at the Project On Government Oversight (POGO). POGO is a nonpartisan independent watchdog that investigates and exposes waste, corruption, and abuse of power, as well as instances when the government fails to serve the public or silences those who report wrongdoing. We champion reforms to achieve a more effective, ethical, and accountable federal government that safeguards constitutional principles. The Constitution Project at POGO strives to protect individuals from improper and overbroad surveillance, including unchecked facial recognition surveillance.

Last year, our organization convened a task force of expert stakeholders including law enforcement officials, academics, tech experts, and civil rights and civil liberties advocates to examine the impact of facial recognition surveillance.1 This group concluded that if law enforcement uses facial recognition, its use of the technology should be subject to checks and limits. The task force also provided a set of recommendations for implementing these changes.


Facial recognition is an immensely powerful technology. It offers some opportunities to streamline operations and aid law enforcement, such as confirming identifications during booking, preventing forgery of IDs, and finding missing persons. However, it also creates unprecedented potential for surveillance. Authoritarian regimes such as those in China and Russia have already shown how facial recognition can be used to stockpile records of individuals’ daily lives and suppress vital activities such as protests.2 But abuse is not limited to those nations; misuse of facial recognition has already occurred in the United States.3 Further, facial recognition can be highly prone to error based on circumstance and manner of use. Misidentifications pose a serious threat to public safety, civil liberties, effective law enforcement operations, and police-community relations.

In order to prevent these harms, if law enforcement intends to use facial recognition, it should abide by three overall principles, and implement policies in conjunction with these principles. First, facial recognition must be viewed as a technology that fundamentally alters police power, and should not be brought into use without public debate. Second, facial recognition must be treated as a forensic tool that requires careful use and precise application, not an all-purpose tool that can be wielded bluntly or casually. And third, facial recognition must be checked by limits that will prevent improper applications and abuse.

Facial Recognition Fundamentally Alters Police Power, and Requires Public Debate

Facial recognition technology does not merely offer a slight increase in police capabilities, but rather can profoundly redefine the power of police in unprecedented ways.

One misconception about facial recognition is that it is comparable to a police officer possessing photographic memory. It is worth noting, first, that not only is photographic memory extremely rare in adults, but its limits in accuracy and scale make perfect recall of a large quantity of images and corresponding names virtually impossible.4

But even if it were possible for a person to have flawless and limitless photographic memory, it would be impossible for them to process images on a scale comparable to facial recognition systems. For example, Texas police can use facial recognition to search over 24 million mugshots and 24 million DMV photographs.5 If an officer with perfect photographic memory processed one photo per second, it would take over 555 days of nonstop photo review to process all these images. Facial recognition systems conduct this review nearly instantaneously. This technology is fundamentally different from human analysis, and it should be treated as such in formulating new rules and guidelines.
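The scale comparison above can be checked with simple back-of-the-envelope arithmetic (a sketch; the one-photo-per-second review rate is the hypothetical posed in the text):

```python
# Back-of-the-envelope check of the photo-review arithmetic above.
# Texas databases cited: ~24 million mugshots + ~24 million DMV photos.
total_photos = 24_000_000 + 24_000_000  # 48 million images
photos_per_second = 1                   # hypothetical officer with perfect recall
seconds_per_day = 24 * 60 * 60          # 86,400 seconds in a day

days_required = total_photos / (photos_per_second * seconds_per_day)
print(f"{days_required:.1f} days")  # prints: 555.6 days
```

Even at this implausibly fast pace, with no breaks, a human reviewer would need more than a year and a half to do what a facial recognition system does in moments.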

Facial recognition also differs from other identification tools in important ways. It allows law enforcement to identify individuals absent notification or consent, a significant departure from traditional ways of checking identification. And while stopping someone on the street and checking their identification requires reasonable suspicion,6 the vast majority of law enforcement entities that use facial recognition are not required to possess any suspicion of wrongdoing.7

Facial recognition also fundamentally differs from other noninteractive identification tools such as fingerprint and DNA collection. Facial recognition scans can be conducted on a far vaster scale than identification checks can be done through these established methods. Additionally, whereas individuals’ fingerprints and DNA profiles typically are not held by law enforcement absent an arrest, many law enforcement agencies are able to conduct face recognition scans using images in a state’s DMV database,8 or conduct scans of anyone who has a Facebook or Instagram profile.9

Absent limits, these factors open the door to abuse and monitoring of the population on a massive scale. The Supreme Court has imposed limits on law enforcement’s use of cellphone location monitoring, another relatively new technology, by requiring law enforcement to obtain a warrant before using this form of surveillance in order to protect the constitutional right to privacy, even in public. As the court noted in its 2018 Carpenter v. United States decision, “as technology has enhanced the Government’s capacity to encroach upon areas normally guarded from inquisitive eyes, this Court has sought to assure preservation of that degree of privacy against government that existed when the Fourth Amendment was adopted.”10

All branches and levels of government should similarly strive to preserve the right to privacy when considering implementing innovative surveillance technologies such as facial recognition.

If law enforcement wishes to use facial recognition, it should not simply adopt this technology without the input and consent of communities. Rather, law enforcement bodies should announce their desire to use facial recognition before they begin implementing systems, and allow the community to debate the issue and decide what limits it wants on the technology. Creating a facial recognition system absent public notification or input can lead to confusion and mistrust.11

Face Recognition Must Be Treated as a Forensic Tool Requiring Careful Use and Precise Application

Facial recognition’s potential for misidentification and user error makes the prospect of law enforcement using it on a large scale all the more worrisome. Like any forensic tool, it should be used carefully and precisely. In order to minimize the frequency of misidentifications, law enforcement must be aware of this tool’s limits and the factors that may affect accuracy; rules and guidelines should be enacted that promote careful and precise application rather than casual use.

Misidentifications are among the most prevalent and troubling risks face recognition creates. If law enforcement does not mitigate the risk of misidentifications in its use of this technology, face recognition will likely endanger public safety, civil liberties, and police-community relations by implicating improperly identified individuals in investigations or police action. The effectiveness of facial recognition is highly dependent on circumstances. Poor use-practices and unreasonable applications can significantly diminish accuracy.

Face recognition’s tendency to misidentify women and people of color at a higher rate than other people is an acute concern. Studies by the National Institute of Standards and Technology; the Massachusetts Institute of Technology, Microsoft, and AI Now Institute researchers; the American Civil Liberties Union; and an FBI expert all concluded that face recognition systems misidentify women and people of color more frequently.12 Most recently, the National Institute of Standards and Technology found that some systems were 100 times more likely to misidentify people of East Asian and African descent than white people.13 Failure to recognize the significance of this problem—and account for it in selection and review of software, training, and auditing—will undermine investigations and seriously harm civil rights.

The accuracy of face recognition in general is also subject to technical limitations. Because the technology centers on comparing features in photographs, image quality is essential to obtaining reliable results.14 Specifically, face recognition compares “probe” images, from which law enforcement seeks to identify individuals, to reference images, which contain previously identified faces.15 Reference images are typically high-resolution photos of a person directly facing a camera at close range, such as for a mug shot photo. But probe images are derived from a huge range of situations, which creates potential for low image quality and erroneous results.

Bad lighting, indirect angles, distance, poor camera quality, and low image resolution all undermine the reliability of matches.16 And these poor image conditions are much more likely when photos and videos are taken in public, such as with a CCTV camera. However, images like these often serve as face recognition probe images used in investigations, without due consideration for their diminished utility.17

Law enforcement should also be aware that vendors may exaggerate their technology’s capabilities and may fail to highlight its limits in order to advertise their product as more broadly useful than it truly is. One major vendor boasts in marketing materials that “facial recognition gives officers the power to instantly identify suspects. … Officers can simply use their mobile phones to snap a photograph of a suspect from a safe distance. If that individual is in their database it can then positively identify that person in seconds with a high degree of accuracy.”18 This is a highly inflated characterization given the limits that lighting and angle would impose in such a situation. Many other vendors also claim facial recognition would offer a positive identification—rather than provide a set of possible but uncertain matches—but that claim is at odds with how most responsibly designed facial recognition systems operate in practice.19

The reliability of face recognition also varies based upon the confidence threshold applied to potential matches.20 A confidence score indicates how closely a candidate image resembles the probe image, and a confidence threshold sets the minimum score a candidate must reach to be reported as a potential match. The lower the confidence threshold, the more likely a reported “match” is actually a false positive. So, if law enforcement entities set face recognition systems to always return potential matches—no matter how low the confidence score—they will receive untrustworthy data. Troublingly, some law enforcement entities do just that.21

For example, one police department designed its facial recognition system so that for field use it “dropped the search-confidence percentages and designed the system to return five results, every time,” meaning results would come back as top possible matches even if they were unreliable, introducing the likelihood that officers would receive untrustworthy information amid encounters with individuals.22 This risk could be eliminated by employing minimum accuracy standards that prevent low confidence threshold “matches” from appearing.
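The difference between the two designs described above can be illustrated with a short sketch (the candidate names, scores, and the threshold value are purely hypothetical; no real vendor's system or API is shown):

```python
# Sketch: two ways a face recognition system can report candidate matches.
# All names and confidence scores below are illustrative, not real data.
candidates = [
    ("person_a", 0.97),
    ("person_b", 0.62),
    ("person_c", 0.41),
    ("person_d", 0.38),
    ("person_e", 0.15),
]

def top_n_always(candidates, n=5):
    """Always return the top n candidates, however weak -- the risky design."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:n]

def threshold_only(candidates, threshold=0.99):
    """Return only candidates above a minimum confidence -- the safer design."""
    return [c for c in candidates if c[1] >= threshold]

print(top_n_always(candidates))    # five results, including very weak ones
print(threshold_only(candidates))  # empty: nothing clears the 0.99 bar
```

Under the first design, an officer in the field always receives five "matches," even when every one of them is untrustworthy; under the second, weak candidates are simply never reported.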

This is another area where law enforcement should be wary that vendors may downplay facial recognition’s limits. An investigation by the outlet Gizmodo revealed that even as Amazon stated publicly that it recommended its law enforcement clients only use its systems when it found matches with a 99% confidence threshold, it was advising at least one department to deploy a top-five-match system that would always return results, even if possible matches were well below that 99% threshold.23

Finally, there are numerous irresponsible techniques for using facial recognition that severely undercut its reliability and exacerbate the risks of misidentification. Some law enforcement agencies have engaged in the highly questionable practice of scanning police sketches of suspects in lieu of using actual probe images of suspects; using computer editing to artificially fill in pieces of a face that were not caught on camera; or even discarding the desired individual’s photo entirely in favor of using a photo of a celebrity lookalike.24 The police chief of a major department last year went so far as to defend building half of an individual’s face entirely from artificial imagery for facial recognition scans, claiming it was a valid use of the system.25 Asking systems to analyze inauthentic and manufactured data will produce inauthentic and unreliable results. Computer programs do not see faces the way humans do. Artificially adding data to be used in a face recognition scan is the equivalent of drawing in lines on a smudged fingerprint.

It is important to resist the temptation to shrug off the risks misidentification poses by claiming that facial recognition could just be used for leads, rather than as the backbone of a prosecution.26 Using untrustworthy information as the foundation of an investigation is dangerous, regardless of whether that information is introduced in court. If law enforcement guidelines recommended basing investigations on contaminated DNA samples, it would be of little comfort that this tainted evidence was “just used for leads.”


Simply being targeted in an investigation can be disruptive and potentially traumatic, and can raise the prospect of an individual being harmed even if charges or a conviction never follow. And an individual could in fact be charged in part based on how a match produced by a facial recognition system affects the direction of an investigation early on, especially when having a match promotes confirmation bias or sloppy follow-up. For example, in one reported incident, New York City Police Department officers took a single possible match from a facial recognition system, and then texted a witness, “Is this the guy…?” along with the photo, rather than following proper procedure by using a photo array.27 In this situation, facial recognition played a major role in an arrest, and it is far from an isolated incident.28

Law enforcement must make responsible decisions about how they use facial recognition with the knowledge that a host of factors—reliability of the algorithm in general and specifically for the demographics of the suspect, image quality, whether the image has been altered, and the confidence threshold permitted—affect whether the “matches” their systems produce are well founded or implicate innocent individuals.

Law enforcement should not rely on a technology with significant, known, and still-unmitigated flaws for investigative work. If law enforcement uses facial recognition, this use must be within the bounds of policies that limit use to proper situations and prevent irresponsible applications.

Finally, real-time facial recognition poses uniquely significant accuracy problems. This type of system does not attempt to identify a single probe image. Rather, real-time facial recognition scans every person in a crowd caught on camera by a video feed, and produces an alert if anyone scanned is identified as a match against a preexisting watchlist.

Real-time face recognition takes all the misidentification risks of using standard facial recognition and multiplies them by conducting scans of groups of individuals. Early testing has shown these risks are not hypothetical. In pilot programs in the United Kingdom, South Wales police had a 91% error rate and London Metropolitan Police had a 98% error rate.29

Given that real-time face recognition is far more likely to produce misidentifications than genuine matches, its implementation should be kept on hold. In its current state, real-time face recognition will only serve to endanger innocent individuals and undermine public trust in law enforcement.

Facial Recognition Should Be Checked by Limits to Prevent Improper Applications and Abuse

In addition to facial recognition’s accuracy problems, use of the technology remains unchecked by necessary rules or policies to prevent improper applications and abuse—this must change if law enforcement wishes to use this technology. As the Supreme Court recently noted, the right to privacy—and the role of privacy rights in limiting government surveillance—does in fact extend to public places.30 Protecting this right—and establishing guardrails that prevent the government from stockpiling information about the intimate details of its citizens’ lives—is essential to the functioning of democratic society.

Recent American history shows significant abuses of surveillance powers that targeted individuals based on race, religious affiliation, and political activity.31 Facial recognition could easily be exploited to target individuals in the same way.

Unrestricted, facial recognition could allow law enforcement to scan crowds during large protests, political events, or religious ceremonies, and catalog individuals’ engagement in these First Amendment-protected activities. Without restrictions to curb abuse, the discretionary power facial recognition provides could become a tool for selective prosecution. The ability to effortlessly pull up potential facial recognition matches for individuals with an active bench warrant for a low-level offense has in fact already enabled abusive targeting of individuals attending such events. Several years ago, Baltimore police used facial recognition amid protests to find individuals with “outstanding warrants and arrest[ed] them directly from the crowd,” in a selective effort that appeared to be aimed at disrupting, punishing, and discouraging demonstrators.32

Many other areas of people’s lives are also vulnerable to invasive facial recognition surveillance. With facial recognition, the government could potentially catalog everyone who goes to a mental health clinic, seeks help at a substance abuse treatment center, or attends a union meeting. These kinds of sensitive data about people’s lives could be stockpiled and used for an immense array of future government activities, ranging from profiling, to selective law enforcement investigations, to evaluations for civil service employment opportunities.

Even without abuse occurring, the mere ability to collect and freely use this type of highly personal and sensitive information could chill participation in political, religious, and a variety of other constitutionally protected activities. Research has shown that surveillance does in fact chill participation in basic activities, especially when directed at sensitive activities and groups vulnerable to persecution.33

In order to prevent and mitigate the numerous harms that could result from improper and unfettered use of facial recognition, it is necessary to place regulations on how facial recognition is used. Fortunately, strong rules that will prevent abuse and build public confidence can be enacted in a manner that will not undercut the uses of facial recognition that offer the most potential value.


The most important limitation to place on the use of facial recognition to prevent misuse is a warrant requirement. Probable cause warrants are our most important shield against improper surveillance, providing both independent oversight and ensuring that invasive surveillance is directed only at individuals suspected of wrongdoing. This is why there are warrant requirements for use of powerful electronic surveillance tools such as wiretapping, cellphone location tracking, thermal imaging devices, GPS tracking devices, and cell-site simulators. Indeed, neither the public nor the courts would find it acceptable for law enforcement to track someone’s cellphone for a month absent any authorization or limits, even if law enforcement stated that this surveillance would just be used for leads.

A warrant rule would provide significant benefits in preventing facial recognition from being used to catalog sensitive but innocuous activities and associations, and would increase public trust that law enforcement was using the technology responsibly. And because most investigative applications of facial recognition focus on identifying an individual whose image is recorded in commission of a crime, meeting this requirement should not be a significant barrier to legitimate use. A warrant requirement could also include reasonable exceptions modeled on existing rules, such as those governing identification of victims and missing persons, as well as exceptions that could be made in exigent circumstances.

Another critical regulation to prevent abuse is a requirement that use of facial recognition be limited to investigation of serious offenses. Such a rule is necessary to prevent law enforcement from using the unprecedented discretionary power facial recognition offers for selective enforcement, as occurred during the Baltimore protests.34 This rule would also be a key means of ensuring that law enforcement uses this tool exclusively in cases that receive the highest level of scrutiny and review. And it would prevent facial recognition from being used for dragnet surveillance, something that is already occurring in China as facial recognition constantly monitors all public activity, ostensibly in an effort to curb petty offenses.35

POGO’s task force on facial recognition recommended that departments center a serious crime limit on Uniform Crime Reporting Part 1 crimes, with some additions and deviations.36 This would allow law enforcement to use facial recognition in order to respond to significant crimes and threats, while limiting the risk of selective enforcement and abuse.

Finally, it is essential that defendants are given notice when facial recognition is used as a basis for their arrest and prosecution. This must occur even if facial recognition is used as the basis for obtaining future evidence, rather than itself serving as direct evidence in court proceedings. In some jurisdictions, law enforcement uses facial recognition thousands of times per month, but defendants almost never receive notice of its use in investigations.37 As the New York Times described in a detailed investigation of how law enforcement agencies use facial recognition:

Any technological findings presented as evidence are subject to analysis through special hearings, but facial recognition results have never been deemed reliable enough to stand up to such questioning. The results still can play a significant role in investigations, though, without the judicial scrutiny applied to more proven forensic technologies …. In some of the Florida cases The Times reviewed, the technology was not mentioned in initial warrants or affidavits. Instead, detectives noted “investigative mean” or an “attempt to identify” in court documents, while logging the matters as facial recognition wins in the Pinellas County records.38

As discussed above, the reliability of facial recognition is highly dependent upon circumstance and manner of use. While enacting the recommendations above would mitigate risk, defendants still have the right to review whether the technology was used in a proper manner and produced reasonably reliable results, as with any other forensic technology.

Recommendations

POGO offers the following recommendations on use of the technology, which are largely based on the limits recommended by our Task Force on Facial Recognition.

In order to enhance community confidence in law enforcement and any use of facial recognition technology, we recommend that if law enforcement entities wish to use facial recognition, they do the following:

1. Encourage community debate before deploying facial recognition. Law enforcement entities should inform their local community of their intention to use facial recognition before implementing a system, and facilitate a public debate so that their adoption of facial recognition is preceded by establishment of community-based standards.

In order to reduce the risk of misidentifications, improve accuracy, and ensure proper treatment of facial recognition as a forensic technology, we recommend that if law enforcement entities use facial recognition, they do the following:

2. Require independent testing and prioritize high accuracy standards. Law enforcement should only use software that is regularly tested by independent entities—such as the National Institute of Standards and Technology—and that demonstrates high levels of accuracy both in general and across various demographic groups.

3. Require extensive training for officers using facial recognition. Before allowing an officer to use a facial recognition system, departments should require extensive training on the technology, including on its overall limits, the impact of image quality, the impact of confidence thresholds, and the tendency for higher rates of error for certain demographic groups.

4. Treat facial recognition as a forensic tool rather than an all-purpose tool. Use specially trained and designated staff who are not involved in investigations related to the scans they run.

5. Do not use unreliable matches with low confidence thresholds. Require high confidence thresholds in order to use potential matches, and do not deploy standards that automatically return a set number of matches even if they have a low confidence threshold.

6. Prohibit the use of junk science methods based on artificial imagery. Prohibit the use of facial recognition with images that are partially computer generated, or images such as sketches or images of lookalikes that are not photographs of the individual being sought.

7. Require independent verification of matches. Require independent verification of matches produced by facial recognition, and place a limit on the value matches can have—such as considering them equivalent to an anonymous 911 call—as the basis for officers’ actions and investigations.

8. Prohibit the use of real-time facial recognition. Real-time facial recognition and crowd scanning should not be permitted while this form of facial recognition is so prone to error.

In order to provide proper safeguards against the abuse and improper targeting for which invasive surveillance tools such as facial recognition can be misappropriated, we recommend that if law enforcement entities use facial recognition, they do the following:

9. Require probable cause warrants. Law enforcement should be required to obtain a probable cause warrant from a judge in order to use facial recognition for investigative purposes. Exceptions can be made for situations such as identifying victims, locating missing persons, or responding to exigent circumstances.

10. Establish a serious crimes limit. Use of facial recognition should be limited to serious crimes, such as Uniform Crime Reporting Part 1 crimes.

11. Require disclosure to defendants. Defendants should receive notice whenever facial recognition was used in an investigation that led to their prosecution, including when facial recognition served as the basis for obtaining derivative evidence.

12. Require strict auditing with public transparency. Law enforcement should maintain a regular auditing system with public reporting on results to ensure these rules are followed and the public has confidence in them, and publish reports on how often facial recognition is sought and used.