
Key Facts About Face Recognition for Policymaking

(Illustration: Renzo Velez / POGO)

Face recognition is a widespread and dangerous surveillance technology, and communities are finally fighting back against it. Over a dozen cities have banned police from using it, and this year Maine became the first state to set strict statewide limits that include a probable cause rule and a requirement that face recognition technology can only be used to investigate serious crimes. Last month, the House of Representatives Judiciary Committee held its first hearing on face recognition, which reflected a strong bipartisan interest in placing limits on the tech.

As we move closer to creating safeguards for the use of face recognition, there are several key facts that policymakers and the public should keep in mind.

Face recognition doesn’t work in a uniform way; its ability to function — and the risk of it malfunctioning — is highly situational.

Face recognition is often portrayed in crime dramas — and more disturbingly, by vendors selling the technology — as something that can be applied to any photo in any situation, with consistently accurate results. In reality, face recognition’s ability to deliver reliable matches depends upon a huge range of factors.

First, the quality of face recognition algorithms can vary significantly. Notably, many algorithms misidentify women and people of color at a higher rate. According to a National Institute of Standards and Technology study, some systems are 100 times more likely to misidentify people of East Asian and African descent than white people.

Second, image quality impacts the accuracy of matches. Bad lighting, indirect angles, distance, poor camera quality, and low image resolution all make misidentifications more likely. And these poor conditions are frequently present for the CCTV crime scene images that law enforcement often uses for face recognition scans.


Third, system settings can lead to errors, even with top-tier software and higher-quality images. For example, if law enforcement accepts results with low confidence scores (a metric reflecting how likely it is that two photos in a system depict the same person), it becomes more likely that a “match” is actually a misidentification. But the risk posed by lax settings often goes unaccounted for: Some vendors promote the use of low confidence thresholds to trigger matches, and some police departments set their systems to return a list of potential matches for every scan, no matter how weak the underlying confidence scores are.
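To make this concrete, here is a minimal, purely illustrative Python sketch — it does not reflect any vendor’s actual software, and the candidate names, scores, and settings are hypothetical — of how the choice between a strict confidence threshold and an “always return the top results” setting changes what an investigator sees.

```python
# A minimal sketch (hypothetical scores and settings, not any vendor's actual
# software) of how system settings shape face recognition "matches."

# Simulated output of one face recognition search: each candidate in the
# reference database gets a confidence score between 0 and 1.
candidate_scores = {
    "candidate_A": 0.41,
    "candidate_B": 0.38,
    "candidate_C": 0.35,
    "candidate_D": 0.33,
    "candidate_E": 0.31,
}


def matches_above_threshold(scores, threshold):
    """Return only candidates whose confidence clears the threshold, best first."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [(name, score) for name, score in ranked if score >= threshold]


def top_n_regardless(scores, n=5):
    """Return the n highest-scoring candidates, no matter how weak the scores are."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:n]


# With a strict threshold, this low-quality search honestly returns nothing.
print(matches_above_threshold(candidate_scores, threshold=0.90))  # []

# With a permissive "always return leads" setting, the same weak scores come
# back looking like a ranked list of suspects.
print(top_n_regardless(candidate_scores, n=5))
```

Under the permissive setting, weak scores that should be treated as “no result” come back looking like a ranked list of suspects, which is exactly how lax settings turn into misidentifications.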

Disturbingly, the vendors that sell face recognition software often exaggerate how broadly the technology functions, thereby encouraging irresponsible use that will cause more misidentifications.

  • Claim: According to Clearview AI, “you will almost never get a false positive. You will either get a correct match or nor [sic] results.”

    Fact: Any system, no matter how advanced its algorithm, can produce false positives.
  • Claim: FaceFirst stated, “Facial recognition gives officers the power to instantly identify suspects. … Officers can simply use their mobile phones to snap a photograph of a suspect from a safe distance. If that individual is in their database it can then positively identify that person in seconds with a high degree of accuracy.”

    Fact: Field conditions for this scenario — such as lighting, angle, and distance — significantly increase the likelihood of misidentifications.
  • Claim: DataWorks Plus pitched its systems as tech that “uses facial recognition technology to positively match photos of an individual” with capabilities such as “discovering a person’s identity during investigations.”

    Fact: Face recognition can offer sets of possible matches, but should not be relied on to offer a definitive, positive match.
  • Claim: Matches with low confidence thresholds can be acceptable. Amazon worked with police to develop a system that will always “return the top 5 leads based on confidence levels” (meaning the Amazon-recommended setting will return matches no matter how low their confidence scores are) and touted the fact that police are “willing to trade a lower confidence level for more leads.”

    Fact: Using an unrestricted setting that always returns matches — no matter how low their confidence scores — creates a serious risk of misidentification.
  • Claim: Clearview AI said, “A photo should work even if the suspect grows a beard, wears glasses, or appears in bad lighting.”

    Fact: Photo conditions that limit the system’s ability to scan facial features reduce its ability to return accurate matches. Obstructions and low lighting make misidentifications more likely even for high-performing systems.

Even if it’s only used to generate leads, face recognition often plays an outsized role in investigations — and can cause serious harm.

Law enforcement often defends face recognition by claiming they just use it for leads — rather than as the center of a prosecution — but this is not a meaningful safeguard. Using untrustworthy information in investigations is always dangerous, regardless of whether that information is introduced in court. There are numerous reasons the “just used for leads” defense is cold comfort to those who have been wrongfully subjected to police actions based on ill-informed investigations.

First, the notion that face recognition is only one component of investigations is often simply untrue. There are already three documented cases where individuals were wrongfully arrested — with two spending time in jail — based entirely on bad face recognition matches. According to a 2020 New York Times investigation of face recognition systems in Florida, “Although officials said investigators could not rely on facial recognition results to make an arrest, documents suggested that on occasion officers gathered no other evidence.” And there are likely many similar instances we don’t know about in which face recognition was the sole basis for an arrest, because the use of face recognition in investigations is often hidden from arrestees and defendants.

Second, even when face recognition is just used for leads, a mismatch could still lead investigators down the wrong path and toward an improper arrest. Using unreliable evidence always creates danger of improper arrests and convictions, something we’ve seen repeatedly from outdated forensics and sketchy lie detector tests.

Individuals could be charged in part based on how a face recognition match affects the direction of an investigation early on. Law enforcement overconfidence in the accuracy of matches can promote confirmation bias and sloppy follow-up, limiting the ability to identify face recognition errors. For example, in one incident, New York City Police Department officers allegedly took a face recognition match and, rather than try to legitimately confirm or disconfirm its accuracy, texted a witness “Is this the guy…?” along with a single photo, instead of following proper procedure and using a photo array.


Third, misidentifications used as leads can still cause serious harm in the course of investigations, even if errors in face recognition systems are eventually discovered and accounted for. Face recognition mismatches can turn individuals into investigative targets and subject them to a variety of disruptive and potentially traumatic police actions, such as being stopped, searched, regularly monitored, or detained and questioned. These harms will be disproportionately borne by people of color so long as algorithmic bias is present in face recognition systems, and more generally so long as systemic bias affects policing and our criminal justice system.

Face recognition plays a significant role in policing, and the claim that the tech is just used for leads offers no assurance that errors will be remedied before they can harm innocent individuals. We need to implement strict limits on how face recognition is used and on the role it plays in investigations, and we need to give judges and defendants a role in reviewing its use to prevent sloppy applications and mistakes.

The risk that face recognition surveillance will be abused is not hypothetical: The technology has already been abused to target and hamper First Amendment-protected activities.

You don’t need to imagine a scary dystopian future or look to China’s authoritarian surveillance state to see how face recognition could be misused; given the absence of basic rules, it’s already happening in the United States.

A Sun Sentinel investigation recently revealed how, in 2020, local police in Florida repeatedly used face recognition to identify and catalog civil rights protesters. Fort Lauderdale police ran numerous face recognition searches to identify anyone who might be a “possible protest organizer” or an “associate of protest organizer” at a Juneteenth event to promote defunding the police. Boca Raton police ran face recognition scans on half a dozen occasions throughout May 2020 targeting Black Lives Matter protesters during peaceful events. And the Broward Sheriff’s Office ran nearly 20 face recognition searches during this same time period for the purpose of “intelligence” collection, rather than to investigate any criminal offense.


Face recognition is also used to selectively target individuals who are protesting. In 2015 during mass protests in Baltimore in response to the killing of Freddie Gray in police custody, police used face recognition to scan demonstrators and find individuals with “outstanding warrants and arrest them directly from the crowd” in a selective effort that appeared to be aimed at disrupting, punishing, and discouraging Americans from exercising their right to protest.

Unless Congress implements strong limits, these problems will continue and likely grow even worse. Face recognition could be abused to identify and catalog every attendee at a religious service or political rally, akin to a hyper-powered version of the “mosque crawlers” the New York City Police Department deployed for its surveillance of Muslim Americans, or the plants and informants the FBI used to spy on civil rights and antiwar activists for COINTELPRO.

Law enforcement officials often use face recognition in ways that aren’t responsible or scientifically supported, and in ways that push boundaries.

Even as we confront a litany of problems from face recognition systems that are being used as intended, law enforcement officials also frequently jerry-rig the tech for uses that are not scientifically based and that push boundaries.

Face recognition’s ability to provide matches depends entirely on the software’s ability to compare facial contours and features. Yet we know some police departments use pseudoscience shortcuts to get systems to spit out match results without having actually scanned faces. In 2019 the New York City Police Department commissioner bragged about how the department used photo editing and computer generated imagery to fill in pieces of faces for scans, or even artificially generated half a face to scan. Some police departments allow officers to replace photos with forensic sketches, and to scan drawings as though they were photographs. In one instance, police replaced a low-resolution image with a photo of a celebrity lookalike, something law enforcement expert and former DeKalb County, Georgia, Chief of Police Cedric L. Alexander described as “egregious.”

We’re also seeing vendors and law enforcement continue to push the boundaries of how face recognition can be used. One huge risk is using untargeted face recognition systems to continuously scan entire crowds. This type of system has been tried with disastrous results in the United Kingdom: Pilot programs had a 91% error rate in South Wales and a 98% error rate in London. Despite these problems, U.S. law enforcement may be moving forward with this especially invasive form of the tech. Building face recognition into police body cameras should set off alarm bells as well, given the risk that rapid in-field misidentifications could lead to an arrest or use of force. Yet, while this danger has led one major body camera vendor to cancel its plan to build in face recognition, other vendors are charging ahead despite the foreseeable harm. Face recognition will soon be built into drones and aerial surveillance systems as well, potentially chilling engagement in public activities.
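The UK pilot results above reflect a basic statistical reality of dragnet scanning: when almost no one in a crowd is on a watchlist, even a seemingly accurate system will produce alerts that are mostly wrong. The short Python sketch below walks through that arithmetic with assumed, illustrative numbers; it is not a reconstruction of the South Wales or London trials.

```python
# Back-of-the-envelope arithmetic with assumed numbers (not the South Wales or
# London figures): why scanning whole crowds yields mostly false alerts even
# when the underlying algorithm sounds accurate.

crowd_size = 50_000          # people who walk past the cameras
on_watchlist = 25            # people in the crowd actually on the watchlist
true_match_rate = 0.90       # chance a watchlisted person is correctly flagged
false_alarm_rate = 0.001     # chance an uninvolved passerby is wrongly flagged

true_alerts = on_watchlist * true_match_rate
false_alerts = (crowd_size - on_watchlist) * false_alarm_rate
wrong_share = false_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.1f}")
print(f"False alerts: {false_alerts:.1f}")
print(f"Share of alerts pointing at the wrong person: {wrong_share:.0%}")
# With these assumptions, roughly 7 in 10 alerts point at innocent people,
# because nearly everyone scanned is not on the watchlist.
```

Because almost everyone scanned is innocent, even a small false-alarm rate swamps the handful of true matches, which is why untargeted crowd scanning performs so poorly in practice.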

The need to act is urgent. We need strong safeguards on the use of face recognition not only to prevent future abuses but also to stop misuses that are already occurring.

The dangers of face recognition don’t just come from the sketchiest software and sellers — the tech itself poses comprehensive dangers that require comprehensive policy solutions.

As we work to fix face recognition, it’s critical that we don’t focus only on the most flawed systems. Policymakers need to account for the fact that even if outlier cases get fixed, many problems with face recognition persist even in well-designed systems.

It’s not sufficient to create policies that just react to especially problematic vendors, like Clearview AI. Unlike other systems, Clearview AI builds its photo database by scraping billions of photos from social media. This practice violates the terms of use for the websites hosting the photos and, more importantly, violates the consent and expectations of privacy of users who place images on their accounts with the promise that they will not be grabbed en masse and misused.

But even if law enforcement were totally cut off from using Clearview AI — one of several good measures that the proposed Fourth Amendment Is Not For Sale Act would accomplish — the overall dangers of face recognition surveillance would remain. Standard face recognition systems built on reference photo databases of mugshots or driver’s license photos, for instance, can still produce misidentifications for a variety of reasons. They can also be abused to target and catalog sensitive activity, like protests.

Another standout problem for some face recognition systems is their propensity to misidentify women and people of color far more often than other people. Policymakers should address this issue head on by prohibiting the use of systems that contain these flaws. But even if systems overcome algorithmic bias, that won’t eliminate the broader need for safeguards against misidentification.

Since misidentification is often due to poor image quality, even top tier systems will remain vulnerable to error. And as long as inequalities exist across our criminal justice system, the harms of face recognition misidentification will be disproportionately borne by people of color.

The time to act is now.

These various issues show how important it is to set broad limits on face recognition, and to do so soon. If we don’t create safeguards now, this surveillance tech will only become more powerful, more pervasive, and more entrenched in policing and in our daily lives. The only way we can stop the dangers posed by face recognition is with strong and comprehensive policy solutions.