
Face Recognition Is Far from the Sci-Fi Super-Tool Its Sellers Claim

New Clearview AI Docs Show How Exaggerating Tech’s Effectiveness Endangers Individuals
(Illustration: Leslie Garvey / POGO)

Last week, a bombshell BuzzFeed News investigation revealed that face recognition software produced by Clearview AI is being used by law enforcement far more often than the public previously knew, underscoring the immediate need for laws to check the virtually unregulated use of this invasive surveillance tool.

Clearview AI is already controversial for building its systems around scraping billions of photos from social media sites. The BuzzFeed report focused on Clearview AI’s practice of handing out free trial accounts to officers, which has led to police using face recognition without agencies developing guidelines for its use, and sometimes without departments knowing that officers were using it in investigations.

The documents published and the interviews conducted for the report also highlight an important phenomenon: Clearview AI’s face recognition does not function nearly as well as the company claims in its marketing pitches to police. The company’s exaggerated claims about the software’s effectiveness could lead users to rely too heavily on the technology, which is all the more concerning at a time when face recognition mismatches are leading police to arrest innocent individuals.

Clearview AI’s pitches to law enforcement disclosed in public records requests are shockingly boastful. The company claims to have “the most accurate facial identification software worldwide” and to consistently produce accurate results “in less than five seconds.” The company even goes so far as to tell police that using its software will make them “realize you were the crazy one” for not believing face recognition would perform the same as it does in outlandish TV depictions like “NCIS, CSI, Blue Bloods.”

These claims have always been hard to vet because, unlike many other vendors, Clearview AI generally does not submit its software for testing by independent entities like the National Institute of Standards and Technology. That hasn’t stopped the company from lying about achieving strong results on an ACLU-designed test it never actually took.

But the BuzzFeed report offers a glimpse of Clearview AI’s limits from those who actually use the technology: according to numerous accounts from police departments, the software often fails to deliver accurate results.

In interviews BuzzFeed conducted for its investigation, department after department described errors and inaccurate results:

  • “[An Information Services Department employee] did this in February 2020 using our own staff, who were made aware and provided permission, as the test subjects, and found it to be very inaccurate, so passed on any further research.” —Melinda McLaughlin, public information officer, Eugene Police Department
  • “It worked with good quality images taken directly from Facebook, it would pull up the exact image used for identification. We just found that unless the quality of the image was great (unlike 90% of video surveillance we obtain), results came back inconclusive.” —Shannon Darling, detective, Aberdeen Police Department
  • “Photos entered were of known individuals, including themselves and family members. The software did not yield accurate results and they ceased using it prior to the end of the 30 day trial period.” —Kathy Ferrell, public information officer, Smyrna Police Department
  • “We didn’t find it to be very useful so we stopped using it. Half the searches were on us to see what it would pull up. We were getting very poor results.” —Barry Wilkerson, police chief, St. Matthews Police Department
  • “Some detectives had tried the program and found that it wasn’t very useful and did not pursue it any further.” —Lieutenant Michael Bruno, Monterey Police Department
  • “They cleared it with their supervisor and was basically the guinea pig for our agency. They did not find any value in the program to help us with any of our investigations, so they discontinue[d] their trial use.” —Tony Botti, public information officer, Fresno County Sheriff’s Office
  • “We had one detective who had access two or three years ago after attending a social media investigation course. He said he used it several times but was never successful in finding accurate matches and discontinued its use.” —Brian Gulsby, spokesperson, Daphne Police Department
  • “I set up an account to see if it worked. I ran two known persons to see if they can [sic] back with any useful info. I didn’t think it worked they way the ad said it would.” —Lieutenant Billy Hilliard, Wilson’s Mills Police Department

Perhaps most telling is the account from a detective with the Aberdeen Police Department in North Carolina, who said Clearview AI worked when used with high-quality images (likely meaning well-lit photos with the subject directly facing the camera, much like the images the company uses in demos to bolster its claims). However, the detective added, the technology was far less effective on photos from real field situations: video surveillance footage, which often features poor resolution, low lighting, bad angles, or a combination of these flaws.

This is not surprising, as dependence on image quality is one of the most important ways the reality of face recognition diverges from fictional portrayals on TV. As we’ve extensively written, the accuracy of face recognition varies based on factors such as lighting, angle, distance, and image resolution.

Yet this is another area where Clearview AI is dishonestly pitching its technology. An FAQ the company provided to law enforcement claims, “a photo should work even if the suspect grows a beard, wears glasses, or appears in bad lighting,” then adds, “you will almost never get a false positive. You will either get a correct match or no result.”

This is a false and incredibly dangerous claim.

If police take it as true, they may be inclined to put immense weight on any face recognition match they receive through Clearview AI software. But a basic feature of this technology is that accuracy exists on a sliding scale: systems express reliability through confidence scores, which estimate how likely it is that a candidate match is correct, and those scores depend in large part on photo conditions that vary significantly from image to image.
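To make that concrete, here is a minimal sketch of how a typical face recognition search scores candidates against a confidence threshold. This is illustrative only, not Clearview AI’s code: the embedding size, the gallery names, and the 0.6 cutoff are all assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray, gallery: dict[str, np.ndarray],
           threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return gallery identities whose similarity to the probe photo's
    embedding clears the threshold, highest score first. The threshold
    is a tunable trade-off, not a guarantee: lower it and more candidates
    (including false positives) come back; raise it and more true
    matches are missed."""
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted((pair for pair in scores if pair[1] >= threshold),
                  key=lambda pair: pair[1], reverse=True)

# Illustrative use: random vectors stand in for the embeddings a real
# system would compute with a trained neural network.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=512) for i in range(3)}
# A noisy copy of person_1's embedding simulates a lower-quality photo.
probe = gallery["person_1"] + rng.normal(scale=0.8, size=512)
print(search(probe, gallery))
```

The point the sketch makes is that the cutoff is a dial, not a switch: the same face photographed in worse lighting or at a sharper angle yields a different embedding and different scores, so a threshold that looked reliable on clean demo images can wave through false matches on grainy surveillance footage.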

And even Clearview AI seems to know it is overstating what its software can do. While the company’s marketing FAQ tells police that face recognition is immune to image-quality limits and will never give inaccurate matches, Clearview AI says the opposite in a company memo included among the documents published in the BuzzFeed report.

This document acknowledges that conditions such as low lighting, poor angle, low resolution, and obstructions like hats and glasses all reduce the likelihood of an accurate match. It even says users “should exercise additional scrutiny … and should expect lower rates of successful identification” in these situations, an admission in direct contrast to its pitches to police.

The truth is that all face recognition systems can produce misidentifications and need to be treated with caution.

Unfortunately, the public has limited information about how far Clearview AI’s exaggerations of the technology’s effectiveness are misleading police into excessive reliance on face recognition. Law enforcement’s use of face recognition is often hidden, with its role in investigations rarely disclosed to defendants or the public. As a consequence, defendants are often blocked from learning how police used face recognition: whether low-quality images were scanned, whether the search process was flawed, whether other potential matches were reviewed, and whether erroneous matches played a major role in the investigation.

But we have seen how severe the consequences of overreliance on face recognition can be. In numerous cases, face recognition mismatches have led to improper arrests and even jail time. And because law enforcement treats its use of the technology so secretively, we don’t know whether other improper arrests and police actions based on bad face recognition matches number in the dozens, hundreds, or thousands.

As long as law enforcement’s use of face recognition is allowed without any safeguards or rules, overreliance and improper use will continue. It is long past time for policymakers to step up and place strong limits on this complex technology that puts individuals’ rights and safety at risk.