Last week a bombshell BuzzFeed News investigation revealed that face recognition software produced by Clearview AI is being used by law enforcement far more often than previously known by the public, underscoring the immediate need for laws to check the virtually unregulated use of this invasive surveillance tool.
Clearview AI is already controversial for building its systems around scraping billions of photos from social media sites. The BuzzFeed report focused on Clearview AI’s practice of handing out free trial accounts to officers, which has led to police using face recognition without agencies developing guidelines for its use, and sometimes without departments knowing that officers were using it in investigations.
The documents published and the interviews conducted for the news report also highlight another important phenomenon: Clearview AI’s face recognition does not function nearly as well as the company claims in its marketing pitches to police. The company’s exaggerated claims of the software’s effectiveness could be leading users to rely too heavily on the technology, which is all the more concerning at a time when face recognition mismatches are leading police to arrest innocent individuals.
Clearview AI’s pitches to law enforcement disclosed in public records requests are shockingly boastful. The company claims to have “the most accurate facial identification software worldwide” and to consistently produce accurate results “in less than five seconds.” The company even goes so far as to tell police that using its software will make them “realize you were the crazy one” for not believing face recognition would perform the same as it does in outlandish TV depictions like “NCIS, CSI, Blue Bloods.”
These claims have always been hard to vet because unlike many other vendors, Clearview AI generally does not submit to testing from independent entities like the National Institute of Standards and Technology—though that hasn’t stopped the company from lying about achieving strong results in an ACLU-designed test it did not actually take.
But the BuzzFeed report offers a glimpse of Clearview AI’s limits, from those who actually use the technology. According to numerous accounts from police departments, the face recognition software often fails to deliver accurate results.
Perhaps most telling is the statement from a detective with the Aberdeen Police Department in North Carolina, who said that Clearview AI worked when used with high-quality images (likely meaning well-lit photos with the subject directly facing the camera, much like the images the company uses in demos to bolster its accuracy claims). However, the detective added, the technology was far less effective on photos from real-world sources such as surveillance video, which often suffers from poor resolution, low lighting, bad angles, or a combination of these flaws.
This is not surprising, as dependence on image quality is one of the most important ways that the reality of face recognition diverges from the fictional portrayals on TV. As we’ve extensively written, the accuracy of face recognition varies based on features such as lighting, angle, distance, and image resolution.
Yet this is another area where Clearview AI is dishonestly pitching its technology. An FAQ the company provided to law enforcement claims, “a photo should work even if the suspect grows a beard, wears glasses, or appears in bad lighting,” then adds, “you will almost never get a false positive. You will either get a correct match or no result.”
This is a false and incredibly dangerous claim.
If police take it as true, they may be inclined to put immense weight on any face recognition match they receive through Clearview AI software. But a basic feature of this technology is a sliding scale of accuracy: systems return potential matches with confidence scores (estimates of how likely each candidate match is to be correct), filtered by a threshold that determines which candidates are surfaced at all, and reliability depends in large part on photo conditions that vary significantly from image to image.
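To make the idea of confidence scores and thresholds concrete, here is a minimal sketch in Python. The scores, names, and threshold values are entirely hypothetical and are not drawn from Clearview AI's actual software; the point is only that the same search can surface different "matches" depending on where the cutoff is set.

```python
# Illustrative only: hypothetical candidates and scores, not real system output.
# A face recognition search typically returns candidate identities with
# confidence scores; a threshold decides which candidates are shown as matches.

def filter_matches(candidates, threshold):
    """Return only the candidates whose confidence score meets the threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical results from a single probe image.
candidates = [("Person A", 0.93), ("Person B", 0.71), ("Person C", 0.42)]

# A strict threshold surfaces fewer, higher-confidence candidates...
print(filter_matches(candidates, 0.90))  # [('Person A', 0.93)]

# ...while a loose threshold surfaces weak candidates that demand human review.
print(filter_matches(candidates, 0.40))  # all three candidates
```

A claim that users "will almost never get a false positive" glosses over exactly this: every returned candidate carries some uncertainty, and lowering the threshold trades fewer missed matches for more erroneous ones.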
And even Clearview AI seems to know it’s overstating how its software should be used. While the company’s marketing FAQ tells police that face recognition is immune to image-quality limits and will never give inaccurate matches, Clearview AI says the opposite in a company memo, included among the documents published in the BuzzFeed report.
This document acknowledges that conditions such as low lighting, poor angle, low resolution, and obstructions like hats and glasses all reduce the likelihood of an accurate match. It even says users “should exercise additional scrutiny … and should expect lower rates of successful identification” in these situations, an admission in direct contrast to its pitches to police.
The truth is that all face recognition systems can produce misidentifications, and need to be treated with caution.
Unfortunately, the public has limited information about how much Clearview AI's exaggerations are misleading police into relying excessively on face recognition. Law enforcement use of the technology is often hidden, and its role in investigations is rarely disclosed to defendants or the public. As a consequence, defendants are often blocked from learning how face recognition was used against them: whether low-quality images were scanned, whether the search process was flawed, whether other potential matches were reviewed, and whether an erroneous match played a major role in the investigation.
But we have seen how severe the consequences of over-reliance on face recognition can be. In numerous cases, face recognition mismatches have led to improper arrests and even jail time. Yet because law enforcement treats its use of the technology so secretively, we don't know whether improper arrests and police actions based on bad matches number in the dozens, hundreds, or thousands.
As long as law enforcement’s use of face recognition is allowed without any safeguards or rules, overreliance and improper use will continue. It is long past time for policymakers to step up and place strong limits on this complex technology that puts individuals’ rights and safety at risk.
The Constitution Project seeks to safeguard our constitutional rights when the government exercises power in the name of national security and domestic policing, including ensuring our institutions serve as a check on that power.