Earlier this month, three of the largest companies that sell facial recognition software—IBM, Microsoft, and Amazon—announced moratoriums on sales of the technology to police. The companies cited concerns about the potential for unethical use and about the technology's discriminatory pattern of misidentifying people, and called on Congress to create regulations. Coming amid nationwide scrutiny of policing practices, and following years of warnings from tech experts and civil liberties advocates about the risks facial recognition poses, it is significant that the companies that develop and sell the technology are finally acknowledging that its problems are serious enough that they are now asking for regulation.
So the next move is clear: Congress needs to step up and pass legislation that will prevent government misuse of facial recognition and the costly mistakes it produces. The best way to begin is to press pause: place a moratorium on all government use of the technology so that it cannot be used improperly while lawmakers examine what uses might be permissible within a strict set of limits.
In a country where at least one in four police departments already has the ability to use facial recognition, it’s long past time for Congress to act. As The Constitution Project at the Project On Government Oversight (POGO) has been highlighting for years, facial recognition’s threats to civil rights and civil liberties should be of great concern to lawmakers and the general public alike. That this surveillance tool has gone almost wholly unregulated is concerning both because of its potential for abuse, and because its algorithmic biases make it inherently more likely to misidentify people of color, particularly Black Americans, Asian Americans, Native Americans, and Pacific Islanders. Misidentifications can lead to improper police action, such as stops, searches, and even arrests targeting innocent people. And during the 2015 Baltimore protests following the death of Freddie Gray in police custody, police used facial recognition to build lists of protesters for future retaliation. It is not difficult to imagine that happening again, especially during the current wave of police misconduct against protesters.
Serious Questions Remain for IBM, Microsoft, and Amazon
The temporary holds on new sales of facial recognition software leave open several important questions, and the answers will determine how meaningful the companies’ actions are.
First, will these companies—which now acknowledge the need for regulation—publicly disclose which law enforcement entities they’ve already sold facial recognition to? If the moratoriums are more than a public relations stunt and are based on a genuine belief that regulations are critical, the companies behind them should also demonstrate they believe that the communities where facial recognition is already deployed have a right to know and to respond. There’s no excuse for companies—or law enforcement—to keep the public in the dark. IBM, Microsoft, and Amazon shouldn’t get credit for helping guard against irresponsible uses of facial recognition unless they provide transparency about the agencies they have sold their wares to. And Congress should consider enacting transparency requirements so that all law enforcement agencies using facial recognition—regardless of the vendor they purchased it from—must disclose its use.
Second, will these moratoriums apply to federal law enforcement agencies as well as state and local police? It would be nonsensical to block sales to municipal police departments but still sell to federal law enforcement, yet all three announcements were ambiguous about how broadly the new policies would apply. Heightened skepticism is also warranted given that Amazon has been deceptive in its responses to research showing that facial recognition systems, including its own, misidentify people with “darker-skinned faces” at higher rates. The FBI has built its own massive facial recognition system, but other federal agencies could well be interested in buying the tech. This question is particularly significant given that a 2018 POGO investigation revealed that Amazon was pitching its facial recognition tech to Immigration and Customs Enforcement (ICE), a sale that former ICE officials have said could lead to abuse. And documents recently obtained by the American Civil Liberties Union show that Microsoft similarly tried to sell its software to the Drug Enforcement Administration.
Third, will these companies submit to independent assessments of their facial recognition systems before they resume sales to law enforcement? IBM and Microsoft currently allow the National Institute of Standards and Technology to examine their systems for general accuracy and algorithmic bias, but Amazon thus far has refused to allow the institute, which is part of the Commerce Department, to test its software. This is unacceptable for any company that wishes to sell facial recognition. If policymakers are going to confront problems of algorithmic bias in facial recognition software, it’s essential to know exactly how serious those problems are. Research on algorithmic bias in these systems, such as that of Joy Buolamwini, has already brought much-needed attention to this issue, and underscores the need for continuing scrutiny and testing. (And although the companies did not acknowledge this research, it likely played a role in their decisions to pause sales to police.)
And finally, how long will these moratoriums last? While Amazon has stated that its moratorium will last one year in order to give lawmakers time to create effective rules and limits, IBM and Microsoft have not said how long their bans will last; disclosing a duration is a necessary starting point. All three companies also need to say whether they would consider extending their moratoriums if, for example, laws are still under development when the clock runs out. Having now effectively admitted that the technology is too dangerous to be used absent regulation, it would be unjustifiable for these companies to dive back into selling facial recognition before regulation is in place.
At least one lawmaker has taken note of the too-limited information the companies provided. Last week, Representative Jimmy Gomez (D-CA) sent Amazon a letter asking for answers on these and several other questions about its facial recognition software and moratorium announcement.
Congress Needs to Step In
Even if the companies address all of these questions, any actions they may take to increase transparency or mitigate harms would not be an adequate substitute for congressional action. Facial recognition poses enormous threats to civil rights and civil liberties, and we cannot rely on Big Tech to regulate itself in the face of these threats.
The most straightforward step Congress can take is to place a moratorium on federal law enforcement use of facial recognition, not just on new sales.
There are dozens of facial recognition vendors; many focus solely on law enforcement clients and are therefore unlikely to enact a moratorium that would hurt their bottom line. And even if every vendor adopted an indefinite moratorium, some government entities could simply develop their own facial recognition software or piggyback off the powerful system that the FBI already maintains.
The Constitution Project at POGO has long called on Congress to enact a comprehensive set of laws that impose strong limits on facial recognition (you can check out our task force report and recent testimony for more details). Fortunately, we are seeing movement in the right direction. On June 25, Senators Jeff Merkley (D-OR) and Ed Markey (D-MA) and Representatives Ayanna Pressley (D-MA) and Pramila Jayapal (D-WA) introduced legislation—which POGO endorsed—to place a moratorium on government use of facial recognition.
Lawmakers have also proposed strong regulatory limits on the use of the technology. Last year, Representative Yvette Clarke (D-NY) and Senator Cory Booker (D-NJ) introduced a bill to prohibit any use of facial recognition in public housing. Earlier this year, Senators Booker and Merkley introduced legislation that would prohibit the use of facial recognition without a warrant. And this month, Representative Don Beyer (D-VA) put forward a bill to ban the use of facial recognition in combination with police body cameras. This is a valuable measure that our task force also recommended: facial recognition in body cameras is especially dangerous because it could prompt officers to act immediately on unreliable “matches.” There are numerous other areas where facial recognition can cause serious harm, and these measures encompass only some of the policies required to prevent those dangers, but they would nevertheless be a strong start.
We and other civil society advocates are working to ensure that any laws Congress passes constitute substantive reforms, rather than measures that would do little or nothing to address the dangers facial recognition poses. One particularly concerning piece of legislation, introduced by Senators Chris Coons (D-DE) and Mike Lee (R-UT), would only require law enforcement to obtain a warrant when using facial recognition to continuously track individuals for three or more days. This means that targeted uses of facial recognition and dragnet real-time systems that scan entire crowds, none of which involves anywhere near three days of continuous tracking, would be permitted without any restriction. The bill amounts to a rubber stamp for surveillance technology that is prone to error and easily abused.
At the state level, Microsoft has been aggressively pushing a lax regulatory framework, framing it as a solution that would make facial recognition safe. But as privacy advocates have highlighted, this industry-generated policy would do little to address the most significant threats the technology poses. Anyone concerned about privacy and civil liberties should be wary of a similar measure gaining a foothold at the federal level as well. And any facial recognition legislation that is supported by the vendors selling the technology, but not by the privacy and tech experts focused on its risks, should be met with skepticism. Legislation like that is more likely to be window dressing than a meaningful solution.
Congress should take this moment to halt all government use of facial recognition. This would stop improper uses of the software while policymakers more rigorously examine the technology, its risks, and how to mitigate them. If Congress opts instead to begin by regulating facial recognition, it should start with strong limits such as those proposed by our task force.