Senator Eldridge, Representative Day, and members of the committee, I am submitting this written testimony on behalf of The Constitution Project at the Project On Government Oversight in support of H.135 and S.47, An Act to Regulate Face Surveillance. We respectfully ask this committee to favorably report this legislation, which limits government use of face surveillance and emerging biometric surveillance technologies in a manner consistent with civil rights, civil liberties, equity, and racial justice. However, we also ask the committee to incorporate additional safeguards into the bill that will strengthen it to further these important goals.
Founded in 1981, the Project On Government Oversight (POGO) is a nonpartisan independent watchdog that investigates and exposes waste, corruption, and abuse of power, including when the government fails to serve the public or silences those who report wrongdoing. We champion reforms to achieve a more effective, ethical, and accountable federal government that safeguards constitutional principles. The Constitution Project at POGO centers its work on issues such as guarding against improper and overbroad surveillance, including unchecked face recognition. In 2019, The Constitution Project convened a task force of expert stakeholders including academics, technology experts, civil rights and civil liberties advocates, and law enforcement officials to examine the impact of face recognition surveillance.1 The task force concluded that any law enforcement use of face recognition should be subject to strong limits, and it provided a set of policy recommendations to support legislatures in crafting reasonable but necessary limits.
In the 2020 omnibus police reform bill, the House and Senate adopted a strong framework for government use of face surveillance in the Commonwealth, with meaningful safeguards against abuse. However, these safeguards were significantly weakened by Governor Charlie Baker before he signed the bill into law. So while Massachusetts has taken an important first step, the existing statute is insufficient to address the problems face surveillance poses to civil rights and civil liberties.
Fortunately, lawmakers have addressed these concerns in legislation filed this session in each chamber: H.135, An Act to Regulate Face Surveillance, sponsored by Representatives David Rogers and Orlando Ramos, and S.47, An Act to Regulate Face Surveillance, sponsored by Senator Cynthia Stone Creem.
These bills would improve rules on face recognition in a variety of ways and put Massachusetts at the forefront of efforts around the country to effectively limit face surveillance. It is essential that the law allow police to use face recognition technology only to investigate serious crimes by searching images to identify a suspect, with appropriate judicial authorization, structural safeguards, and due process. The law should not permit law enforcement to use face recognition for generalized surveillance, or for selective enforcement of low-level offenses that could harm marginalized and over-policed communities.
In particular, we believe that legislation limiting face recognition should center on at least five key policy priorities, many of which these bills effectively incorporate:
- Requiring that face recognition searches are based on probable cause.
- Limiting use of face recognition to the investigation of serious crimes.
- Prohibiting face recognition from being used as the sole basis for arrests.
- Requiring notice to defendants whenever face recognition is used.
- Prohibiting face recognition from being used for untargeted surveillance.
The Law Should Require All Face Recognition Searches Be Based on Probable Cause
Requiring that law enforcement demonstrate probable cause that an unknown individual has committed a crime before using face recognition to identify that individual is a critical safeguard against abuse. The primary police use of face recognition is to scan photographs of individuals taken during the commission of a crime; demonstrating probable cause in such scenarios should not be an onerous burden and remains consistent with legitimate law enforcement goals.
This requirement is essential to stopping face recognition from being used to catalog and target individuals engaged in constitutionally protected activities such as protesting, participating in political rallies, or attending religious services. The danger of this surveillance technology being misused in such a manner is not theoretical: Police have used face recognition on multiple occasions in recent years to identify peaceful civil rights protesters.2 Without a probable cause requirement, police could also use face recognition as a dragnet surveillance tool, scanning, identifying, and cataloging individuals’ activities on a mass scale.
While last year’s legislation as originally introduced would have created strong requirements for police use of face recognition, the weaker version requested and signed into law by Governor Baker fails to establish adequate safeguards against abuse. To run face recognition scans under current rules, law enforcement merely needs to show that “the information sought would be relevant and material to an ongoing criminal investigation.”3 This overbroad standard could facilitate fishing expeditions and abuse, such as identifying protesters at a large gathering en masse if an act of vandalism occurred nearby.
The probable cause requirement in H.135 and S.47 would prevent mass surveillance and abuse without hampering legitimate law enforcement uses. Further, the legislation makes sensible exceptions, such as for an emergency involving immediate danger or for the identification of a deceased person.4 Raising the standard to probable cause would still support law enforcement needs while closing the door on fishing expeditions and excessive scans, including scans that identify people during sensitive activities and interactions.
The Law Should Limit Use of Face Recognition to Investigating Serious Crimes
Another key pillar of effective safeguards on face recognition is limiting its use to the investigation of serious crimes. The concept of limiting use of powerful surveillance tools to top-tier investigations has clear precedent: It has been applied for over 50 years in similar surveillance contexts such as wiretapping.5
Face recognition should not be used to stockpile suspects for investigation of minor offenses.
In many places, police already use face recognition to investigate minor offenses such as shoplifting less than $15 of goods or stealing a pack of beer.6 These low-profile cases often receive little scrutiny, so it is more likely that erroneous uses of face recognition — which can stem from a variety of causes such as algorithmic bias, poor image quality, lax software settings, or even pseudo-scientific techniques7 — will go unnoticed. For minor offenses, it is also more likely that potentially exculpatory evidence will not be sought out. This is especially concerning because face recognition can be notoriously inaccurate, particularly for women and people of color.8
A serious crime limit for face recognition would also prevent the misuse of discretionary powers, including selective targeting of marginalized communities and dissidents. For example, in 2015, as demonstrators protested the death of Freddie Gray in police custody, Baltimore police used face recognition to target protesters, scanning the crowd with the technology to find and arrest anyone who had an outstanding warrant for any offense.9 Without a serious crime limit, face recognition could be used in this manner on a broad scale, weaponized for selective enforcement of bench warrants for minor offenses and targeted at marginalized communities, political dissidents, and other vulnerable individuals. This can already be seen in autocratic regimes such as China, which uses face recognition for social control, deploying the technology to catalog minor offenses and then to engage in public shaming.10
By restricting use of the technology to investigating violent felonies, H.135 and S.47 prevent these risks while still permitting limited use for investigating offenses, such as homicides, that are genuine public safety priorities.11
The Law Should Prohibit Face Recognition from Being Used as the Sole Basis for an Arrest
In addition to the measures above designed to prevent abuse and excess surveillance, effective safeguards on face recognition must also include policies that prevent excessive reliance on the technology, which can be prone to error. There are already numerous documented instances in which a face recognition misidentification led to a wrongful arrest.12 While a probable cause warrant to run scans provides significant value, it does not prevent the harms that arise when law enforcement relies excessively on the results of searches.
Factors that reduce image quality, such as bad lighting, indirect angles, distance, poor cameras, and low image resolution, all make misidentifications more likely. Lax system settings, such as employing a lower confidence threshold to trigger matches13 or having broad sets of matches appear in search results, increase the potential that law enforcement will receive erroneous matches as well. Even as face recognition software improves in quality — and even if algorithmic bias dissipates — there will always be situation-based limits to how effective the technology is. And there will always be a danger in giving too much credence to matches that could misidentify innocent individuals.
Unfortunately, as currently written, H.135 and S.47 do not remedy this problem. We recommend these bills be strengthened to prevent face recognition matches from serving as the sole basis for an arrest or from constituting, on their own, probable cause for a search. This is a key safeguard against harm from over-reliance on face recognition misidentifications, and a reform that multiple states have already enacted.14
The Law Should Require Defendants Be Given Notice When Face Recognition Is Used
Like any other complex forensic tool, face recognition’s effectiveness can depend on technical factors and manner of use. That is why it is critical that defendants are notified and given the opportunity to examine face recognition technology whenever it is used in an investigation.
Defendants have a vested interest in reviewing a variety of factors, such as algorithm quality, the software settings police used, and whether any other potential matches were discovered or investigated, that could provide exculpatory or mitigating evidence. Guaranteeing access to this information is not only critical for due process rights but also acts as an important safeguard to deter corner cutting and sloppy use of face recognition during investigations.
Despite the importance of disclosure, it rarely occurs.15 In some jurisdictions, law enforcement uses facial recognition thousands of times per month, and defendants almost never receive notice of its use in investigations.16 Yet even as law enforcement agencies rely on the technology for investigations, they obscure it from examination in court by defendants and judges.17 In Massachusetts, current law does not provide any due process protections for criminal defendants who have been subject to the use of facial recognition systems.
H.135 and S.47 require that law enforcement agencies and district attorneys make available to criminal defendants and their attorneys all records and information pertaining to face recognition searches performed or requested during investigations.18 This requirement would protect due process rights and strongly incentivize law enforcement to adhere to the highest standards in their use of face recognition.
The Law Should Prohibit Face Recognition from Being Used for Untargeted Surveillance
One especially dangerous application of face recognition is its use not to identify a designated person in an image but to conduct untargeted scans of all individuals on video streams.
Early testing shows untargeted face recognition to be notoriously inaccurate: In pilot programs in the United Kingdom, South Wales Police had a 91% error rate, and London Metropolitan Police had a 98% error rate.19 In the United States, the technology has not received comparable testing or vetting.
Yet even if untargeted face recognition improves in accuracy, it would still present a serious threat to civil rights and civil liberties. This type of surveillance system could effortlessly monitor and catalog individuals’ movements, interactions, and activities on a mass scale.
Current law does not regulate the use of face recognition for purposes of surveillance of public spaces, leaving open the possibility of video surveillance systems incorporating untargeted face recognition in the future. H.135 and S.47 would preempt this danger by prohibiting the use of untargeted face recognition for general public surveillance.20
We encourage you to favorably report H.135 and S.47, with the recommended changes described above, in order to protect Massachusetts residents from unchecked face surveillance. We need strong safeguards to ensure that this technology does not infringe on civil rights and civil liberties, and this legislation offers an effective path for achieving that important goal.