
Next Steps After Stopping IRS Face Recognition

Here are three priorities for stopping overbroad surveillance
(Photos: Getty Images; Illustration: Leslie Garvey / POGO)

Last week, we saw a series of rapid positive developments in our effort to fight back against the IRS’s problematic plan to require face recognition to access basic services. First, the IRS announced it would be abandoning its planned use of face recognition. Then, the following day, the IRS’s chosen vendor — ID.me — stated it would begin offering alternatives to face recognition for its verification services, which are used by over two dozen states for accessing government services.

This is excellent news. Face recognition is a dangerous technology that has no business being required for government services. As we’ve often highlighted, it’s prone to error and more likely to misidentify women and people of color. And even when it does work well, this technology creates serious risks of pervasive surveillance. The IRS’s reversal will go a long way toward protecting Americans’ privacy, and the quick turnaround shows the difference we can make when we rally to protect civil rights and civil liberties.

But as we celebrate the end of this particular program, it is also important to keep in mind the work that we still need to do. The IRS’s plans include a “transition” period that advocates like POGO will need to watch closely, especially with many Americans already working to file their taxes for the year. Meanwhile, ID.me will still offer face recognition as an option (likely its main option, given the problems the company has experienced with person-to-person verification), raising serious concerns that users will still be steered toward this problematic tech rather than face wait times and hurdles with the company’s human verification systems. That’s why this past Monday, POGO joined 45 other organizations in calling for government agencies to halt their use of face recognition technology and other biometric verification services — a measure several senators are now pressing for.

There are some important, related priorities we should keep in mind to protect privacy, civil rights, and civil liberties going forward.

First, face prints and biometric profiles aren’t the only types of sensitive information we should worry about the government obtaining. Location data can be incredibly revealing as well. It provides not just a map of your movements but a picture of the most intimate details of your life, exposing your most sensitive activities and interactions. That is why the Supreme Court ruled four years ago that the government must get a warrant to collect your cellphone location data. But agencies have been exploiting a loophole by simply going to data brokers and buying that location data instead. There is legislation under consideration — the Fourth Amendment Is Not For Sale Act — that would close this loophole. We need to push Congress to act on it.

Second, some government contractors are building services around face recognition with problems just as serious as those associated with ID.me, if not more so. Chief among them is Clearview AI, the surveillance vendor that has collected billions of photos from social media (without users’ consent) and compiled them into a face recognition database offered to law enforcement. Despite this problematic practice, numerous federal agencies, as well as hundreds of police departments, use Clearview AI. Fortunately, because Clearview AI’s databases qualify as “illegitimately obtained information” as defined in the Fourth Amendment Is Not For Sale Act, the bill, if passed, would bar government entities from using them.

Finally, as we’ve noted before, while Clearview AI is an especially egregious example, the dangers of face recognition extend far beyond the sketchiest software and sellers. We need comprehensive safeguards and strong limits in place to rein in face recognition surveillance, which right now largely operates in a rules-free zone. We’ve been examining effective policies on this issue for years. States are now advancing those and other ideas into law — and Congress seems interested in following suit.

There’s lots to be done, but as the last few weeks have shown, raising our voices for privacy, civil rights, and civil liberties can make a huge impact. We’re eager to take on the work ahead.