

Face Recognition in the Hands of Stalkers, Harassers, and Vigilantes

Danger Isn’t Just from Government Abuse
(Illustration: Renzo Velez / POGO)

Over the past few years we’ve focused our discussion of face recognition on government use, and with good reason. It’s a powerful and dangerous surveillance tool, and in the last decade a huge number of law enforcement entities have begun using it. But now we also need to pay attention to another emerging application of the technology that could have severe ramifications for the privacy and safety of private citizens: unrestricted personal use.

How big a risk is general access to face recognition? Below, we break down the dangers of this technology in the hands of stalkers, doxxers, and vigilantes, and what can be done to stop them.

Recently, an app called PimEyes has been garnering attention for its plans to offer consumer access to face recognition: Subscribers can run a photo of any individual through PimEyes’ systems, which will attempt to identify the person by returning every photo they can find online that might be a match. Face matches across the web—from blogs, online photo albums, YouTube, even pornographic websites—will be accessible to anyone for just $30 a month. PimEyes operates by scraping images from across the internet, often in violation of sites’ terms of use meant to protect users. In this, it closely resembles Clearview AI, a face recognition company with a problematic history of misrepresentations about how its tech works, which is also considering expanding to public consumer use.
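
Under the hood, services of this kind generally work by converting faces into numerical “embeddings” and searching for close matches among embeddings computed from scraped photos. The minimal sketch below illustrates that general technique only; PimEyes has not published its actual pipeline, and the embedding step, URLs, and toy vectors here are hypothetical stand-ins.

```python
# A minimal sketch of embedding-based face search, the general technique behind
# consumer face search services. It assumes some face-embedding model has already
# mapped each scraped photo, and the probe photo, to a fixed-length vector; the
# toy vectors and example.com URLs below are stand-ins for that step.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray, index: dict[str, np.ndarray],
           threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (source URL, score) pairs for scraped photos that likely match."""
    scored = [(url, cosine_similarity(probe, emb)) for url, emb in index.items()]
    return sorted([m for m in scored if m[1] >= threshold],
                  key=lambda m: -m[1])

# Toy example: a "scraped" index of three photos and one probe photo.
index = {
    "https://example.com/blog/photo1.jpg":  np.array([0.9, 0.1, 0.4]),
    "https://example.com/album/photo2.jpg": np.array([0.2, 0.8, 0.1]),
    "https://example.com/video/frame3.jpg": np.array([0.88, 0.12, 0.38]),
}
probe = np.array([0.91, 0.09, 0.41])
print(search(probe, index))  # the two near-identical vectors come back as "matches"
```

In a real deployment the embeddings would be high-dimensional and the comparison would run against an approximate nearest-neighbor index covering billions of scraped photos, but the matching logic is the same idea, which is why erroneous near-matches can surface so easily.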

Expanding use of face recognition and placing it in the hands of individuals everywhere presents several serious risks that policymakers need to grapple with before it becomes a widely accessible tool for abuse.

Stalking and Harassment

The first major risk from consumer-focused face recognition is how it could be weaponized for stalking and harassment. This has already become a problem abroad with FindFace, an app created by a Russian company that works similarly to PimEyes. It’s not hard to imagine how a technology that turns your smartphone into a personal encyclopedia about anyone you see in a bar or coffee shop could be abused. For instance, individuals trying to remain hidden from former partners who threatened or engaged in domestic violence could easily be put at risk.

Yet despite these problems being both obvious and already seen in systems like FindFace, PimEyes dismisses concerns about these dangerous uses. The company has said, “Our privacy policy prevents people from using our tool for this case.” This response is not so much a plan to prevent or mitigate the dangers as it is a declaration that the company views itself as beyond accountability when harm does occur.

Misidentification During Background Checks and Vigilantism

One of the most significant problems with law enforcement use of face recognition is the danger of misidentification, and that will certainly carry over to consumer-focused systems, although with a range of different consequences.

Background checks conducted during important activities such as applications for jobs, home leases, or credit-related services might begin to include face recognition scans as standard practice. A quick glance at erroneous “matches” from such a service—for example, multiple recent news reports found that PimEyes returned faulty matches from pornographic websites—could have life-altering consequences.


Early uses of face recognition for background checks have already yielded harmful results. In the United Kingdom, Uber’s use of face recognition caused numerous misidentified drivers to lose their jobs. And over 20 states rely on the identity verification service ID.me to use face recognition to run fraud checks on recipients of unemployment benefits, but the system has erroneously labeled individuals as fraudsters, “leading to massive delays in life-sustaining funds.”

Finally, misidentifications will likely become a danger when consumer-grade face recognition is deployed by online sleuths seeking to identify criminals. As a tool that’s quick and easy to use but unreliable in its results, face recognition could be the perfect storm for web sleuthing disasters. This field, which is rife with vigilantism, already has problems that have led to dangerous misidentifications. One case shows how posting face recognition mismatches online can spiral out of control: After the 2019 Sri Lanka Easter bombings, authorities included a U.S. college student—based on a face recognition mismatch—on their public list of suspects, causing her to face a wave of death threats.

These risks are all amplified by the fact that face recognition has repeatedly been shown to misidentify women and people of color at higher rates, which means that individuals who already face the hurdles of systemic sexism and racism in institutions ranging from housing to medical care might soon have even more unjust and life-altering obstacles built into daily life.

Public De-anonymizing and Doxxing

Another serious risk consumer-grade face recognition poses is how it could amplify efforts to de-anonymize and dox people engaged in potentially sensitive public activities. (Doxxing is an internet-age tactic of publishing someone’s personal information with the goal of generating harassment.)

We’ve already seen this play out in the realm of sex work. FindFace has been used to de-anonymize and stalk sex workers and adult film actresses. In 2019, a Chinese programmer claimed to have developed a custom face recognition program to identify and publicly catalog 100,000 women on adult websites explicitly to enable men to track whether their girlfriends appeared in adult films. After public outcry, the programmer reportedly shut down the program.

Important public activities could also be targeted. Doxxing is already becoming a weapon deployed against protesters and public health workers, while tracking license plates has long been a de-anonymizing tactic used against abortion patients. If face recognition technology becomes widely available to the public, these doxxing tactics will become supercharged. With such an app on any phone, any sensitive activity—such as union organizing, attending an Alcoholics Anonymous meeting, or going to a political event—could leave an individual’s identity exposed.

What Actions Should We Take?

Consumer-grade face recognition is emerging because current law is not keeping our online biometric data safe from misuse. That needs to change.

Face recognition previously required access to a database of “reference images”—high-quality images with a known identity to run comparisons against—such as mugshot or driver’s license databases, which largely limited the technology to government use. That’s no longer the case: Web scraping of social media has changed it. Companies like Clearview AI and PimEyes are using the internet as a reference database, grabbing billions of photos from social media sites. Users have not consented to their photos being taken en masse like this, and in fact social media sites prohibit it: The scraping violates the terms of service governing how even publicly accessible photos can be used.
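
For a concrete sense of the rules being bypassed, the short sketch below checks a robots.txt policy, the machine-readable counterpart to a site’s written terms of service that tells automated crawlers what they may fetch. The site name and policy here are made up for illustration; a compliant crawler would skip the disallowed pages, while mass face-scraping operations simply proceed.

```python
# Illustration only: a made-up robots.txt policy for a hypothetical social site.
# Compliant crawlers consult rules like this before fetching; scrapers that
# harvest photos for face recognition ignore them (and the written terms of
# service that sit on top of them).
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: *
Disallow: /photos/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler would get False here and skip the user's photo.
print(parser.can_fetch("MyScraper", "https://social-site.example/photos/user123.jpg"))
```

The point is not that robots.txt is the legal issue; it is that sites already state, in both machine-readable and contractual form, how their users’ photos may be accessed, and scraping-based face recognition is built on disregarding those statements.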

Just because images are generally accessible does not mean they are “public information” that can be used in any malicious way a company wants. Users put photos on social media sites under terms of service that protect how those photos can be used. Breaking those terms to harvest images doesn’t just violate the rules of sites like Facebook and YouTube; it violates the consent of the users whose images are gathered.

However, the best path forward cannot be a total ban on scraping that breaks websites’ rules. There are situations in which web scraping is a vital public service: It can be valuable for research, academia, and journalism. Restricting those uses would be harmful, and it isn’t necessary to address this problem.


Face recognition powered by scraping online images stands out because it collects biometric data. Such information is highly sensitive by nature, making mass collection in violation of terms of use—and without the consent of those whose biometric information is being harvested—a unique risk.

There also might be some limited cases where face recognition tied to web scraping could be acceptable. For example, PimEyes claims that it scans adult websites so subscribers can find and respond to their own images being posted as revenge porn. This seems like a valuable service to provide, and it sidesteps what generally makes scraping biometric data so problematic: In this scenario, the user whose images are being searched for is actually consenting to their image being pulled from the web and run through a face recognition system.

Given that social media companies have been unable to do more than send cease-and-desist letters to companies like Clearview AI and PimEyes, it’s clear that lawmakers need to step in to prevent the serious harm that will unfold if these services become widely available. The recently introduced Fourth Amendment Is Not For Sale Act is a good start. The legislation would stop entities from selling to government agencies data that was obtained by violating terms of use. We should strongly consider expanding this type of provision to cover selling data—or at least biometric data—to any buyer, including private citizens.