

ICE Backs Down on “Extreme Vetting” Automated Social Media Scanning

(Photo: Flickr / Josh Denmark, DHS)

In recent months, civil rights and civil liberties groups have been vigorously calling for Immigration and Customs Enforcement (ICE) to abandon a dangerous and expensive plan for electronic “vetting” of U.S. visa applicants and recipients. At The Constitution Project, we have expressed concern that the program would only serve to flood analysts with unreliable information and chill online speech, a constitutional right of any person here in the United States. Luckily, last week ICE acted in response to this criticism, cutting a key component of the problem-laden proposal: the use of automated social media scanning to flag “threats.” This is a significant victory not just for civil liberties, but also for guarding against wasteful government programs that carry high costs with little to no return. However, it’s important to keep a watchful eye on ICE, as the agency appears intent on maintaining an equally concerning and ineffective component of its Extreme Vetting plan.

Last year DHS put forward plans for a mass Extreme Vetting program focused on social media monitoring of all visa applicants and visa holders. But three critical features of the Extreme Vetting system demonstrated that it would be a $100 million White Noise Machine, which at best would flood analysts with useless information, and at worst would be so vague and malleable that it could facilitate abuse and improper targeting of vulnerable groups.

First, the system was focused on automated social media scanning, with artificial intelligence (AI) systems attempting to “exploit” social media sites like Facebook and Twitter and to flag threats. But as over 50 leading technologists and computer science experts highlighted in a letter to DHS, AI systems are incredibly bad at interpreting context and meaning on these platforms; even slang and language translations cause huge problems that would render this method highly ineffective. Second, the program included a quota system that required the contracted vendor to flag a minimum of 10,000 threats. Such a quota creates perverse incentives to lower the standards for flagging and makes it likely that innocuous posts would be tagged as “threats,” wasting resources and harming innocent individuals. Finally, the standard for assessing whether individuals posed a threat was whether or not they were “positively contributing member[s] of society” who “contribute to the national interest,” criteria far too vague to support any meaningful system of vetting, and certain to exacerbate the problems with AI scanning systems.

Fortunately, after facing a growing chorus of opposition from advocacy groups and members of both the House and Senate, ICE has reversed course and now publicly states that the program has “shifted from a technology-based contract to a labor contract.” This reversal is a significant victory that will aid the government by removing an ineffective system and will protect individuals from improper monitoring. In many areas of society we’re grappling with the fact that while algorithms make it easy to surf the web and buy household products, they are not silver bullets that can solve every serious problem. In fact, in many areas relying on automated systems can be counterproductive. Trying to interpret why individuals are retweeting something, or whether their use of a hashtag is sarcastic, is certainly one of those areas.

But while removing automated scanning from Extreme Vetting is welcome news, ICE is still moving ahead with another problematic piece of the program: quotas. According to The Washington Post report that broke the news of the ICE policy shift, the agency “will probably call for roughly 180 people to monitor the social-media posts of those 10,000 foreign visitors whom ICE flagged as high-risk.” One of the article’s authors described the basis for this number as follows: “Why 10K? Just the goal they set.” ICE may be attempting to avoid the label (the article’s author also noted that “They [ICE] say the human monitors won't have the quota”), but a pre-established numerical minimum is the very definition of a quota system, and in this situation it is very bad policy. Law enforcement should first set objective standards for when threats warrant further action and then respond accordingly; starting with a number of flagged individuals and working backwards to define “threats” based on that number will misuse resources and subject innocent individuals to invasive government monitoring. By sticking to quotas, ICE signals that its priority is simply running up the score, even if that means diverting attention away from more serious threats in order to devote resources to a larger group that exists solely because of a pre-set number.

ICE has not commented on whether it will persist in assessing threats based on whether a monitored individual is a “positively contributing member of society,” a standard that would present numerous risks if combined with arbitrary quotas. First, the language is so nebulous that it could easily be stretched to fill ICE’s pre-established goal of 10,000 “risky” individuals, even though failing to “contribute to society” is far afield from posing a genuine security threat. Second, the standard’s malleability creates the risk that it could be contorted to target specific groups. As we’ve previously noted, this concern is well-founded given that the language is taken verbatim from President Trump’s original Travel Ban, which lower courts struck down for religious animus.

ICE took a step in the right direction by taking automated social media scanning off the table, but significant problems with the planned Extreme Vetting system remain. ICE should continue to improve its vetting plans by scrapping its quota system and setting an objective risk standard to trigger monitoring, regardless of whether that standard flags 100 or 10,000 individuals. Its criteria should bear no resemblance to the “positively contributing member[s] of society” standard and should instead rely on specific indicators that clearly demonstrate security threats. And while ICE will hopefully continue to address the problems with its Extreme Vetting proposal, Congress should not stand by and wait. Automated social media scanning should not be up for discussion as an expenditure for ICE (in this case, a massive $100 million contract) or for any other component of DHS until the technology is far more reliable, and quotas should never form the foundation of enforcement efforts. ICE’s Extreme Vetting plan is not as extremely bad as it used to be, but there are still many issues left to resolve.