By Caleb Harrison

As the climate deteriorates and people around the world increasingly need to migrate, the United States (“US”) seeks to develop and implement migration control technologies like migrant databases and facial recognition technologies (“FRTs”) that threaten free movement.[1]

For example, the US has recently begun implementing its “Extreme Vetting Initiative” (“EVI”)—an effort not only to increase the breadth of data that it collects on migrants and potential migrants, but also to apply artificial intelligence and machine-learning algorithms to the data collected through the EVI.[2] Under the EVI, the US is creating a central repository of biometric and biographic information on migrants.[3] The information will include “social media handles, aliases, associated identifiable information, and search results” scraped or otherwise obtained from “publicly available information obtained from the internet . . . commercial data providers . . . [and] information obtained and disclosed pursuant to information sharing agreements [with foreign governments].”[4] The data will be used to evaluate “the applicant’s likelihood of becoming a positively contributing member of society” and ability to make contributions to the “national interest,” and “to assess whether or not the applicant has the intent to commit criminal or terrorist acts after entering the United States.”[5] Migrants will be screened recurrently, “so as to identify activities, associations with known or suspected threat actors, and other relevant indicators that inform adjudications and determinations related to national security, border security, homeland security, or public safety.”[6] Anybody who has an entry in the database is at risk of having their movement impeded on the basis of algorithmically inferred connections between data points and opaque or undefined notions of “positive contribution” or “terrorist threat.” With FRTs, identification risks becoming trivial.

At their most basic, FRTs comprise a set of technologies that permit the identification of an individual via their facial features.[7] FRTs work by comparing digital representations of facial features, drawn from real-time feeds as well as stored video or photos.[8] Given enough data and training, an algorithm can match faces and report a degree of certainty that the match is not a false positive.[9] Companies like Clearview AI have scraped billions of photos and videos from the internet—including from social media profiles—and purport to be able to identify any person in the world from a single photo or video.[10] If you have a face, the government can identify you.
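The matching process described above—reduce each face image to a numeric feature vector (an “embedding”), then compare vectors and report a confidence score—can be sketched in a few lines. Everything below is a hypothetical illustration of the general technique, not any vendor’s actual implementation; the embeddings, names, and threshold are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, gallery, threshold=0.8):
    """Compare a probe embedding against a gallery of known identities.

    Returns (identity, score) for the best match, or (None, score) when
    the best score falls below the confidence threshold.
    """
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score < threshold:
        return None, best_score
    return best_id, best_score

# Hypothetical two-person gallery of stored embeddings.
gallery = {"person_a": [1.0, 0.0], "person_b": [0.0, 1.0]}
identity, score = match_face([0.9, 0.1], gallery)
```

Note that the system never answers “yes” or “no”; it reports a score against a tunable threshold, which is why every deployment embeds a policy choice about how many false positives to tolerate.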

Of course, the preceding descriptions are theoretical, and there is reason to expect that neither the EVI nor FRTs will work as intended in practice. For instance, if the data entering the EVI database from commercial aggregators and public internet scrapings is not valid, then the database itself is likely to be of limited use.[11] As the saying goes: “garbage in, garbage out.” And if the target to be identified in the data is not defined or definable—e.g., “positively contributing member of society”—then no amount of data quality will suffice to generate a match for the target.[12] Such imperfections undermine the purpose of the database: they increase the likelihood that migrants without derogatory data will be misidentified as migrants with such data, and decrease the likelihood that the collected data represents the information it purports to represent.

FRTs are similarly imperfect. Popular systems like Amazon’s ‘Rekognition’ have erroneously matched photos at alarming rates.[13] Further, the technology is less accurate for women, people who are not white, people who are trans, and people who are elderly or children.[14] Anyone belonging to at least one of these groups is subject to an elevated risk of false identification.[15] And if inherently discriminatory data are associated with particular groups, then members of those groups are at increased risk of having their movement impeded.
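Why even seemingly small error rates are unacceptable at scale can be made concrete with a base-rate calculation. The numbers below are hypothetical, chosen only for illustration: when genuine targets are rare in a screened population, false matches swamp true ones.

```python
def expected_matches(population, true_targets, tpr, fpr):
    """Expected true and false matches when screening a population.

    population:   total people screened
    true_targets: number of genuine targets among them
    tpr:          true-positive rate (share of targets correctly flagged)
    fpr:          false-positive rate (share of non-targets wrongly flagged)
    """
    true_matches = true_targets * tpr
    false_matches = (population - true_targets) * fpr
    return true_matches, false_matches

# Hypothetical scenario: 1,000,000 travelers, 100 genuine targets,
# a 99% true-positive rate, and a 1% false-positive rate.
tp, fp = expected_matches(1_000_000, 100, tpr=0.99, fpr=0.01)
# Roughly 99 true matches against roughly 9,999 false matches: about
# 99% of the people the system flags are misidentified.
```

In other words, a system that is “99% accurate” in the lab can still be wrong about nearly everyone it flags at a border, simply because genuine targets are rare.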

The vision is dystopic. In theory, the government will be able to compare any person on earth against visual media in order to identify them. Once a person is identified, the government will be able to search its databases for derogatory information in order to prevent or permit their travel. In practice, the government will do so by means of technologies riddled with unacceptable error rates.

Currently, migrants have few legal protections. The denial decisions of U.S. immigration officials are not reviewable by courts so long as the reviewing agent provides a “facially legitimate and bona fide reason.”[16] Further, US officials can share information from migrant databases with third countries to inform those countries’ migration control decisions, with no guarantee that migrants can do anything to challenge the resulting determinations.[17] FRTs are generally opaque, relying on private algorithms whose workings FRT suppliers claim are protected as trade secrets.[18] Proactive legislation that bans or severely restricts the use of FRTs in migration control, and that prevents the collection and storage of migrant data absent good reason to suspect wrongdoing, is perhaps our only hope of slowing their use. As our need to move across borders increases with the deterioration of our environments, the tracking and surveilling of migrants will threaten free movement if we do nothing in advance to stop it.

[1] See, e.g., Jack Corrigan, DHS Funds Machine Learning Tool to Boost Other Countries’ Airport Security (Aug. 20, 2018) (“The Homeland Security Department is investing in machine learning technology that could help foreign countries increase airport security at zero cost.”); Aaron Boyd, CBP Expands Facial Recognition for Global Entry Travelers, Nextgov (Jan. 16, 2020).

[2] Exec. Order No. 13,769, 82 Fed. Reg. 8977 (2017).

[3] Exec. Office of the President, Memorandum from President Donald J. Trump, Presidential Memorandum on Optimizing the Use of Federal Government Information in Support of the National Vetting Enterprise, § 2(a) (Feb. 6, 2018).

[4] Notice of Modified Privacy Act System of Records, 82 Fed. Reg. 43,557 (Sept. 18, 2017).

[5] Exec. Order No. 13,769, 82 Fed. Reg. 8977, at § 4(a) (2017).

[6] Id. at § 2(a)–(b).

[7] Clare Garvie, Alvaro Bedoya & Jonathan Frankle, What is Facial Recognition Technology?, Perpetuallineup.Org (Oct. 18, 2016) (“Face recognition is the automated process of comparing two images of faces to determine whether they represent the same individual.”).

[8] Id.

[9] Id. at § V(D) (describing how FRTs represent degrees of certainty in matching faces).

[10] Kashmir Hill, The Secretive Company That Might End Privacy as We Know It (Jan. 18, 2020) (“[Clearview AI’s system] — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”).

[11] Lindsey Barrett, Reasonably Suspicious Algorithms: Predictive Policing at the United States Border, 41 N.Y.U. Rev. L. & Soc. Change 327, 339 (2017) (“Commercial data brokers—companies that aggregate data about consumers to sell for marketing and analytics purposes—operate with little accountability or oversight, and have been subject to considerable criticism for lack of transparency and low data quality standards.”) (citing Kevin Miller, Total Surveillance, Big Data, and Predictive Crime Technology: Privacy’s Perfect Storm, 19 J. Tech. L. & Pol’y 105, 120 (2014)).

[12] David A. Martin, Trump’s ‘Refugee Ban’ – Annotated by a Former Top Department of Homeland Security Lawyer (Jan. 30, 2017, 8:50 AM) (referring to these requirements as “remarkably vague criteria that will be very hard to turn into operational guidance”).

[13] Stephanie Beasley, Big Brother on the U.S. Border? (Oct. 9, 2019, 4:59 AM) (“the American Civil Liberties Union of Northern California released findings from its test of Amazon’s ’Rekognition’ facial recognition software, which it said falsely matched the images of 26 California state lawmakers to mugshots in a public database . . . [t]he ACLU did a similar experiment with Amazon’s software in 2018, comparing photo database of mugshots of people arrested for crimes with members of Congress and found the software incorrectly matched 28 lawmakers with mugshots.”).

[14] See, e.g., Alpa Parmar, Policing Migration and Racial Technologies, 59 Brit. J. Criminology 938, 940–41 (2019) (“artificial intelligence learns from examples it was trained on, so implicit or explicit human racial biases are reproduced through the application of the criminal justice technologies (e.g. facial recognition) and expand the over-surveillance and inaccurate identification of black bodies resulting in ‘algorithmic discrimination’”).

[15] Id.

[16] Kleindienst v. Mandel, 408 U.S. 753, 770, 92 S. Ct. 2576, 2585 (1972) (holding that courts will not review discretionary determinations when made “on the basis of a facially legitimate and bona fide reason”).

[17] Plan to Implement the Presidential Memorandum on Optimizing the Use of Federal Government Information in Support of the National Vetting Enterprise, at 10 (Aug. 5, 2018) (“The effective management of watchlist encounters not only protects the nation by excluding individuals from entry into the United States, but also alerts law enforcement, IC, military, and foreign partner agencies about potential opportunities to act against threat actors already present in the United States and around the world.”) (emphasis added).

[18] Meredith Whittaker et al., AI Now Report 2018, at 22 (“accountability in the government use of algorithmic systems is impossible when the systems making recommendations are ‘black boxes.’ When third-party vendors insist on trade secrecy to keep their systems opaque, it makes any path to redress or appeal extremely difficult.”).