Insert Face to Fly

by Ege Uz

Even as air travel ground to a halt during COVID-19, U.S. Customs and Border Protection (CBP) has been rolling out its new facial recognition program across airports in the U.S. The program, called the Traveler Verification Service (TVS), is a cloud-hosted facial recognition system used to identify travelers through photos taken of them in real time at the airport, most prominently during passport control and at security checks by boarding gates. Since its initial pilots in 2016 and 2017, the TVS has been put to use in 20 international airports across 14 states, and CBP has stated that it processed over 23 million travelers using facial recognition technology in 2020.1

I first encountered the TVS this February, as I was flying out of New York. It was a new step in the usual airport security procedures, between the airline check-in and the TSA checkpoint. Approaching a booth manned by a CBP officer, I was asked to hand over my passport and ticket, and then to take off my mask and look up at a camera affixed to the booth. I did so, and after a few seconds, was told I could proceed. With no written or verbal notice that I can remember, I didn’t find out until a month after the fact that the procedure had involved facial recognition.

Here’s what I later found out about how the TVS works. After the officer took my photo, it was sent to a server running the TVS, where it was compared, based on biometric data, to a pool of other photos. The pool is composed of biometric images of all of the passengers booked on my flight, including my own. These photos are retrieved from a variety of databases belonging to CBP and its parent agency, the Department of Homeland Security (DHS). If the photo taken matches my image from the pool, the system verifies that I am who I say I am.2
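CBP does not publish the internals of its matching pipeline, but the general technique it describes, comparing a live "probe" photo against a pre-staged gallery of a flight's passengers, can be sketched in a few lines. Everything below (the 128-dimensional embeddings, the similarity threshold, the traveler IDs) is an illustrative assumption, not CBP's actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, gallery, threshold=0.8):
    """1:N check: compare a live 'probe' embedding against a flight's
    pre-staged gallery; return the best match above the threshold,
    or None (a no-match, which an officer then resolves manually)."""
    best_id, best_score = None, threshold
    for traveler_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = traveler_id, score
    return best_id

# Toy gallery: one embedding per passenger booked on the flight.
rng = np.random.default_rng(0)
gallery = {pid: rng.normal(size=128) for pid in ("A12", "B34", "C56")}

# A live capture of passenger B34, with a little sensor noise added.
probe = gallery["B34"] + rng.normal(scale=0.05, size=128)
print(verify(probe, gallery))  # B34
```

In a real deployment the embeddings would come from a trained face recognition model rather than random vectors, but the core operation (pick the most similar gallery entry, or fall back to manual identification below a threshold) is the same.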

That may be the end of my interaction with the TVS, but what about the photo taken of me? What happens to photos sent to the TVS depends on the citizenship of the person they depict. U.S. citizens can object to being subjected to facial recognition and instead ask for manual identification (which still employs biometrics). If they don’t object, their images are retained on the TVS servers for 12 hours before being deleted. Non-citizens (immigrants, tourists, work and student visa holders) instead have their photos stored on the server for 14 days, where they are used for system audits and tech evaluations. The images are also shared with other biometric databases and systems across the DHS, as well as with other federal departments such as the Department of Justice. The most notable of these systems is the Automated Biometric Identification System (IDENT), the DHS’s central database of biometric information, within which the image is stored for a whopping 75 years. And unlike citizens, non-citizens cannot object to any of these procedures or demand to be processed in any other way.3

As an immigrant to the U.S., I’m part of the second group, whose images are collected and operationalized without any request for consent, and will populate DHS datasets for decades to come. I’ve traveled between the U.S. and my home country of Turkey so many times over the years that the DHS could construct a timeline of me going through puberty if it wanted to. And while the TVS is a significant milestone in the use of facial recognition technology and biometric data collection, these practices are not new: they were deployed and developed on immigrants and non-citizens before being rolled out to affect Americans. It’s important to keep that fact in mind, as it points to a larger pattern in how the U.S. develops its policy around domestic security, policing and surveillance: by co-opting practices and technology it applies to its “aliens”. Examining the TVS in depth, I will attempt to illustrate how its development parallels the extensive surveillance and data collection that the U.S. already imposes on non-citizens, with the potential to go far beyond.

Visions of Facial Recognition

Aside from privacy concerns, there are plenty of reasons to be critical of facial recognition technology and to oppose its increased use in any field. One reason is that, as studies indicate, it is often inaccurate in actually recognizing faces, especially the faces of people of color and women. The use of commercial facial recognition software in policing across U.S. cities has already led to the wrongful arrests of Black people. CBP touts a “match rate of more than 97 percent” across its applications of facial recognition technology (including the TVS),4 which, with over 23 million people scanned, still means an estimated 460,000 to 690,000 people being misidentified each year.
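That range follows from simple arithmetic; a quick check, under the assumption that "more than 97 percent" means a match rate somewhere between 97 and 98 percent:

```python
# Figures from the text: 23 million travelers processed in 2020, and a
# claimed match rate of "more than 97 percent" (here assumed 97-98%).
travelers = 23_000_000
low_estimate = round(travelers * 0.02)   # 2% error rate (98% match rate)
high_estimate = round(travelers * 0.03)  # 3% error rate (97% match rate)
print(low_estimate, high_estimate)  # 460000 690000
```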

Another reason, particular to airports and border security, is that using facial recognition to identify travelers, as opposed to doing so manually, does not make much of a difference when it comes to detecting identity fraud. A total of zero people were caught using false identification at the airport in 2020,5 with the grand total of such cases since the TVS’s official rollout in 2018 being seven.6 That gives airport facial recognition a track record of zero identity fraud cases caught, versus hundreds of thousands of people misidentified in 2020. It’s perhaps because of this that CBP officers are instructed to fall back to identifying people manually in cases where the TVS returns a no-match result,7 rendering the program, in its current iteration, little more than a redundancy on top of existing procedures.

Despite these prevalent issues with facial recognition, CBP and the DHS have placed it at the center of their visions of the future of airport security and border control. In its 2020 Trade and Travel Report, CBP states that it has increased its use of facial recognition even as travel decreased, and that it sees “biometric [facial recognition] technology as the way of the future” and a “vital element of national security and enforcing U.S. immigration laws.”8

Although the report doesn’t mention the TVS by name, other CBP and DHS documents reveal that it is central to this future vision. The overview for the 2017 Privacy Impact Assessment (PIA) for the TVS describes the larger ambitions of the program:

“By partnering with stakeholders on a voluntary basis and using biometric technologies, CBP is facilitating a large-scale transformation of air travel that will make air travel more: (1) secure, . . . (2) predictable, . . . and (3) able to build additional integrity to the immigration system.”9

According to ACLU senior policy analyst Jay Stanley, the TVS is the first step of a “very specific, clear and well-defined pathway” towards “a much broader implementation of face surveillance at the airport”.10 Formulations of this pathway can be found across public documents by CBP and the DHS. One document by the Transportation Security Administration (TSA), another DHS agency, lays out the broad vision for expanding the use of biometrics in airports: starting with international travelers, then expanding to TSA PreCheck members, and finally to all domestic travelers.11

Within the context of this vision, the TVS’s major achievement is that it marks the first time the technology has been deployed during international departures from the U.S.; prior facial recognition programs had been limited to international arrivals.12 As a result, CBP is able to collect biometric data on more U.S. citizens than it could before, while increasing the number of points at which it can collect data from non-citizens.

However, the scope of the expansion goes beyond who gets subjected to facial recognition. One worrying trend is the international adoption of CBP’s facial recognition technology, facilitated via international partnerships. Currently, these partnerships involve biometric data collected by partner countries being shared with the U.S. in order to create a record of the departure.13 However, as of January 2020, CBP is developing programs in collaboration with U.S.-based and international airlines, “in which the airline collects the photos of travelers en route to the United States at the airport of origin and securely transmits the facial images to CBP’s TVS for identity verification”.14 The list of over 60 airports found on CBP’s biometrics webpage includes airports in countries such as Canada, Ireland and the United Arab Emirates,15 where one may expect to see this program in use soon.

Another development in the works concerns the capabilities of the facial recognition technology being employed. In its 2021 Privacy Impact Assessment for the TVS, CBP states that it is “increasingly employing technologies that do not require subjects to present their face directly to the camera. . . to collect face images with minimal participation from the subject.”16 Acknowledging that this “poses increased privacy risks since the individual may be unaware that their photo is being captured”, CBP currently addresses the risk using “both physical and either LED message boards or electronic signs, as well as verbal announcements”.17 Stripped of the euphemistic language, this evokes a future where any camera present at the airport, from any angle or distance, may be sending your image to the TVS to identify you and/or populate biometric databases.

Putting all of this together, this vision of airport security and facial recognition seems quite bleak. As Jay Stanley puts it:

“It hardly takes a paranoid flight of fancy to foresee this program morphing into something far more comprehensive and dystopian— a world where face recognition is used throughout our public spaces to scrutinize our identity, record our movements, and create a world where everyone is constantly watched.”18

The paranoid flight of fancy all of this takes me on leads towards an airport characterized by a state of total surveillance, where information is extracted from people constantly and passively, simply as a precondition of being at the airport, without them necessarily knowing about it or consenting to it. Reflecting on my own experience as a traveler and an immigrant to the U.S. from Turkey, it doesn’t feel too far off; I’ve often felt that I had to provide information about myself and open myself to scrutiny without much of a choice.

Total Surveillance

Across its documents and communications, CBP states that it is mandated by the government to deploy and develop the TVS (and facial recognition more broadly), citing a series of laws that begins with the 1996 Illegal Immigration Reform and Immigrant Responsibility Act, which “authorized the U.S. Government to use an automated system to record arrivals and departures of non-U.S. citizens at all . . . ports of entry”.19 Though that is where CBP begins its history, the language shifts significantly in the laws that followed 9/11, as is often the case with policies relating to surveillance. After 9/11, the authorization was transformed into one “for the creation of a nationwide biometric entry-exit system”,20 with two significant differences. First, the scope was expanded and abstracted from non-U.S. citizens to nationwide. Second, what was to be recorded changed from arrivals and departures to biometric data, relating directly to people’s faces, fingerprints and identities. “The 9/11 Commission determined that implementing such a system [the TVS] is ‘an essential investment in our national security’”,21 one CBP article from 2019 concludes, positioning the program as a product of the national paranoia that 9/11 caused, not a vision that seeks to go beyond it.

Another system shaped by 9/11 is IDENT, the biometric database that the TVS both pulls image data from to make comparisons (by jumping through a few bureaucratic hoops) and saves facial images into for 75 years. When it was established in 1994, IDENT was a database of fingerprints. After the formation of the DHS, it was gradually expanded to store further biometric and biographical data, and was deployed at international border checkpoints starting in 2004. As a result, it has today come to hold the biometric images and fingerprints (alongside a slew of personally identifying information) of 200 million people, and it shares this data with all DHS agencies, including CBP, TSA and ICE, which share data back into it during their operations.22

The first time I flew to the U.S. was in 2012, so I have been recorded into IDENT on all of my trips, providing a new biometric photograph and new fingerprint records each time. Each individual data point, as well as their totality, is not only used to track and identify me, but also for comparisons done to track and identify others, or to train and improve the facial recognition model. Alongside the biometric information, I provide my biographical information, such as my age, nationality, gender and race, as well as information about the encounter, each possibly a correlational feature used in one algorithm or another, employed by god knows which DHS agency. Of course, the track record began the moment I acted on my intent to travel to the U.S.: when I applied for a visa. When I applied for my green card in 2016 to begin my immigration process, the amount of information I submitted into the federal data sprawl grew to include my address history, my marital history, my employment history, and information on any spouses or children,23 with the same being asked of my petitioner (my mom).

Consider, setting aside whether any of this data should be collected at all, the magnitude of data collected on an individual through these processes. How granularly it describes someone. How many correlations can be plucked out of the various data points. Given the extent to which IDENT data is presently used and shared across federal agencies, and the extent to which facial recognition and other biometrics-based, machine-learning-driven technologies have become prevalent, the people who provide this biometric data have become highly valuable sources of information. Within this surveillance-capitalist framework, it only makes sense to expand and build upon the data collection policies and practices employed on non-citizens: so much of a framework is already in place, legitimizing an immense collection of data at numerous points during travel.

Under the current iteration of the TVS, U.S. citizens who consent to its use are nevertheless exempt from having their image recorded in IDENT or shared with any other DHS system or database. In contrast, for travelers coming into the U.S. from abroad, there is no escaping IDENT, with or without facial recognition. The TVS may therefore seem lenient compared to its predecessors, which operated solely on international arrivals. However, when historicized from the perspective of the non-citizens subjected to facial recognition programs, the TVS marks the point where participation was made mandatory. Preceding systems that employed facial recognition (or once used other means of biometric identification and now use facial recognition) were either opt-in programs or voluntary, opt-out ones. While the TVS is mandatory for non-citizens, its implementation also mirrors the tactics those nascent programs employed in order to develop and spread the technology.

If you still have doubts about whether the TVS will expand and calcify into something more invasive and draconian, I urge you to consider that there are three times as many domestic travelers in the U.S. as international travelers,24 and to ask yourself whether any government would stop itself from tapping into that fertile data capital.


Ege Uz is a second year MFADT student. He likes the Internet, and dislikes airports.

  1. U.S. Customs and Border Protection. “CBP Trade and Travel Report Fiscal Year 2020.” February 2021. https://www.cbp.gov/sites/default/files/assets/documents/2021-Feb/CBP-FY2020-Trade-and-Travel-Report.pdf 

  2. U.S. Department of Homeland Security. “Privacy Impact Assessment for the Traveler Verification Service.” November 14, 2018. dhs.gov/sites/default/files/publications/privacy-pia-cbp056-tvs-february2021.pdf 

  3. Ibid. 

  4. CBP. “CBP Trade and Travel Report.” 

  5. Riley, Duncan. “CBP facial recognition technology fails to find anyone using false identities at airports.” Silicon Angle. https://siliconangle.com/2021/02/14/cbp-facial-recognition-technology-fails-find-anyone-using-false-identities-airports/ 

  6. CBP. “CBP Trade and Travel Report.” 

  7. DHS. “Privacy Impact Assessment for the Traveler Verification Service.” November 14, 2018. 

  8. CBP. “CBP Trade and Travel Report.” 

  9. U.S. Department of Homeland Security. “Privacy Impact Assessment for the Traveler Verification Service.” May 15, 2017. https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf 

  10. Stanley, Jay. “U.S. Customs and Border Protection’s Airport Face Recognition Program.” ACLU. https://www.aclu.org/other/aclu-white-paper-cbps-airport-face-recognition-program 

  11. Transportation Security Administration. “TSA Biometrics Roadmap For Aviation Security & The Passenger Experience.” https://www.tsa.gov/sites/default/files/tsa_biometrics_roadmap.pdf 

  12. DHS. “Privacy Impact Assessment for the Traveler Verification Service.” November 14, 2018. p. 2. 

  13. ibid., pp. 7-8. 

  14. ibid., p. 48. 

  15. “Air / CBP Biometrics.” U.S. Customs and Border Protection. https://biometrics.cbp.gov/air 

  16. DHS. “Privacy Impact Assessment for the Traveler Verification Service.” 2021. p. 11. 

  17. Ibid., p. 18. 

  18. Stanley. “U.S. Customs and Border Protection’s Airport Face Recognition Program.” 

  19. DHS. “Privacy Impact Assessment for the Traveler Verification Service.” May 15, 2017. p. 2 

  20. ibid., p. 2 

  21. “CBP and Privacy Groups Discuss Biometric Entry-Exit Mandate.” U.S. Customs and Border Protection. December 4, 2019. https://www.cbp.gov/newsroom/national-media-release/cbp-and-privacy-groups-discuss-biometric-entry-exit-mandate 

  22. U.S. Department of Homeland Security. “Privacy Impact Assessment for the Automated Biometric Identification System.” July 31, 2006. https://dhs.gov/sites/default/files/publications/privacy_pia_usvisit_ident_final 

  23. Department of Homeland Security. “USCIS Form I-130: Petition for Alien Relative.” 

  24. “2018 Traffic Data for U.S. Airlines and Foreign Airlines U.S. Flights.” Bureau of Transportation Statistics. April 30, 2020. https://www.bts.dot.gov/newsroom/2018-traffic-data-us-airlines-and-foreign-airlines-us-flights 

1
Dark Connections

The internet is made up of interconnected pieces of data about its users. Every website has trackers installed in it, mostly belonging to Google or Facebook, that keep tabs on the people using it. This data is neither protected nor encrypted, and is often fully accessible to anyone with the means to reach it. Though these companies store our data and use it to sell their products to us, they are in no way held responsible for it. This entire system is almost always implicit, shrouded in the background of its utility. This section aims to connect the dots that exist in the dark underbelly of the internet, dots we have a vague idea about but that are not necessarily clear.
Making these connections can make the online experience feel scary and unsafe, but it already is. Although governments and large corporations are often seen as the problem, the truth is that they are far less interested in you or me than someone who knows us personally and has an agenda that involves us. This section shines a light on the dark patterns that enable your data to be collected and potentially mobilized against your interests.

2
Digital Forensics

In order to combat the practice of dark data, one can exploit the loopholes in its architecture. But to do this, we need to at least comprehend the full extent of the information that is collected about us. It is now possible to demand the data that is collected about us, though this option is not immediately obvious to most people. Resources like APIs, Google Takeout, and OSINT tools allow us to conduct small-scale investigations into where our data lives and what data exists about us. This section is a collection of attempts by the authors to gain access to and interpret their own data that exists online.
However, awareness of the data does not guarantee control over it. Google may give us a copy of the data that exists about us on its servers through its Google Takeout service, but this does not mean that we now own this data. Google can still use it however it likes; it has not been deleted from its databases. We are being given only an illusion of control, and this is intentional. Digital forensics can only grant us a window into this massive machine, whose machinations may well remain unclear. This section explores these windows and what they teach us, both about ourselves and about the technology we use.

3
Data Futures

What is the future of dark data? People are increasingly aware that information about them is collected online. Governments are making efforts to regulate Big Tech and protect the privacy of citizens. How can we imagine better ways to exist within the system? How can we protect ourselves from its repercussions? This section speculates on how dark data is changing as a practice. It discusses ways in which people can take action and re-examine their browsing habits. The ideas discussed here consider how technology can be used to propose solutions to the problems it has created.
It is important to remember that the practice of data collection and exploitation is ongoing. There is no easy way out of these cycles. However, we would like to believe that sparking deliberate thought and action to help you orient yourself in this Wild West landscape can make the process of coming to terms with dark data easier.

4
About

This digital edition was compiled from scholarship, research, and creative practice in spring 2021 to fulfill the requirements for PSAM 5752 Dark Data, a course at Parsons School of Design.

Editors

  • Sarah Nichols
  • Apurv Rayate

Art Directors

  • Nishra Ranpura
  • Pavithra Chandrasekhar

Technology Directors

  • Ege Uz
  • Olivier Brückner

Faculty

  • David Carroll
  • Melanie Crean

Contributors

This site needs no privacy policy because we did not install any tracking code and this site does not store any cookies.