Statement for the Record, Hearing on “Facial Recognition Technology: Examining Its Use by Law Enforcement”

July 13, 2021 • Testimony

Subcommittee on Crime, Terrorism, and Homeland Security
Committee on the Judiciary
United States House of Representatives

Chairwoman Lee, Ranking Member Biggs, and Members of the Subcommittee:

My name is Matthew Feeney. I am the director of the Cato Institute’s Project on Emerging Technologies. My research is focused primarily on how new and emerging technologies affect civil liberties. Facial recognition is one of the most concerning of these technologies, so I appreciate the opportunity to provide input in connection with this panel’s July 13 hearing, titled “Facial Recognition Technology: Examining Its Use by Law Enforcement.”

I believe that it is possible to craft regulations and legislation that would address the most worrying uses of facial recognition technology without hampering innovation. Below, I will highlight why I think such regulation and legislation are necessary before providing an overview of specific policy recommendations.

Risks and Benefits of Facial Recognition

Facial recognition technology confirms an individual's identity through automated analysis of images. Most means of identity verification, such as driver's licenses and passports, can be kept concealed, and a body of law governs when a law enforcement officer may legally demand such documents. Our faces are not so easily concealed, and facial recognition requires nothing more than a camera linked to the necessary software.
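
To make the mechanics concrete, the short Python sketch below shows the core of a typical face recognition pipeline: software reduces each face image to a numeric "embedding," and two images are treated as showing the same person when their embeddings are sufficiently similar. The embed function here is a hypothetical stand-in for a trained neural network, not a description of any particular vendor's system; only the overall structure is meant to be illustrative.

    import numpy as np

    def embed(face_image: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in for a trained face-embedding network.
        # Real systems map a face image to a short vector (e.g., 128
        # numbers) so that images of the same person land close together.
        rng = np.random.default_rng(int(face_image.sum()) % (2**32))
        vector = rng.standard_normal(128)
        return vector / np.linalg.norm(vector)

    def is_same_person(image_a: np.ndarray, image_b: np.ndarray,
                       threshold: float = 0.6) -> bool:
        # Cosine similarity of the unit-length embeddings; the threshold
        # trades false matches against missed matches.
        similarity = float(embed(image_a) @ embed(image_b))
        return similarity >= threshold

The point of the sketch is that identification requires nothing beyond a camera feed and this software; unlike a request for a driver's license, no document need ever be produced.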

Although facial recognition has been available in one form or another for decades, recent improvements in the technology and the plethora of private and public images of law-abiding citizens mean that, left unchecked, it poses an unprecedented risk to Americans' privacy.

Sadly, foreign countries have shown us what a society with widespread government use of facial recognition looks like. Chinese law enforcement officials use facial, behavioral, and ethnic recognition technologies to supplement an already extensive and oppressive surveillance state.1 Perhaps the most horrifying example of this surveillance is taking place in China's Xinjiang region, where the Chinese Communist Party is implementing a policy of cultural cleansing against the predominantly Muslim Uyghur population.2

In the United States we are fortunate to enjoy far stronger civil liberties protections than the Chinese. But the history of American surveillance reveals that the United States is not immune to mass surveillance and that many diverse communities have found themselves on the receiving end of such snooping.3 Absent appropriate restrictions, facial recognition could be used in America's next episode of mass surveillance.

Like many other surveillance tools, facial recognition has valuable private sector applications. It could one day make ticket lines at train stations, concerts, and cinemas a thing of the past. It could also be widely used for payments and to assist the visually impaired.4 It would be regrettable if the legitimate civil liberties concerns associated with facial recognition hampered these welcome innovations.

Federal Facial Recognition

Twenty federal agencies, which combined employ around 120,000 law enforcement officers, reported using facial recognition systems in a recent survey conducted by the Government Accountability Office (GAO).5 These systems can access billions of images.6 Ten of the surveyed agencies (including the Federal Bureau of Investigation, the Drug Enforcement Administration, and U.S. Customs and Border Protection) use Clearview AI, a facial recognition search engine that allows users to search billions of images scraped from popular social media platforms. According to reporting from BuzzFeed News, the GAO report may have undercounted the number of federal law enforcement agencies that have used Clearview AI's technology.7

The vast majority of the billions of images available to federal law enforcement officers using facial recognition systems are associated with law-abiding citizens and residents. Many of these images are scraped from non-government entities, but some come from public agencies such as state departments of motor vehicles, creating what Georgetown researchers have described as the "perpetual lineup."8

It is this “perpetual lineup” that ought to most concern lawmakers. Below are policies that would protect civil liberties while allowing law enforcement officials to use facial recognition technology.9

  1. Database restrictions: Law enforcement facial recognition databases should only include data related to those with outstanding warrants for violent or other serious crimes. These databases should undergo regular purges of irrelevant data.

Law enforcement should be able to add data about someone to the database only if they have probable cause to believe that person has committed a violent or other serious crime. Relatives or guardians of missing persons (kidnapped children, those with dementia, potential victims of accidents or terrorist attacks) should be able to contribute relevant data to these databases and to request its prompt removal.

Such a policy need not restrict law enforcement from using facial recognition systems trained with data related to those without outstanding warrants. The policy merely restricts the library of images law enforcement can search.

  2. Open source/data requirement: The source code for the facial recognition system, as well as the datasets used to build the system, should be available to anyone. This will help improve accountability and transparency and is of particular urgency given concerns associated with facial recognition technology and racial bias.10
  3. Public hearing requirement: Law enforcement agencies should not be permitted to use facial recognition technology without first having informed the public about the planned use, released details about how the technology works, and allowed ample time (e.g., six months) for public comment.
  4. Threshold requirement: Use of facial recognition should be delayed until law enforcement can demonstrate at least a 95 percent identity confidence threshold across a wide range of demographic groups (gender, race, age, etc.).11 A sketch of how such a threshold might operate alongside the database restriction appears after this list.
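
As a rough illustration of how the database restriction (recommendation 1) and the threshold requirement (recommendation 4) might interact in practice, the Python sketch below gates both which records may be searched and how confident a match must be before it is returned. The record fields, the 0.95 constant, and the scoring function are hypothetical simplifications for exposition, not a description of any deployed system.

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    CONFIDENCE_THRESHOLD = 0.95  # recommendation 4

    @dataclass
    class GalleryRecord:
        person_id: str
        embedding: List[float]
        outstanding_warrant: bool  # recommendation 1
        serious_offense: bool      # violent or other serious crime

    def searchable(record: GalleryRecord) -> bool:
        # Recommendation 1: only records tied to an outstanding warrant
        # for a violent or other serious crime may be searched; all other
        # records are excluded (and should be purged on a regular schedule).
        return record.outstanding_warrant and record.serious_offense

    def search(probe: List[float],
               gallery: List[GalleryRecord],
               score: Callable[[List[float], List[float]], float]) -> Optional[str]:
        # Recommendation 4: report a match only when the best candidate
        # clears the 95 percent confidence threshold; otherwise, no match.
        candidates = [r for r in gallery if searchable(r)]
        if not candidates:
            return None
        best = max(candidates, key=lambda r: score(probe, r.embedding))
        if score(probe, best.embedding) >= CONFIDENCE_THRESHOLD:
            return best.person_id
        return None

Note that the model behind the scoring function may still have been trained on broad data; the restriction applies only to the gallery of searchable images, mirroring the distinction drawn above between training data and the searchable library.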

Such policies would significantly restrict the number of images law enforcement collects for facial recognition searches, increase transparency and accountability, and protect those engaged in First Amendment-protected activities such as protests and religious gatherings. They would also allow private companies to develop facial recognition systems without having to worry that images of their customers will be included in law enforcement facial recognition searches.

Thank you for your attention to this important issue. I welcome the opportunity to discuss facial recognition further.

Notes

1 Paul Mozur, "Inside China's Dystopian Dreams: A.I., Shame and Lots of Cameras," The New York Times, July 8, 2018. https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html

Seungha Lee, "Coming into Focus: China's Facial Recognition Regulations," Center for Strategic & International Studies, May 4, 2020. https://www.csis.org/blogs/trustee-china-hand/coming-focus-chinas-facial-recognition-regulations

2 Jane Wakefield, "AI emotion-detection software tested on Uyghurs," BBC News, May 26, 2021. https://www.bbc.co.uk/news/technology-57101248

Human Rights Watch, "China's Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App," May 1, 2019. https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass

3 "American Big Brother," Cato Institute, accessed July 8, 2021. https://www.cato.org/american-big-brother

4 Richard Baimbridge, "Why your face could be set to replace your bank card," BBC News, January 24, 2021. https://www.bbc.co.uk/news/business-55748964

Lindsay Reynolds and Shaomei Wu, "Designing a face recognition application for people with visual impairments," Facebook Research, April 23, 2018. https://research.fb.com/blog/2018/04/designing-a-face-recognition-application-for-people-with-visual-impairments/

5 U.S. Government Accountability Office, Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks, GAO-21-518 (Washington, DC, 2021), 8, accessed July 11, 2021. https://www.gao.gov/assets/gao-21-518.pdf

6 Ibid.

7 Caroline Haskins and Ryan Mac, "A Government Watchdog May Have Missed Clearview AI Use By Five Federal Agencies In A New Report," BuzzFeed News, June 30, 2021. https://www.buzzfeednews.com/article/carolinehaskins1/gao-facial-recognition-report-clearview-federal-agencies

8 Clare Garvie, Alvaro Bedoya, and Jonathan Frankle, "The Perpetual Line-Up: Unregulated Police Face Recognition in America," Georgetown Law, October 18, 2016. http://www.perpetuallineup.org

9 These policy recommendations are adapted from policies discussed in Matthew Feeney, "Should Police Facial Recognition Be Banned?," Cato Institute At Liberty blog, May 13, 2019. https://www.cato.org/blog/should-police-facial-recognition-be-banned

10 Alex Najibi, "Racial Discrimination in Face Recognition Technology," Harvard University Science in the News (SITN) blog, October 24, 2020. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

11 The identity confidence threshold need not be 95 percent, but arguments in favor of a lower threshold should be met with scrutiny. Amazon recommends a 95 percent confidence threshold for law enforcement use of its facial recognition system. William Crumpler, "How Accurate are Facial Recognition Systems – and Why Does It Matter?," Center for Strategic & International Studies, April 14, 2020. https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-–-and-why-does-it-matter