Facial Recognition Meets the Fourth Amendment Test

There may be no way to address the overarching fears all new surveillance technologies raise; however, major concerns can be addressed without slowing the use of facial recognition.

After the 9/11 terrorist attacks, law enforcement officials were able to secure grainy images of the hijackers as they navigated the airports on the morning of their assault. The authorities frantically tried to establish their identities in order to determine who had helped them carry out the attacks and whether they had associates on the lam, planning other assaults. However, it took weeks to identify them. As of September 28, 2001, the FBI was still working to confirm their identities. Unable to close the matter on its own, the FBI released nineteen photographs, along with possible names and numerous aliases, and sought the public’s help in fully identifying the terrorists. Today, using facial recognition (FR), law enforcement could have identified them within three minutes.

This stunning figure is based on a study led by Daniel Steeves, chief information officer for the Ottawa Police Service, in which the service conducted a six-month test of FR in its robbery-investigation unit. The study found that the tool “lowered the average time required for an officer to identify a subject from an image from 30 days to three minutes.”

On June 28, 2018, a gunman opened fire at the office of the Capital Gazette newspaper in Annapolis, Maryland. The Anne Arundel County police department captured the suspect, but he refused to cooperate, and identifying him by his fingerprints was taking a considerable amount of time. Instead, the police used FR to search a state database of driver’s license and mugshot photos and established his identity on the spot.

In New York City, FR has been employed for a wide range of public goods. These include the arrest of a suspected rapist, the arrest of a person who pushed another onto the subway tracks, the identification of a hospitalized woman suffering from Alzheimer’s, and the identification of a child sex trafficker sought by the FBI. Over the course of 2018, NYC detectives requested 7,024 FR searches, resulting in 1,851 possible matches and 998 arrests.

The introduction of FR encountered waves of strong criticism. Woodrow Hartzog, a professor of law and computer science at Northeastern University, and Evan Selinger, a professor of philosophy at Rochester Institute of Technology, stated that they “believe facial recognition technology is the most uniquely dangerous surveillance mechanism ever invented.” They argue that “[s]urveillance conducted with facial recognition systems is intrinsically oppressive. The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.”

Clare Garvie, an associate with Georgetown Law’s Center on Privacy and Technology, sounds the alarm on the use of FR by law enforcement:

And what happens if a system like this gets it wrong? A mistake by a video-based surveillance system may mean an innocent person is followed, investigated, and maybe even arrested and charged for a crime he or she didn’t commit. A mistake by a face-scanning surveillance system on a body camera could be lethal. An officer, alerted to a potential threat to public safety or to himself, must, in an instant, decide whether to draw his weapon. A false alert places an innocent person in those crosshairs.

In a blog post entitled “Facial recognition: It’s time for action,” Brad Smith, the president of Microsoft, wrote that “there is one potential use for facial recognition that could put our fundamental freedoms at risk.” He elaborated: 

When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale. 

Smith raises the specter of the society George Orwell envisioned in 1984, and he argues for legislation to prevent such a dystopian future from coming to pass. 

In response, Congress is considering curbing the use of FR. Representative Elijah Cummings (D-MD), the chairman of the House Oversight and Reform Committee, believes that either a complete ban or a temporary moratorium is necessary. Representatives Yvette Clarke (D-NY), Ayanna Pressley (D-MA), and Rashida Tlaib (D-MI) are introducing a bill that would block the installation of FR in housing supported by the US Department of Housing and Urban Development (HUD). On May 14, 2019, San Francisco, California, became the first locality to pass a law preventing its local government agencies from using FR. Since then, Somerville, Massachusetts, and Oakland, California, have followed San Francisco’s lead.

There may be no way to address the overarching fears all new surveillance technologies raise; however, major concerns can be addressed without slowing the use of FR. One way to proceed is to ensure transparency in the ways law enforcement officials, prosecutors, and tech companies employ FR. Smith calls for these users of FR to “provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand.” He also recommends that FR producers be obligated to make their services available to third parties for the purpose of assessing accuracy and checking for any biases.

It is particularly important that lawmakers ensure that law enforcement authorities use FR only to identify suspects, never as sufficient cause, by itself, for an arrest or conviction. One way to implement this rule is to follow the structure of the New York Police Department, where the Facial Identification Section is a separate unit of the Detective Bureau. Its investigators examine the matches found by the FR software and, if they identify a strong match, search social media and other publicly available databases to gather more information before passing their findings along to other units, which may then consider searching or questioning a suspect. Furthermore, as Commissioner James O’Neill put it, “the facial identification team will provide only a single such lead to the case detective.” Another important limit on the use of FR is to prohibit ongoing surveillance of specific people unless there is a court order or a bona fide emergency.

A significant problem that must be addressed is that some FR software performs poorly at identifying people with darker skin, leading to a high number of false positives and to charges of racial discrimination. The disparity largely tracks the data on which the algorithms are trained; FR algorithms developed in East Asian countries, for instance, are better at recognizing East Asian faces than Caucasian faces. Clearly this weakness of FR must be addressed; however, there is no reason to hold that the technology cannot learn to draw accurate distinctions among people of all complexions.
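
To make the concern concrete: the disparity critics point to is typically measured as a gap in false match rates across demographic groups, which is also what the third-party accuracy audits Smith recommends would examine. The sketch below, in Python, is a minimal and purely illustrative example of such a comparison; the group labels, similarity scores, and match threshold are hypothetical placeholders, not output from any actual FR product.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    group: str         # demographic group of the probe subject (hypothetical label)
    same_person: bool  # ground truth: do the probe and gallery images show the same person?
    score: float       # similarity score returned by the FR system, on a 0.0 to 1.0 scale

def false_match_rate(trials: list[Trial], group: str, threshold: float) -> float:
    """Share of different-person pairs in a group that the system wrongly
    scores at or above the match threshold, i.e., false positives."""
    impostors = [t for t in trials if t.group == group and not t.same_person]
    if not impostors:
        return 0.0
    return sum(t.score >= threshold for t in impostors) / len(impostors)

# Hypothetical audit data; a real audit would use thousands of labeled image pairs per group.
trials = [
    Trial("group A", False, 0.41), Trial("group A", False, 0.62), Trial("group A", True, 0.93),
    Trial("group B", False, 0.77), Trial("group B", False, 0.82), Trial("group B", True, 0.95),
]

THRESHOLD = 0.75  # the operating point a deploying agency might choose
for g in ("group A", "group B"):
    print(f"{g}: false match rate = {false_match_rate(trials, g, THRESHOLD):.2f}")
```

A sharp gap between groups at the system’s operating threshold, as in this toy data, is precisely the kind of finding such an audit is meant to surface before the software is used in the field.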

In the meantime, strict enforcement of the rule that FR may not serve as the sole piece of evidence for a warrant, arrest, or conviction should help protect people in cases of machine misidentification.

The claim that people have an expectation of privacy when they show their faces in public, and hence that it is unconstitutional to use FR, fails the reasonable person test. Surely a random sample of Americans would agree that a police officer with a picture of a criminal should not be prohibited from scrutinizing people on the street to see whether they are that criminal. FR is no more invasive. Further, the use of driver’s license photos and of pictures posted on those parts of Facebook that are meant to be seen by others (as distinct from family albums and “closed” sites) is consistent with the third-party doctrine, which holds that once a person voluntarily releases information to a third party, that party is free to share the information with law enforcement authorities.

Eyewitness identifications are notoriously unreliable. “Mistaken eyewitness identifications contributed to approximately 71% of the more than 360 wrongful convictions in the United States overturned by post-conviction DNA evidence,” according to the Innocence Project. However, police and prosecutors continue to rely on eyewitness identifications in their investigations and in court, and juries often find eyewitness testimony to be compelling. FR, even in its current state, is much more reliable.

The argument that FR is used in China to further entrench its police state is not an argument against employing FR in the United States. China will continue to benefit from FR whether or not the United States draws on it; indeed, the Chinese are deploying FR technologies of their own, so banning the technology in the United States would hardly set them back.

Importantly, if the uses of FR are properly supervised, as described above, they fully meet the requirements of the Fourth Amendment. The Fourth Amendment protects people from “unreasonable searches and seizures.” That is, it recognizes on its face that there are searches that are reasonable and, thus, fully constitutional. The courts have repeatedly established that, when the public interest is high and the intrusion into people’s personal lives is small, a search is reasonable. In Camara v. Municipal Court, the Supreme Court held that routine government inspections of homes to ensure compliance with the housing code were permissible, despite the fact that such searches did not involve any sort of particularized suspicion. The Court upheld the constitutionality of sobriety checkpoints where every vehicle is stopped in Michigan Department of State Police v. Sitz. In Illinois v. Lidster, the Court decided that “the gravity of the public concerns served by the seizure” and “the degree to which the seizure advances the public interest” outweighed “the severity of the interference with individual liberty” with regard to a traffic stop set up for the purpose of investigating a hit-and-run. In United States v. Hartwell, the Third Circuit Court of Appeals held that Transportation Security Administration screenings are permissible, despite lacking individualized suspicion and being conducted without a warrant, because they further a key state interest while being minimally invasive. Millions of Americans are subjected to these screenings each day, and the screenings are much more invasive than FR.