
The Untold Number of People Implicated in Crimes They Didn’t Commit Because of Face Recognition

Clare Garvie,
Georgetown Law’s Center on Privacy and Technology
June 24, 2020

In January, Michigan resident Robert Williams was arrested for shoplifting from a watch store in downtown Detroit a year earlier—a crime he did not commit. Police connected him to the theft because a face recognition search found similarities between grainy surveillance footage of the crime and Mr. Williams’ driver’s license photo.
 
What makes this case unique is not that face recognition was used, or that it got it wrong. What makes it unique is that we actually know about it.
 
The sheer scope of police face recognition use in this country means that others have almost certainly been—and will continue to be—misidentified, if not arrested and charged for crimes they didn’t commit. At least one quarter of the 18,000 law enforcement agencies across the United States have access to a face recognition system. Over half of all American adults are—like Mr. Williams—in a driver’s license database searched using face recognition for criminal investigations (and in some states, for immigration enforcement too). States have spent millions of dollars on face recognition systems, some of which have been in place for years and are searched hundreds, if not thousands of times per month.
 
Florida, for example, implemented its police face recognition system in 2001. By 2016, and as much as $8 million later, local, state, and federal agencies were searching a database of 11 million mugshots and 22 million state driver’s license photos 8,000 times per month.
 
We have no idea how accurate these searches are, or how many lead to arrests and convictions. Even if misidentifications happened in only one out of every thousand searches, or 0.1 percent of the time, that would still amount to eight people implicated in crimes they didn’t commit every month—in Florida alone. But the Pinellas County Sheriff’s Office, which operates the system, does not conduct audits. Defendants are rarely, if ever, informed that face recognition was used in their cases.
 
And yet these searches have real consequences.
 
No one knows this better than Willie Allen Lynch, arrested in 2015 for selling $50 worth of crack cocaine to two undercover Jacksonville officers. Like Mr. Williams in Michigan, Mr. Lynch was implicated as a suspect by a face recognition match, which served as the main evidence supporting his arrest. Unlike Mr. Williams, however, Mr. Lynch was convicted. He is currently serving an eight-year sentence and maintains his innocence.
 
No one knows this better than Amara Majeed, who on April 25, 2019, woke up to the nightmare of having been falsely identified by a face recognition system as a suspect in a deadly terrorist attack in Sri Lanka. Sri Lankan authorities eventually corrected the mistake, but not before Ms. Majeed received death threats targeting her and her family back home.
 
And no one knows this better than Robert Williams, who was arrested in front of his young children and detained for 30 hours for a crime to which he had no connection other than a passing resemblance, according to a face recognition system, to a person caught on poor quality surveillance footage.
 
We cannot account for the untold number of other people who have taken a plea bargain even though they were innocent, or those incarcerated for crimes they did not commit because a face recognition system thought they looked like the suspect. But the numbers suggest that what happened to Mr. Williams is part of a much bigger picture.
 
Despite the risks, face recognition continues to be purchased and deployed around the country. Within the month, the Detroit Police Department is set to request $220,000 from the City Council to renew its $1 million face recognition contract. An analysis of thousands of pages of police documents obtained by the Center on Privacy & Technology through public records requests confirms $92 million spent by just 26 (of a possible 18,000) law enforcement agencies between 2001 and 2018. This is surely a serious undercount, as many agencies continue to shroud their purchase and use of face recognition in secrecy.
 
The risk of wrongful arrests and convictions alone should be enough to cast doubt on the value of acquiring and using these systems. Over the past few years advocates, academics, community organizers, and others have also amplified the myriad other risks police face recognition poses to privacy, free speech, and civil rights. What we haven’t seen is ample evidence that it should be used—that the millions of dollars spent, the risks of misidentification, and the threats to civil rights and liberties are justified somehow by the value of face recognition in maintaining public safety. This absence is particularly stark in light of growing calls to divest from over-militarized, unjust policing structures.
 
If Mr. Williams were the only person mistakenly arrested and charged because of a face recognition error, it would be one too many. But he’s not the only one. And unless we pass laws that permit this technology to be used only in ways consistent with our rights, or stop using the technology altogether, there will be others.
 
Clare Garvie is a senior associate with the Center on Privacy & Technology at Georgetown Law and co-author of The Perpetual Line-Up; America Under Watch; and Garbage In, Garbage Out, three reports about the use and misuse of face recognition technology by police in the United States.
