How Facial Recognition Technology Is Helping Identify the U.S. Capitol Attackers
The FBI is still trying to identify some of the hundreds of people who launched a deadly attack on the U.S. Congress last week. "We have deployed our full investigative resources and are working closely with our federal, state, and local partners to aggressively pursue those involved in criminal activity during the events of January 6," reads a page that contains images of dozens of unknown individuals, including one suspected of planting several bombs around Washington, D.C.
But while the public is being urged to put names to faces, America's law enforcement agencies already have access to technologies that could do much of the heavy lifting. "We have over three billion photos that we indexed from the public internet, like Google for faces," Hoan Ton-That, CEO of facial recognition start-up Clearview AI, told Spectrum.
Ton-That said that Clearview's customers, including the FBI, were using it to help identify the perpetrators: "Use our system, and in about a second it might point to someone's Instagram page."
Clearview has attracted criticism because it relies on images scraped from social media sites without the permission of those sites or their users.
Photographs of some of the people who attacked the United States Capitol Building on January 6, 2021, in Washington, D.C. Photos: FBI

"The Capitol images are very good quality for automatic face recognition," agreed a senior face recognition expert at one of America's largest law enforcement agencies, who asked not to be named because they were talking to Spectrum without the permission of their superiors.
Face recognition technology is commonplace in 2021. But the smartphone that recognizes your face in lieu of a passcode is solving a much simpler problem than trying to ID a masked (or, in the Capitol attacks, often surprisingly unmasked) intruder from a snatched webcam frame. These are two very different tasks.
The first is comparing a live, high-resolution image to a single, detailed record stored in the phone. "Modern algorithms can basically see past issues such as how your head is oriented and variations in illumination or expression," says Arun Vemury, director of the Department of Homeland Security (DHS) Science and Technology Directorate Biometric and Identity Technology Center. In a recent DHS test of such screening systems at airports, the best algorithm identified the correct person at least 96 percent of the time.
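To make that one-to-one check concrete, here is a minimal Python sketch; the embedding vectors, similarity measure, and threshold are illustrative assumptions, not any vendor's actual pipeline. Modern systems reduce a face image to a numerical feature vector, so verification boils down to a similarity test against the single enrolled template:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # How closely two face-embedding vectors point in the same direction.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
        # One-to-one verification: the phone only has to decide whether the
        # live face matches the single template enrolled on the device.
        return cosine_similarity(probe, enrolled) >= threshold

    # Toy vectors standing in for the output of a deep face-recognition network.
    enrolled = np.array([0.90, 0.10, 0.30])
    live = np.array([0.85, 0.15, 0.28])
    print(verify(live, enrolled))  # True: the two vectors clear the threshold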
The second scenario, however, is attempting to match a fleeting, unposed image against a database of hundreds of millions of people in the country or around the world. "Most law enforcement agencies can only search against mugshots of people who have been arrested in their jurisdictions, not even DMV records," says the law enforcement expert.
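Identification, by contrast, is a search. Extending the sketch above (under the same illustrative assumptions), every extra record in the gallery is one more comparison, and one more chance for a stranger's face to score above the threshold:

    def identify(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
        # One-to-many identification: score the probe against every gallery
        # record and return the best match only if it clears the threshold.
        scores = (gallery @ probe) / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))
        best = int(np.argmax(scores))
        return best if scores[best] >= threshold else None

    # A gallery here is just a matrix of embeddings, one row per person;
    # a national-scale database would hold hundreds of millions of rows.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(100_000, 128))
    print(identify(rng.normal(size=128), gallery))  # most likely None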
And as the size of the database grows, so does the likelihood of the system generating incorrect identifications. "Very low false positive rates are still fairly elusive," says Vemury, "because there are lots of people out there who might look like you, from siblings and children to complete strangers. Honestly, faces are not all that different from one another."
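Back-of-the-envelope arithmetic shows why. Suppose, purely hypothetically, that each individual comparison falsely matches a stranger with probability f; then the chance of at least one false match somewhere in a gallery of n strangers is 1 - (1 - f)^n, which climbs toward certainty as the gallery grows:

    # Probability of at least one false match in a one-to-many search,
    # assuming independent comparisons and a hypothetical per-comparison
    # false match rate of one in a million.
    f = 1e-6
    for n in (10_000, 1_000_000, 100_000_000):
        print(f"gallery of {n:>11,}: P(false match) = {1 - (1 - f) ** n:.3f}")
    # Roughly 0.010, 0.632, and 1.000: searching a small local mugshot file
    # is a very different proposition from a nationwide sweep.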
Nevertheless, advances in machine learning techniques and algorithms mean that facial recognition technologies are improving. After the COVID-19 pandemic hit last year, the National Institute of Standards and Technology tested industry-leading algorithms [PDF] with images of people wearing face masks. While some of the algorithms saw error rates soar, others suffered only a modest drop in accuracy compared with their performance on unmasked faces. Incredibly, the best algorithm's performance on masked faces was comparable to the state of the art on unmasked images from just three years earlier.
In fact, claims Vemury, AI-powered facial recognition systems are now better at matching unfamiliar faces than even the best-trained human. "There's almost always a human adjudicating the result or figuring out whether or not to follow up," he says. "But if a human is more likely to make an error than the algorithm, are we really thinking about this process correctly? It's almost like asking a third grader to check a high school student's calculus homework."
Yet such technological optimism worries Elizabeth Rowe, a law professor at the University of Florida Levin College of Law. "Just because we have access to all of this information doesn't mean that we should necessarily use it," she says. "Part of the problem is that there's no reporting accountability of who's using what and why, especially among private companies."
Last week, The Washington Times and Republican Congressman Matt Gaetz incorrectly claimed that face recognition software from New York startup XRVision had revealed two of the Capitol attackers to be incognito left-wing instigators. In fact, XRVision's algorithms had identified the agitators as the very right-wing extremists they appeared to be.
There are also ongoing concerns that the way some face recognition technologies work (or fail to work) with different demographic groups [PDF] can exacerbate institutional racial biases. "We're doing some additional research in this area," says Vemury. "But even if you made the technologies totally fair, you could still deploy them in ways that could have a discriminatory outcome."
But if facial recognition technologies are linked to apprehending high-profile suspects such as the Capitol attackers, enthusiasm for their use is only likely to grow, says Rowe.
"Just as consumers have gotten attached to the convenience of using biometrics to access our toys, I think we'll find law enforcement agencies doing exactly the same thing," she says. "It's easy, and it gives them the potential to conduct investigations in a way that they couldn't before."