Amazon touts its Rekognition facial recognition system as “simple and easy to use,” encouraging customers to “detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.” And yet, in a study released Thursday by the American Civil Liberties Union, the technology falsely matched 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough.

The ACLU study also illustrated the racial bias that plagues facial recognition today. “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress,” wrote ACLU attorney Jacob Snow. “People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”

Facial recognition technology’s difficulty with darker skin tones is a well-established problem. In February, MIT Media Lab’s Joy Buolamwini and Microsoft’s Timnit Gebru published findings that facial recognition software from IBM, Microsoft, and Face++ has a much harder time identifying gender in people of color than in white people. In a June evaluation of Amazon Rekognition, the two researchers found similar built-in bias. Rekognition even managed to get Oprah wrong.

“Given what we know about the biased history and present of policing, the concerning performance metrics of facial analysis technology in real-world pilots, and Rekognition’s gender and skin-type accuracy differences,” Buolamwini wrote in a recent letter to Amazon CEO Jeff Bezos, “I join the chorus of dissent in calling Amazon to stop equipping law enforcement with facial analysis technology.”


Yet Amazon Rekognition is already in active use in Oregon’s Washington County. And the Orlando, Florida, Police Department recently resumed a pilot program to test Rekognition’s efficacy, although the city says that for now, “no images of the public will be used for any testing—only images of Orlando police officers who have volunteered to participate in the test pilot will be used.” Those are just the publicly known clients; Amazon declined to comment on the full scope of law enforcement’s use of Rekognition.

For privacy advocates, though, any amount is too much, especially given the system’s demonstrated bias. “Imagine a speed camera that wrongly said that black drivers were speeding at higher rates than white drivers. Then imagine that law enforcement knows about this, and everyone else knows about this, and they just keep using it,” says Alvaro Bedoya, executive director of Georgetown University’s Center on Privacy and Technology. “We wouldn’t find this acceptable in any other setting. Why should we find it acceptable here?”

Amazon takes issue with the parameters of the study, noting that the ACLU used an 80 percent confidence threshold; that’s the minimum similarity score at which Rekognition reports a match, and customers can raise or lower it depending on how much certainty they need. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” the company said in a statement. “When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95 percent or higher.”

While Amazon says it works closely with its partners, it’s unclear what form that guidance takes, or whether law enforcement follows it. Ultimately, the onus is on the customers—including law enforcement—to make the adjustment. An Orlando Police Department spokesperson did not know how the city had calibrated Rekognition for its pilot program.
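
To see what that adjustment actually involves, consider a minimal sketch using boto3, the AWS SDK for Python. The collection ID and image path below are hypothetical stand-ins; the relevant knob is the FaceMatchThreshold parameter, which Rekognition defaults to 80 percent whenever a caller leaves it out.

```python
# A sketch of a Rekognition face search, assuming valid AWS credentials and
# an already-indexed face collection. Names here are illustrative, not real.
import boto3

client = boto3.client("rekognition", region_name="us-west-2")

with open("probe.jpg", "rb") as f:  # hypothetical probe photo
    probe_bytes = f.read()

# search_faces_by_image compares the largest face in the probe photo against
# every face indexed in the collection. Omit FaceMatchThreshold and the
# service falls back to its 80 percent default -- the setting at issue in
# the ACLU study.
response = client.search_faces_by_image(
    CollectionId="face-collection",  # hypothetical collection name
    Image={"Bytes": probe_bytes},
    MaxFaces=5,
    FaceMatchThreshold=95.0,  # the floor Amazon says it recommends for policing
)

for match in response["FaceMatches"]:
    face = match["Face"]
    label = face.get("ExternalImageId", face["FaceId"])
    print(f"{label}: similarity {match['Similarity']:.1f}%")
```

Nothing in the API enforces that final parameter; the guidance Amazon describes is only as strong as each customer’s willingness to follow it.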

The ACLU counters that 80 percent is Rekognition’s default setting. And UC Berkeley computer scientist Joshua Kroll, who independently verified the ACLU’s findings, notes that if anything, the professionally photographed, face-forward congressional portraits used in the study are a softball compared to what Rekognition would encounter in the real world.

“As far as I can tell, this is the easiest possible case for this technology to work,” Kroll says. “While we haven’t tested it, I would naturally anticipate that it would perform worse in the field environment, where you’re not seeing people’s faces straight on, you might not have perfect lighting, you might have some occlusion, maybe people are wearing things or carrying things that get in the way of their faces.”

Amazon also downplays the potential implications of facial recognition errors. “In real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgement,” the company’s statement reads. But that elides the very real consequences that could be felt by those who are wrongly identified.

“At a minimum, those people are going to be investigated. Point me to a person that likes to be investigated by law enforcement,” Bedoya says. “This idea that there’s no cost to misidentifications just defies logic.”


So, too, does the notion that a human backstop provides an adequate check on the system. “Often with technology, people start to rely on it too much, as if it’s infallible,” says Jeramie Scott, director of the Electronic Privacy Information Center’s Domestic Surveillance Project. In 2009, for instance, San Francisco police handcuffed a woman and held her at gunpoint after a license-plate reader misidentified her car. All they had to do to avoid the confrontation was to look at the plate themselves, or notice that the make, model, and color didn’t match. Instead, they trusted the machine.

Even if facial recognition technology worked perfectly, putting it in the hands of law enforcement would still raise concerns. “Facial recognition destroys the ability to remain anonymous. It increases the ability of law enforcement to surveil individuals not suspected of crimes. It can chill First Amendment-protected rights and activities,” Scott says. “What we’re trying to avoid here is mass surveillance.”

While the ACLU study covers well-trod ground in terms of facial recognition’s faults, it may have a better chance of making a real impact. “The most powerful aspect of this is that it makes it personal for members of Congress,” says Bedoya. Members of the Congressional Black Caucus had previously written a letter to Amazon expressing related concerns, but the ACLU appears to have gotten the attention of several additional lawmakers.

The trick, though, will be turning that concern into action. Privacy advocates say that at a minimum, law enforcement’s use of facial recognition technology should be heavily restricted until its racial bias has been corrected and its accuracy assured. And even then, they argue, its scope needs to be limited, and clearly defined. Until that happens, it’s time not to pump the brakes but to slam down on them with both feet.

“A technology that’s proven to vary significantly across people based on the color of their skin is unacceptable in 21st-century policing,” says Bedoya.

