Problem with police use of facial recognition isn’t with the biometrics

A major investigation by the Washington Post has revealed that police in the U.S. regularly use facial recognition as the sole basis for making arrests, contravening internal policies that require officers to have probable cause and corroborating evidence.

The Post’s findings, which also bring to light two previously unreported cases of people wrongfully arrested after being identified with facial recognition, highlight one major potential flaw in biometric technology for law enforcement use cases: police must be trusted to use it ethically.

And yet. “Law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used,” says the Post: “as a shortcut to finding and arresting suspects without other evidence.”

Journalists Douglas MacMillan, David Ovalle and Aaron Schaffer identified “75 departments that use facial recognition, 40 of which shared records on cases in which it led to arrests. Of those, 17 failed to provide enough detail to discern whether officers made an attempt to corroborate AI matches.”

Among the remaining 23 departments that had detailed records about facial recognition use, they found that “15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime.”

Moreover, “some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts.”

‘Automation bias’ is a problem; so is lax police work

The report breaks down police failures in the eight known wrongful arrests, which include failing to check alibis and blatantly ignoring suspects’ physical characteristics (the latter in the case of a pregnant woman). The trend is clear, and the Post suggests the examples are “probably a small sample of the problem.”

The piece comes dangerously close to missing its own point in quoting Katie Kinsey, chief of staff for the Policing Project at NYU School of Law, who notes that facial recognition software “performs nearly perfectly in lab tests using clear comparison photos,” but has not been subject to “real-world, independent testing of the technology’s accuracy in how police typically use it — with lower-quality surveillance images and officers picking one candidate from a list of possible matches.”

Because of this, Kinsey says, it’s hard to know how often the software gets it wrong.

Yet her blame is misplaced. As the Post investigation illustrates, it is not the biometric software that usually gets it wrong, but the police. The report notes research showing that “people using AI tools can succumb to ‘automation bias,’ a tendency to blindly trust decisions made by powerful software, ignorant to its risks and limitations.”

If anything, the software is too good at its job. Run a grainy suspect image through a facial recognition algorithm to build a photo lineup, and the results are highly likely to include people who look a lot like the suspect. And when those pictures are shown to victims, says Gary Wells, a psychologist at Iowa State University who studies faulty eyewitness identifications, they are highly likely to make an ID, even if it is false.

AI to draft police reports not a good idea: ACLU

Solving the problem depends on the same key ingredients that underpin the larger global ecosystem of biometric technology: regulation and trust. And yet, who polices the police is a question that goes beyond biometrics.

A recent report from the ACLU notes that “police departments are adopting software products that use AI to draft police reports for officers” – and says that’s a very bad idea: “AI has many potential functions, but there is no reason to use it to replace the creation of a record of the officer’s subjective experience.”

Other organizations have raised concerns about the potential for civil and human rights violations in AI deployments, including biometric facial recognition, by the DEA and FBI.

And a 137-page joint federal report on law enforcement use of biometrics, published this month by the U.S. Department of Homeland Security (DHS), the Department of Justice (DOJ) and the White House Office of Science and Technology Policy, examines the technology’s dual-edged implications.

In each case, technology is an enabler for human decisions. For biometric algorithms, there are standards, tests and certifications that govern their use. Regulating human behavior is much harder, especially in those who wield power. Algorithms have their flaws, but they are generally more predictable than people – and less likely to skip a step or two when someone’s freedom is on the line.
