Facial recognition tech is a problem. Here’s how the Democratic candidates plan to tackle it.

Sen. Bernie Sanders | Justin Sullivan/Getty Images

Bernie Sanders has taken a more radical stance than Elizabeth Warren, Kamala Harris, Julián Castro — and everyone else.

As the growing use of facial recognition technology draws criticism from the public, Democratic presidential candidates are starting to articulate how they’d handle the tech if they’re elected. Some candidates are staking out stronger positions than others.

In August, Sen. Bernie Sanders became the first presidential candidate to call for a total ban on the use of facial recognition software for policing. As part of his broader criminal justice reform plan, he also called for a moratorium on the use of algorithmic risk assessment tools that aim to predict which criminals will reoffend. Critics have called such tools racially biased.

Sen. Elizabeth Warren released her own criminal justice reform plan soon after, saying she’d create a task force to “establish guardrails and appropriate privacy protections” for surveillance tech, including “facial recognition technology and algorithms that exacerbate underlying bias.” She did not promise to institute a ban.

Julián Castro’s website includes a brief mention of facial recognition, but there’s little detail. It simply says Castro wants to “establish guidelines for next-generation surveillance technologies, like facial recognition technology, that accounts for disparate impact and bias in their application.”

This week, it was Sen. Kamala Harris’s turn to roll out her criminal justice plan, which says:

Kamala would work with stakeholders, including civil rights groups, technology groups, and law enforcement, to institute regulations and protections to ensure that technology used by federal law enforcement — such as facial recognition and other surveillance — does not further racial disparities or other biases. She would also invest federal money to incentivize states and localities to do the same.

Again, there’s no promise of a ban here. Nor have other Democratic candidates made explicit campaign promises about facial recognition, though some, like Sen. Cory Booker, have advanced legislation pushing for algorithmic accountability.

Some find this state of affairs disappointing, especially given that the public pushback against facial recognition is gaining momentum. The California Senate passed a bill this week placing a three-year moratorium on the use of facial recognition in police body cameras. State legislatures in New York, Michigan, and Massachusetts are also considering bills to rein in the technology. Bipartisan legislation is expected to be forthcoming in Congress.

“Facial recognition poses a unique threat to human liberty and basic rights — any candidate who wants to be taken seriously on criminal justice issues should be calling for an outright ban, or at the very least a moratorium on current use of this tech. Senator Harris’s plan says she will work with civil rights and technology organizations, but she already seems to be ignoring us,” said Evan Greer, deputy director of the digital rights nonprofit Fight for the Future.

So far, no other candidate has carved out as tough a stance as Sanders's. His promises about facial recognition and algorithmic risk assessment stand out because they show he's thinking seriously about the ethical risks of AI technologies. Police officers currently use facial recognition, and judges use risk assessment algorithms, to guide their decisions, despite evidence that both systems are often biased against people of color.

Sanders says he won’t allow the criminal justice system to go on using algorithmic tools for predicting recidivism until they pass an audit, because “we must ensure these tools do not have any implicit biases that lead to unjust or excessive sentences.” As a ProPublica investigation revealed, some of the algorithms used in courtroom sentencing do lead to unjust outcomes: A black teen who steals something may be rated high-risk for committing future crimes, for example, while a white man who steals something of similar value is rated low-risk.
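To make that kind of audit concrete, here is a minimal sketch in Python, in the spirit of the ProPublica analysis. The records are made up for illustration, not real COMPAS data; the core check is whether people who never reoffended were flagged "high risk" at different rates across racial groups.

```python
from collections import defaultdict

# (group, flagged_high_risk, reoffended_within_2_years) -- made-up records
# for illustration only, not real COMPAS data
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", True,  False),
    ("white", False, False), ("white", True,  True),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows):
    """Among people who did NOT reoffend, what share were flagged high-risk?"""
    flags = [flagged for _, flagged, reoffended in rows if not reoffended]
    return sum(flags) / len(flags)

# Group the records by race, then compare false positive rates
by_group = defaultdict(list)
for record in records:
    by_group[record[0]].append(record)

for group, rows in sorted(by_group.items()):
    print(f"{group}: FPR = {false_positive_rate(rows):.2f}")
# black: FPR = 0.67  (2 of 3 non-reoffenders flagged high-risk)
# white: FPR = 0.00  (0 of 2 non-reoffenders flagged high-risk)
```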

The plan to audit these algorithms for bias makes sense. There are also several good reasons to think Sanders’s more radical plan to completely ban facial recognition in policing is warranted. Let’s break them down.

The case for banning facial recognition tech

Facial recognition software, which can identify an individual by analyzing their facial features in images, in videos, or in real time, has encountered a growing backlash over the past few months. Behemoth companies like Apple, Amazon, and Microsoft are all mired in controversy over it. San Francisco, Oakland, and Somerville have all issued local bans.

Some argue that outlawing facial recognition tech is throwing the proverbial baby out with the bathwater. Advocates say the software can help with worthy aims, like finding missing children and elderly adults or catching criminals and terrorists. Microsoft president Brad Smith has said it would be “cruel” to altogether stop selling the software to government agencies. This camp wants to see the tech regulated, not banned.

Yet there’s good reason to think regulation won’t be enough. The danger of this tech is not well understood by the general public, and the market for it is so lucrative that there are strong financial incentives to keep pushing it into more areas of our lives in the absence of a ban. AI is also developing so fast that regulators would likely have to play whack-a-mole as they struggle to keep up with evolving forms of facial recognition.

Then there’s the well-documented fact that human bias can creep into AI. Often, the problem lies in the training data: if designers mostly feed a system examples of white male faces and don’t think to diversify the data, it won’t learn to properly recognize women and people of color. And indeed, researchers have repeatedly found that facial recognition systems misidentify those groups more often, which could lead to them being disproportionately held for questioning when law enforcement agencies put the tech to use.
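To illustrate the mechanism, here is a toy sketch using synthetic one-dimensional features as a stand-in for face data (not any real vendor's pipeline): a model trained mostly on one group ends up with much higher error rates on an underrepresented group whose data looks different.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, boundary):
    """Synthetic 1-D stand-in for face features: the true label flips at a
    group-specific decision boundary."""
    x = rng.normal(loc=boundary, scale=2.0, size=(n, 1))
    y = (x[:, 0] > boundary).astype(int)
    return x, y

# Training set: 95% majority group, 5% minority group with a shifted boundary
X_maj, y_maj = make_group(1900, boundary=0.0)
X_min, y_min = make_group(100, boundary=2.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on balanced test sets: the model tracks the majority group's
# boundary, so the minority group sees far more errors
for name, boundary in [("majority", 0.0), ("minority", 2.0)]:
    X_test, y_test = make_group(5000, boundary)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
# Typical output: majority ~0.98, minority ~0.67
```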

In 2015, Google’s image recognition system labeled African Americans as “gorillas.” Three years later, Amazon’s Rekognition system wrongly matched 28 members of Congress to criminal mug shots. Another study found that three facial recognition systems — IBM, Microsoft, and China’s Megvii — were more likely to misidentify the gender of dark-skinned people (especially women) than of light-skinned people.

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.

Say the tech gets just as good at identifying black people as it is at identifying white people. That may not actually be a positive change. Given that the black community is already overpoliced in the US, making black faces more legible to this tech and then giving the tech to police could just exacerbate discrimination. As Zoé Samudzi wrote at the Daily Beast, “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.”

Woodrow Hartzog and Evan Selinger, a law professor and a philosophy professor, respectively, argued last year that facial recognition tech is inherently damaging to our social fabric. “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled,” they wrote. The worry is that there’ll be a chilling effect on freedom of speech, assembly, and religion.

The authors also note that our faces are something we can’t change (at least not without surgery), that they’re central to our identity, and that they’re all too easily captured from a distance (unlike fingerprints or iris scans). If we don’t ban facial recognition before it becomes more entrenched, they argue, “people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

Luke Stark, a digital media scholar who works for Microsoft Research Montreal, made another argument for a ban in a recent article titled “Facial recognition is the plutonium of AI.”

Comparing software to a radioactive element may seem over the top, but Stark insists the analogy is apt. Plutonium is the toxic element used to make atomic bombs, and its danger is inherent in its atomic structure. Likewise, the danger of facial recognition is ineradicably, structurally embedded within it, because it attaches numerical values to the human face. He explains:

Facial recognition technologies and other systems for visually classifying human bodies through data are inevitably and always means by which “race,” as a constructed category, is defined and made visible. Reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years.

The mere fact of numerically classifying and schematizing human facial features is dangerous, he says, because it enables governments and companies to divide us into different races. It’s a short leap from having that capability to “finding numerical reasons for construing some groups as subordinate, and then reifying that subordination by wielding the ‘charisma of numbers’ to claim subordination is a ‘natural’ fact.”

In other words, racial categorization too often feeds racial discrimination. This is not a far-off hypothetical but a current reality: China is already using facial recognition to track Uighur Muslims based on their appearance, in a system the New York Times has dubbed “automated racism.” That system makes it easier for China to round up Uighurs and detain them in internment camps.

A ban is an extreme measure, yes. But a tool that enables a government to immediately identify us anytime we cross the street is so inherently dangerous that treating it with extreme caution makes sense.

Instead of starting from the assumption that facial recognition is permissible — which is the de facto reality we’ve unwittingly gotten used to as tech companies marketed the software to us unencumbered by legislation — we’d do better to start from the assumption that it’s banned, then carve out rare exceptions for specific cases when it might be warranted.

Author: Sigal Samuel
