When the Police Treat Software Like Magic

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

A lot of technology is pretty dumb, but we think it’s smart. My colleague Kashmir Hill showed the human toll of this mistake.

Her article detailed how Robert Julian-Borchak Williams, a Black man in Michigan, was accused of shoplifting on the basis of flawed police work that relied on faulty facial recognition technology. The software surfaced Williams's driver's license photo as a possible match for the man in the surveillance images, leading to Williams's arrest for a crime he didn't commit.

(In response to Kash’s article, prosecutors apologized for what happened to Williams and said he could have his case expunged.)

Kash talked to me about how this happened, and what the arrest showed about the limits and accuracy of facial recognition technology.

Shira: What a mess-up. How did this happen?

Kash: The police are supposed to use facial recognition identification only as an investigative lead. But instead, people treat facial recognition as a kind of magic. And that’s why you get a case where someone was arrested based on flawed software combined with inadequate police work.

But humans, not just computers, misidentify people in criminal cases.

Absolutely. Eyewitness testimony is also notoriously unreliable, and that unreliability has been a selling point for many facial recognition technologies.

Is the problem that the facial recognition technology is inaccurate?

That’s one problem. A federal study of facial recognition algorithms found them to be biased and to wrongly identify people of color at higher rates than white people. The study included the two algorithms used in the image search that led to Williams’s arrest.

Sometimes the algorithm is good and sometimes it's bad, and there's not always a great way to tell the difference. And there's usually no requirement from policymakers, the government or law enforcement that the technology be vetted.

What’s the broader problem?

Companies that sell facial recognition software say it doesn't give a perfect "match." It gives a score indicating how closely facial images in a database resemble the one you search for. The technology companies say none of this is probable cause for arrest. (At least, that's how they talk about it with a reporter for The New York Times.)

But on the ground, officers see an image of a suspect next to a photo of the likeliest match, and it seems like the correct answer. I have seen facial recognition work well with some high-quality close-up images. But usually, police officers have grainy videos or a sketch, and computers don’t work well in those cases.
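To make the "score, not a match" point concrete, here is a minimal sketch of how this kind of system works in general. It is not the software used in the Williams case: the embeddings, names and threshold are all invented for illustration. A search compares a probe image's numeric "embedding" against a gallery and returns similarity scores, ranked highest first.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, threshold=0.6):
    """Return gallery entries scored against the probe embedding, highest
    score first. A score measures similarity -- it is not an identification,
    and not probable cause."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, round(score, 3)) for name, score in scored if score >= threshold]

# Toy 3-dimensional "embeddings"; real systems use hundreds of dimensions.
gallery = [("license_photo_A", [0.90, 0.10, 0.20]),
           ("license_photo_B", [0.20, 0.80, 0.50]),
           ("license_photo_C", [0.88, 0.15, 0.25])]
probe = [0.85, 0.12, 0.22]
print(rank_candidates(probe, gallery))
```

Notice that two different gallery photos can score almost identically against the same probe, which is exactly why a top-ranked result is supposed to be treated as a lead, not an answer.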

It feels as if we know computers are flawed, but we still believe the answers they spit out?

I wrote about the owner of a Kansas farm who was harassed by law enforcement and random visitors because of a glitch in software that maps people’s locations from their internet addresses. People incorrectly thought the mapping software was flawless. Facial recognition has the same problem. People don’t drill down into the technology, and they don’t read the fine print about the inaccuracies.
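The Kansas farm failure mode is easy to sketch. The glitch, roughly, was that when a database knew only an IP address's country, the software returned the country's geographic centroid as if it were a precise location, so millions of unrelated addresses landed on one real doorstep. This is an illustrative toy, not the actual mapping company's data or code; the IPs and records are made up.

```python
# Sketch of a country-centroid fallback in IP geolocation (illustrative data only).
COUNTRY_CENTROIDS = {"US": (39.83, -98.58)}  # roughly the center of the contiguous U.S.

IP_DB = {
    "203.0.113.7":  {"country": "US", "coords": (40.71, -74.01)},  # city-level fix
    "198.51.100.4": {"country": "US", "coords": None},             # country-level only
    "192.0.2.99":   {"country": "US", "coords": None},             # country-level only
}

def locate(ip):
    """Return (lat, lon, precision). Falls back to the country centroid when
    no finer fix exists -- silently placing unrelated IPs at the same
    real-world point, like a farm near the centroid."""
    rec = IP_DB[ip]
    if rec["coords"] is not None:
        return (*rec["coords"], "city")
    return (*COUNTRY_CENTROIDS[rec["country"]], "country")

# Two unrelated IPs resolve to the exact same coordinates:
print(locate("198.51.100.4"))
print(locate("192.0.2.99"))
```

The fine print (a "country"-level precision flag) is right there in the return value; the harm comes when downstream users ignore it and treat every coordinate pair as an exact address.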

Tech companies shouldn’t say they want to help fight entrenched global problems like climate change and racial injustice without taking a hard look at how their products make things worse.

That was the point that Kevin Roose, a technology columnist for The New York Times, made about Facebook, Google and other internet companies that have proclaimed their support for the Black Lives Matter movement and announced donations, changes to their work force and other supportive measures in recent weeks.

These are good steps. But as Kevin wrote and discussed on “The Daily” podcast, the companies haven’t tackled the ways that their internet hangouts have been created to reward exaggerated viewpoints that undermine movements like Black Lives Matter. They also haven’t addressed how their rewarding of boundary-pushing online behavior has contributed to racial division.

Kevin said the tech companies’ actions were like fast-food chains getting together to fight obesity “by donating to a vegan food co-op, rather than by lowering their calorie counts.”

I have similar feelings about Amazon’s creation of a $2 billion fund to back technologies that seek to combat climate change. Previously, Amazon had announced pledges to reduce its own carbon emissions by, for example, shifting its package-delivery fleet to electric vehicles. Again, great. But.

It’s not clear that Amazon’s efforts can fully offset the carbon emissions of delivering packages fast, or shipping bottles of laundry detergent across the country, or letting people return stuff without thinking twice.

In short, Amazon’s carbon pledges might be nibbling around the edges of a problem to avoid considering how the company has shaped our shopping behaviors in an environmentally harmful way.

Big structural changes are incredibly hard — for the companies and us. I’m not saying big tech companies necessarily have an obligation to fight racism or environmental destruction. But the companies say that’s what they want to do. They might not be able to make a big difference without fundamentally changing how they operate.

  • Great! Now do more: Google said it would start automatically deleting logs of people’s web and app activity and data on our location after 18 months, my colleague Daisuke Wakabayashi reported. This change applies only to new accounts, but it’s a healthy step to put some limits on the stockpiles of information Google has about us. Here’s one more idea: Collect less data on us in the first place.

  • The trustbusters are working hard on Google: Attorney General William Barr is unusually involved in the Justice Department’s investigation into whether Google abuses its power, my colleagues David McCabe and Cecilia Kang write. (Here is my explanation of what’s happening with Google.) Barr’s interest shows the government is taking seriously its look into the power of big tech companies, but it also risks criticism that the investigation has more political than legal motivations.

  • Tilting at windmills, but … President Trump’s campaign is considering drawing more supporters to its own smartphone app or other alternatives to big internet hangouts like Facebook and Twitter, The Wall Street Journal reported. There’s no chance Mr. Trump or his campaign can ditch big internet sites, but they are worried about social media policies that have limited some of their inflammatory posts. They share the fears of many people and organizations, including news outlets, that wish they relied less on the large internet hangouts to get noticed.

It’s eerie, sweet and funny to see this Barcelona musical performance in a concert hall with houseplants filling the seats. (The plants will be donated to health care workers.)

We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at [email protected]

Get this newsletter in your inbox every weekday; please sign up here.

source: nytimes.com