Lessons From a Virus Tracing Dud

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

Amid the coronavirus panic in the spring, Utah hired a small tech company to create an app to trace state residents infected with the virus and help notify their contacts about possible exposure.

It didn’t go well.

Only about 200 people used the virus-alert app, Healthy Together, for its main intended purpose. Utah then shut down the key feature entirely. Critics of Healthy Together said that state officials spent too much on rushed and unproven technology.

This feels like a familiar tale of failures by government officials and botched pandemic technology. It is, but the story didn’t end there.

The app company, called Twenty, and Utah public health officials refocused the app on less ambitious but potentially more useful purposes, including relaying coronavirus test results and running digital symptom checks at schools and workplaces. It’s too soon to call Healthy Together a success or a failure, but the app now has a manageable purpose.

The saga of Healthy Together shows both what can go wrong with virus-fighting technologies and how digital helpers — if we establish trust and don’t overstate their capabilities — have a role to play in the human-led fight against the virus.

Let me state this plainly: Many virus-tracing technologies, like the first version of Healthy Together, have been a mess.

In Utah, state officials told me that many people were reluctant to use an app to share their location information with the public health department so it could try to figure out whom they might have come into contact with. The state didn’t do much to convince people that the app might be helpful.

This is not an isolated problem for contact tracing efforts. People don’t necessarily trust the government or technology companies. It can feel embarrassing or creepy to tell a public health official whom you might have exposed to a dangerous virus. Apple and Google are releasing technology that will make it easier for states to set up virus exposure alerts for smartphones, but it won’t fix the trust problem.

In Utah, Healthy Together dropped the location-tracking technology for now, but it still lets people see a map of coronavirus testing centers near them and offers information about how to get tested, who will pay for it and simpler things like whether nearby restaurants are allowed to serve dine-in meals.

A version of the app also offers questionnaires to assess potential coronavirus symptoms for people who work in some health care facilities and colleges, including Brigham Young University.

None of this is magic, and that’s fine. We do need some of these digital helpers to supplement the human-powered fight against the pandemic. We just need to keep technology limited to what it can reasonably do.

Two of Twenty’s co-founders, Diesel Peltz and Jared Allgood, were humble about what they learned. “We came in with a little naïveté,” Peltz told me. “We had to be honest with ourselves about our limitations and where we can help the [public health] strategy and amplify it.”

Utah State Representative Andrew Stoddard said that he believed Healthy Together wasn’t worth the money, but that it and similar technologies had a role to play in the state’s pandemic response.

“I hope the lesson learned is that technology is innovative and helpful, but there are arenas where technology isn’t the best option,” he said.


Facebook made several big policy changes on Thursday to try to lower the temperature on a tumultuous U.S. presidential election in November. The new rules are sensible on paper, but the question now is whether Facebook can effectively enforce them.

My colleague Mike Isaac has all the details on Facebook’s new rules. The biggest one to me: Facebook said it would apply an informational label to posts by political candidates or campaigns that try to prematurely declare victory in the election or cast doubt on the legitimacy of mail-in voting.

This election is going to be unlike any other. Far more Americans are expected to vote by mail to avoid the risk of a coronavirus infection, and that most likely means counting votes will take more time than usual.

If ballot tallies take days or longer, one concern is that President Trump or other candidates might declare victory before all votes are counted, or dispute the outcome. One late-night tweet or unchecked Facebook post from the president could contribute to a lack of public trust in the election system.

As wild as this might have seemed a few years ago, Facebook has become essential plumbing in democracy, and the company knows the world is watching how it acts in this election.

But making rules is only half the battle. When the president posted a baseless claim about voter fraud in July, Facebook’s attempt at added context was a link to an election information help page. The supposed informational label wasn’t actually informative about what the president said.

On Thursday, Facebook added a context label to one of Mr. Trump’s posts that did add useful information about what he said.

And for Facebook to enforce new rules about the election, it will rely in part on social network users flagging posts that seem off, and on teams of workers who must assess whether a post goes against the company’s guidelines.

For particularly sensitive rules like whether a politician is sowing confusion about an election, I would bet that any decisions about whether to remove a post or append contradictory information will ultimately be made by Facebook executives. Those can be tough calls and might take time to make. And on Facebook, bogus information can get millions of eyeballs in a flash.


  • What it’s like to be duped by Russian trolls: My colleague Sheera Frenkel talked to one American who wrote for a news website that turned out to be a front for a covert Russian government-backed propaganda campaign. The writer thought it was strange when editors had a poor grasp of English and waved off some of his article ideas. But he didn’t find out he was ensnared in the propaganda campaign until a reporter contacted him this week.

  • What does Facebook do when political leaders spew hate? Facebook banned the accounts of a prominent Indian politician, T. Raja Singh, over his online posts and comments that have called Muslims traitors and said some Muslim immigrants should be shot. The Wall Street Journal has been reporting on internal division at Facebook over whether the company has protected Singh and some other members of India’s ruling party who have used Facebook to encourage hatred of or violence against Muslims.

  • Instagram scams work because we want stuff easy and fast: A writer for The Verge bought overpriced and crummy secondhand furniture after seeing it listed on Instagram, and dug into why people fall for Instagram pitches for low-quality or sometimes fraudulent merchandise. “The scam works by exploiting our own consumerism — the idea that everything we want should be readily available, and cheap, and delivered within days,” The Verge wrote.


We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at [email protected].

If you don’t already get this newsletter in your inbox, please sign up here.
