Twitter says it labeled 0.2% of all election-related tweets as disputed.

Twitter said on Thursday that it had labeled as disputed 300,000 tweets related to the presidential election, or 0.2 percent of the total on the subject, even as some users continued sharing misleading information about the outcome.

The disclosure made Twitter the first major social media platform to publicly evaluate its performance during the 2020 election. Its revelations followed intense criticism by President Trump and other Republicans, who have said Twitter’s fact-checking efforts silenced conservative voices.

Twitter, Facebook and Google have faced scrutiny since their platforms were used during the 2016 presidential race to mount misinformation campaigns intended to sow division between Americans and discourage them from voting. The companies vowed that this year’s election would be different, and spent billions of dollars to revamp their policies.

At Twitter, the changes were sweeping. The company permanently banned political advertising last year, began labeling content that misled viewers about the electoral process or its outcome and slowed down users’ ability to retweet content.

Twitter said some of the measures, like labeling misleading tweets, were somewhat effective. Others did not appear to have an effect, and the company said it would reverse them.

Twitter applied the labels to the 300,000 tweets from Oct. 27 to Wednesday, the company said. The labels warned viewers that the content was disputed and could be misleading. Twitter restricted just 456 of those messages, preventing them from being shared and from getting likes or replies.

Twitter labeled many of the tweets — including dozens from Mr. Trump — within minutes of their posting. Seventy-four percent of the people who saw the labeled tweets viewed them after the label was added, Twitter said.

Many of Mr. Trump’s messages were labeled. Between Election Day and Friday afternoon, Twitter labeled about 34 percent of Mr. Trump’s tweets and retweets, according to a New York Times tally. “Twitter is out of control, made possible through the government gift of Section 230!” Mr. Trump tweeted last week, referring to a law that grants some legal protections to social media companies.

Roughly a third of users appeared receptive to Twitter's warnings and did not share the labeled messages, while others continued to share them. The company reported a 29 percent decrease in "quote-tweeting," in which users share the labeled messages with their followers.

“These enforcement actions remain part of our continued strategy to add context and limit the spread of misleading information about election processes around the world on Twitter,” the company’s head of legal and policy, Vijaya Gadde, wrote in a blog post.

The high-profile changes showed that social media companies are still evolving their content moderation policies and that more changes could come. Misinformation researchers praised Twitter for its transparency but said more data was needed to determine how content moderation should adapt in future elections.

“Nothing about the design of these platforms is natural, inevitable or immutable. Everything is up for grabs,” said Evelyn Douek, a lecturer at Harvard Law School who focuses on online speech.

“This was the first really big experiment in content moderation outside of the ‘take down or leave up’ paradigm of content moderation,” she added. “I think that’s the future of content moderation. These early results are promising for that kind of avenue. They don’t need to completely censor things in order to stop the spread and add context.”

Facebook, which initially said it would not fact-check political figures, also added several labels to Mr. Trump’s posts on its platform. Although Twitter was more aggressive, other social platforms may copy its approach to labeling disputed content, Ms. Douek said.

While Twitter will continue labeling misleading tweets about the election, it said it would roll back other changes. The company will turn back on its algorithms that recommend tweets to users, and allow topics to trend without adding context to them.

In October, Twitter said it would disable the algorithms, letting users discover content on their own. The company also said it would add context to every trending topic so users could know at a glance why a topic was gaining popularity. Both changes were celebrated by misinformation researchers.

But Kayvon Beykpour, Twitter’s head of product, said on Thursday, “We found that pausing these recommendations prevented many people from discovering new conversations and accounts to follow.” Although adding context to trending topics resulted in a “significant reduction” of reports on the topics, it also limited how many topics could be allowed to trend, he added.

“We’ll continue to prioritize reviewing and adding context to as many trends as possible, but won’t make this a requirement before a trend can appear,” Mr. Beykpour said. “Our goal is to help people see what’s happening, while ensuring that potentially misleading trends are presented with context.”

One other change will remain, at least for now. Twitter said it would continue slowing down retweets. The tactic may have reduced the spread of misinformation, the company said, adding that it would continue to study the outcome.

source: nytimes.com