Twitch Isn't Overwhelmed With Far-Right Extremists, But It Does Have A Big Misinformation Problem

A protester takes a selfie in front of police dispersing the Capitol riot.
Photo: Tasos Katopodis (Getty Images)

Twitch’s relationship with misinformation is complicated. In 2020, platforms such as YouTube, Facebook, and Twitter grappled with their outsized roles in the creation and cultivation of conspiracy movements like QAnon. Misinformation troubled Twitch in a different way, but the platform emerged curiously un-plagued by a comparable extremism epidemic. Now, midway through 2021, Twitch is finally gearing up to combat misinformation.

Last week, the New York Times published a story about extremism on Twitch that overstates the scope of a much more nuanced problem, making it sound like a significant number of far-right influencers fled to Twitch after getting the boot from Facebook and YouTube. In reality, the story cites only a small handful of specific streamers, one of whom appears to have given up on Twitch a month ago and none of whom have large audiences by Twitch standards.

This isn’t to say far-right influencers—many of whom regularly spout Q rhetoric—are not trying to make inroads on streaming platforms. Julian Feeld, co-host of the QAnon-researching/lampooning QAnon Anonymous podcast, pointed Kotaku to a network of conspiratorial political broadcasters with 19 members who have Twitch presences. A couple of specific channels aside, most have under 1,000 followers, and some have under 100. They tend to multi-stream their broadcasts onto Twitch, meaning they’re focusing the brunt of their efforts on other platforms (for example, their own proprietary apps and DLive, a smaller video game livestreaming site from which multiple extremists streamed the Capitol riot in January). This also means they’re not directly engaging with the Twitch community via chat, which is arguably the most important part of success both on Twitch and in QAnon’s participatory culture.

Some far-right influencers have managed to skate by unnoticed on platforms like Twitch and within streamer-focused financial services like Streamlabs and StreamElements due to companies’ broader tendency to rely on journalists and user reports to root out extremism. This gives extremist streamers an opportunity to make money and quietly grow. On Twitch, however, the current crop of far-right influencers appears to be caught on the same snag as others noted in Kotaku’s previous reporting on the subject: Twitch lacks the sort of algorithmic ecosystem or receptive audience QAnon and other conspiracy movements were able to exploit on YouTube, Facebook, and Twitter. As a result, far-right influencers are largely isolated, unable to effectively prey on users who aren’t already converts.

That said, Twitch, which according to NYT does not consider QAnon a hate group, has made the ill-advised decision to grant partner status to two streamers who promote it: Redpill78, who is part of the aforementioned network of conspiratorial political streamers, and Terpsichore Maras-Lindeman, a podcaster who became part of attorney Sidney Powell’s conspiracy-ridden 2020 effort to overturn Biden’s presidential victory. Neither has a large audience, and both regularly spend their broadcasts discussing news from far-right sources like Newsmax and The Gateway Pundit. Some of these discussions are rooted in real things politicians have said, but some politicians and talking heads also regularly touch on conspiracy theories related to, for example, Biden’s presidential win and covid. This leaves platforms like Twitch in a bind.

“We’re obsessing over [misinformation] because treating it as a fixable technical problem is a whole lot easier than dealing with the uncomfortable reality that the Republican party has embraced disinfo as central to its political strategy,” Ethan Zuckerman, associate professor at the University of Massachusetts at Amherst and a member of the Harvard Kennedy School’s Misinformation Review editorial board, told Kotaku in an email.

Photo: Jeff Swensen (Getty Images)

For now, treating misinformation as a technical problem in a way that’s philosophically aligned with other platforms appears to be Twitch’s plan. While Twitch has repeatedly replied to inquiries from Kotaku and other publications about misinformation with stock comments concerning its “hateful conduct” policy, it told NYT that it’s finally developing a more comprehensive approach to misinformation. In a statement to Kotaku, the company provided more details.

“With respect to our misinformation policy, we have been working for some months with industry experts to craft an approach that is appropriate for our community and effective for Twitch,” a Twitch representative told Kotaku in an email. “The vast majority of content on Twitch is live and ephemeral, and the mechanics of Twitch mean that content does not go viral in the same way it might on other services. For these reasons, we are approaching identification and enforcement of misinformation differently than most other services do.”

When asked which experts Twitch was consulting, the representative declined to offer specifics. Many academics who study misinformation, as well as members of larger organizations like gaming-focused nonprofit AnyKey and technology research institute Data & Society, replied to Kotaku’s inquiries saying Twitch had not been in contact with them or their colleagues about this issue. But Kate Starbird, a well-known researcher of crisis informatics and online rumors at the University of Washington, told Kotaku that Twitch had been “persistent” in reaching out to her in recent times, while Joseph Seering, a postdoc at Stanford who researches moderation in online communities, and Joan Donovan, who directs a team at the Harvard Kennedy School that researches disinformation, said they’d had conversations with Twitch.

In a DM, Donovan described the conversation as “not really anything remarkable” and said the company didn’t offer any sort of plan, but noted that Twitch recently hired a former member of Twitter’s trust and safety team who focused on misinformation. Seering declined to provide details on what he’s discussed with Twitch, but he did offer suggestions as to how the company should go about moderating misinformation.

“I think a Twitch policy on misinformation will need to balance specificity with agility,” he said in a DM. “It’s important for streamers to believe that there’s some predictable consistency to Twitch’s moderation decisions, which typically means that there need to be clear, specific policies, but misinformation can take many different shapes and can spread very quickly…More specifically, I’d like to see plans drawn up for how to handle the larger-scale political streams, a la AOC, which will likely become more common in the next year or so. It would be very disappointing if a major politician streamed on Twitch and spread, e.g., covid misinformation, and Twitch was caught unprepared.”

The problem is, Twitch can only prepare for so much. It’s a live platform with millions of channels; anything can happen, and that includes sudden, unexpected detours down the conspiratorial rabbit hole.

“My guess is that Twitch can’t be monitored in traditional ways—searching for keywords, for instance—they’d need to be doing speech to text on so many threads that it would require NSA levels of investment to pull it off,” said Zuckerman. “Instead, I suspect they are focused on high volume channels; a streamer talking to a few dozen people about QAnon is much less dangerous than a streamer talking to hundreds of thousands, harassing an individual.”
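To make Zuckerman’s triage idea concrete, here is a minimal, purely illustrative sketch of how a platform might limit expensive speech-to-text review to high-volume live channels and then scan the resulting transcripts for conspiracy keywords. Every name in it (the viewer threshold, the keyword list, the transcribe stub, and the channel data format) is a hypothetical assumption for the sake of the example, not a description of Twitch’s actual tooling.

```python
# Illustrative sketch only: prioritize high-volume live streams for
# speech-to-text review, then flag transcripts containing example keywords.
# The data format, threshold, keywords, and transcribe() stub are hypothetical.

FLAG_KEYWORDS = {"qanon", "stop the steal", "plandemic"}  # example terms only
VIEWER_THRESHOLD = 10_000  # skip small streams; transcription is expensive

def transcribe(stream_id):
    """Placeholder for an expensive speech-to-text call on a live stream."""
    return ""  # a real system would return the stream's spoken-word transcript

def channels_to_review(live_channels):
    """Return (stream_id, matched_keywords) for high-volume streams only.

    live_channels: iterable of dicts like {"id": "somechannel", "viewers": 25000}
    """
    flagged = []
    for channel in live_channels:
        if channel["viewers"] < VIEWER_THRESHOLD:
            continue  # the cost/benefit tradeoff Zuckerman describes
        transcript = transcribe(channel["id"]).lower()
        hits = [kw for kw in FLAG_KEYWORDS if kw in transcript]
        if hits:
            flagged.append((channel["id"], hits))
    return flagged
```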

“A particular livestream that is full of dangerous misinformation can disappear from the public record as soon as the stream ends,” Travis View, another cohost of QAnon Anonymous, told Kotaku in a DM. “This may make it easier for a Twitch user to spread disinformation while being undetected by Twitch engineers, journalists, and researchers alike.”

Misinformation on Twitch is far from a purely political problem. As Twitch viewers saw when now-banned megastar Guy “Dr Disrespect” Beahm spent a substantial portion of a May 2020 stream spreading covid misinformation, there are many vectors through which debunked falsehoods can get into the platform’s air supply.

“It’s true there aren’t any super popular QAnon stars on the platform,” AnyKey research director Johanna Brewer told Kotaku in a DM. “But yet, just like in a Joe Rogan podcast, there is a serious amount of very questionable information being shared by the prominent cishet white men whose streams dominate the Twitch view count rankings.”

Twitch has its own misinformation problems that are much more widespread than isolated far-right influencers. Misinformation about female streamers, queer people, and people of color, in conjunction with Twitch’s own lack of transparency surrounding its decision-making processes, regularly results in torrents of death threats and harassment. As a specific example, Brewer pointed to Kotaku’s recent report on hot tub streamers and the way that trend has echoed prior panics around “titty streamers.”

“The idea that a pedophile pizza ring is misinfo, but that the systematic devaluing of women, PoC, LGBTQ+, and disabled folks is not, is a highly problematic characterization in itself,” Brewer said.

Image: Critical Bard

This kind of misinformation has caused Twitch repeated headaches on an enormous scale. Earlier this year, for example, Twitch partner Critical Bard was harassed across Twitch, Twitter, and Facebook after Twitch temporarily made him the platform-wide face of the popular Pogchamp emote. The emote itself had been removed from Twitch after its original face, Ryan “Gootecks” Gutierrez, encouraged further “civil unrest” following the Capitol riot in January. Critical Bard was baselessly accused of being a “racial supremacist” on the strength of a discussion of Black Lives Matter that viewers turned into an out-of-context clip; clips are a built-in Twitch feature that can easily be used to misconstrue streamers’ words and beliefs. The clip spread rapidly via non-Twitch platforms (Reddit, Twitter, YouTube, Discord, etc.), suggesting a different sort of misinformation problem than the one faced by the likes of Facebook and YouTube. When Twitch debuted its safety advisory council last year, a nearly identical situation played out with a trans streamer, Steph “FerociouslySteph” Loehr, as its focal point. Twitch, perhaps in part to counter this, recently introduced a policy that allows it to issue suspensions and bans in response to threats, violence, and sexual assault that take place on other platforms.

But Twitch also has to be willing to take action against those peddling misinformation regardless of notoriety or profitability—something it did not do in the past with Beahm (who was ultimately banned for unrelated reasons) and others. Taking aim at specific sharers of misinformation, said Matthew A. Baum, Marvin Kalb professor of global communications at Harvard, is crucial in stemming the tide. In an email, he told Kotaku that “emerging evidence strongly suggests that the vast (dare say overwhelming) majority of misinformation stems from an extremely small number of so-called ‘super sharers,’” which means that a platform “can make a substantial dent by identifying the super sharers [and] blocking those accounts.” As evidence of how small and specific this crowd can be, he said that “on the order of .01%” of Twitter users were responsible for half of all vaccine-related misinformation tweets.
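Baum’s “super sharer” point lends itself to a simple illustration: given any log of shares with a misinformation flag attached, a platform can rank accounts by flagged shares and see how few of them cover half the volume. The sketch below is a hypothetical toy that assumes a made-up log format of (account_id, was_flagged) records; it is not based on any real Twitch or Twitter data or API.

```python
# Illustrative toy: find the smallest set of accounts responsible for a given
# fraction (default: half) of all flagged shares. The log format is hypothetical.

from collections import Counter

def super_sharers(share_log, coverage=0.5):
    """share_log: iterable of (account_id, was_flagged) tuples.

    Returns the accounts that together produced `coverage` of all flagged
    shares, most prolific first.
    """
    counts = Counter(acct for acct, flagged in share_log if flagged)
    total_flagged = sum(counts.values())
    target = coverage * total_flagged
    covered, top_accounts = 0, []
    for acct, n in counts.most_common():
        top_accounts.append(acct)
        covered += n
        if covered >= target:
            break
    return top_accounts
```

On the numbers Baum cites, a list like this built from vaccine-related tweets would contain roughly one in every ten thousand accounts.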

Brewer, though, thinks that Twitch will need to overhaul its processes if it actually wants to address harassment that arises as a result of misinformation.

“The most meaningful and impactful steps they should take involve strengthening their moderation practices to prioritize protecting their marginalized users from harassment by focusing on community education and restorative justice,” Brewer said, noting that algorithms and other technological solutions can only do so much on this front. “Rising to that challenge would require a significant investment on Twitch’s part to radically reimagine the way they approach moderation, for example, by working much more closely with the targets of harassment and misinfo to develop new practices, or employing and training a much larger group of moderators focused on engaging directly with the community.”

Policies are only part of the equation. What really counts is how they’re applied. Platforms like Twitch now find themselves having to define what constitutes good and bad information—a position of tremendous influence when you consider the size of Twitch’s user base and the fact that many streamers are, well, influencers. Other platforms like Facebook have swung the banhammer with reckless abandon and still failed to squash actual threats. With that in mind, Twitch, a company not exactly known for consistent or transparent application of its rules, has a lot of work ahead of it. But even in the unlikely event that Twitch learns all the right lessons from experts and from the mistakes of others, misinformation won’t completely go away.

“I don’t think misinformation is a platform problem; as a society we need to be more aware and mindful of how we process and assess information,” Yvette Wohn, an associate professor at the New Jersey Institute of Technology who studies livestreaming, told Kotaku in an email. “This happens through education, and the responsibility for this education goes beyond platforms’ responsibilities. Of course companies can play a part in helping people make informed decisions, but I don’t believe in a Minority Report scenario. Human deviance will always find new ways to do bad things, and then we shall scramble to find technological solutions.”
