Why Republicans (and even a couple of Democrats) want to ax tech's favorite law

When Twitter locked Senate Majority Leader Mitch McConnell’s campaign account in early August, demanding that he remove a video of profane protesters threatening violence outside his home, a once-arcane war over internet law erupted into public view.

The company said it took the action because the video violated its community rules, while Republicans launched a reinvigorated assault on what they claim is ongoing bias by Big Tech against conservatives.

Sen. Josh Hawley, R-Mo., went on Fox News to call the locked account “all too typical of Twitter and big tech,” before turning his attention to what he sees as the real problem: the tiny-but-powerful piece of law that allows platforms to make their own community rules in the first place.

Hawley is one of a growing number of GOP lawmakers waging a war on Section 230 of the Communications Decency Act of 1996, which shields companies from the enormous risk of hosting third-party content while allowing them to moderate content according to their own sets of standards. It has paved the way for the internet to be a bastion of free speech and information, while still ostensibly offering guard rails against hate speech, harassment or other material that tech companies don’t want on their websites and apps.

But experts and industry advocates say the changes Hawley and others have proposed are rooted in a fundamental misunderstanding of Section 230, and could unravel the internet as we know it.

“A lot of the rhetoric that’s coming out of Congress is almost the opposite of what the reality is,” said Michael Beckerman, president and CEO of the Internet Association, an industry lobbying group.

“Section 230 is what enables all user-generated content online,” he said. The category includes the wide variety of information everyday internet users post, from shopping reviews, Instagram photos and Wikipedia entries to dating app profiles, TripAdvisor recommendations and real-estate listings.

Before the law was enacted, it was unclear whether a website or other internet company could publish those opinions and information from users without taking on the risk of being sued for defamation, threats or other possible harms.

Section 230 means that only the original speaker, or “information content provider,” may be held liable, with only a few exceptions. One of those exceptions, allowing internet companies to be held liable in sex trafficking cases, became law last year. But, over the years, the courts have said the law is broadly applicable. If a user writes a mean review about a restaurant, for example, the restaurant could sue the reviewer — but not Yelp.

The neutrality myth

Growing partisan backlash over how tech companies such as Twitter, Facebook and others moderate their platforms has led some politicians to take aim at Section 230.

In June, Hawley introduced a bill tying Section 230 protections to a new requirement that big tech companies prove political neutrality every two years.

“With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship,” Hawley said in a statement about the legislation. “Unfortunately, and unsurprisingly, big tech has failed to hold up its end of the bargain.”

Hawley has been fashioning himself as the GOP’s chief critic of tech giants, but he’s hardly the first to target Section 230.

“The predicate for Section 230 immunity under the C.D.A. is that you’re a neutral public forum,” claimed Sen. Ted Cruz, R-Texas, last year, referring to the 1996 act.

There is a line in a nonbinding preamble to Section 230 — what are known as the congressional “findings” — that describes the internet as a “forum for a true diversity of political discourse.”

But that preamble doesn’t have the force of law, and the actual text of Section 230 doesn’t require neutrality.

“What matters in the statute is not whether you’re politically neutral. It’s whether you’re a content creator,” explained former Rep. Chris Cox, the California Republican who helped write the 1996 law. He said neutrality wasn’t a consideration.

Cox told NBC News he wanted to encourage content moderation, while acknowledging that it would be impossible for tech companies to review everything users put online the way traditional publishers — who are subject to defamation laws, for example — do.

“Defamation laws have a purpose. We weren’t trying to get rid of them. We were just trying to make sure we didn’t get rid of the internet at the same time,” he said.

Cox’s co-author in the House, now Sen. Ron Wyden, D-Ore., has said the same.

“I hear constantly about how the law is about neutrality. Nowhere, nowhere, nowhere does the law say anything about that,” he told Wired this year.

Bipartisan complaints

Republicans aren’t the only lawmakers targeting Section 230. Democrats have also pushed for companies to increase oversight of content published on their platforms.

Presidential contender Beto O’Rourke included plans for rewriting Section 230 in a gun violence prevention plan released in August. In a proposal posted to his website, he called for revoking legal immunity for large internet platforms that don’t ban hateful activities, and for holding all internet companies liable for speech that incites violence if they knowingly promote it.

“We must connect the dots between internet communities providing a platform for online radicalization and white supremacy,” O’Rourke’s proposal said.

House Speaker Nancy Pelosi, D-Calif., has also voiced support for changes.

“I do think that for the privilege of 230, there has to be a bigger sense of responsibility on it. And it is not out of the question that that could be removed,” she said in April.

In May, a doctored video of Pelosi that made her appear to be slurring her words circulated online. YouTube removed the video from its service, while Facebook and Twitter chose not to.

A world without Section 230?

“If 230 were eliminated, you’d end up with two extremes,” Beckerman, the lobbyist, said.

“You’d end up with the 8chan-type websites or worse, where literally anything goes and the platform has no intent or interest to moderate the content. All the nudity. All the hate. All the extremism,” he continued. “Or you would end up with the opposite of the extreme where every single piece of content that goes up on a website has to be pre-screened before it goes up.”

Jeff Kosseff, an assistant professor of cybersecurity law at the U.S. Naval Academy and author of the book “The Twenty-Six Words That Created the Internet,” a history of Section 230, suggested a third possibility: Websites could disallow user-generated content entirely, becoming more like traditional print publishers with highly curated, professional content and upending the business models for companies like Facebook, Twitter and YouTube.

“You have a lot of people with very different criticisms of the current system who say, ‘Let’s get rid of Section 230,’ but they’re not answering the next question of, ‘What would the internet look like without Section 230?’” Kosseff said.

Still, Kosseff said, the size and power of today’s internet companies like Google and Facebook was hard to fathom in the mid-1990s when Section 230 was written and around 40 million people worldwide were online.

“Now you have single social media platforms that have billions of users, and they have tremendous power,” he said. The result is they can’t please everyone, and they generally don’t try to. “People are always going to disagree with moderation decisions when there’s that much power,” he said.

Each of the big tech companies has its own rulebook for user posts, often written by lawyers and approved by company management. Those policies have evolved in recent years, with many adding provisions that ban hate speech and certain types of abuse.

Facebook’s community standards prohibit hate speech and most adult nudity, and the site generally requires people to use their real names. Twitter’s rules ban “hateful imagery” in profile images and prohibit the promotion of violence generally. YouTube has its own extensive guidelines, and each of the companies maintains large offices of people paid to judge pieces of content that are reported for violations. They’re usually contract workers, often outside the United States, and sometimes working in harsh conditions. Each removal is, literally, an act of private-sector censorship.

Cruz’s office said in a statement that if tech companies are “going to choose to be partisan political actors, then there is little reason why they should get a special immunity from liability that other platforms, such as The New York Times, don’t enjoy.” Hawley’s office declined to comment.

Cox said that putting this power in the hands of the government would overwhelm federal resources.

Given the volume of stuff that Google handles on a daily basis, he said he doubted that “the FTC is going to look at that and say, ‘You missed this one.’”

“I don’t think so,” he added.

Still, other countries generally don’t have laws like Section 230 and have managed to keep a somewhat robust internet going.

One difference elsewhere is that services such as Facebook receive and sometimes comply with demands from governments and private individuals to take down material, or make certain posts not visible locally. During the second half of 2018, Facebook reported that it restricted access to 224 items in France, mostly related to Holocaust denial or defamation, and 128 items in the United Kingdom, mostly for alleged defamation. Facebook took down zero posts in the U.S. during the same period because of government requests or individuals’ defamation claims. The numbers do not include takedowns by Facebook to enforce the site’s own rules.
