Europe is trying to make the internet more fair. How that may backfire.

The motives behind Europe’s effort to reform its copyright law appear unambiguously altruistic.

The reformers argue that internet giants like Google or Facebook are taking money that should be going to smaller publishers like local media or artists. “Due to outdated copyright rules, online platforms and news aggregators are reaping all the rewards while artists, news publishers, and journalists see their work circulate freely, at best receiving very little remuneration for it,” the European Parliament’s legislative committee explained earlier this month.

No one involved in the debate objects to these goals. But critics question their cost should the European Copyright Directive move forward in its current form. They warn that the attempt to make the major platforms directly liable for copyright infringement by their users could cause them to limit their users’ ability to post their own content – and fundamentally change the way the internet works.


“The Copyright Directive will make the internet a place where anything anyone posts must first pass through algorithmic gatekeepers,” says Jim Killock, executive director of British digital rights campaigners Open Rights Group. “That’s a tectonic shift away from the principles underpinning a free and open internet. Everything we know about automated filters shows they are incapable of grasping context, which is vital to legal use of copyrighted material like parodies, commentary, or remixes. This threatens the very survival of online communities, and free expression will be worse off for it.”

FILLING THE ‘VALUE GAP’

The stated goals of the directive are to protect press publications, reduce the “value gap” between the profits made by internet platforms and content creators, and create copyright exceptions for text and data mining. It has been the subject of intense lobbying by a broad spectrum of actors, ranging from tech giants like YouTube and Facebook to digital rights defenders such as the Electronic Frontier Foundation and commercial vendors like Audible Magic, an automated content-recognition developer.

The main controversy stems from the directive’s Article 13, which will govern internet platforms that organize and promote large amounts of copyright-protected works uploaded by their users in order to make a profit. The article would make platforms like Facebook, YouTube, Instagram, and Tumblr directly liable for any copyright infringements committed by their users – effectively forcing them to police their own users for copyright violations.

“If even a single user commits a copyright infringement, it will be viewed as if the platform had done so itself,” notes Germany’s Pirate Party member of the European Parliament Julia Reda, a leading critic of the directive. “This will force platforms to take drastic measures, since they can never say for certain which of our posts or uploads will expose them to costly liability.”

Complicating the issue is the directive’s lack of “safe harbors” for platforms. Under existing European rules (and similar US law), internet platforms are not held liable for their users’ copyright violations unless they are aware of them; only once a platform has been notified of a violation and given an opportunity to correct it does it face legal consequences. The proposed directive offers no such “safe harbor.” Instead, platforms would be liable the moment a violation occurs – that is, at the time of upload by the user.

That changes the problem dramatically for internet giants, turning the catching of copyright violations from a reactive into a proactive process. Given the staggering volume of content uploaded every minute, platforms would have to rely on automated systems to sift through the material for possible violations.

THE PROBLEM WITH FILTERS

The platforms’ current preferred tools for identifying copyright infringement are upload filters. But they aren’t cheap. As of 2016, YouTube’s Content ID system had cost the company a whopping $60 million. The company has reportedly used the system to pay rights holders more than 2.6 billion euros ($3 billion) for third-party use of their content. But even armed with this technology, YouTube CEO Susan Wojcicki has warned that the potential liabilities under Article 13 are so large that no company would be willing to take on the risk.

“Bottom line, you need to have information and the money and resources to develop this kind of technology,” says Ms. Reda. “It is only very large companies that can do that, or the entertainment industry itself.”

Such filtering systems, says Mr. Killock, lack the nuance needed to distinguish a film commentary, which is generally legal, from an illegally posted pirated video. While this won’t spell the end of all commentary and memes, “false positives snared by algorithms will block a lot of legal content and generally hamstring free expression.”

What’s more, the large platforms would have every incentive to err on the side of blocking. “They may well need to restrict who is allowed to post/upload content in the first place, demand personal identification from uploaders, and/or block most uploads using overly strict filters to be on the safe side,” Reda says.

In practical terms, this means a big star’s music video would sail through YouTube’s hypothetical directive-mandated upload filter. But an amateur music critic who posts a video sampling part of the star’s original – in order to make points about the song – could trip the filter and be blocked, even though such sampling is legal.

Platforms like Wikimedia – which makes content available under free license for anyone to use, copy, or remix – have also warned that mandatory automatic content detection will take a toll on collaboration and freedom of expression.

Geneva-based Konstantinos Komaitis, senior director of policy development and strategy at the Internet Society, says regulation must be informed, focused, and proportionate. The problems start, he argues, when laws are designed with a narrow focus on the Big Tech companies – who, ironically, will come out the winners under the new copyright regime.

“One of the things that we need to understand in this internet climate and in this internet economy,” he says, “is that when you create rules with specific businesses or business models in mind, like Google and Facebook, they will be able eventually to accommodate these new regulatory obligations in ways that new entrants cannot.”

BREAKING THE INTERNET?

Another area of concern is how the controversial articles of the directive contribute to the fragmentation of the internet. Europe’s General Data Protection Regulation (GDPR), which came into force in May last year, offered a lesson on the unintended extraterritorial consequences of internet laws. Several American newspapers went dark in Europe over the GDPR. European citizens still cannot read the Chicago Tribune from their side of the Atlantic.

“There is this danger that the global reach of the internet might actually start getting less global, and we are moving towards fragmentation,” Komaitis says. “The more we try to make the internet with one nation’s regulatory thinking … the more we risk sabotaging the diversity that is critical for its resilient and global nature.”

Carlo Scollo Lavizzri, a Basel-based copyright lawyer with 17 years’ experience, believes some of the fears around the copyright bill – that it will “break the internet,” or that filtering technologies will pave the way to censorship and surveillance – are overplayed. He argues that it makes sense to hold online platforms accountable and make them more responsible for the content they use, especially when their entire business model rests on offering attractive content to the public.

“The idea that the online world is a zone where normal rules of the road don’t apply may have to change,” he says. “People [may] become more aware of the centrality of the online platforms and also ask to be rewarded where they generate traffic and data that favors tech giants.”

Article 11 of the European Copyright Directive, also known as the “link tax,” has stirred an equally fierce debate. It would require news aggregators to pay for showing snippets of information when linking to news stories. It is meant to “ensure that some money goes from multi-billion news aggregators to the journalist who has done all the hard work writing up an article” and would affect content aggregators like Facebook, Twitter, and Pinterest.

But a similar law in Spain prompted Google to shut down Google News in the country, and a comparable effort in Germany cost publishers money. If the aggregators responded the same way to Article 11, large newspaper publishers would likely be fine, since they can count on readers finding them directly, while smaller and more specialized outlets would be harder hit – a blow to the diversity of the media landscape.

Reda adds that one of the unintended consequences of Article 11 will be a boost for fake news and propaganda outlets. “No fake news website in the world will charge for being linked to it,” she says. “Their entire reason of existence is to be seen by as many people as possible.”

A LONG WAY TO GO

The European Parliament adopted its version of the copyright reform in September 2018, opening the final phase of the legislative process. Since then, the European Commission, the Parliament, and the Council of the European Union have been hammering out the directive’s final text in intense closed-door negotiations.

But the final vote, expected this past Monday, was canceled at the last minute after six countries objected to the proposed text, sending the directive back to the drawing board. Compromise language on Articles 11 and 13 remains elusive, as does agreement on another sticking point: potential exceptions for small and medium-sized enterprises. The directive is now considered unlikely to pass before May’s European elections.

John Weitzmann, the head of policy at Wikimedia Deutschland, notes that since different countries are likely to interpret the text of the directive differently, “it will take several years and multiple strategic litigations to understand the full scope of its consequences.”

Read this story at csmonitor.com

source: yahoo.com