Ever since the European Commission presented its hugely controversial proposal to force internet platforms to employ censorship machines, the copyright world has been eagerly awaiting the position of the European Parliament. Today, the person tasked with steering the copyright reform through Parliament, rapporteur Axel Voss, has finally issued the text he wants the Parliament to go forward with.

It’s a green light for censorship machines: Mr. Voss has kept the proposal originally penned by his German party colleague, former Digital Commissioner Günther Oettinger, almost completely intact.

In doing so, he is dismissing calls from across the political spectrum to stop the censorship machines. He is ignoring one and a half years of intense academic and political debate pointing out the proposal’s many glaring flaws. He is discarding the work of several committees of the Parliament which came out against upload filters, and of his predecessor and party colleague MEP Comodini, who had correctly identified the problems almost a year ago. He is brushing off the concerns about the proposal’s legality several national governments have voiced in Council. And he is going against the recently published coalition agreement of the new German government – which is going to include Voss’ own Christian Democratic Party – where filtering obligations are rejected as disproportionate.

Photo © European Union (used with permission)

[Read Axel Voss’ compromise proposal PDF]

This is a “compromise” in name only. Mr. Voss’ proposal contains all the problematic elements of the original censorship machines idea, and adds several new ones. Here’s the proposal in detail:

1. Obligatory impossible-to-get licenses

The proposal says: All apps and websites where users can upload and publish media are required to get copyright licenses for all content. These platforms are considered to “communicate to the public” all those user uploads, which means that the platforms would be directly responsible for copyright infringements committed by their users, as if it were the platform’s employees themselves uploading these works.

This is a bizarre addition to the Commission proposal, which would be impossible to implement in practice: Who exactly are the platforms supposed to get those license agreements from? While there may be collecting societies representing professional authors in a few areas such as music or film, which may be able to issue a license covering the works of many individual authors, other sectors do not have collecting societies at all.

Imagine a platform dedicated to hosting software, such as GitHub. There is no collecting society for software developers and nobody has so far seen the need to found one. So where will GitHub, which undoubtedly hosts and gives access to (copyright-protected) software uploaded by users, get their copyright license from? They can’t enter into license negotiations with every single software developer out there, just because somebody might someday upload their software to GitHub without permission. And without that impossible-to-get license, this law says they will be directly liable as soon as somebody does upload copyrighted works. That’s a sure-fire way to kill the platform economy in Europe.

And these impossible-to-get licenses shield only non-commercial use: if the platform acquires a license as prescribed, then non-commercial uploaders won’t be liable. Uploaders acting for commercial purposes, however, such as companies with social media accounts, can still be sued by rightsholders.

2. The censorship machine is here to stay

The proposal says: All platforms hosting and providing public access to “significant amounts” of user-uploaded content have to prevent copyrighted content that rightsholders have identified from being uploaded in the first place.

There are only two ways to do this: (a) hire an army of trained monkeys to look at every individual user upload and compare it manually to the rightsholder information or (b) install upload filters. The article that creates this obligation no longer mentions content recognition technologies explicitly, but they are still mentioned in other parts of the text, making it clear that filters are what Voss has in mind.
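To make concrete what an upload filter actually does, here is a deliberately crude, purely illustrative sketch: rightsholders supply reference works, and every single user upload is checked against them before publication. The class and function names are invented for this example; real content-recognition systems use far more sophisticated perceptual fingerprinting, but the principle of checking every upload against a blocklist is the same.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Reduce a file to a compact identifier (here simply a SHA-256 hash)."""
    return hashlib.sha256(data).hexdigest()

class UploadFilter:
    """Toy model of a content-recognition filter (hypothetical, for illustration)."""

    def __init__(self):
        # Fingerprints of works that rightsholders have asked to block.
        self.blocked = set()

    def register_work(self, data: bytes):
        self.blocked.add(fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        # Every single upload is inspected before publication -- this
        # blanket checking is why critics call the obligation
        # "general monitoring".
        return fingerprint(data) not in self.blocked

f = UploadFilter()
f.register_work(b"copyrighted song")
print(f.allow_upload(b"copyrighted song"))  # False: upload is blocked
print(f.allow_upload(b"my holiday video"))  # True: upload goes through
```

Note that exact-hash matching like this is trivially evaded by changing a single byte, which is why real systems use fuzzy perceptual fingerprints. But no matter how good the matching gets, the filter only sees whether content matches; it cannot judge context, such as whether a use is a legal quotation or parody.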

There is no definition of what “significant amounts” are supposed to be. The Commission was widely criticised for requiring censorship machines on platforms with “large amounts” of content, following the misguided idea that only large companies, with significant resources to dedicate to developing upload filters, host large amounts of content. This completely ignores the wide diversity of popular specialised platforms out there: community-run platforms like Wikipedia, niche platforms like MuseScore (for sheet music) and many startups host millions of uploads, but would struggle to implement or license expensive filtering technology.

Why Voss believes replacing the word “large” with the potentially even broader “significant” is supposed to improve anything remains completely unclear.

3. A tiny problem with fundamental rights

The proposal says: The filtering measures must not entail any processing of personal data, in order to protect users’ privacy.

The only indication that Mr. Voss has paid attention to any of the public criticism at all is that he acknowledges there may be a tiny problem with fundamental rights. Indeed, the European Court of Justice has in the past ruled that an obligation to filter all user uploads violates the fundamental rights to privacy, freedom of expression, freedom of information and freedom to conduct a business. Voss picks one of those fundamental rights seemingly at random and adds a provision aimed at protecting it. Admirable as this may be, it is also in direct contradiction to what comes next:

Because filters will invariably delete content that is legal, for example under a copyright exception, users are supposed to have access to a redress mechanism to complain about overblocking. But how exactly is the platform supposed to offer the user that redress if it is not allowed to process any personal data? Simply recording which user’s uploads have fallen victim to the filter already requires processing of personal data. How can a user complain about a wrongful takedown if the platform is not allowed to keep records of what the filter deleted in the first place?
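The contradiction can be made concrete. Consider the minimal record a platform would need to keep so that a user can later complain about a wrongful takedown; the field names below are invented for illustration. Even this bare-bones record ties an identifiable account to an action, i.e. it is personal data, which the proposal simultaneously forbids the filtering measures from processing.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TakedownRecord:
    """Hypothetical minimal log entry needed for any redress mechanism."""
    user_id: str       # which account uploaded the file -- personal data
    filename: str      # what was blocked
    matched_work: str  # which rightsholder claim triggered the block
    timestamp: datetime

# Without storing at least this much, the platform cannot answer a
# user's complaint -- but storing it is exactly the processing of
# personal data that the proposal rules out.
record = TakedownRecord(
    user_id="alice",
    filename="remix.mp4",
    matched_work="Song X (claimed by Label Y)",
    timestamp=datetime(2018, 2, 26),
)
print(record.user_id)  # prints "alice"
```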

It gets better: Guess who should decide about what happens with the users’ complaints about wrongful takedowns? The rightsholders who asked for the content to be blocked in the first place. Surely they will turn out to be an impartial arbiter.

At least, users are supposed to be able to go to a court if the redress mechanism fails. However, this may end up being ineffective, because copyright exceptions do not constitute legal rights against the rightsholders, so a court may decide not to require a platform to reinstate previously deleted uploads, even if they were legal under a copyright exception.

What users need is a clear legal rule that the copyright exceptions constitute users’ rights – just like the previous copyright rapporteur Therese Comodini had suggested.

4. Very specific general monitoring

The proposal says: Checking all user uploads for whether they are identical to a particular rightsholder’s copyrighted work does not constitute forbidden “general” monitoring, but is “specific”.

EU law forbids any laws that force hosting providers to do “general monitoring”, such as checking every single file uploaded by every user all of the time. Voss simply postulates that upload filters would not break that rule and writes that only “abstract monitoring” should be forbidden, which presumably means randomly looking at uploaded files without looking for anything in particular.

This argument has already been dismissed by the European Court of Justice: The European Commission tried making it in defense of upload filters in the past – and lost (Paragraph 58 of this French-language Commission contribution to the European Court of Justice case Scarlet vs. SABAM).

5. Few exceptions

The proposal says: The filtering obligation should not apply to Internet access services, online marketplaces such as eBay, research repositories where rightsholders mainly upload their own works such as arXiv, or cloud service providers where the uploads cannot be accessed publicly, such as Dropbox.

In a last-ditch attempt to redeem himself, Voss provides a welcome clarification that the obligation to filter does not extend to certain businesses. But this exception, not legally binding since it is in a recital rather than an article, does not apply to the obligation to license.

The listed platforms would still have to get licenses from rightsholders provided that the user uploads are publicly accessible, because they would still be considered to be communicating to the public. But how are these platforms supposed to shield themselves from lawsuits by rightsholders if they can’t get a license for all possible content that may be uploaded? They will have to resort to a filter anyway.

6. Critical parts remain unchanged

Large parts of the most widely criticised elements of the Commission proposal were left completely unchanged by rapporteur Voss, such as the infamous Recital 38 (2), where the Commission misrepresents the limited liability regime of the e-commerce directive, essentially stating that any platform that so much as uses an algorithm to sort the uploaded works alphabetically or provides a search function should be considered as “active” and therefore liable for its users’ actions. The only change that Mr. Voss has made to this section is cosmetic in nature.

* * *

It’s not too late to stop the Censorship Machines!

Fortunately, Axel Voss does not get to decide the Parliament position on his own. He will need to secure a majority in the Legal Affairs (JURI) committee, which will vote in late March or April. Two other committees have already come out strongly against filtering obligations, and several JURI members have tabled amendments to delete or significantly improve the Article.

Now it’s time to call upon your MEPs to reject Mr. Voss’ proposal! You can use tools such as SaveTheMeme.net by Digital Rights NGO Bits of Freedom or ChangeCopyright.org by Mozilla to call the Members of the Legal Affairs Committee free of charge. Or look for MEPs from your country and send them an email.

But most importantly, spread the word! Ask your local media to report on this law. The Internet as we know it is at stake.

To the extent possible under law, the creator has waived all copyright and related or neighboring rights to this work.

3 comments

  1.

    “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

    20 years after the declaration of independence of cyberspace, we need more of those laws to realize the future: a technically uncensorable cyberspace. Without crazy laws like this one, there is no incentive to develop those technologies.

  2.

    Hello Julia, I have translated your blog post and published it on my new website. Best regards,
    Knut
    VoSi LaVo Sachsen

  3.

    I think it’s possible to make this particular argument in their own language and to win this without appealing to an alternative philosophy.

    There is a concrete technical difference between software source code and other electronic media which explains why sites like GitHub have never had a problem with widespread copyright infringement in the way YouTube has. Producers of video, audio and literature are faced with the problem that they must distribute a complete work in a linear format that is digestible by a person. Such a format is never very hard to digitally copy.

    Source code suffers no such problem. It generates software useful to billions of people without any of them needing access to it.

    This means that when source code is freely available, this was nearly always the desire of the author. Sure, there are different kinds of licenses, but usually these licenses are used to *encourage* further copying rather than limit it. In the case where unlicensed code is stolen and published, automated, pre-emptive filters are virtually powerless, and we already have the DMCA for companies to issue takedown notices.

    In summary, a digital asset like source code is different because it does not need to be copied to non-copyright-holding customers for it to serve its purpose. The other types of media covered by this law (including software executables) do.

    This makes it a categorically different entity. Something which is categorically different needs categorically different laws written for it.