This seems to be the year for awful internet regulation by the EU. At least there were some redeeming qualities in the GDPR, but they were few and far between; much of the GDPR is terrible, creating real problems for free speech online while simultaneously undermining privacy and handing repressive governments a new tool to go after critics. Oh, and in the process, it has only made Google that much more dominant in Europe, harming competition.
And, then, of course, there's the still ongoing debate about the EU Copyright Directive, which will also be hellish on free speech. The entire point of Article 13 in that Directive is to wipe away the intermediary liability protections that enable websites to host your content. Without such protections, it is not difficult to see how it will lead to a widespread stifling of ideas, not to mention many smaller platforms for hosting content exiting the market entirely.
But here's the thing: both of those EU regulations are absolutely nothing compared to the upcoming EU Terrorist Regulation. We mentioned this a bit back in August, with the EU Commission pushing for a rule that all terrorist content must be taken down within an hour, on pain of massive fines and possible criminal liability. Earlier this year, Joan Barata at Stanford wrote a compelling paper detailing just how far parts of the proposed regulation would go.
Among the many questionable bits of the Terrorist Regulation is that it will apply no matter how small a platform is, and even if it's not in the EU, so long as the EU claims it has a "significant number" of EU users. Also, if a platform isn't based in the EU, part of the proposal would require it to hire a "representative" in the EU to respond to these takedown demands. If the government orders a platform to take down "terrorist" content, the platform has to take it down within an hour and then set up "proactive measures" to stop the same content from ever being uploaded again (i.e., mandatory filters).
Oh, and of course, this mechanism for rapid and permanent censorship based solely on the government's say-so has... a ridiculously vague "definition" of what counts as "terrorist content."
'terrorist content' means one or more of the following information:
(a) inciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts be committed;
(b) encouraging the contribution to terrorist offences;
(c) promoting the activities of a terrorist group, in particular by encouraging the participation in or support to a terrorist group within the meaning of Article 2(3) of Directive (EU) 2017/541;
(d) instructing on methods or techniques for the purpose of committing terrorist offences.
There are all sorts of problems with this, and as the IP-Watch site notes, this appears to be a recipe for private censorship on the internet.
Recently, a large group of public interest groups sent a letter to EU regulators laying out in great detail all of the problems of the regulation. I'm going to quote a huge chunk of the letter, because it's so thorough:
Several aspects of the proposed Regulation would significantly endanger freedom of expression and information in Europe:
Vague and broad definitions: The Regulation uses vague and broad definitions to describe ‘terrorist content’ which are not in line with the Directive on Combating Terrorism. This increases the risk of arbitrary removal of online content shared or published by human rights defenders, civil society organisations, journalists or individuals based on, among others, their perceived political affiliation, activism, religious practice or national origin. In addition, judges and prosecutors in Member States will be left to define the substance and boundaries of the scope of the Regulation. This would lead to uncertainty for users, hosting service providers, and law enforcement, and the Regulation would fail to meet its objectives.
‘Proactive measures’: The Regulation imposes ‘duties of care’ and a requirement to take ‘proactive measures’ on hosting service providers to prevent the re-upload of content. These requirements for ‘proactive measures’ can only be met using automated means, which have the potential to threaten the right to free expression as they would lack safeguards to prevent abuse or provide redress where content is removed in error. The Regulation lacks the proper transparency, accountability and redress mechanisms to mitigate this threat. The obligation applies to all hosting services providers, regardless of their size, reach, purpose, or revenue models, and does not allow flexibility for collaborative platforms.
Instant removals: The Regulation empowers undefined ‘competent authorities’ to order the removal of particular pieces of content within one hour, with no authorisation or oversight by courts. Removal requests must be honoured within this short time period regardless of any legitimate objections platforms or their users may have to removal of the content specified, and the damage to free expression and access to information may already be irreversible by the time any future appeal process is complete.
Terms of service over rule of law: The Regulation allows these same competent authorities to notify hosting service providers of potential terrorist content that companies must check against their terms of service and hence not against the law. This will likely lead to the removal of legal content as company terms of service often restrict expression that may be distasteful or unpopular, but not unlawful. It will also undermine law enforcement agencies for whom terrorist posts can be useful sources in investigations.
The European Commission has not presented sufficient evidence to support the necessity of the proposed measures. The Impact Assessment accompanying the European Commission’s proposal states that only 6% of respondents to a recent public consultation have encountered terrorist content online. In Austria, which publishes data on unlawful content reports to its national hotline, approximately 75% of content reported as unlawful were in fact legal. It is thus likely that the actual number of respondents who have encountered terrorist content is much lower than the reported 6%. In fact, 75% of the respondents to the public consultation considered the internet to be safe.
And that's not all. The UN's Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (yup, that's the title), David Kaye, has also sent a letter warning of the problems of such a regulation on free speech. It's 14 pages long, but the key point:
...we wish to express our views regarding the overly broad definition of terrorist content in the Proposal that may encompass legitimate expression protected under international human rights law. We note with serious concern what we believe to be insufficient consideration given to human rights protections in the context of the proposed rules governing content moderation policies. We recall in this respect that the mechanisms set up in Articles 4-6 may lead to infringements to the right to access to information, freedom of opinion, expression, and association, and impact interlinked political and public interest processes. We are further troubled by the lack of attention to human rights responsibilities incumbent on business enterprises in line with the United Nations Guiding Principles on Business and Human Rights.
In other words, it's yet another European regulation targeting internet companies (many of whom are not based in Europe) that will ultimately lead to (1) greater censorship, (2) more consolidation by internet giants, as smaller platforms won't be able to compete, and (3) massive "unintended" consequences for the internet as a whole.
Maybe it's time we just kick the EU off the internet. Let them build their own.