
For some years now, Americans have been demanding that Internet companies deal with online ugliness—from misogyny, racism, anti-Semitism, and other forms of abuse to disinformation, propaganda, and terrorist content. The public fever is justifiably high. As The New York Times breathlessly editorialized about Facebook and fake news shortly after the election of U.S. President Donald Trump, “Surely its programmers can train the software to spot bogus stories and outwit the people producing this garbage.” And yet, while Congress hauls company lawyers up to Capitol Hill hearings and Facebook, Twitter, YouTube, and other technology companies struggle to address public concerns, U.S. legislation to restrict online content seems as unlikely today—for constitutional and political reasons—as it did before November 2016 (bracketing such things as child exploitation and imminent threats of violence, subjects strictly regulated offline and on).

Meanwhile, as Americans collectively fret over an Internet gone bad, Europe regulates, unconstrained by the legislative paralysis or solicitousness toward corporate America present in Washington. At every level—executive, legislative, and judicial, union and state—Europeans are moving to impose restrictions on the expression that Internet companies can permit on their platforms. Although these moves reflect legitimate concerns about the abuse of online space, many risk interfering with fundamental rights to freedom of expression. What’s more, the possibility of this trend spreading beyond Europe is high.

A WAVE OF CONTENT RESTRICTIONS

European regulation of online speech has roots in a continental willingness to protect vulnerable groups against “speech harms.” (Think, for instance, of restrictions on Holocaust denial.) But more recent actions show European courts and legislators pushing companies to act as speech regulators themselves. Consider, for example, the European Court of Justice’s 2014 “right to be forgotten” decision. In a case involving a Spanish citizen’s claim against Google Spain, the court held that search engines must, upon request, ensure that irrelevant information about a person—that is, information “no longer necessary in the light of the purposes for which it was collected”—does not appear in name-based search results. The court acknowledged that rules for public figures may vary, but it found that the individual’s interest in delinking would “override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information.”

The Google Spain decision, as it is often called, gave a clue to the direction that subsequent regulation of online platforms would take. It prioritized personal reputation over access to information, but, more broadly, it placed the burden on search engines themselves to implement the new rules. It took legal development out of the courts and into the monitoring cubicles of the companies. The Google form that Europeans must complete to seek delinking informs claimants that “we will balance the privacy rights of the individual with the public’s interest to know and the right to distribute information.” The cost of such private adjudication, at the enormous scale required for search to be effective, in all likelihood prices out start-ups and innovators in a field that Google already dominates. And the decision could soon have an even broader reach: the court will decide next year whether delinking orders should have global effect, beyond the country-specific search domains.

Meanwhile, terrorism and crimes against minorities and refugees have led the European Commission to take a number of further steps to force companies to regulate digital space. In 2016, the commission pressured Facebook, Microsoft, Twitter, and YouTube to agree to a code of conduct that pushes them to review “illegal hate speech” within 24 hours of notice and promptly remove it. It goes even further, with the companies agreeing to continue their work as mild propaganda machines “identifying and promoting independent counter-narratives.” The code parallels developments in the European Court of Human Rights, which has been toying with imposing monitoring requirements and liability on platforms for failure to remove certain kinds of hateful content.

In September of this year, the commission doubled down on these principles, adopting a formal communication that urges “online platforms to step up the fight against illegal content.” As with the right to be forgotten, the communication puts the companies themselves in the position of identifying, especially through the use of algorithmic automation, illegal content posted to their platforms. But, as Daphne Keller of Stanford’s Center for Internet and Society has argued, the idea that automation can solve illegal content problems without sweeping in vast amounts of legal content is fantasy. Machines typically fail to account for satire, critique, and other kinds of context that turn superficial claims of illegality into fully legitimate content. Automation thus involves disproportionate takedowns of legal content, all to target a smaller amount of illegal material online. As a matter of law, as attorney and legal analyst Graham Smith has noted, the commission’s process reverses the normal presumption of legality in favor of illegality, with safeguards so weak that companies will likely err on the side of taking down content.

The communication expressly avoids the problem of disinformation and propaganda. But regulation of such content may also be on the horizon, as the commission has announced the creation of a High-Level Group to address it. Even the staunchest promoters of freedom of expression in European politics recognize that disinformation is a major problem. Marietje Schaake, a Dutch member of the European Parliament and a leading proponent of respect for human rights in Europe, captured a widespread view on the continent when she said in parliamentary debate that she is “not reassured when Silicon Valley or Mark Zuckerberg are the de facto designers of our realities or of our truths.”

Content restrictions extend beyond Brussels to the national level. Germany enacted a law this year that places strict obligations on major Internet companies to remove “manifestly illegal content” within 24 hours of notice, with heavy fines that incentivize quickly taking down posts rather than performing careful evaluations. The United Kingdom adopted a Digital Economy Act this year with the goal of protecting minors from “harmful content,” but it will likely encourage the removal of lawful adult content as companies seek to avoid sanctions. Spain took drastic measures to crack down on Catalan separatists online. French legislators sought to criminalize the browsing of content “glorifying terrorism,” a measure the Constitutional Council struck down. Poland has strengthened national security controls over activity on the Internet. In each of these cases, governments are putting pressure on companies to remove illegal content, a predictable response to online harms. However, the pressure works in only one direction: leaving up illegal content will lead to penalties, whereas taking down legal content will not. Unless governments also constrain the takedown of legitimate content, companies will almost certainly overregulate.

Beyond hate speech, abuse, and disinformation, one draft article in a European Commission-proposed copyright directive poses a significant potential threat to creative expression. In most online copyright law, including in the United States under the Digital Millennium Copyright Act, companies have until now typically processed claims of infringement on the basis of “notice and takedown” obligations. That is, the platforms are not expected to take down such content unless they are notified of its existence. This principle is restated in the commission’s communication and the code of conduct, even as their tight time frames chip away at its protections. Article 13 of the proposed directive, however, would reverse the accepted practice with a requirement that companies “prevent the availability” of copyright-protected content, encouraging the use of “effective content recognition technologies.” Here again is the mania for automation. Although this provision would apply only to copyright claims, its adoption could set a precedent for significant regulation of other kinds of content. It could impose the kind of monitoring of uploads, with the accompanying threat of overregulation, that notice-and-takedown procedures were designed to avoid, and it would apply across a range of creative endeavors.

THE FUTURE OF FREE EXPRESSION ONLINE

These rules should concern anyone who cares about freedom of expression, as they involve limitations on European uses of online platforms. European policymakers have good-faith reasons to advocate them, such as countering rampant abuse at a time of human dislocation, political instability, and the rise of far-right parties. Yet the tools used often risk overregulation, incentivizing private censorship that could undermine public debate and creative pursuits. Companies may be forced into the position of facilitating practices that undermine their customers’ access to information. Europeans should be concerned, as many are.

Why should anyone else care? In the analog era, after all, a fair response in the United States to speech regulation across the pond (or anywhere else) might have been: that’s the way they do it in Europe. They have different experiences, giving some support (if very limited) to rules that U.S. courts would never permit—such as those against Holocaust denial or the glorification of terrorism.

But online space is different. All of the major companies operate at scale, and there is significant risk that troubling content regulations in Europe will seep into global corporate practices, with an impact on the uses of social media and search worldwide. The possibility of global delinking of search results may be the most obvious form of content threat, but all of the rules and proposals noted above could gradually erode freedom of expression. For instance, once a company invests the considerable funding required to develop sophisticated content filters for European markets, the barriers to applying them in American contexts are likely to come down.

To be clear, global attacks on online freedom of expression are severe. Illiberal governments around the world are imposing liability on individuals for posts and tweets and blogs that merely criticize public authorities or allegedly spread false information. That kind of regulation, a popular tool of repressive states, creates direct forms of censorship and individual harm. By contrast, European states have traditionally presented a protective environment for freedom of expression, with some—Scandinavian countries setting a model example—providing the strongest protection worldwide.

These are not easy policy questions. Companies themselves should be developing approaches—and many now are—to counter abuse that often successfully aims to push people, especially women and minorities, off the platforms. Those approaches should be rooted in the common standards of human rights law. Companies should provide easy access to flagging tools so that users can report harassment quickly, be transparent about their takedown processes, and be responsive when they get it wrong.

Governments should encourage these kinds of responsible steps by companies, just as many in civil society are doing, while avoiding the stiff penalties and the outsourcing of speech regulation that have been recent hallmarks of European responses to Internet harms. When governments demand takedowns, their courts should remain open for affected parties to appeal.

The proposals above, however, risk shrinking the space for expression in the most important forums available in history. They will be hard to contain in practice, in principle, or in geographic reach. To the extent that they involve outsourcing adjudication to private actors, they limit the possibility of democratic accountability. They should be reconsidered, limited, and enforced through the traditional tools of the rule of law.

DAVID KAYE is a Professor at UC Irvine School of Law and the United Nations Special Rapporteur on freedom of opinion and expression.