How do we tackle online hate speech?


Last summer marked the start of a new era in internet policing. On 31 May, the European Commission announced a new agreement with social media giants to tackle hate speech online. It unveiled a new “code of conduct”, under which Facebook, Twitter, Microsoft and YouTube all pledged to curb the spread of racist and xenophobic language on their European platforms.

The new policies seek to update a previously inconsistent body of laws surrounding the issue. One important feature of the code is the companies’ commitment “to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary”.

To some, this might seem like common sense.


Social media serve as breeding grounds for fundamentalist ideology and the spread of racist messages. In a time when violent xenophobic movements are reemerging, and international terrorism is a horrific reality, it is arguable that such networks should be increasingly monitored. (If you want a sense of some of the nasty language being used, search for “migrant crisis” on YouTube.)

Věra Jourová, the EU commissioner in charge of drawing up the code, spoke of an “urgent need to address illegal online hate speech” following recent terror attacks in Europe. She described social media as an important means by which terrorist networks could radicalise young people.


But any talk of content being removed ought to set alarm bells ringing. When dealing with hate speech, we encounter the thorny issues of free speech versus censorship, of safety versus civil liberties. If we allow Facebook and Twitter greater licence to remove posts they deem unacceptable, where do we draw the line?

Campaigners for an open internet have expressed dismay at the announcement. European Digital Rights (EDRi) and Access Now, two prominent lobbying groups for online rights, claim the code downgrades existing European laws to “a second class status”, leaving the decisions about what constitutes hate speech in the hands of social media sites.

Spokespeople for the websites were quick to emphasise the need for caution when it comes to removing content. Twitter’s European Public Policy chief, Karen White, told reporters: “Hateful conduct has no place on Twitter… However, there is a clear distinction between freedom of expression and conduct that incites violence and hate”. Similarly, Monica Bickert, Head of Global Policy Management at Facebook, said her company sought “to balance giving people the power to express themselves whilst ensuring we provide a respectful environment”.

But for many, these assurances will not be enough.

To those worried about the erosion of free speech, the Commission’s new code can be seen as part of a worrying trend. As Germany struggles with an upsurge in popular nationalism linked to the migrant crisis, fear of violence has led to unprecedented government intervention in online affairs.

Following a well-publicised trial in May, Pegida founder Lutz Bachmann was fined €9,600 by a Dresden court for calling migrants “filth” and “scum” on Facebook. His was one of a number of recent convictions under Germany’s Volksverhetzung (“incitement of the people”) laws. According to Sabine Beppler-Spahl, over half of all Volksverhetzung charges now arise from social media posts, although most do not lead to a conviction.

Beppler-Spahl, head of a liberal thinktank, has made the important point that laws like the Volksverhetzung promote silence rather than open discourse: “Putting people like Bachmann on trial is not just an attempt to stop him from saying what he wants. It is a powerful signal to the rest of us to watch what we say”.

A disturbing picture is emerging: that of a government driven by fear to resort to censorship, and to the arrests that accompany it.

In July, Germany saw its first nationwide police raids linked to the practice of hate posting. On the morning of 13 July, police in 14 different states arrested at least 60 individuals connected to a far-right online forum. It was recently revealed that over 100,000 posts containing “hateful comments” were removed from German Facebook in August alone, with Justice Minister Heiko Maas describing this figure as “too little”.


Is attempting to wipe the internet clean of hate speech the best policy for these troubled times?

We should condemn the awful language used by Bachmann and others, but it is important to question whether this method of tackling hate speech is the correct one. A better solution would be to allow even greater freedom of speech, encouraging people to challenge these hateful opinions, rather than simply report them.

Beppler-Spahl argues that the silencing of unpleasant voices promotes a culture of cowardice, which those who preach hate can exploit. The resolve of political and religious extremists is only strengthened by attempts to silence them.

Now more than ever, it seems, we need to encourage people to speak out when they encounter opinions they disapprove of, rather than referring them to an arbitrary monitor. In a lecture at the TED Summit in June, internet freedom activist Rebecca MacKinnon set out clearly how the fight against extremist ideology can be won without the destruction of human rights.

Citing UN Secretary General Ban Ki-moon’s assertion that “preventing extremism and promoting human rights go hand-in-hand”, she explains why an open internet is one of the strongest weapons a democracy possesses in the fight against hate speech. She heaps scorn on the “black box” enforcement policies of social media sites.

Examples of activists who have had their accounts deleted serve to illustrate this point: take the case of Iyad el-Baghdadi, a vocal critic of ISIS on Twitter whose account was deactivated because he shares a surname with one of its leaders; or David Thomson, a terrorism expert and reporter for Radio France whose Facebook page was taken down simply because it contained images of ISIS flags.

These instances of “collateral damage”, MacKinnon argues, demonstrate how activists who should be on the frontline of fighting online hate speech are being impeded by attempts to stamp it out completely.

[ted id=2567]

It is understandable that in times of crisis, governments want to appear responsive. Surveys conducted earlier this year revealed that 77% of German respondents feared an imminent terrorist attack, while 68% were worried by political extremism. No doubt many of these people are not bothered by the government’s creeping involvement in internet policing.

It is quite natural to want to feel safe. But rather than sacrifice our rights in the face of fear, we need to fight even harder to protect them. Not only does censoring hate speech actually galvanise its preachers, it also hinders our greatest tool for dealing with it – lively public discourse.

It is no surprise that the 2015 UNESCO report on countering hate speech online sees “mobilizing civil society” as crucial. This is a menace that needs to be fought from the bottom up, not the top down.


By Tom Lynas

Tom Lynas is an Englishman who has recently fled to Leipzig. He is a history graduate and occasional community radio host. His interests include politics, psychology, cinema, swimming and, of course, history.
