Beyond Social Media Blocking: How to Contain Hate Speech Online


How to deal with hate speech spreading fast online? This has emerged as a big policy and regulatory challenge, as demonstrated most recently during the anti-Muslim violence in the Eastern and Central Provinces of Sri Lanka. In response, the government first declared a state of emergency, and then blocked key social media and instant messaging platforms for the first time. This blocking lasted eight days, and led to many protests by free speech activists and average users alike. Thousands of micro and small businesses using Facebook and Instagram for their operations were impacted.

Being the most popular social media platform (used by over six million Lankans), Facebook was at the centre of attention. It stood accused of not doing enough to weed out hate speech being generated by a small but active number of its users. Facebook representatives held talks with senior government officials before the service was unblocked. Details of their talks were not made public, but a media release from the president’s office said, “The government will continue to work together with Facebook to prevent hate speech and misuse of the platform. Anyone propagating hate speech on Facebook is liable under Sri Lankan law and prompt action will be taken as per Facebook’s community standards. Both sides will continue to engage extensively to discuss these matters.”

Containing hate speech anywhere – online or offline – is much easier said than done. It involves a fine balancing act as overzealous regulation can easily trample freedom of expression (FOE) guaranteed by the Constitution and international human rights treaties that Sri Lanka has signed. Censorship or brute force blocking is not the answer. Instead, we need an informed and nuanced approach to identifying and limiting “hate speech”, while allowing the maximum possible freedoms for criticism, debate and dissent.

Hate speech itself is not new. But many of its modern-day peddlers take advantage of the web’s facilities to remain anonymous or use pseudonyms, and exploit the ease of rapid mass sharing on social media. Conceptual clarity matters here. Hate speech has various definitions, but is generally understood as the advocacy of hatred based on nationality, race, religion, gender or sexual orientation.

More importantly, hate speech is different from offensive speech, and the two should not be conflated. Indeed, the right to FOE extends to unpopular ideas and statements that may “shock, offend or disturb.” As author Salman Rushdie once remarked, “What is freedom of expression? Without the freedom to offend, it ceases to exist.” (He should know.)

Hate speech as it is understood in general usage is often different from what is prohibited as hate speech in legal terms. The latter requires an element of incitement (and not the simple advocacy of hatred). A number of human rights treaties, including the International Covenant on Civil and Political Rights (ICCPR, adopted in 1966), address the issue of hate speech. The ICCPR actually obliges states to prohibit hate speech, but such legal action requires evidence of incitement to ‘discrimination, hostility or violence’ in addition to advocacy of hatred.

A global study by UNESCO – the UN agency that promotes FOE – noted in 2015 that “any limitations [to hate speech] need to be specified in law, rather than arbitrary. They should also meet the criterion of being ‘necessary’ – which requires the limitation to be proportionate and targeted in order to avoid any collateral restriction of legitimate expression which would not be justifiable”. In this study, titled ‘Countering online hate speech’, UNESCO also highlighted: “International standards also require that any limitation of expression also has to conform to a legitimate purpose, and cannot just be an exercise of power. Besides for the objective of upholding the rights of others noted above, these purposes can also be national security, public morality or public health.”

Modern-day hate speech in Sri Lanka dates back at least a century. According to his biographers, Buddhist revivalist Anagarika Dharmapala (1864-1933) used a mix of advice, rhetoric and invective – sometimes berating specific ethnic or religious minorities. He was also an early adopter of motorised transport and loudspeakers to propagate his views across the country.

Hate speech sows seeds of suspicion and antagonism that become entrenched. Prejudices built up over decades have no doubt contributed to the communal violence that has erupted periodically since 1958.

Hate speech in Sri Lanka also needs to be understood in the context of the country’s civil war, and the slow reconciliation since it ended in 2009. The conflict heavily polarised Lankan society along ethnic, religious and political lines, and energised various forms of ultra-nationalism. Instead of nurturing national healing, political parties have only exploited these divisions.

In post-war Sri Lanka, some Sinhala and Tamil language newspapers continue to use racially charged language and accommodate extremist viewpoints (with inadequate or no counter views). Ratings-chasing TV channels also play with the deadly fires of communalism. The setting for reigniting communal violence was thus many years in the making. An already parched environment caught fire when a few extremist users of social media provided just a few sparks…

Warnings of hate speech spreading online have been sounded for a while, but went unheeded. A few concerned social activists and researchers have been gathering and analysing evidence of rising volumes of hate speech, especially on Facebook.

In the first such local study in 2014, the Centre for Policy Alternatives (CPA) noted: “The growth of online hate speech in Sri Lanka does not guarantee another pogrom. It does, however, pose a range of other challenges to government and governance around social, ethnic, cultural and religious co-existence, diversity and, ultimately, to the very core of debates around how we see and organise ourselves post-war.”

Titled ‘Liking Violence: A study of hate speech on Facebook in Sri Lanka’, the report looked at 20 Facebook groups in Sri Lanka over a couple of months, focusing on content generated just before, during and immediately after violence against the Muslim community in Aluthgama in June 2014. More generally, the study explored the phenomenon of hate speech online – how it occurs and spreads online, what kind of content is produced, by whom and for which audiences.

While the Muslim community in Sri Lanka – 9% of the population – has been the direct target of most such online hate speech, the CPA study found that various other groups are also being targeted. Among them were human rights activists, moderate politicians, clergy who advocate religious harmony, women, the LGBT community and many citizens who don’t ‘identify with the hardline Sinhalese Buddhist cause’.

Other studies, for example by the Women and Media Collective (WMC), have documented how women in general and women activists in particular are targeted for online harassment, threats and vilification. All this work has been released in the public domain. In any case, any law enforcement or intelligence officer could easily find examples of hate speech by spending a few hours on Facebook. Much of the material has been shared openly and blatantly; educated men and women have shared it uncritically.

Despite such mounting evidence, the police took no action against hate speech (even though individual cases of identity theft and cyber bullying were investigated). As Gehan Gunatilleke, a human rights lawyer and research director of Verité Research, a think tank, wrote in January 2016: “Sri Lanka does not face a gap in the law as far as hate speech is concerned. In fact, the current law is fully compliant with international standards. The problem is one of enforcement.”

According to him, the ICCPR Act No 56 of 2007 prohibits the advocacy of ‘religious hatred that constitutes incitement to discrimination, hostility or violence’. It gives the High Court jurisdiction to punish offenders. This law, he says, is fully compliant with international human rights standards.

Yet, a decade after the law was passed, no one has been successfully prosecuted for hate speech under it. It is not as if the offenders were hard to find: emboldened by an apparent impunity, they have been posting hate-spewing text, images and videos online under their own names.

Not finding redress locally, some outraged social media users complained to Facebook administrators that specific posts and images violated the platform’s own community standards. This too proved to be a hit-or-miss strategy, as Facebook’s internal vigilance is slow and uneven – especially on content in the Sinhala and Tamil languages, where its capacity seems very limited.

Even when Facebook took down content, hate mongers would often re-emerge under new (often bogus) identities. Facebook administrators and concerned users are locked in a never-ending cat-and-mouse game. A more systemic fix is needed. But how?

“Ultimately, there is no technical solution to what is a socio-political problem,” said CPA’s 2014 study. That sums up the challenge: the massive number of users, the sheer volume and diversity of content, and the speed at which hate speech is generated and shared make real-time monitoring a daunting task. The challenge is complicated by some governments trying to stifle legitimate criticism in the guise of tackling hate speech. This is a major concern as the Lankan government responds to hate speech online.

UNESCO’s 2015 study has recommended several actions. Concerned individuals can engage in peer-to-peer counter-speech; civil society groups can monitor, document and analyse hate speech, and where warranted, report such evidence to the authorities for legal action. Advocacy groups can also campaign for greater vigilance by internet companies.

Both UNESCO and CPA studies have recommended systemic responses as well. In the medium to long term, the real defences against hate speech can only be built inside human minds. This is where enhancing digital literacy and strengthening the online community’s capability to counter hate speech become crucial.