In the wake of the Easter Sunday suicide attacks by Muslim youth inspired or instigated by the Islamic State (ISIS), there is renewed interest in understanding how youth radicalisation happens. Some are asking whether, among other factors, the internet and social media also contribute to it. Since 21 April, news media have revealed how the extremist preacher Zahran Hashim, leader of the now-banned National Thowheed Jamaath (NTJ), used Facebook to publicly call for “the death of non-Muslims”, while working in private online chatrooms to persuade young men and women to sacrifice themselves. Hashim had also uploaded some of his vitriolic sermons to YouTube. Following the 4/21 attacks, the Google-owned video-sharing platform quickly removed them.
Hashim is among many who use public and private online platforms to peddle extremism of various kinds. Many governments, and even some civil society groups, seem convinced of a cause-and-effect link between web content and radicalisation. That conviction, in turn, has prompted online censorship, mass-scale electronic surveillance and counter-speech (promoting ‘good’ content to counter negative or destructive messages) in a growing number of countries.
It is true that violent extremists use social media in sophisticated ways – mainly for propaganda, intimidation, recruitment and fundraising. What remains unclear, however, is how effective such propaganda is at brainwashing young people into extreme causes ranging from white supremacism and neo-Nazism to religious fundamentalism.
UNDERSTANDING RADICALISATION
There is no universally accepted definition of radicalisation. One used widely by the European Union describes it as “the process by which a person comes to adopt extreme political, social or religious ideas and aspirations that inspire violence or acts of terror”.
Youth radicalisation is not new in Sri Lanka. Anishka De Zylva, a research associate at the Lakshman Kadirgamar Institute of International Relations and Strategic Studies, has mapped the different waves of radicalisation the country has experienced since the 1960s. It began with left-wing radicalisation under the Janatha Vimukthi Peramuna (JVP), which fuelled two youth uprisings, in 1971 and 1987-89, both violently crushed. Then the Liberation Tigers of Tamil Eelam (LTTE), an ethno-nationalist separatist group, waged war against the government to carve out an independent state, plunging the country into a brutal civil war from 1983 to 2009.
[pullquote]VIOLENT EXTREMISTS USE SOCIAL MEDIA IN SOPHISTICATED WAYS – MAINLY FOR PROPAGANDA AND RECRUITMENT[/pullquote]
The third wave De Zylva identifies is the ‘politico-religious radicalisation’ of Bodu Bala Sena (BBS), a hardline Sinhala Buddhist nationalist group, ongoing since 2012. BBS has been accused of inciting violence against Muslim Sri Lankans, including vandalising mosques and attacking Muslim-owned houses and businesses. The JVP’s youth radicalisation happened entirely in the pre-web era, and while the LTTE used the web for propaganda, the low number of users at the time limited its reach. In contrast, BBS, NTJ and other radical groups are active in a time of social media proliferation.
So exactly how does social media content lead vulnerable individuals to resort to violence?
In 2016-17, UNESCO, the UN agency monitoring information society issues from a cultural perspective, sought to answer this question through a global study. It commissioned an international mapping of research (done mainly during 2012-2016) into the ‘assumed roles played by social media in violent radicalisation processes, especially as they affect youth and women’. Such ‘research on research’ is formally known as a systematic review.
RESEARCH COVERAGE
The researchers looked at more than 550 published studies from scientific journals and “grey literature” (materials produced by organisations outside commercial or academic publishing), covering titles in English (260), French (196) and Arabic (96). Their analysis was published in 2017 as a UNESCO report titled ‘Youth and Violent Extremism on Social Media’ (see: http://bit.ly/YouthRad).
The study recognised that social media’s key qualities – volume, speed, multimedia interactivity, decentralisation, cheapness, anonymity, and a global audience across time and space – offer clear advantages to extremist groups that may otherwise have stayed marginal. But these same benefits are available to educators, advocacy groups, charities and social activists. The study’s overall conclusion was this: “The literature reviewed in the study provides no definitive evidence of a direct link between the specificities of social media and violent radicalisation outcomes on youth. Likewise, there is no definitive evidence about the impact of counter-measures.”
It also pointed out that violent radicalisation is a complex process that cannot be reduced to a single factor like internet exposure. Instead, it involves social-psychological processes and person-to-person communication in conjunction with offline factors such as feelings of injustice, alienation, deprivation and anomie (the absence of usual social or ethical standards in an individual or group). Finally, extremist groups are known to distrust large-scale commercial platforms like Facebook. Yet most academic studies have focused on such outlets, overlooking cloaked websites and other spaces where more could be learned about at-risk sympathisers – their identities, social circles and actions.
The UNESCO review concluded that research is still in its early stages, and urged caution about the results and interpretations.
INDUSTRY RESPONSE
While researchers keep looking for the ‘smoking gun’ (pun not intended), internet tech companies have started responding to the perceived link between the web and violent extremism. The most direct approach is to remove content that is explicitly extremist or incites violence – a strategy of limited effect, since new material quickly appears in place of what is taken down. Moreover, tech platforms’ commitment to free speech means a good deal of contentious material is allowed to remain: the right to freedom of expression includes the right to “offend, shock or disturb”.
So more nuanced efforts are needed. In 2010, Google launched a subsidiary called Google Ideas (renamed Jigsaw in 2016), dedicated to understanding global challenges and applying technological solutions (https://jigsaw.google.com). One project tapped into Google’s strength as the world’s most widely used internet search engine – where many young people begin their quest for extremist information or ideas.
“You need to reach them when they’re still researching, and they go online and search for answers, which means there’s a mechanism for us to reach them,” says Yasmin Green, director of research and development at Jigsaw. Those efforts led to what is known as the Redirect Method, which seeks to reach people actively looking for extremist content and connections online. Rather than creating new content and counter-narratives, it takes a smarter approach: diverting young people off the extremist path using pre-existing YouTube content and targeted advertising.
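In essence, the method pairs vetted search keywords with links to existing counter-narrative videos. Below is a minimal sketch of that pairing logic in Python; the keywords, URLs and function are hypothetical illustrations of the idea, not Jigsaw’s actual implementation.

```python
# A minimal sketch of the Redirect Method's core idea: match a search query
# against a curated keyword list and, on a hit, serve an ad pointing to
# pre-existing counter-narrative videos. All keywords and URLs below are
# hypothetical; Jigsaw's real systems are far more sophisticated.
from typing import Optional

# Hypothetical, human-vetted mapping: keyword -> counter-narrative playlist.
CURATED_KEYWORDS = {
    "join the caliphate": "https://example.org/playlists/defector-testimonies",
    "martyrdom operations": "https://example.org/playlists/clerics-against-violence",
}

def redirect_ad_for(query: str) -> Optional[str]:
    """Return a counter-narrative playlist URL if the query matches a curated
    keyword; return None when no targeted ad should be shown."""
    normalised = query.lower().strip()
    for keyword, playlist_url in CURATED_KEYWORDS.items():
        if keyword in normalised:
            return playlist_url
    return None

print(redirect_ad_for("how to join the caliphate"))  # matched: playlist URL
print(redirect_ad_for("history of the caliphate"))   # no match: None
```

The design point the sketch captures is that no new content is produced: existing videos are simply surfaced, as advertising, to the queries most likely to come from at-risk searchers.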
[pullquote]BEHIND EACH HATE COMMENT UPLOADED IS A HUMAN BEING WHO WON’T DISAPPEAR WHEN THEIR COMMENT DOES[/pullquote]
For this, Google is using one of its more contentious innovations: tracking user behaviour across searches and other Google services to display the online ads most relevant to each user. Over the years, the company has refined the automated systems – algorithms – that deliver these targeted ads; in 2017, they generated almost $95.4 billion in revenue.
UNDERSTANDING SEARCH
But people’s search habits are highly variable, so automating a response strategy is a tough challenge. Some Google users may simply be looking up information out of curiosity or for study purposes, rather than out of any ideological interest. Based on many conversations with former radicals, Jigsaw and its partners have gained some insight into how that mindset works. Yet anticipatory diversions are not foolproof, and Jigsaw does not track how many potential recruits may have changed their minds about joining ISIS – the movement the Redirect Method originally targeted. Since 2015, a Jigsaw offshoot named Moonshot CVE – a startup countering violent extremism through data-driven innovation – has applied the Redirect Method to other ideologies, such as right-wing extremism.

Not everything is left to algorithms, however. Much human oversight goes into choosing the keywords to target and the videos carrying ‘good’ messaging. Jigsaw works with Moonshot CVE and Quantum Communications to vet and update keywords in English and Arabic, while counter-messaging videos are reviewed by a council of theologians, law enforcement officials and experts.
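That human gatekeeping can be pictured as a simple review queue: machine- or analyst-suggested keywords only reach the live targeting list once a named reviewer approves them, with every decision logged. The sketch below rests on that assumption; the classes, names and workflow are illustrative, not Jigsaw’s or Moonshot CVE’s real tooling.

```python
# A sketch of human-in-the-loop keyword vetting: proposed keywords wait in a
# pending queue until a human reviewer accepts or rejects them, and every
# decision is recorded. All names here are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class KeywordRegistry:
    pending: dict = field(default_factory=dict)    # keyword -> who proposed it
    approved: set = field(default_factory=set)     # the live targeting list
    audit_log: list = field(default_factory=list)  # (keyword, reviewer, accepted)

    def propose(self, keyword: str, proposer: str) -> None:
        """Analyst- or machine-suggested keywords enter the review queue."""
        self.pending[keyword.lower()] = proposer

    def review(self, keyword: str, reviewer: str, accept: bool) -> None:
        """Only an explicit human decision moves a keyword onto the live list."""
        self.pending.pop(keyword.lower(), None)
        self.audit_log.append((keyword.lower(), reviewer, accept))
        if accept:
            self.approved.add(keyword.lower())

registry = KeywordRegistry()
registry.propose("martyrdom operations", proposer="query-log-analysis")
registry.review("martyrdom operations", reviewer="arabic_analyst_01", accept=True)
print(registry.approved)  # {'martyrdom operations'}
```

The point of the design is that algorithms can only propose: nothing goes live without a recorded human decision, mirroring the vetting role the partner organisations play.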
Following a white supremacist’s attacks on mosques in Christchurch, New Zealand, on 15 March 2019, Ludovica Di Giorgi and Vidhya Ramalingam of Moonshot CVE wrote in an op-ed: “Removal of content alone after this attack is not going to solve the problem of right-wing terrorism. Behind each piece of content uploaded to the internet is a human being who won’t disappear when their comment, meme or website does.”

They added, “Every corner of the internet where the Christchurch terrorist’s vitriol has been posted and shared is littered with comments by internet users across the globe who support his actions. These users have left us a trail of clues, a digital footprint informing us of the path they are taking. We need to harness new technology to find such individuals early and intervene, by offering alternatives, challenging them, and engaging them in social work interventions to try and change their paths before they resort to violence.” (Full text: http://bit.ly/MoonCVE)

The digital arms race between extremism and tolerance is far from over. For good to triumph over evil, it will take a combination of regulation, tech innovation and digital literacy.