Input to Christchurch Call meeting in Paris

First posted on the ICT4Peace Foundation website. Features input given by me for Social Media companies and other key actors that fed into a document created for and tabled at the meeting held in Paris to launch the ‘Christchurch Call’. First published on 14 May 2019.


The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online

In preparation of the Christchurch Call Meeting hosted by the Prime Minister RT Hon Jacinda Ardern in Paris on 14 May 2019, ICT4Peace prepared the following ICT4Peace Policy Paper as input to the conference.

Through our work at ICT4Peace over the past years, we have witnessed and analyzed the changing use of social media and its growing impact on critical issues related to democracy, political stability, freedom, communication and security. The euphoria about the role of social media as a primarily positive force during the Arab Spring has given way to a much more layered and complex picture of its role and uses across society and around the globe. The sheer enormity of today’s social media platforms, the volume of users and the almost infinite mass of content meant that containing the spread of violent content, as witnessed after the Christchurch attack, proved almost impossible.

We have been working on these issues for many years now, including: launching, on behalf of the UN Security Council, the Tech against Terrorism platform with inter alia Facebook, Microsoft, Twitter and Telefonica; carrying out cybersecurity policy and diplomacy capacity building for inter alia ASEAN and the CLMV countries; working with the UN GGE and ASEAN on norms of responsible state behaviour for cybersecurity with the ASEAN Regional Forum on CBMs; carrying out workshops in Myanmar and Sri Lanka on online content verification and online security; participating in the CCW GGE discussions in Geneva on Lethal Autonomous Weapons Systems (LAWS); and analyzing the role of artificial intelligence in peace-time threats such as surveillance, data privacy, fake news, justice, the changing parameters of health including the risks of certain biotechnological advances, and other emerging technologies.

The challenge of controlling and removing terrorist content online

Despite now serious attempts by social media platforms to control content that violates norms, human beings are simply unable to keep up with the speed and connectivity of content creation around the world. This task can only be managed computationally, by algorithms and AI, but these are opaque, offer biased recommendations and search results, and are themselves partly responsible for the rise in extremism, conspiracy theories and destabilizing content online. However, there is some hope going forward in the engineering of greater friction in times of crisis. Instead of on/off censorship, engineering greater friction into sharing can help, at scale, to control and curtail the flow of misinformation. The best example of this comes from India and WhatsApp. For years, apps – driven by ‘growth hacking’ – made it as easy as possible to share and engage with content. In countries and contexts with little to no media literacy, this was quickly weaponised by actors who used the virality of content over social media, without any fact-checking whatsoever, as a vector for misinformation spread at scale. Added friction – limits on the number of forwards, visual aids, an extra step before interaction and, in the backend, algorithmic suppression of poor-quality content (e.g. clickbait articles) – offers a range of measures, from the app (front-end) to the algorithm (back-end), in which companies can and have invested to reduce mindless sharing.

Protecting democracy, ethical principles and a free open Internet

The challenge of balancing the need to maintain a free and open Internet with the need for security and protection of human beings, data, ethical principles, human rights and democratic processes is daunting. It is essential to achieve a broad alliance pushing for practical change to prevent the spread of extremist content online and the glorification of the perpetrators of mass murder. It is also important to protect users from fake news, misinformation and manipulation, in particular in the terrorist context. From the Global South perspective, a key concern is also how and if measures undertaken by social media companies in response to Western demands, challenges and concerns might undermine the ability of civil society to hold authoritarian governments accountable, and could allow those governments to weaponise processes and structures to clamp down on dissent. We must remain vigilant to ensure that at each step of the way these concerns are considered.

Social Media, Coming of Age

We have a responsibility to current and future generations to ensure a framework for all emerging technologies that respects basic human rights and ethical principles. Social media is evolving and society needs to figure out how these tools should be used now and in the future. There is a real risk of a race to the bottom, with hate and the baser nature of human beings taking the lead as users gravitate toward clickbait and gruesome content. However, society seems to have fortunately reached an ethical border with the livestreaming of murder, just as we had reached a border in biotechnology with the cloning of a human being. We need to develop guidelines for how we, as a global society, want to move and operate in the social media space. What kind of ethical principles need to be built into the algorithms and AI that will control our future content and interaction online? How should social media companies evolve in their approaches and business models to take into account the human dimension? We need to shift and develop technical measures that consider the quality, not just quantity, of conversations online, the mental and physical health and age of consumers and the ways in which content is shared and manipulated.

For years, Facebook and other Silicon Valley companies prioritised ‘growth hacking’, by which they meant that the increase in user base for products and platforms overtook every other aspect of the business. At Facebook this was led by Sheryl Sandberg, the company’s COO. This is why the company is facing the issues it is today, and why Mark Zuckerberg’s pivot to the health of conversation and content is significant. It is a major change of course for the company, including the new emphasis on privacy over sharing, a concept scoffed at by Zuckerberg himself in the past. Though the broad contours of what the company will become are clear, it is unclear what exactly will be done to ensure that the quality of conversations and content is engineered to be biased away from the toxic, violent and hateful. But across major platforms including YouTube, Twitter and Facebook’s Instagram, Messenger and WhatsApp platforms, there is a new emphasis on securing user privacy while at the same time engineering ways to protect users from hate, harm and violence.

Added to this is a new interest, at the operating system level of phones and tablets on both iOS and Android, in ways through which Apple and Google respectively can ensure users have a good balance of on-screen and off-screen time. This includes logging device and app usage at the OS level, and engineering tweaks in apps like Instagram (at first deeply unpopular within Facebook) to give a visual indication of when new content was over and the user was scrolling through content already seen or engaged with, thus ensuring the user spent less time on the app, not more. Less time on apps means fewer advertisements seen and fewer interactions with the app or operating system, in turn resulting in fewer monetisable action points, impacting, at scale, profits as well as the harvesting of user-level engagement metrics. So, what appears to be a simple change is actually a major shift for companies that now prioritise the health of users over constant engagement and addiction to apps.

Proposed actions for Social Media companies and other key actors:

  1. Develop joint multi-stakeholder taskforces to consider the big picture of the human dimension of social media and develop ethical principles that could guide Social Media companies and users in the online world.
  2. Social Media companies in cooperation with government, law enforcement and civil society need to reinforce joint SWAT team responses for content that meets certain extreme criteria, e.g. in particular livestreaming of murder or other heinous acts.
  3. Prioritize the development of AI that could better define and distinguish types of content and support, in emergency situations, the clamping down on the connectivity of terrorist content. (The real problem at the moment is the definition of terrorist content. No AI at present can easily distinguish between the media’s coverage of a terrorist incident, which may include graphic violence, and a terrorist group’s promotion of violent ideology, which may include the same or similar graphic violence. AI can also be fooled and, without human review, can and has led to instances where content documenting human rights abuses in war has been entirely deleted, in effect contributing to the impunity of perpetrators.)
  4. Ensure existing AI and algorithms do not promote terrorist content and extremist views by pushing more such content to users. (YouTube’s recommendation engine / algorithm, widely and increasingly criticized for the promotion over time of increasingly extremist content, is undergoing major overhauls on these lines).
  5. Proactively remove hate speech and repeat offenders. (This is directly linked to how much of human and technical resources can be put in by these companies, and also the challenge of end-to-end encrypted channels / platforms like WhatsApp, where even the company doesn’t easily know the kind of content exchanged in groups.)
  6. Improve review mechanisms and responsiveness.
  7. Reinforce a trusted reporting network that expedites the flagging of content vetted by experienced individuals and institutions.
  8. Improve the reporting mechanisms built into Facebook apps like Facebook Messenger to make it easier and simpler to report violent or hateful content.
  9. It is important in these difficult times also to remember that just as social media helps extremist ideology take seed and grow, it also helps in healing, empathy, gestures of solidarity, expressions of unity, the design of conciliatory measures and the articulation of grief and sympathy. “In the immediate aftermath of the Christchurch attack, a cursory top-level study of the nearly 85,000 tweets generated in the 24 hours after the violence shows a global community outraged or dismayed at terrorism, an outpouring of love, empathy and solidarity, engagement that spans many continents and languages, addressing prominent politicians and journalists, featuring hundreds of smaller communities anchored to individuals based in New Zealand and in a manner overwhelmingly supportive of the Muslim community.” – Sanjana Hattotuwa in Pulse Points

Does blocking social media help in the wake of a terrorist attack?

Reposted from the ICT4Peace Foundation website and published on 27 April 2019.


On Easter Sunday, Sri Lanka was hit by devastating terrorist attacks across the country, claimed by ISIS some days later. The attacks killed over 250 people and injured many hundreds more.

Throughout the week, more arms caches, known associates of the suicide bombers and safe houses were discovered and raided.

In the wake of the terrorist attacks, with no warning, the Sri Lankan government blocked social media including Viber, Facebook and WhatsApp. It did not, however, block Twitter. Some welcomed the move, including Ivan Segal from the renowned GlobalVoices (who shared some initial thoughts on Twitter), and Kara Swisher, a journalist from the New York Times. It is unclear if Swisher has ever visited Sri Lanka or studied, to any degree, the country’s complex media eco-systems.

Pushing back on the Western gaze, which used Sri Lanka’s tragedy to simplistically spin that, since Facebook allowed vast amounts of toxicity on its platform, it could and would be weaponised in the aftermath of the terrorism to seed more violence, many commentators in Sri Lanka offered a far more nuanced, cautious perspective. Placing the block – the second time the government of Sri Lanka had imposed one in just over a year – in context, activists, journalists and public commentators on the ground offered perspectives anchored to how inextricably entwined social media was in response, recovery, public information, political and crisis communications, news and credible information dissemination, as well as a vector through which a nation facing unprecedented disaster could connect in grief and loss.

Sanjana Hattotuwa, Special Advisor at the ICT4Peace Foundation, was featured in international media talking about the social media block, including the possible reasons for it and the ramifications on account of it.

A comprehensive article on the attacks and the social media block was published on the Australian Broadcasting Corporation’s Religion and Ethics website. Read Can social media be a force for good in Sri Lanka after the Easter Sunday bombings? It’s complicated here.

Social media: promoter of democratic participation or purveyor of violence?, a podcast presented by Waleed Aly and Scott Stephens on Radio National in Australia, which was an extended take of a live broadcast on Australian radio, can be listened to here.

An interview with Kristie Lu Stout broadcast on CNN, Social media ban is not effective in Sri Lanka, can be viewed here.

Broadcast first on Saturday, Al Jazeera’s Listening Post also featured Sanjana’s input in a segment dealing with the attacks and the aftermath. See the programme, Sri Lanka Easter bombings: Debating the social media clampdown, here.

All media is social

My Editor and the general readership of this newspaper fall into a demographic that doesn’t quite understand the media landscape in Sri Lanka today. As long as this demographic doesn’t go on to propose and make laws or regulations that seek to govern for tens of millions what they cannot or do not understand, I empathise with their confusion and discomfort. Because of sheer complexity as well as the speed of evolution, we can no longer clearly explain what we see or study in contemporary media landscapes. It wasn’t this way growing up.

Today, the very act of tuning into a channel or turning a page to access the news is quaint in a world where the boundaries between terrestrial broadcast, print media and content consumed online are indivisible and invisible. It used to be the case – as recently as a decade ago – that agile, responsive campaigns anchored to civil society were best able to leverage the affordances, power and reach of new media platforms. I should know, having created Groundviews as an entirely web-based operation and platform in 2006 to carry what, at the time, print and broadcast media would or could not. A year after, I created the first official Facebook Page and Twitter account for any media institution in the country. They were also the first such accounts in South Asia for a civic media platform. Neither Facebook nor Twitter remotely resembled what they are and look like today. There were far fewer than a hundred thousand Facebook users in Sri Lanka at the time. There was no like button. There were no responses through emoticons. The mobile app was rudimentary. You couldn’t upload photos or videos. There was no misinformation produced by anyone active at the time, from any political or partisan perspective, in the way it is understood, treated and studied today.

I saw in both platforms the ability to bypass authoritarian censorship, reach new audiences quickly, create and sustain engagement around inconvenient truths and publish in the public interest content that on these platforms, created a new, resilient engagement economy that contested or bypassed traditional media’s stranglehold on framing the news. I was right to identify the potential to change the way society speaks with and sees itself, and political communications are conducted. I was profoundly naïve in my idealism that it would be a tool enduringly employed for public interest media, or in the service of civil society output bearing witness to human rights in a context of violent adversity.

It was impossible a decade ago to foresee the use, abuse, adoption and adaptation of social media by powerful political and media actors today. In May and June 2009, posts on Facebook that in turn linked to content on other sites like Flickr or YouTube captured horrific ground realities as well as propaganda from the Government and the LTTE. Leading up to 2010’s Presidential Election, the two leading contenders – Mahinda Rajapaksa and Sarath Fonseka – created Flickr, YouTube, Twitter and Facebook accounts anchored to their campaigns. It was the first time a Presidential election featured online media as an extension of traditional campaign activities and propaganda. The General Election later that year featured party leaders as well as political parties signing up to Facebook. The Municipal Elections in Colombo, a year later, were when, from the Western Province and beyond, political communications on social media, distinct from communications in other media, platforms and fora, sprang to life. In late 2013, Mahinda Rajapaksa launched his official Twitter account. The one used for his Presidential bid in 2010 had, by then, long been discontinued. By 2015’s Presidential election, social media was central both to the organic online campaigns to champion the current President and to the sophisticated, slick propaganda campaigns of the Rajapaksas, who by then treated engagement in digital domains as a seamless extension of hoardings, posters, mugs, caps, pens, wall clocks and the other more common debris of campaign freebies handed out at rallies. This was also when the clear distinctions between traditional and social media started to break down.

This brief history, as with any brief history, glosses over competing trends and a more nuanced appreciation of the media landscape’s evolution, even just since 2015. It does, however, serve as a warning for those who seek to weaponise the fears of an older demographic around the dangers of the current media landscape to support policies and regulations that ultimately help censorship. The careful capture and preliminary study of data from the local government election in February 2018, the Digana riots last March, new forms of political campaigns like Jana Balaya led by Namal Rajapaksa that used digital content to mobilise footfall, the unprecedented 52-day constitutional crisis in late 2018, the aftermath of the Easter Sunday terrorism including repeated and increasingly devastating riots against Muslims, the coverage of the PSC on the Church attacks, the release of Gnanasara Thero late May and, mid-2018, his incarceration, major civil society initiatives and campaigns, the commemoration of a decade after the end of the war and the volume of content uploaded to YouTube by every single major TV broadcaster since 2015 all suggest – unequivocally and indubitably – that social media is now populated most by content produced by traditional media.

So what does the term social media mean today, if anything? To many, it continues to conjure up a domain inhabited by anarchic voices, uninterested in or divorced from truth, producing hate and hell-bent on destroying everything good or great about our society. However, the data strongly suggests that the greatest producers, by far, of content that incites hate and violence online are, in fact, traditional TV channels. The data indicates that during the constitutional crisis, credible journalism produced by trusted journalists was in high demand on Twitter. The data clearly shows that though most political commentary happens outside the official pages and accounts of political parties or politicians, first-time and young voters are anything but apathetic or disengaged. In the myriad conversations I track daily, quality, civility, expression, intent and perspective may leave much to be desired, but is this not a valid critique of the media the pre-digital generation grew up with and still likes to romanticise?

I believe leading politicians know this, but in their pursuit of power see greater appeal in whipping up anxieties to ultimately help secure their control of all media. But to know and realise this helps resist it. The greater the appeal of a president’s or politician’s proposal to fight misinformation, the more sceptical we should be. The simpler the solution proposed, the greater the risk of censorship and abuse. The greater the paternalism overtly, the stronger the parochialism covertly.

Shutting down or blocking Galle Road because of a higher volume of bad drivers in recent years, contributing to many more offences, accidents and deaths, is not an option. Instead, we stress the stronger application of existing road rules and question why they aren’t enforced. Why should a conversation on media regulation be any different?


First published in The Sunday Island,  16 June 2019.

When a law is not the answer

Wonderful news, said all the Sri Lankans. But why Queensland, all the Australians asked. Fifteen years ago, a Rotary World Peace Fellowship award offered a choice of seven universities around the world at which to undertake a Masters in Peace and Conflict Studies. I chose the University of Bradford. I was awarded a place at the University of Queensland, in Brisbane. I didn’t complain. The scholarship was a chance to get out of Sri Lanka and rigorously study what I had till then done on the ground, at a time when violent conflict dynamics were, after some years of relative calm, rising rapidly. My Australian friends, however, were concerned that I would face in Queensland a degree of discrimination and intolerance they said I would never encounter in Sydney or Melbourne. I didn’t know enough to argue and expected the worst. After two years of extensive travel within the state and country, I returned to Sri Lanka having experienced very little along the lines I was warned about. Others, though, at the same time, had a different experience – never physically violent, but far more verbally abusive. For them and me, this othering was at the margins of society. Well over a decade ago and without social media, violent extremism and ideology had to be actively sought out to be engaged with. Racism wasn’t digitally dispersed.

It is with an enduring affection for Australia that I am deeply concerned about disturbing new legislation, passed hurriedly last week, which uses the terrorism in Christchurch to justify overbroad controls of social media. The central focus of my doctoral research at Otago University is technology as both a driver of violence and a deterrent. How, today, social media promotes hate or harm is well known and widely reported. As with any generalisation, though elements of truth exist, the simplification of a complex problem results in illegitimate targets of fear or anger. Social media companies, for their part, have been obstinately unmoved by what those like me have for years warned them about: the abuse of platforms by those who seek to profit from violence. Coherence and consistency in policies that respond to the seeding and spread of violence are lacking and resisted. However, significant changes in stance, response and policies are coming. The terrorism in Christchurch is responsible for accelerating globally what was sporadically mentioned or implemented with regard to safeguards around the production and promotion of content inciting violence, hate and discrimination. However, we must resist what appear to be simple answers to complex challenges, whether they come from governments or big technology companies.

Violent extremism has many drivers, both visible and hidden. It doesn’t bloom overnight. Social media, inextricably entwined in New Zealand’s socio-political, economic and cultural fabric as it is back home in Sri Lanka, cannot be blamed, blocked or banned in the expectation that everything will be alright thereafter. Driven by understandable concern around the dynamics of how the terrorism in Christchurch spread virally on social media, the Australian legislation – rushed through in just two days without any meaningful public debate, independent scrutiny or critical input – doesn’t address root causes of terrorism, extremism or discrimination.

Amongst other concerns, and though it sounds very good, holding social media companies and content providers criminally liable for content sets a very disturbing template and precedent. American corporate entities are now required to oversee, to a degree technically infeasible and humanly impossible, information produced on or spread through their services. This risks the imposition of draconian controls over what is produced, judged by hidden indicators, with little independent oversight and limited avenues for appeal. As a global precedent, the law is even more harmful, allowing comparatively illiberal governments to portray as the protection of citizens parochial laws that essentially stifle democratic dissent.

David Kaye, the UN Special Rapporteur on the promotion and protection of the freedom of expression, is also deeply concerned. In an official letter to the Australian Minister of Foreign Affairs, Kaye stresses, amongst other more technical, procedural and legal points, the need for public review and proportionality, international legal obligations on the freedom of expression and imprecise wording in the law, which is entirely removed from how digital content is generated in society today, and by whom. And herein lies the danger for New Zealand too. Politicians, under pressure to respond meaningfully, need to assuage the fears of a grieving country through demonstrable measures. The tendency is to pick an easy target and push through solutions that look and sound strong. The underlying drivers of violence and conflict, however, simmer and fester. Measures taken to control and curtail gun ownership are welcome, and arguably, long overdue. Policymaking around social media, however, is a different problem set that cannot be as easily, or concretely, addressed.

This is not a submission to do nothing. Rather, it cautions against the understandable appeal of following the Australian response and law. Steps around the non-recurrence of domestic terrorism must certainly embrace aspects of social media regulation and related legislation. The public must be involved in this. We know already that social media reflects and refracts – mirroring values of consumers as well as, through ways academics are struggling to grasp fully, changing attitudes and perceptions of users over time. This requires governments to iteratively work with social media companies on checks and balances that systemically decrease violence in all forms.

Elsewhere in the world, politicians who know the least about social media seek to control it, and those who know more or better, often abuse it. Kiwis, led by PM Ardern’s government, have a historic opportunity to forge a response to terrorism – relevant and resonant globally – that incorporates how best government can work with technology companies to protect citizens from harm. Australia, with the best of intent, gets it very wrong. New Zealand, with a greater calling, must get it right.


This article was first published in the Otago Daily Times on 16 April 2019, under the title ‘A Historic Opportunity’.

Principles over promises: Responding to the Christchurch terrorism

Almost exactly a year ago, Facebook was in the news in New Zealand over a row with Privacy Commissioner John Edwards. The heated public exchange between Edwards and the company took place in the context of the Cambridge Analytica scandal, in which the private information of millions of Facebook users was harvested, illicitly, for deeply divisive, partisan and political ends. Edwards accused the company of breaching New Zealand’s Privacy Act. The company responded that it hadn’t and that the Privacy Commissioner had made an overbroad request which could not be serviced. Edwards proceeded to delete his account and warned others in New Zealand that continued use of Facebook could impact their right to privacy under domestic law. Just a few months prior, the COO of Facebook, Sheryl Sandberg, was pictured on Facebook’s official country page with New Zealand PM Jacinda Ardern. The caption of the photo, which captured the two women in an embrace after a formal meeting, flagged efforts the company was making to keep children safe. It is not surprising that Sandberg also wrote the paean to Ardern in last year’s Time 100 list of the most influential people.

The violence on the 15th of March in Christchurch dramatically changed this relationship. In response to the act of terrorism, Facebook announced, for the first time, a ban on “praise, support and representation of white nationalism and separatism on Facebook and Instagram”. Two weeks after the killings in Christchurch, a message by Sandberg was featured on top of Instagram feeds in the country and in local media. The message noted that Facebook was “exploring restrictions on who can go Live depending on factors such as prior Community Standard violations” and that the company was “also investing in research to build better technology to quickly identify edited versions of violent videos and images and prevent people from re-sharing these versions.” Additionally, the company was removing content from, and all praise or support of, several hate groups in the country, as well as in Australia. Sandberg’s message called the terrorism in Christchurch “an act of pure evil”, echoing verbatim David Coleman, Australia’s immigration minister, in a statement he made after denying entry to far-right commentator Milo Yiannopoulos, who after the attack referred to Muslims as “barbaric” and Islam as an “alien religious culture”. Last week, New Zealand’s Chief Censor, David Shanks, declared the document released by the killer ‘objectionable’, which now makes it an offence to share or even possess it. Following up, authorities also made possession and distribution of the killer’s live stream video an offence. Facebook, Twitter and Microsoft have all been to New Zealand in the past fortnight, issuing statements, making promises and expressing solidarity. Silicon Valley-based technology companies are in the spotlight, but I wonder, why now? What’s changed?

A report by BuzzFeed News published in June 2017 flagged that since the feature’s debut in 2015, at least 45 instances of grisly violence including shootings, rape, murder, child abuse and attempted suicide had been broadcast on Facebook Live. That number would be higher now, not including Christchurch. The Founder and CEO of Facebook, Mark Zuckerberg, promised in May 2017 that 3,000 more moderators, in addition to the 4,500 already working, would be added to oversee live and native video content. Promises to do more or better are what Zuckerberg and Sandberg are very good at making in the aftermath of the increasingly frequent and major privacy, ethics, violence or governance related scandals Facebook finds itself in the middle of. Less apparent and forthcoming, over time, is what the company really does, invests in and builds.

There are also inconsistencies in the company’s responses to platform abuses. In 2017, the live video on Facebook of a man bound, gagged and repeatedly cut with a knife, lasting half an hour, was viewed by 16,000 users. By the time it was taken down, it had spread on YouTube. A company spokesperson at the time was quoted as saying that “in many instances… when people share this type of content, they are doing so to condemn violence or raise awareness about it. In that case, the video would be allowed.” Revealingly, the same claim wasn’t made with the Christchurch killer’s production.

The flipside to this is the use of Facebook’s tools to bear witness to human rights abuse. In 2016, the killing of a young black American, Philando Castile, by police in Minnesota was live-streamed on Facebook by his girlfriend, Diamond Reynolds, who was present with him in the car. The video went viral and helped document police brutality. There is also clear documented evidence of how violence captured from a Palestinian perspective, as well as content on potential war crimes, is at greater risk of removal on social media platforms. In fact, more than 70 civil rights groups wrote to Facebook in 2016, flagging the problem of unilateral removals based on orders from repressive regimes, which give perpetrators greater impunity and murderers stronger immunity.

It is axiomatic that deleting videos, banning pages, blocking groups, algorithmic tagging and faster human moderation do not erase the root causes of violent extremism. The use of WhatsApp in India to seed and spread violence is a cautionary tale in how the deletion of content on Facebook’s public platforms may only drive it further underground. The answer is not to weaken or ban encryption. As New Zealand shows us, it is to investigate ways through which democratic values address, concretely and meaningfully, the existential concerns of citizens and communities. This is hard work and beyond the lifespan of any one government. It also cannot be replaced by greater regulation of technology companies and social media. The two go hand in hand, and one is not a substitute for the other. It is here that governments, as well as technology companies, stumble, by responding to violent incidents in ways that don’t fully consider how disparate social media platforms and ideologues corrosively influence and inform each other. Content produced in one region or country can, over time, inspire action and reflection in a very different country or community.

Take an Australian Senator’s response, on Twitter, to the Christchurch terrorism. Though condemned by the Australian PM, the very act of referring to the Senator and what he noted on Twitter promoted the content to different audiences, both nationally and globally. The Twitter account, as well as the Facebook page, of the Senator in question produce and promote an ideology essentially indistinguishable from the Christchurch killer’s avowed motivations. It is the normalisation of extremism under the guise of outrage and selective condemnation. What should the response be?

In Myanmar, an independent human rights impact assessment on Facebook, conducted last year, resulted in the company updating policies to “remove misinformation that has the potential to contribute to imminent violence or physical harm”. And yet, it is unclear how what may now be operational in Myanmar is also applied in other contexts, including in First World countries at risk of right-wing extremism.

I wonder, does it take grief and violence on the scale of Christchurch to jolt politicians and technology companies into taking action on what was evident for much longer? And in seeking to capitalise on the international media exposure and attention around an incident in a First World country, do decisions made in or because of New Zealand risk norms around content production, access and archival globally, on social media platforms that are now part of the socio-political, economic and cultural DNA of entire regions? Precisely at a time when any opposition to or critical questioning of decisions taken on behalf of victims and those at risk of violence can generate hostility or pushback, we need to safeguard against good-faith measures that inadvertently risk the very fibre of the liberal democracy politicians in New Zealand and technology companies seek to secure. An emphasis on nuance, context, culture and intent must endure.

So must meaningful investment, beyond vacuous promises. In 2016, Zuckerberg called live video “personal and emotional and raw and visceral”. After the Christchurch video’s visceral virality, it is unclear if Sandberg pushed this same line with PM Ardern. In fact, Facebook astonishingly allowed an Islamophobic ad featuring PM Ardern wearing a hijab, which was only taken down after a domestic website’s intervention. Clearly, challenges persist. Social media companies can and must do more, including changing the very business models that have allowed major platforms to grow to a point where they are, essentially, ungovernable.

Grieving, we seek out easy answers. Banning weapons and blocking extremist content helps contain and address immediate concerns. Ideas, though, are incredibly resilient, and always find a way to new audiences. The longer-term will of the government to address hate groups, violent extremism in all forms and the normalisation of othering, from Māori to Muslim, requires sober reflection and more careful policymaking. What happens in New Zealand is already a template for the world. We must help PM Ardern and technology companies live up to this great responsibility.


First published on Scoop NZ on 4 April 2019.

The infamy engines

Coming out of a long meeting, the first I heard of the violence in Christchurch was from those in Sri Lanka who had got breaking news alerts. I was both very disturbed and extremely intrigued. Terrorism as popular theatre or spectacle is not new, and, some academics would argue, is a central aim of terrorists, who want their acts recorded and relayed, not redacted or restrained. The use of social media to promote and incite hate, violence and prejudice is also not new. From ISIS to politicians elected into office through populist, prejudiced campaigns, social media is foundational in contemporary terrorist recruitment and political propaganda. What events in Christchurch last Friday brought to light was something entirely different, new and very unlikely to be resolved easily or quickly. The killer’s intentional use of the internet will have far longer-lasting implications, requiring significant, urgent reform of the governance of large social media platforms, as well as oversight mechanisms, including regulation, of their parent companies.

Though Facebook New Zealand, Google and Twitter all issued statements hours after the attack that they were working with the New Zealand Police to take down content associated with the attack, the content had by then spread far and wide across the web. The video moved from platform to platform, edited, freeze-framed, downloaded off public streams which risked being taken down, and then re-uploaded to private servers, which in turn served the video up to thousands more. As Washington Post journalist Drew Harwell noted, “The New Zealand massacre was live-streamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react”. The challenge is significant because of the scale of the platforms, with billions of users each creating or consuming vast amounts of content every second; managing the platforms is now largely algorithmic, since only machines can cope with that scale and scope. There are serious limitations to this approach. Terrorists know this and now increasingly exploit it, weaponising the unending global popularity of social media to seed and spread their ideology in ways that no existing form of curtailment, containment or control can remotely compete with. That is partly because the way algorithms tasked with the oversight of content are trained is entirely opaque. It is entirely probable that algorithms trained to detect signs of radical Islamic terrorism are incapable of flagging a similar violent ideology or intent promoted in English, anchored to the language and symbolism of white supremacism or fascism.
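Part of why edited copies outran takedown efforts is easy to sketch. The simplest automated matching technique, an exact cryptographic hash of the uploaded file, fails completely under even a one-byte edit; matching crops, re-encodes and freeze-frames requires far harder perceptual fingerprinting. A minimal Python illustration of the failure mode (this is not any platform’s actual pipeline):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: the SHA-256 digest of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for an uploaded video and a trivially edited copy of it.
original = b"frame-data " * 1000
edited = original + b"\x00"  # a one-byte change, e.g. a metadata tweak

# To a viewer the two files are effectively identical; to an exact-hash
# blocklist they share nothing, so the edited copy sails through.
print(fingerprint(original) == fingerprint(edited))  # False
```

Every edited or re-encoded version of the live stream would produce a new exact hash, which is one reason a blocklist of known copies could not keep pace with the mirroring described above.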

In March 2018, Facebook’s Chief Technology Officer (CTO), Mike Schroepfer, noted that the company was using artificial intelligence (AI) to police its platform, and that it was “fairly effective” in distinguishing and removing “gore and graphic violence”. Last Friday’s killings highlight the risible falsity of this claim. Hours after the killings, dozens of videos featuring the same grisly violence as the original live stream were on Facebook. One was generating 23,000 views an hour, and was seen by nearly 240,000. Though Facebook notes it blocked 1.5 million videos from being uploaded in the days after the killings, it has tellingly withheld statistics on how many viewers the original live stream reached, or why 300,000 related videos were not identified soon after upload, which means they too were viewed, even if only for a short time, by hundreds of thousands. And this isn’t the first time graphic, wanton violence has resided on the platform for hours before being taken down, by which time the strategic aims and intentions of its producers have been met. The problem doesn’t end there. Neal Mohan, YouTube’s Chief Product Officer, is on record saying that Christchurch brought the company’s moderation and oversight to its knees, unable to deal with the tens of thousands of videos of the grisly killings spawned across its platform, uploaded at the rate of one every second at its peak. In two moves unprecedented for the company, justified by the severity of the challenge, his team decided to block the functionality that allows users to search recent uploads, and to bypass human moderation entirely, trusting content flagged by its algorithmic agents as possibly linked to the violence in Christchurch, even at the risk of false positives. Mohan has no final fix. The company just has no better way, even in the foreseeable future, to deal with another incident of this nature. Terrorists simply have the upper hand.

The Christchurch killer knew this and used it to his advantage. He won’t be the last. The appeal to internet subcultures, famous personalities, memes, the very choice of music, expressions, gestures and popular references form a new argot of communications, intentionally designed to use online cultures to amplify and promote violent ideology (what is called red-pilling). At the same time, malevolent producers can almost entirely bypass existing controls and checks on the distribution of such material. The scale of social media is the hook; the inability to oversee it, and inadequacies in governance, are what is weaponised. Academics call this a wicked problem – a challenge so hard that even partial responses to any single aspect or facet increase the level of complexity, often exponentially.

Generating greater friction around the production, promotion and spread of content is not in the interests of social media companies, who will continue to maintain, not without some merit, that billions of users producing vast amounts of mundane yet popular content daily is what primarily drives research and development. Read: profits. Not without some irony, Facebook’s Chief Operating Officer Sheryl Sandberg wrote a glowing tribute to New Zealand’s Prime Minister in Time magazine’s 2018 list of 100 ‘Most Influential People’. After PM Ardern noted that the live streaming of the grisly killings was an issue she would take up with the company, and perhaps mortified that the incident will strengthen calls for more robust regulation in the US, Sandberg reached out after the violence, though it is unclear with what intent or assurances.

This rough sketch of the context I locate my doctoral studies in masks far greater complexity, anchored to community, culture, context and country. What is true of social media in Sri Lanka, my home and the central focus of my research, doesn’t always hold sway in New Zealand. There are, however, strange parallels. Repeated warnings since 2014 around the weaponisation of Facebook to incite hate and violence went entirely unacknowledged by the company until severe communal riots almost exactly a year ago. In Myanmar, the company’s platforms were flagged by the United Nations as ones that helped produce, normalise and spread violence against Muslims. Till 2018, the company did little to nothing to address this, despite warnings and ample evidence from local organisations. YouTube’s recommendation engine – the crucial algorithm that presents content that may interest you – has long and openly been flagged as extremely problematic, beguilingly guiding users towards far-right radicalisation. The Christchurch killer’s release of a 74-page document before his rampage shows an acute understanding of how all this works: transforming tired conspiracy into highly desirable material through strategic distribution just before an act that serves as the accelerant to infamy.

Alex Stamos, the former Chief Security Officer at Facebook, posted in the aftermath of Christchurch a sobering reminder of just why this violence goes viral. He notes that the language used, links provided and even excerpts of the violent video broadcast by news media only served to pique interest in the killer’s original document and full video. This is a disturbing information ecology, where content produced by terrorists cannot be taken down easily or quickly because the surge of interest generated around discovery and sharing will overwhelm attempts to delete and contain. If this is the world we now inhabit and, by using social media, contribute to cementing, the questions directed at companies and governments may be better directed at ourselves. How many of us searched for the video, and shared it? How many of us, without having any reason to, searched for, read and shared the killer’s document? If we cannot control our baser instincts, then we become part of the problem. The terrorists are counting on this, and us, to succeed. We should not let them win.


Sanjana Hattotuwa is a PhD student at the National Centre for Peace and Conflict Studies (NCPACS), University of Otago. This article was first published on Stuff New Zealand on 20 March 2019.

Pulse points

Whether bound by country, city or community, the pulse of, or, on Friday, the pain from, a place like Christchurch can often be determined by the careful collection of social media updates published in the public domain. It is an interest in precisely this that brought me to New Zealand, where I study how Twitter and Facebook are integral to political communications and cycles of violence in Sri Lanka, my home. In South Asia, social media engagement drives attention towards or away from key events, issues, individuals and institutions. Sport, religion, politics, elections and entertainment dominate content creation. The resulting conversations, to varying degrees, contest or cement opinions. Emotions drive engagement more than reasoned presentation or critical inquiry. Interestingly, though geographically distant and culturally distinct, a shared pattern of access and resulting behaviour on social media makes a younger demographic back home almost indistinguishable from their counterparts in New Zealand. This includes the heightened production of content on social media after an unexpected event.

Based on all this, I wasn’t surprised to discover that the violence in Christchurch last Friday generated a tsunami of content on Twitter alone. In the hours and days after the killings, specific hashtags on Twitter captured a community grappling with trying to make sense of, and recover from, a scale and scope of violence unprecedented in its history. The study of this content – much of it extremely painful to read – offers a glimpse into how the violence in Christchurch resonated across the country, and far beyond.

Almost immediately after the first news reports of the killings, #christchurchmosqueshooting, #christchurchshooting, #christchurchterroristattack, #newzealandterroristattack and #christchurch started to trend on Twitter domestically. This means that content using one or more of these hashtags showed a dramatic increase over a short period. In just a day, around 85,000 tweets featured one or more of these hashtags. By the 16th, two other hashtags started to trend – #49lives and #theyareus. In just a day, these two hashtags generated close to 37,000 tweets. With a single tweet allowing up to 280 characters, I was curious as to what just over 34 million characters, in the first 24 hours after the killings in Christchurch, said about the event. This is not just of academic interest. Policymakers and others interested in or tasked with immediate response after a natural or man-made catastrophe can look at social media as a digital weathervane of public sentiment, crafting measures based on need, mood, reception or pushback.
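The arithmetic above, and the kind of counting that underlies trend detection, can be sketched in a few lines of Python. The hashtags are those named above; the sample tweet texts are invented for illustration:

```python
import re
from collections import Counter

# Upper bound on characters: ~122,000 tweets at up to 280 characters each.
tweets_day_one = 85_000 + 37_000
print(tweets_day_one * 280)  # 34,160,000 – "just over 34 million"

# Trend detection starts with tallying hashtags over a time window.
sample_tweets = [
    "Thinking of the victims #christchurchmosqueshooting #theyareus",
    "Kia kaha Christchurch #theyareus",
    "Remembering the lives lost #49lives #theyareus",
]

hashtag = re.compile(r"#\w+")
counts = Counter(tag.lower() for t in sample_tweets for tag in hashtag.findall(t))
print(counts.most_common(1))  # [('#theyareus', 3)]
```

A real pipeline would compare each window’s tallies against a historical baseline; a hashtag “trends” when its count spikes far above that baseline.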

When studied at scale, publicly shared content on social media is almost pathological. Key ideas, communities that assemble around specific individuals, and content that goes viral can be gleaned through network science, which those like myself employ to understand the key drivers and motivations behind content generation. This is easier to grasp by way of an example. Adil Shahzeb is a television news presenter and host based in Islamabad, Pakistan. And yet, on the 15th itself, he appears quite prominently in the content shared around the violence in Christchurch. This is, prima facie, utterly confusing. How can someone all the way in Pakistan become rapidly popular on Twitter around an event that happened in New Zealand? The answer is in a single tweet by Shahzeb, currently pinned to his Twitter profile, which identifies a man who tried to stop the killer as Naeem Rashid, of Pakistani origin. Rashid and his son Talha, the tweet noted, were tragically among those killed. This single tweet generated a considerable number of retweets and likes amongst those on Twitter in both Pakistan and New Zealand. It is a similar story with Sunetra Choudhury, a Political Editor and journalist at NDTV, a popular Indian TV station. One of her tweets, featuring a clip of PM Ardern speaking to the affected community in Christchurch on Saturday, was viewed close to half a million times. The responses to the tweet, almost all from India, feature an overwhelming appreciation of the New Zealand PM’s political leadership. These are two great examples of how empathy, shock and solidarity – here expressed in Urdu, Hindi and English – were able to cross vast geographies in a very short span of time.

Another way to get a sense of what’s being discussed is to analyse the substance of the tweets. Through what’s called a word cloud, words used more frequently can be rendered to appear larger than words used less frequently. This process ends up with a visual map of the conversational terrain that affords the closer study of specific terms. Different hashtags feature different word clusters, but across all of them, Muslim, condemns, reject, Muslims, victims, terrorist, mentally, deranged, mosque, name, remembering, grotesque, white, supremacist and love feature prominently. The thrust, timbre and tone of tweets featuring these words is overwhelmingly empathetic, ranging from the profoundly sad to the outraged. By way of a loose comparison, when awful violence directed against the Muslim community broke out in Sri Lanka almost exactly a year ago, the public sentiment I studied on Twitter at the time didn’t feature anything remotely akin to the levels of solidarity and support channelled towards the Muslim community in New Zealand since last Friday.
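Under the hood, a word cloud is just a frequency table rendered visually: tokenise the tweets, drop common stopwords, count what remains, and scale each word’s display size by its count. A minimal sketch (the sample texts are invented; a real study would run over the full corpus):

```python
import re
from collections import Counter

# A tiny stopword list; real analyses use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "this"}

def word_frequencies(texts):
    """Count non-stopword tokens across a collection of tweets."""
    words = (w for t in texts for w in re.findall(r"[a-z']+", t.lower()))
    return Counter(w for w in words if w not in STOPWORDS)

sample = [
    "Reject white supremacist terror, remember the victims",
    "Love and solidarity with the Muslim victims in Christchurch",
]
freq = word_frequencies(sample)
print(freq["victims"])  # 2
```

The counts are then mapped to font sizes by the rendering library; the analytical work is entirely in the tokenising, filtering and counting.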

What academics call a ‘platform affordance’ is more simply known to all Twitter users as a mention. Prefacing an account with the @ symbol ensures that on Twitter, a specific account is notified of a tweet. This is also used to direct a tweet towards a specific recipient or group. Unsurprisingly, PM Ardern, the Australian PM, the American President and controversial Australian Senator Fraser Anning are amongst those referenced the most over the first 24 hours. #49lives started trending on the 16th, generating nearly 17,000 tweets in a single day. The instigator of the hashtag is American. Khaled Beydoun is a Professor of Law based in Detroit, Michigan, and a published author on Islamophobia. It is perhaps this academic interest that drove him to create #49lives, reflecting the number that at the time was the official toll of those killed in Christchurch. Beydoun’s tweet, pinned to his profile, has generated an astonishing level of engagement, from New Zealand as well as globally. Liked nearly 146,000 times, retweeted just over 89,000 times and with around 1,700 responses to date, the tweet prefigures PM Ardern’s assertion in New Zealand’s Parliament that she will not ever speak the killer’s name. “I don’t know the terrorist’s name. Nor do I care to know it.” avers Beydoun’s tweet, which also asks us to remember stories around, and celebrate the lives of, the victims. #theyareus generated just over 20,000 tweets by the 16th, but the sentiment or phrase is anchored to a tweet by PM Ardern made on the 15th. In a tweet liked 132,000 times and retweeted 40,000 times to date, she noted that “many of those affected will be members of our migrant communities – New Zealand is their home – they are us.” However, it was two heartfelt tweets by Sam Neill, a businessman from Central Otago, that kick-started the hashtag trend.
Speaking out against white supremacism and in solidarity with the Muslim community in New Zealand, Neill’s two tweets, published consecutively on the 15th and 16th, have cumulatively generated nearly 27,000 likes, 4,200 retweets and 300 responses to date.
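Mechanically, the mention affordance is simple: any token prefixed with @ is parsed out of the tweet text and the named account notified, which is also what makes mentions straightforward to tally at scale. A minimal sketch of the kind of counting that surfaces the most-referenced accounts (tweet texts invented for illustration):

```python
import re
from collections import Counter

# Twitter handles are word characters after an @ sign.
mention = re.compile(r"@(\w+)")

sample_tweets = [
    "Thank you @jacindaardern for your leadership",
    "Shame on @senatoranning",
    "@jacindaardern they are us",
]

# Tallying mentions identifies who a conversation is directed at.
mentions = Counter(m.lower() for t in sample_tweets for m in mention.findall(t))
print(mentions.most_common(1))  # [('jacindaardern', 2)]
```

The same tally, run over the first 24 hours of tweets, is what places the politicians named above at the top of the list.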

In sum, a cursory top-level study of the nearly 85,000 tweets generated in the 24 hours after the violence on Friday shows a global community outraged or dismayed by terrorism, and an outpouring of love, empathy and solidarity. The engagement spans many continents and languages, addresses prominent politicians and journalists, and features hundreds of smaller communities, anchored to individuals based in New Zealand and beyond, tweeting in a manner overwhelmingly supportive of the Muslim community.

The Twitter data underscores the value of studying public sentiment on social media in the aftermath of a tragedy. Social media provides pulse points. Framed by moments in time and driven by an understanding of, amongst other things, context, technology, access and language, the study of content in the public domain often helps in ascertaining how violence migrates from digital domains to physical, kinetic expression. Christchurch offers the world another lesson, a glimpse of which I wanted to capture here. Just as social media helps extremist ideology take seed and grow, it also helps in healing, empathy, gestures of solidarity, expressions of unity, the design of conciliatory measures and the articulation of grief and sympathy. The admiration, bordering on adulation, PM Ardern has received since Friday for her political leadership on Twitter alone indicates that New Zealand is already seen as a template for how a country can and should respond to terrorism. These responses are more than ephemeral. Long after the world has moved on to the next news cycle, domestic conversations around what happened in Christchurch will endure on social media. Understanding how these ideas, anxieties and aspirations grow and spread lies at the heart of measures that, over the long term, address extremism, racism, terrorism and prejudice in all their forms.

Sanjana Hattotuwa is a PhD student at the National Centre for Peace and Conflict Studies (NCPACS), University of Otago. This article was first published on 21 March 2019 on Scoop New Zealand.