United Nations – Welcoming the United Nations Strategy and Plan of Action on Hate Speech

Excerpt from a letter to the UN Secretary-General, penned by me on behalf of the ICT4Peace Foundation. Originally posted on the Foundation’s website on 19 June 2019.

###



The ICT4Peace Foundation congratulates the Secretary-General of the UN on the launch of the UN strategy and plan of action on hate speech. The Foundation’s research into, and work on, the complex, fluid dynamics of hate speech – over a decade and across five continents – strongly complements the Secretary-General’s framing of the problem space in his remarks at the launch of the strategy.

As far back as 2010, after meetings with the Office of the UN Special Adviser on the Prevention of Genocide, the Special Adviser on the Prevention of Genocide, Mr. Francis Deng, and the Special Adviser on the Responsibility to Protect, Mr. Edward Luck, the Foundation published ‘ICTs for the prevention of mass atrocity crimes’. Some sections of the report, dealing with the challenges and opportunities of communications technology to prevent genocide, resonate deeply with the new plan of action against hate speech.

The Foundation’s interest in and commitment to this work, for well over a decade, spans work with many UN agencies, including the Office of the High Commissioner for Human Rights, substantive input into the ‘Christchurch Call’ and diplomatic briefings in Switzerland. From Sri Lanka – which is twice mentioned in the Secretary-General’s remarks – to Myanmar, New Zealand to the Balkans, the Foundation’s research, training, workshops, output and reports have tackled head-on the challenges around countering violent extremism online and the rise of hate speech in online fora. The Foundation also fed into the High-Level Panel on Digital Cooperation, the framework of which, as noted, dovetails with what is required to combat hate speech in both physical and virtual domains.

We recognize that Mr Adama Dieng, Special Adviser on the Prevention of Genocide and focal point of the action plan, holds an institutional mandate well-placed to embrace the challenges around the increasing generation and dissemination of hate speech. The Foundation’s experience in this domain is anchored to lived experience and close to two decades of activism by colleagues from Sri Lanka, as well as a long history of diplomatic, institutional, systemic and substantive interventions to and within the UN system, in New York, Geneva and country offices, including specific peacekeeping missions.

The Foundation, along with colleagues who are well-regarded experts in this domain, looks forward to – in person or electronically – supporting endeavors to widen and deepen this timely, important initiative, which undergirds the UN’s core values and mission.

For related tweets, see tweet thread here.

Full video & slidedeck of lecture: From Christchurch to Sri Lanka – The curious case of social media

First posted on the ICT4Peace Foundation’s website on 17 June 2019.

###

On 20 May 2019, Sanjana Hattotuwa, a Special Advisor at the ICT4Peace Foundation since 2006, gave a well-attended public lecture at the University of Zurich on the role, reach and relevance of social media in responding to kinetic and digital violence, including the potential of, as well as existing challenges around, artificial intelligence, machine learning and algorithmic curation. The lecture was anchored to ongoing doctoral research, data-collection and writing on the terrorist attacks in Christchurch, New Zealand in March and the Easter Sunday suicide bombings in Sri Lanka – Sanjana’s home.

A video of the full lecture, also requested by those who couldn’t attend the lecture in person, is now available on YouTube and embedded below.

The full slide deck used in the lecture can be downloaded as a PDF here. It’s also embedded below.

Sanjana’s presentation started with an overview of his doctoral research and the scope of data-collection, anchored to Facebook and Twitter in particular. The daily capture and study of this data gives him both a macro-level (quantitative) perspective and more precise, granular (qualitative) detail, which help in unpacking drivers of violence, key voices, leitmotifs and other key strains of conversation on social media after a violent incident. Comparing the terrorist incidents in Christchurch and Sri Lanka, Sanjana contextualised the global media coverage around both incidents and, in particular, the criticism against social media following the live-streaming of the Christchurch attack. Calling social media an ‘accelerant for infamy’, Sanjana proposed an original thesis around how the science of murmuration and the study of mob mentality (based on the three key principles of adhesion, cohesion and repulsion), when applied to conversational and content-related dynamics online, could provide insights into how violence spread and generated new audiences.
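
To make the murmuration analogy concrete, below is a minimal sketch of the classic flocking (‘boids’) model from which it derives, whose local rules roughly map onto the adhesion/cohesion/repulsion framing above. This is an illustration of the intuition, not Sanjana’s actual model; agent counts, radii and weights are assumptions.

```python
# A boids-style murmuration sketch (after Reynolds, 1987). Each agent follows
# three purely local rules; no agent sees the whole flock, yet coherent motion
# emerges - the intuition behind applying murmuration to online mobs.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))   # 50 agents in a 2-D space
vel = rng.uniform(-1, 1, (50, 2))

def step(pos, vel, radius=10.0, too_close=2.0,
         w_cohesion=0.01, w_repulsion=0.05, w_adhesion=0.05):
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)
        if not near.any():
            continue
        centre = pos[near].mean(axis=0)
        new_vel[i] += w_cohesion * (centre - pos[i])         # cohesion: drift toward neighbours
        if dist[near].min() < too_close:                     # repulsion: avoid crowding
            new_vel[i] += w_repulsion * (pos[i] - centre)
        new_vel[i] += w_adhesion * (vel[near].mean(axis=0) - vel[i])  # adhesion: match neighbours' direction
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
# After repeated steps, local rules alone produce flock-like clusters moving
# together - no central direction required.
```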

Sanjana then spoke about artificial intelligence (AI) and, despite the more common framing by mainstream media, the significant challenges around AI-based content curation faced by leading social media companies at present. Aside from Facebook and Twitter, Sanjana flagged the extremely problematic recommendation engines of YouTube, including the recent misrepresentation of the Notre Dame fire. Using two images, he also flagged how even the simplest of manipulations still baffled the most sophisticated AI, looking at image classification (which is central to the identification of violent or hateful content online).
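
For readers unfamiliar with how a simple manipulation can baffle a classifier, the sketch below applies the well-known fast gradient sign method (FGSM); the model choice, stand-in input and epsilon value are illustrative assumptions, not anything shown in the lecture.

```python
# FGSM adversarial-example sketch (Goodfellow et al., 2014): nudge every pixel
# slightly in the direction that increases the classifier's loss.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image, label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step per pixel - visually near-imperceptible.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

image = torch.rand(1, 3, 224, 224)    # stand-in for a real photo
label = model(image).argmax(dim=1)    # the model's current prediction
adversarial = fgsm_perturb(image, label)
# The predicted class frequently flips even though the two images look identical:
print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```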

Using Sinhala – a language spoken only in Sri Lanka – Sanjana highlighted the challenges of natural language processing (NLP), which, akin to AI, is central to content curation at scale. In one slide, he typed Sinhala characters; in another, he showed an image with characters embedded into it that read differently from what was typed. Sanjana noted that the first, by itself, presented a number of challenges for companies that had for too long ignored the likes of Sinhala or Burmese content generated on their platforms, while the second compounded those issues by presenting AI and ML architectures with nuance, context and script that training datasets at present aren’t based around or on.
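
A toy example of why image-embedded text compounds the problem: a string-based filter can match typed Unicode Sinhala (modulo normalisation), but the same word rendered into an image never reaches it. The flagged term below is a neutral placeholder (the common greeting ‘ayubowan’), and the whole pipeline is an assumption for illustration.

```python
# Toy string-based moderation filter: catches typed Unicode Sinhala but is
# blind to the same word rendered inside an image.
import unicodedata

BLOCKLIST = {"ආයුබෝවන්"}  # neutral placeholder standing in for a flagged term

def flag_text(post_text: str) -> bool:
    # Normalise first: Sinhala combines base letters with vowel signs, so the
    # "same" word can arrive as different codepoint sequences (NFC vs NFD).
    text = unicodedata.normalize("NFC", post_text)
    return any(term in text for term in BLOCKLIST)

print(flag_text("ආයුබෝවන් ලෝකය"))       # True: typed text is matched directly
print(flag_text("see attached image"))  # False: a word inside a JPEG is
# invisible to this filter; catching it needs OCR trained on Sinhala script,
# which mainstream moderation pipelines have historically lacked.
```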

Sanjana then went on to explain the dangerous consequences of ‘context conflation’ in a country or context with very high adult literacy and very poor media literacy. While some or all of this is known, Sanjana went on to frame and focus, through hard data, the manner in which Twitter provided a global platform after the violence in Christchurch for people to discuss solidarity, express sadness and generate strength. A conversation far removed from hateful right-wing ideology or the promotion of violence took place, rapidly and vibrantly, on the same social media platforms that the global mainstream media chastised for having played a central role in the promotion of violence. Noting the unprecedented constitutional crisis in Sri Lanka in late 2018, as well as the content produced after the Easter Sunday attacks, Sanjana again highlighted how social media in general – and Facebook and Twitter in particular – played a central role in democracy promotion, dissent, activism, pushback against authoritarian creep and the promotion of non-violent frames after a heinous terrorist attack.

Sanjana ended the lecture by looking at inflexion points – noting that social media companies, civil society and governments needed to recognise a historic opportunity to change the status quo, including core profit models and business practices, in order to ensure, to the extent possible, that social media didn’t provide ready platforms for fermenting or fomenting fear, hate, violence and terrorism.

Sanjana underscored why the #deletefacebook movement in the West would never take root in countries like Myanmar or Sri Lanka, and was a risible suggestion to the hundreds of millions using Facebook’s spectrum of apps and services. He noted the dangers around the emulation, adaptation or adoption of regulation from the West in countries with a democratic deficit, while at the same time noting the importance of introducing regulation to govern companies that need greater oversight than exists today. Linked to this, he noted that Silicon Valley’s business models were anchored to quantity over quality, and to the generation of content irrespective of the timbre or tenor of that material – leading to the obvious weaponisation of platforms never meant to be Petri dishes for terrorism and violent ideologies. Sanjana welcomed Facebook’s recent pivot to privacy, announced by Mark Zuckerberg, with cautious optimism, noting that while there was much to celebrate, it could also mean that academics would find it much harder or downright impossible, in the future, to study the generation and spread of violent extremism on social media. Sanjana spoke about social media as being central to the DNA of politics, political communication and social interactions in countries like Sri Lanka, noting that as a consequence, there was no alternative to the development of AI, ML and NLP techniques to deal with the tsunami of content generation growing apace every day, already far beyond the ability of a few hundred humans to oversee and respond to. In both the penultimate and final slides, Sanjana spoke to the need to problematise the discussion of media and social media, noting how complex a landscape it really was, defying easy capture or explanation.

The lively and interesting Q&A session, which exceeded the allotted time, went into a number of aspects Sanjana touched on. The video above captures the Q&A segment as well.

Also read:
ICT4Peace input to Christchurch Call meeting in Paris
ICT4Peace was invited by Rt Hon Jacinda Ardern to discuss the “Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online”

National Dialogue limits in the age of digital media: ‘New dialogic processes’

Cross-posted from the ICT4Peace Foundation website. Originally published on 13 June 2019.

###

Special Advisor at the ICT4Peace Foundation, Sanjana Hattotuwa, joined the 2019 National Dialogues Conference via Skype video on 12 June to both present a short overview of the state-of-play and as a panellist discussing the interplay between politics, social media, conflict, peace and dialogue. As noted on the NDC’s website,

The National Dialogue Conferences are a continuation of conferences held in Helsinki, Finland since April 2014 onwards enjoying wide participation while deepening the understanding of dialogue processes among attendees. The Conference enables both a broad range of stakeholders from multiple countries and practitioners in the field internationally to take these collaborative lessons forward. These gatherings, familiarly known as NDCs, provide a space for joint reflection and in-depth discussion between practitioners, stakeholders and experts working with dialogue processes in different contexts. The Conference is organised by the Ministry for Foreign Affairs of Finland in cooperation with a consortium of NGOs consisting of Crisis Management Initiative, Felm and Finn Church Aid.

The panel consisted of,

  1. Ahmed Hadji, Team Leader and Co-Founder, Uganda Muslim Youth Development Forum
  2. Sanjana Hattotuwa, Special Advisor, ICT4Peace Foundation (video link)
  3. Achol Jok Mach, Specialist, PeaceTech Lab Africa
  4. Jukka Niva, Head of Yle News Lab, Finnish Broadcasting Company

The moderator was Matthias Wevelsiep, Development Manager – Digital Transition, FCA.

Sanjana’s presentation, titled ‘New dialogic processes’, was a rapid capture of developments in a field he has researched and worked in for over 15 years, which, as was noted in his presentation, was long before what is now a global interest in both the underlying issues that thwart effective dialogue and the new technologies that both strengthen and erode democratic exchanges.

Download a copy of his presentation as a PDF here.

With a title slide showcasing what at the time of the presentation were unprecedented public demonstrations in Hong Kong, Sanjana flagged well over a decade of work on strategic communications and dialogue processes anchored to conflict transformation that started in Sri Lanka in 2002, and the One-Text negotiations process that, at the time, was in part anchored to software architectures that Sanjana designed and managed. Sanjana referenced a paper written 15 years ago to the month (Untying the Gordian Knot: ICT for Conflict Transformation and Peacebuilding) which, as part of his Master’s research anchored to the One-Text process in Sri Lanka, looked at how technology could play a more meaningful role in conflict transformation and peace negotiation processes.

Looking at how unceasing waves of content influenced and informed public conversations, Sanjana briefly highlighted the many inter-related fields of study around dialogue processes and communications, or ‘complex media ecologies’. He then offered a way for non-experts to visualise the dynamics of (social media) dialogues in contemporary societies, through murmuration or the swarm effect seen in nature, akin to mob mentality (sans the violence). Anchored to his doctoral research, Sanjana then looked at the Christchurch terrorist attack in March and how, at scale – involving hundreds of thousands of tweets around key hashtags – Twitter had, in the seven days after the violence, captured events and conversations around it. More central to his doctoral work, Sanjana then focussed on the media landscape in Sri Lanka, looking at both Twitter and Facebook.
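
As a rough sketch of what such a capture looks like in practice, the snippet below computes hourly tweet volumes per hashtag from a dataset of the kind described; the file name and column names are hypothetical, not the actual research dataset.

```python
# Hashtag-volume sketch: hourly tweet counts per hashtag over the days after
# an incident, giving the macro (quantitative) view described above.
import pandas as pd

tweets = pd.read_csv(
    "christchurch_tweets.csv",          # hypothetical export: one row per tweet
    parse_dates=["created_at"],
    usecols=["created_at", "hashtag"],
)

hourly = (
    tweets.set_index("created_at")
          .groupby("hashtag")
          .resample("1h")
          .size()
          .rename("tweets")
)

# Which hashtags dominated the conversation, hour by hour, in the first week:
print(hourly.groupby("hashtag").sum().sort_values(ascending=False).head())
```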

Covering the general state of conversation, an unprecedented constitutional crisis, the commemoration of the end of war a decade ago and the Easter Sunday terrorist attack, Sanjana proposed that it didn’t make any sense – in Sri Lanka and arguably in other countries and contexts too – to treat social media as a category entirely distinct from or somehow different to mainstream or traditional media. Offering in-depth data-captures around the volume of content production, the deep biases present in the content and key dynamics of sharing and engagement, Sanjana showcased the ‘emotional contagion’ effect, whereby content online shapes how people feel.

Ending with the 90-9-1 principle, Sanjana cautioned against the simplistic study and reading of content online as markers of the health or effectiveness of national dialogues. Far more than the technology, Sanjana focussed on the operational logic(s) of dialogues in complex media ecosystems that were pegged to language, manner of expression, context, media literacy and a range of other factors.
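
For readers unfamiliar with the 90-9-1 principle (roughly 90% of users only read, 9% contribute occasionally, 1% create most content), the arithmetic below shows why content online is a poor proxy for a population’s views; the per-group posting rates are illustrative assumptions.

```python
# A worked illustration of 90-9-1 participation inequality (Nielsen, 2006).
users = 1_000_000
groups = {
    "lurkers":      (0.90, 0),    # (share of users, posts per user per month)
    "contributors": (0.09, 5),
    "creators":     (0.01, 200),
}

total = sum(users * share * rate for share, rate in groups.values())
for name, (share, rate) in groups.items():
    posts = users * share * rate
    print(f"{name:>12}: {posts / total:.0%} of content from {share:.0%} of users")

# creators: ~82% of content from 1% of users - which is why reading the loudest
# online voices as "the national dialogue" badly misrepresents the population.
```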

In the ensuing discussion, Sanjana expanded on some of these points and highlighted Finland’s emphasis on media literacy among children as a template that other countries could follow to deal with the threat of misinformation over the long term. In the short term, Sanjana underscored the importance of bringing technology companies into the room – companies, he said, that were now entrenched gatekeepers of news and information far more than they chose to disclose publicly – as well as cognitive neuroscience, to better study the art of communication, especially in and around complex, protracted violent conflict.

Also read/watch: First of its kind workshop on ICTs and Constitution Building and Technology and Public Participation – What’s New?, courtesy International IDEA.

UN Digital Cooperation – Questions to SG Guterres, Melinda Gates and Jack Ma

Cross-posted from the ICT4Peace Foundation website. Originally posted on 10 June 2019. The first two questions, on (social) media literacy and the staggering bias present today in AI and ML architectures, were penned and posted by me, complementing two others on AI’s weaponisation from a colleague.

###

To support the launch of UN SG Guterres’ Report on Digital Cooperation on 10 June 2019, ICT4Peace submitted the following four questions, addressed to the UN SG, Melinda Gates and Jack Ma:

  1. In countries with poor media literacy, social media is a vector for spread of rumours that often result in kinetic reactions. How to harness the potential of social media to inform, and at the same time, reduce its impact as a driver of hate and violence through misinformation?
  2. Persons of colour aren’t part of many machine learning architectures, from design to dataset, leading to unsurprising racial bias in execution and selection. What can the UN do to ensure new forms of racism aren’t embedded into AI systems that will undergird politics, commerce, industry and travel?
  3. The converging nature of emerging technologies allow combinations of different weapons areas (AI/LAWS, cyber, bio-chemical, nuclear), areas that are currently dealt with in an isolated manner in GGEs and treaties. How can we break-up/connect those classical weapons-specific approaches to reflect this convergence?
  4. Major tech companies have taken on a pseudo-political role through ‘ethical’ principles that might protect certain basic/human rights. How to go about this shift of political tasks and the resulting incapacity to guarantee that regulations affecting HRs have political legitimacy?

Download the UN report here.

ICT4Peace had submitted its formal input to the High-Level Panel on Digital Cooperation in October 2018, which you can find as follows:

  • Download our reflections and recommendations as a PDF here.
  • Download summary of recommendations here.

ICT4Peace has been supporting the UN System-Wide Digital Cooperation since 2007, carrying out the first ever stocktaking of UN Crisis Information Management Capabilities in 2008, which led to the adoption of the UN Secretary General’s Crisis Information Management Strategy (CIMS) in 2009. The documents pertaining to this process since 2008 can be found here.

Tech, Fear and Accountability | Panel discussion at Point 8.0 in Sarajevo

Cross-posted from ICT4Peace Foundation website. Originally published on 22 May 2019.

###

Sanjana Hattotuwa, Special Advisor at the ICT4Peace Foundation, was invited by the organisers of Point 8.0 in Sarajevo to participate in a panel discussion anchored to Tech, Fear and Accountability, featuring Victoire Rio and moderated by Stefania Koskova.

The conversation was anchored to social media in general, and Facebook in particular, as a platform or vector for violence and hate, as well as pushback against both in Sri Lanka and Myanmar.

Sanjana was asked to begin by giving a brief overview of Sri Lanka’s tryst with social media. The panel was held at a time when the country was commemorating ten years since the end of the war, in a context of renewed communal and religious violence as well as political uncertainty. The role, reach and relevance of social media were captured in broad brushstrokes, both in how it served as an accelerant to infamy and how, during significant political crises, it acted as a vector to strengthen democratic pushback, dissent and activism.

Pushing back against simplistic, mono-causal and single-sourced explanations for violent conflict, Sanjana flagged the importance of studying the complex media ecologies in Sri Lanka, and elsewhere, without first, only and enduringly blaming Facebook for its failures in preventing the growth of violence and hate on its platforms. Sanjana also flagged the migration of users to WhatsApp, noting the issues associated with the transition from platforms academics could monitor and study to encrypted messaging platforms that are impossible to observe at scale with as much granularity, insight and access to content.

On the importance of getting the technology right in markets like Sri Lanka and Myanmar – countries in which Sanjana has worked, trained and conducted research – he noted that embracing and designing solutions around this complex interplay of challenges could help tech companies seed and scale solutions in other, similar contexts, countries, markets and communities. Sanjana noted how companies like Facebook and Twitter, though they had neglected both countries for years, were now investing far more human and technical resources in them, as they didn’t want to be implicated in human rights violations at scale, or in discussions where their technology was used for, or directly contributed to, the incitement of hate and violence.

Speaking to the challenges around content moderation and curation at scale, and by companies themselves, Sanjana spoke to technical advancements like machine learning and artificial intelligence that could, in the years to come, make a significant impact, aside from regulatory, legal and other challenges in countries with a democratic deficit. Speaking to that deficit, which is growing in Sri Lanka and indeed in Myanmar, he noted that the central challenge for companies dealing with both countries, albeit at different scales, was how to deal with a language spoken nowhere else, a growing market, sophisticated content production strategies, misinformation growing in pace, nuance and scale, and inauthentic behaviour on platforms by state actors or their proxies.

Recalling and re-affirming the ICT4Peace Foundation’s participation in, and endorsement of, the Christchurch Call by New Zealand PM Jacinda Ardern and French President Emmanuel Macron, Sanjana said that the regulation of technology was fraught with challenges, including the uncritical adoption – often with hidden, parochial self-interest – of legal frameworks around fake news and misinformation recently enacted in Singapore and Australia, and contemplated in other Western countries.

Sanjana ended by highlighting several points around how a radical and urgent course correction was needed, by all the main social media companies, to move away from ‘growth hacking’ towards measures that rid platforms of toxic, harmful and hateful content. He noted that what was for years a plea, call and request from small countries, which went unheeded, was now the very thing these companies were pivoting to, and anchoring their business goals around, in order to reflect, strengthen and expand on it.

A short write-up of the session by the organisers can be found here. A video of the session can be seen below, or on YouTube.

Input to Christchurch Call meeting in Paris

First posted on the ICT4Peace Foundation website on 14 May 2019. Features input given by me, for social media companies and other key actors, that fed into a document created for and tabled at the meeting held in Paris to launch the ‘Christchurch Call’.

###

The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online

In preparation for the Christchurch Call Meeting hosted by the Prime Minister Rt Hon Jacinda Ardern in Paris on 14 May 2019, ICT4Peace prepared the following ICT4Peace Policy Paper as input to the conference.

Through our work at ICT4Peace over the past years, we have witnessed and analyzed the changing use of social media and its growing impact on critical issues related to democracy, political stability, freedom, communication and security. The euphoria about the role of social media as a primarily positive force during the Arab Spring has given way to a much more layered and complex picture of its role and uses across society and around the globe. The sheer enormity of today’s social media platforms – the volume of users and the almost infinite mass of content – meant that containing the spread of violent content, as witnessed after the Christchurch attack, proved almost impossible.

We have been working on these issues for many years now, including launching, on behalf of the UN Security Council, the Tech against Terrorism platform with inter alia Facebook, Microsoft, Twitter and Telefonica; carrying out cybersecurity policy and diplomacy capacity building for inter alia ASEAN and the CLMV countries; working with the UN GGE and ASEAN on norms of responsible state behaviour for cybersecurity, and with the ASEAN Regional Forum on CBMs; carrying out workshops in Myanmar and Sri Lanka on online content verification and online security; participating in the CCW GGE discussions in Geneva on Lethal Autonomous Weapons Systems (LAWS); and analyzing artificial intelligence and its role in peace-time threats such as surveillance, data privacy, fake news, justice, the changing parameters of health (including the risks of certain biotechnological advances) and other emerging technologies.

The challenge of controlling and removing terrorist content online

Despite now-serious attempts by social media platforms to control content that violates norms, human beings are simply unable to keep up with the speed and connectivity of content creation around the world. This task can only be computationally managed by algorithms and AI, but these are opaque, offer biased recommendations and search results, and are themselves in part responsible for the rise in extremism, conspiracy theories and destabilizing content online. However, there is some hope going forward in the engineering of greater friction in times of crisis. Instead of on/off censorship, engineering greater friction into sharing can help, at scale, control and curtail the flows of misinformation. The best example of this comes from India and WhatsApp. For years, apps – driven by ‘growth hacking’ – made it as easy as possible to share and engage with content. In countries and contexts with little to no media literacy, this was quickly weaponised by actors who used the virality of content over social media, without any fact-checking whatsoever, as a vector for spreading misinformation at scale. Added friction – limits on the number of forwards, visual aids, an extra step before interacting and, in the backend, algorithmic suppression of poor-quality content (e.g. clickbait articles) – is among the range of measures, from the app (front-facing) to the algorithm (back-end), that companies can invest in, and have invested in, which in effect reduce mindless sharing.
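
A minimal sketch of the forward-limit friction described above, of the kind WhatsApp deployed in India; the caps, labels and data model are illustrative assumptions, not WhatsApp’s actual implementation.

```python
# Friction, not censorship: cap how many chats a message can be forwarded to,
# and label messages that have already travelled far.
from dataclasses import dataclass

FORWARD_CAP = 5        # max chats per forward action (assumed threshold)
VIRAL_THRESHOLD = 5    # hops after which a message is labelled as viral

@dataclass
class Message:
    text: str
    forward_hops: int = 0  # how many times this message has been forwarded

def forward(message: Message, target_chats: list[str]) -> list[str]:
    """Throttle fan-out and add a visual aid instead of blocking outright."""
    allowed = target_chats[:FORWARD_CAP]            # hard cap on fan-out
    message.forward_hops += 1
    if message.forward_hops >= VIRAL_THRESHOLD:
        message.text = "[Forwarded many times] " + message.text
    return allowed

# Forwarding to 20 chats only reaches 5, slowing spread at scale while the
# label warns recipients the content is viral rather than first-hand.
msg = Message("unverified rumour")
print(forward(msg, [f"chat{i}" for i in range(20)]))
```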

Protecting democracy, ethical principles and a free open Internet

The challenge of balancing the need to maintain a free and open Internet with the need for security and protection of human beings, data, ethical principles, human rights and democratic processes is daunting. It is essential to achieve a broad alliance pushing for practical change to prevent the spread of extremist content online and the glorification of the perpetrators of mass murder. It is also important to protect users from fake news, misinformation and manipulation, in particular in the terrorist context. From the Global South perspective, a key concern is also how and whether measures undertaken by social media companies in response to Western demands, challenges and concerns might undermine the ability of civil society to hold authoritarian governments accountable, and could be weaponised to clamp down on dissent. We must remain vigilant to ensure that at each step of the way these concerns are considered.

Social Media, Coming of Age

We have a responsibility to current and future generations to ensure a framework for all emerging technologies that respects basic human rights and ethical principles. Social media is evolving and society needs to figure out how these tools should be used now and in the future. There is a real risk of a race to the bottom, with hate and the baser nature of human beings taking the lead as users gravitate toward clickbait and gruesome content. However, society seems, fortunately, to have reached an ethical border with the livestreaming of murder, just as we had reached a border in biotechnology with the cloning of a human being. We need to develop guidelines for how we, as a global society, want to move and operate in the social media space. What kind of ethical principles need to be built into the algorithms and AI that will control our future content and interaction online? How should social media companies evolve in their approaches and business models to take into account the human dimension? We need to shift towards and develop technical measures that consider the quality, not just quantity, of conversations online, the mental and physical health and age of consumers, and the ways in which content is shared and manipulated.

For years, Facebook and other Silicon Valley companies prioritised ‘growth hacking’ by which they meant that the increase in user base for products and platforms overtook every other aspect of the business. At Facebook this was led by Sheryl Sandberg, the company’s COO. This is why the company is facing the issues it is today, and why Mark Zuckerberg’s pivot to the health of conversation and content is significant. It is a major change of course for the company, including the new emphasis on privacy over sharing, a concept scoffed at by Zuckerberg himself in the past. Though the contours of what the company will become are clear, it is unclear what exactly will be done to ensure the quality of conversations and content are engineered to be biased away from the toxic, violent and hateful. But across major platforms including YouTube, Twitter and Facebook’s Instagram, Messenger and WhatsApp platforms, there is a new emphasis on securing user privacy and at the same time engineering ways that also protect users from hate, harm and violence.

Added to this is a new interest, at the operating-system level of phones and tablets, in ways through which Apple (on iOS) and Google (on Android) are now keen to ensure users have a good balance of on-screen and off-screen time. This includes logging device and app usage at the OS level, and engineering tweaks on apps like Instagram (at first deeply unpopular within Facebook) to give a visual indication of when new content was over and the user was scrolling through content already seen or engaged with, thus ensuring the user spent less time on the app, not more. Less time on apps meant fewer advertisements seen and fewer interactions with the app or operating system, in turn resulting in fewer monetisable action points, impacting, at scale, profits as well as the harvesting of user-level engagement metrics. So, what appears to be a simple change is actually a major shift for companies that now prioritise the health of users over constant engagement and addiction to apps.
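
A toy version of the Instagram-style ‘all caught up’ tweak described above might look like the following; the feed model and marker text are assumptions for illustration.

```python
# Track the newest post a user has already seen and insert a marker when the
# feed crosses into previously seen content - a nudge to stop scrolling.
def build_feed(posts, last_seen_id):
    """posts: newest-first list of post ids; returns feed items with a marker."""
    feed = []
    for post_id in posts:
        if post_id == last_seen_id:
            feed.append("-- You're all caught up --")
        feed.append(post_id)
    return feed

print(build_feed([105, 104, 103, 102, 101], last_seen_id=103))
# [105, 104, "-- You're all caught up --", 103, 102, 101]
```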

Proposed actions for Social Media companies and other key actors:

  1. Develop joint multi-stakeholder taskforces to consider the big picture of the human dimension of social media and develop ethical principles that could guide Social Media companies and users in the online world.
  2. Social Media companies in cooperation with government, law enforcement and civil society need to reinforce joint SWAT team responses for content that meets certain extreme criteria, e.g. in particular livestreaming of murder or other heinous acts.
  3. Prioritize the development of AI that could better define and distinguish types of content and support clamping down, in emergency situations, on the connectivity of terrorist content. (The real problem at the moment is the definition of terrorist content. No AI at present can easily distinguish between the media’s coverage of a terrorist incident, which may include graphic violence, and a terrorist group’s promotion of violent ideology, which may include the same or similar graphic violence. AI can also be fooled and, without human review, has led to instances where content documenting human rights abuses in war was entirely deleted, in effect contributing to the impunity of perpetrators.)
  4. Ensure existing AI and algorithms do not promote terrorist content and extremist views by pushing more such content to users. (YouTube’s recommendation engine / algorithm, widely criticized for promoting, over time, increasingly extremist content, is undergoing major overhauls along these lines.)
  5. Proactively remove hate speech and repeat offenders. (This is directly linked to how much human and technical resource these companies can put in, and also to the challenge of end-to-end encrypted channels / platforms like WhatsApp, where even the company doesn’t easily know the kind of content exchanged in groups.)
  6. Improve review mechanisms and responsiveness.
  7. Reinforce trusted reporting networks that expedite the flagging of content vetted by experienced individuals and institutions.
  8. Improve the reporting mechanisms built into Facebook apps like Facebook Messenger to make it easier and simpler to report violent or hateful content.
  9. It is important in these difficult times also to remember that just as social media helps extremist ideology take seed and grow, it also helps in healing, empathy, gestures of solidarity, expressions of unity, the design of conciliatory measures and the articulation of grief and sympathy. “In the immediate aftermath of the Christchurch attack, a cursory top-level study of the nearly 85,000 tweets generated in the 24 hours after the violence shows a global community outraged or dismayed at terrorism, an outpouring of love, empathy and solidarity, engagement that spans many continents and languages, addressing prominent politicians and journalists, featuring hundreds of smaller communities anchored to individuals based in New Zealand and in a manner overwhelmingly supportive of the Muslim community.” (Sanjana Hattotuwa in Pulse Points)

Does blocking social media help in the wake of a terrorist attack?

Reposted from the ICT4Peace Foundation website and published on 27 April 2019.

###

On Easter Sunday, Sri Lanka was hit by devastating terrorist attacks across the country, claimed by ISIS some days later. The attacks killed over 250 people and injured many hundreds more.

Throughout the week, more arms caches, known associates of the suicide bombers and safe houses were discovered and raided.

In the wake of the terrorist attacks, with no warning, the Sri Lankan government blocked social media, including Viber, Facebook and WhatsApp. It did not, however, block Twitter. Some welcomed the move, including Ivan Sigal from the renowned Global Voices (who shared some initial thoughts on Twitter) and Kara Swisher, a journalist from the New York Times. It is unclear if Swisher has ever visited Sri Lanka or studied, to any degree, the country’s complex media ecosystems.

Pushing back on the Western gaze, which used Sri Lanka’s tragedy to simplistically spin that, since Facebook allowed vast amounts of toxicity on its platform, it could and would be weaponised in the aftermath of the terrorism to seed more violence, many commentators in Sri Lanka painted a far more nuanced picture and urged a more cautious approach. Placing the block – the second the government of Sri Lanka has imposed in just over a year – in context, activists, journalists and public commentators on the ground offered perspectives anchored to how inextricably entwined social media was in response, recovery, public information, political and crisis communications, and news and credible information dissemination, as well as how it served as a vector for those affected by an unprecedented national disaster to connect in grief and loss.

Sanjana Hattotuwa, Special Advisor at the ICT4Peace Foundation, was featured in international media talking about the social media block, including the possible reasons for it and the ramifications on account of it.

A comprehensive article on the attacks and the social media block was published on the Australian Broadcasting Corporation’s Religion and Ethics website. Read Can social media be a force for good in Sri Lanka after the Easter Sunday bombings? It’s complicated here.

Social media: promoter of democratic participation or purveyor of violence?, a podcast presented by Waleed Aly and Scott Stephens on Radio National in Australia, which was an extended version of a live broadcast on Australian radio, can be listened to here.

An interview with Kristie Lu Stout broadcast on CNN, Social media ban is not effective in Sri Lanka, can be viewed here.

Broadcast first on Saturday, Al Jazeera’s Listening Post also featured Sanjana’s input in a segment dealing with the attacks and the aftermath. See the programme, Sri Lanka Easter bombings: Debating the social media clampdown, here.