United Nations – Welcoming the United Nations Strategy and Plan of Action on Hate Speech

Excerpt from a letter to the UN Secretary-General, penned by me on behalf of the ICT4Peace Foundation. Originally posted on the Foundation’s website on 19 June 2019.

###


Image courtesy Vice

The ICT4Peace Foundation congratulates the Secretary-General of the UN on the launch of the UN strategy and plan of action on hate speech. The Foundation’s research into and work on the complex, fluid dynamics of hate speech, over a decade and across five continents, strongly complements the Secretary-General’s framing of the problem space in his remarks at the launch of the strategy.

As far back as 2010, after meetings with the Office of the UN Special Adviser on the Prevention of Genocide – the Special Adviser, Mr. Francis Deng, and the Special Adviser on the Responsibility to Protect, Mr. Edward Luck – the Foundation published ‘ICTs for the prevention of mass atrocity crimes’. Some sections of the report, dealing with the challenges and opportunities of communications technology to prevent genocide, resonate deeply with the new plan of action against hate speech.

The Foundation’s interest in and commitment to this work, for well over a decade, spans work with many UN agencies including the Office of the High Commissioner for Human Rights, substantive input into the ‘Christchurch Call’ and diplomatic briefings in Switzerland. From Sri Lanka – which is twice mentioned in the Secretary-General’s remarks – to Myanmar, New Zealand to the Balkans, the Foundation’s research, training, workshops, output and reports have tackled head-on the challenges around countering violent extremism online and the rise of hate speech in online fora. The Foundation also fed into the High-Level Panel on Digital Cooperation, the framework of which, as noted, dovetails with what’s required to combat hate speech in both physical and virtual domains.

We recognize that Mr Adama Dieng, Special Adviser on the Prevention of Genocide and focal point of the action plan, holds an institutional mandate well-placed to embrace the challenges around the increasing generation and dissemination of hate speech. The Foundation’s experience in this domain is anchored to lived experience and close to two decades of activism by colleagues from Sri Lanka, as well as a long history of diplomatic, institutional, systemic and substantive interventions to and within the UN system, in New York, Geneva and country offices, including specific peacekeeping missions.

The Foundation, along with colleagues who are well-regarded experts in this domain, looks forward to – in person or electronically – supporting endeavors to widen and deepen this timely, important initiative, which undergirds the UN’s core values and mission.

For related tweets, see tweet thread here.

Full video & slidedeck of lecture: From Christchurch to Sri Lanka – The curious case of social media

First posted on the ICT4Peace Foundation’s website on 17 June 2019.

###

On 20 May 2019, Sanjana Hattotuwa, a Special Advisor at the ICT4Peace Foundation since 2006, gave a well-attended public lecture at the University of Zurich on the role, reach and relevance of social media in responding to kinetic and digital violence, including the potential as well as existing challenges around artificial intelligence, machine learning and algorithmic curation. The lecture was anchored to on-going doctoral research, data-collection and writing on the terrorist attacks in Christchurch, New Zealand in March and the Easter Sunday suicide bombings in Sri Lanka – Sanjana’s home.

A video of the full lecture, also requested by those who couldn’t attend the lecture in person, is now available on YouTube and embedded below.

The full slide deck used in the lecture can be downloaded as a PDF here. It’s also embedded below.

Sanjana’s presentation started with an overview of his doctoral research and scope of data-collection, anchored to Facebook and Twitter in particular. The daily capture and study of this data gives him perspectives at both a macro-level (quantitative) and with more precise granular detail (qualitative) which help in unpacking drivers of violence, key voices, leitmotifs and other key strains of conversations on social media after a violent incident. Comparing the terrorist incidents in Christchurch and Sri Lanka, Sanjana contextualised the global media coverage around both incidents and in particular, the criticism against social media following the live-streaming of the Christchurch incident. Calling social media an ‘accelerant for infamy’, Sanjana proposed an original thesis around how the science of murmuration and the study of mob mentality (based on the three key principles of adhesion, cohesion and repulsion), when applied to conversational and content related dynamics online, could provide insights into how violence spread and generated new audiences.
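The murmuration analogy above can be sketched in code. The following is a minimal, illustrative simulation of flocking-style dynamics using the three rule names from the lecture (adhesion, cohesion, repulsion); the weights, distances and maths are my own assumptions for illustration, not a model from the lecture itself:

```python
import math

def step(positions, adhesion=0.05, cohesion=0.01, repulsion=0.1, min_dist=1.0):
    """Advance each agent one step under three flocking-style rules."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        # Cohesion: drift toward the centre of the whole flock.
        cx = sum(p[0] for p in others) / len(others)
        cy = sum(p[1] for p in others) / len(others)
        dx = (cx - x) * cohesion
        dy = (cy - y) * cohesion
        # Adhesion: move a little toward the nearest neighbour.
        nx, ny = min(others, key=lambda p: math.dist((x, y), p))
        dx += (nx - x) * adhesion
        dy += (ny - y) * adhesion
        # Repulsion: back away from any neighbour that is too close.
        for ox, oy in others:
            if math.dist((x, y), (ox, oy)) < min_dist:
                dx -= (ox - x) * repulsion
                dy -= (oy - y) * repulsion
        new_positions.append((x + dx, y + dy))
    return new_positions

flock = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
for _ in range(50):
    flock = step(flock)
# After repeated steps the agents drift toward one another -- the
# swarm-like clustering the lecture used as an analogy for online mobs.
```

Repeated application of the three rules pulls initially scattered agents into a tight cluster, which is the intuition behind applying murmuration to how audiences coalesce around content after a violent incident.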

Sanjana then spoke about artificial intelligence (AI) and, notwithstanding the more common framing by mainstream media, the significant challenges around AI-based content curation faced by leading social media companies at present. Aside from Facebook and Twitter, Sanjana flagged the extremely problematic recommendation engines of YouTube, including the recent misrepresentation of the Notre Dame fire. Using two images, he also flagged how even the simplest of manipulations still baffled the most sophisticated of AI, looking at image classification (which is central to the identification of violent or hateful content online).

Using Sinhala – a language spoken only in Sri Lanka – Sanjana highlighted the challenges of natural language processing (NLP), which, akin to AI, was central to content curation at scale. In one slide, he typed Sinhalese characters and in another, showed an image with characters embedded into it that read differently from what was typed. Sanjana noted that the first, by itself, presented a number of challenges for companies that had for too long ignored Sinhalese or Burmese content generated on their platforms, while the second compounded those issues by presenting AI and ML architectures with nuance, context and script that training datasets at present aren’t built around.

Sanjana then went on to explain the dangerous consequences of ‘context conflation’ in a country or context with very high adult literacy and very poor media literacy. While some or all of this is known, Sanjana then went on to frame and focus, through hard data, the manner in which Twitter provided a global platform after the violence in Christchurch for people to discuss solidarity, express sadness and generate strength. A conversation far removed from hateful right-wing ideology or the promotion of violence took place, rapidly and vibrantly, on the same social media platforms that the global mainstream media chastised for having played a central role in the promotion of violence. Noting the unprecedented constitutional crisis in Sri Lanka in late 2018 as well as the content produced after the Easter Sunday attacks, Sanjana again highlighted how social media in general – and Facebook and Twitter in particular – played a central role in democracy promotion, dissent, activism, pushback against authoritarian creep and the promotion of non-violent frames after a heinous terrorist attack.

Sanjana ended the lecture by looking at inflexion points – noting that social media companies, civil society and governments needed to recognise a historic opportunity to change the status quo, including core profit models and business practices, in order to ensure, to the extent possible, that social media didn’t provide ready platforms for fermenting or fomenting fear, hate, violence and terrorism.

Sanjana underscored why the #deletefacebook movement in the West would never take root in countries like Myanmar or Sri Lanka, and was a risible suggestion to hundreds of millions using Facebook’s spectrum of apps and services. He noted the dangers around the emulation, adaptation or adoption of regulation from the West in countries with a democratic deficit, while at the same time noting the importance of introducing regulation to govern companies that need oversight to a greater degree than exists today. Linked to this, he noted that Silicon Valley’s business models were anchored to quantity over quality, and the generation of content irrespective of the timbre or tenor of that material – leading to the obvious weaponisation of platforms never meant to be Petri dishes for terrorism and violent ideologies. Facebook’s recent pivot to privacy, announced by Mark Zuckerberg, was welcomed by Sanjana with cautious optimism: while there was much to celebrate and welcome, it could also mean that academics would find it much harder or downright impossible, in the future, to study the generation and spread of violent extremism on social media. Sanjana spoke about social media as being central to the DNA of politics, political communication and social interactions in countries like Sri Lanka, noting that as a consequence, there is no alternative to the development of AI, ML and NLP techniques to deal with the tsunami of content generation growing apace, every day, already far beyond the ability of a few hundred humans to oversee and respond to. In both the penultimate and final slides, Sanjana spoke to the need to problematise the discussion of media and social media, noting how complex a landscape it really was, defying easy capture or explanation.

The lively and interesting Q&A session, which exceeded the allotted time, went into a number of aspects Sanjana touched on. The video above captures the Q&A segment as well.

Also read:
ICT4Peace input to Christchurch Call meeting in Paris
ICT4Peace was invited by Rt Hon Jacinda Ardern to discuss the “Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online”

National Dialogue limits in the age of digital media: ‘New dialogic processes’

Cross-posted from the ICT4Peace Foundation website. Originally published on 13 June 2019.

###

Special Advisor at the ICT4Peace Foundation, Sanjana Hattotuwa, joined the 2019 National Dialogues Conference via Skype video on 12 June to both present a short overview of the state-of-play and as a panellist discussing the interplay between politics, social media, conflict, peace and dialogue. As noted on the NDC’s website,

The National Dialogue Conferences are a continuation of conferences held in Helsinki, Finland since April 2014 onwards enjoying wide participation while deepening the understanding of dialogue processes among attendees. The Conference enables both a broad range of stakeholders from multiple countries and practitioners in the field internationally to take these collaborative lessons forward. These gatherings, familiarly known as NDCs, provide a space for joint reflection and in-depth discussion between practitioners, stakeholders and experts working with dialogue processes in different contexts. The Conference is organised by the Ministry for Foreign Affairs of Finland in cooperation with a consortium of NGOs consisting of Crisis Management Initiative, Felm and Finn Church Aid.

The panel consisted of,

  1. Ahmed Hadji, Team Leader and Co-Founder, Uganda Muslim Youth Development Forum
  2. Sanjana Hattotuwa, Special Advisor, ICT4Peace Foundation (video link)
  3. Achol Jok Mach, Specialist, PeaceTech Lab Africa
  4. Jukka Niva, Head of Yle News Lab, Finnish Broadcasting Company

The moderator was Matthias Wevelsiep, Development Manager – Digital Transition, FCA.

Sanjana’s presentation, titled ‘New dialogic processes’, was a rapid capture of developments in a field he has researched and worked in for over 15 years – long before, as he noted in his presentation, the current global interest in both the underlying issues that thwart effective dialogue and the new technologies that both strengthen and erode support for democratic exchanges.

Download a copy of his presentation as a PDF here.

With a title slide showcasing what at the time of the presentation were unprecedented public demonstrations in Hong Kong, Sanjana flagged well over a decade of work on strategic communications and dialogue processes anchored to conflict transformation that started in Sri Lanka in 2002, and the One-Text negotiations process that at the time, was in part anchored to software architectures that Sanjana designed and managed. Sanjana referenced a paper written 15 years ago to the month (Untying the Gordian Knot: ICT for Conflict Transformation and Peacebuilding), that as part of his Masters research anchored to the One-Text process in Sri Lanka, looked at how technology could play a more meaningful role in conflict transformation and peace negotiations processes.

Looking at how unceasing waves of content influenced and informed public conversations, Sanjana briefly highlighted the many inter-related fields of study around dialogue processes and communications, or ‘complex media ecologies’. He then offered a way for non-experts to visualise the dynamics of (social media) dialogues in contemporary societies, through murmuration or the swarm effect seen in nature, akin to mob-mentality (sans the violence). Anchored to his doctoral research, Sanjana then looked at the Christchurch terrorist attack in March, and how at scale – involving hundreds of thousands of tweets around key hashtags – Twitter had, in the 7 days after the violence, captured events and conversations around it. More central to his doctoral work, Sanjana then focussed on the media landscape in Sri Lanka, looking at both Twitter and Facebook.

Covering the general state of conversation, an unprecedented constitutional crisis, the commemoration of the end of war a decade ago and the Easter Sunday terrorist attack, Sanjana proposed that it didn’t make any sense – in Sri Lanka and arguably in other countries and contexts too – to distinguish social media as a category entirely distinct from or somehow different to mainstream or traditional media. Offering in-depth data-captures around the volume of content production, the deep biases present in the content and key dynamics of sharing and engagement, Sanjana showcased the ‘emotional contagion’ effect, whereby content online shaped how people felt.

Ending with the 90-9-1 principle (in most online communities, roughly 90% of users only consume content, 9% contribute occasionally and 1% create the vast majority of it), Sanjana cautioned against the simplistic study and reading of content online as markers of the health or effectiveness of national dialogues. Far more than the technology, Sanjana focussed on the operational logic(s) of dialogues in complex media ecosystems that were pegged to language, manner of expression, context, media literacy and a range of other factors.

In the ensuing discussion, Sanjana expanded some of these points and highlighted Finland’s emphasis on media literacy with children as a template that other countries could follow, to deal with the threat of misinformation over the long-term. In the short-term, Sanjana underscored the importance of bringing into the room technology companies – who he said were now entrenched gatekeepers of news and information far more than they chose to disclose publicly – as well as cognitive neuroscience, to better study the art of communication, especially in or around complex, protracted violent conflict.

Also read/watch: First of its kind workshop on ICTs and Constitution Building and Technology and Public Participation – What’s New?, courtesy International IDEA.

UN Digital Cooperation – Questions to SG Guterres, Melinda Gates and Jack Ma

Cross-posted from the ICT4Peace Foundation website. Originally posted on 10 June 2019. The first two questions, on (social) media literacy and the staggering bias present today in AI and ML architectures, were penned and posted by me, complementing two others on AI’s weaponisation from a colleague.

###

To support the launch of the UN SG Guterres’ Report on Digital Cooperation on 10 June 2019, ICT4Peace submitted the following four questions addressed to the UN SG, Melinda Gates and Jack Ma:

  1. In countries with poor media literacy, social media is a vector for the spread of rumours that often result in kinetic reactions. How to harness the potential of social media to inform, and at the same time, reduce its impact as a driver of hate and violence through misinformation?
  2. Persons of colour aren’t part of many machine learning architectures, from design to dataset, leading to unsurprising racial bias in execution and selection. What can the UN do to ensure new forms of racism aren’t embedded into AI systems that will undergird politics, commerce, industry and travel?
  3. The converging nature of emerging technologies allows combinations of different weapons areas (AI/LAWS, cyber, bio-chemical, nuclear) – areas that are currently dealt with in an isolated manner in GGEs and treaties. How can we break up or connect those classical weapons-specific approaches to reflect this convergence?
  4. Major tech companies have taken on a pseudo-political role through ‘ethical’ principles that might protect certain basic/human rights. How to go about this shift of political tasks and the resulting incapacity to guarantee that regulations affecting HRs have political legitimacy?
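The dataset-imbalance concern raised in question 2 can be illustrated with a toy sketch. Everything below is synthetic and deliberately naive (the groups, rates and "model" are my own illustrative assumptions): a learner dominated by the majority group can look accurate overall while failing the under-represented group almost entirely.

```python
import random

random.seed(0)

# Synthetic, illustrative data only: two groups, heavily imbalanced.
# Group "A" dominates the training data; group "B" is under-represented
# and has a different label distribution.
def make_samples(group, n, positive_rate):
    return [{"group": group, "label": int(random.random() < positive_rate)}
            for _ in range(n)]

train = make_samples("A", 950, 0.1) + make_samples("B", 50, 0.9)

# A naive "model" that just learns the overall majority label --
# a stand-in for any learner dominated by the majority group.
majority_label = int(sum(s["label"] for s in train) > len(train) / 2)

def accuracy(samples):
    return sum(s["label"] == majority_label for s in samples) / len(samples)

overall = accuracy(train)
group_b = accuracy([s for s in train if s["group"] == "B"])
# Overall accuracy looks respectable, yet the model is wrong for
# nearly every member of the under-represented group.
```

The point of the sketch is that aggregate metrics hide group-level failure: unless evaluation is disaggregated by group, the bias embedded at the dataset stage never surfaces.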

Download the UN report here.

ICT4Peace had submitted its formal input to the High-Level Panel on Digital Cooperation in October 2018, which you can find as follows:

  • Download our reflections and recommendations as a PDF here.
  • Download summary of recommendations here.

ICT4Peace has been supporting the UN System-Wide Digital Cooperation since 2007, carrying out the first ever stocktaking of UN Crisis Information Management Capabilities in 2008, which led to the adoption of the UN Secretary-General’s Crisis Information Management Strategy (CIMS) in 2009. The documents pertaining to this process since 2008 can be found here.

Input to Christchurch Call meeting in Paris

First posted on the ICT4Peace Foundation website. Features input given by me for Social Media companies and other key actors that fed into a document created for and tabled at the meeting held in Paris to launch the ‘Christchurch Call’. First published on 14 May 2019.

###

The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online

In preparation for the Christchurch Call Meeting hosted by Prime Minister Rt Hon Jacinda Ardern in Paris on 14 May 2019, ICT4Peace prepared the following ICT4Peace Policy Paper as input to the conference.

Through our work at ICT4Peace over the past years, we have witnessed and analyzed the changing use of social media and its growing impact on critical issues related to democracy, political stability, freedom, communication and security. The euphoria about the role of social media as a primarily positive force during the Arab Spring has given way to a much more layered and complex picture of its role and uses across society and around the globe. The sheer enormity of today’s social media platforms, the volume of users and the almost infinite mass of content mean that containing the spread of violent content, as witnessed after the Christchurch attack, is almost impossible.

We have been working on these issues for many years now, including, launching on behalf of the UN Security Council the Tech against Terrorism platform with inter alia Facebook, Microsoft, Twitter and Telefonica; carrying out cybersecurity policy and diplomacy capacity building for inter alia ASEAN and the CLMV countries; working with the UN GGE and ASEAN on norms of responsible state behaviour for cybersecurity with the ASEAN Regional Forum on CBMs; carrying out workshops in Myanmar and Sri Lanka on online content verification and online security; participating in the CCW GGE discussions in Geneva on Lethal Autonomous Weapons Systems (LAWS); and analyzing the role of artificial intelligence in peace-time threats such as surveillance, data privacy, fake news, justice, the changing parameters of health including the risks of certain biotechnological advances and other emerging technologies.

The challenge of controlling and removing terrorist content online

Despite now serious attempts by social media platforms to control content that violates norms, human beings are simply unable to keep up with the speed and connectivity of content creation around the world. This task can only be computationally managed by algorithms and AI, but these are themselves opaque, offer biased recommendations and search results, and are in part responsible for the rise in extremism, conspiracy theories and destabilizing content online. However, there is some hope going forward in the engineering of greater friction in times of crisis. Instead of on/off censorship, engineering greater friction into sharing can help, at scale, control and curtail the flows of misinformation. The best example of this comes from India and WhatsApp. For years, apps – linked to ‘growth hacking’ – made it as easy as possible to share and engage with content. In countries and contexts with little to no media literacy, this was quickly weaponised by actors who used the virality of content over social media, without any fact checking whatsoever, as a vector for misinformation spread at scale. Added friction – limits on the number of forwards, visual aids, an extra step before interacting and, in the backend, algorithmic suppression of poor-quality content (e.g. clickbait articles) – offers a range of ways, from the app (front end) to the algorithm (back end), in which companies can and have invested to, in effect, reduce mindless sharing.
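The forward-limit style of friction described above can be sketched as follows. This is a hypothetical illustration, not WhatsApp’s actual implementation: the constants, the `Message` type and the `forward` function are all assumptions, loosely modelled on the publicly reported behaviour of capping forwards and further restricting heavily forwarded messages.

```python
from dataclasses import dataclass

# Hypothetical friction rules; the exact numbers are illustrative assumptions.
MAX_FORWARD_TARGETS = 5        # a message may be forwarded to at most 5 chats at once
FREQUENTLY_FORWARDED_HOPS = 5  # after 5 forward hops, restrict further

@dataclass
class Message:
    text: str
    forward_hops: int = 0  # how many times this message has been forwarded on

def forward(message: Message, target_chats: list[str]) -> tuple[Message, list[str]]:
    """Apply friction rules before forwarding a message to target chats."""
    # Rule 1: cap the number of chats a message can be forwarded to at once.
    allowed = target_chats[:MAX_FORWARD_TARGETS]
    # Rule 2: once a message has travelled many hops, tighten the cap further
    # (analogous to the 'forwarded many times' restriction).
    if message.forward_hops >= FREQUENTLY_FORWARDED_HOPS:
        allowed = allowed[:1]
    forwarded = Message(text=message.text, forward_hops=message.forward_hops + 1)
    return forwarded, allowed

msg = Message("breaking news!", forward_hops=6)
fwd, chats = forward(msg, ["chat-a", "chat-b", "chat-c"])
# A heavily forwarded message now reaches only one chat per forward action,
# slowing viral spread without blocking the content outright.
```

The design point is that each rule reduces the branching factor of viral spread rather than censoring content: the message still travels, just more slowly, buying time for fact checking and moderation.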

Protecting democracy, ethical principles and a free open Internet

The challenge of balancing the need to maintain a free and open Internet with the need for security and protection of human beings, data, ethical principles, human rights and democratic processes is daunting. It is essential to achieve a broad alliance pushing for practical change to prevent the spread of extremist content online and the glorification of the perpetrators of mass murder.  It is also important to protect users from fake news, misinformation and manipulation in particular in the terrorist context. From the Global South perspective, a key concern is also how and if measures undertaken by social media companies in response to Western demands, challenges and concerns, might undermine the ability of civil society to hold authoritarian governments accountable, and could weaponise processes and structures to clamp down on dissent. We must remain vigilant to ensure that at each step of the way these concerns are considered.

Social Media, Coming of Age

We have a responsibility to current and future generations to ensure a framework for all emerging technologies that respects basic human rights and ethical principles. Social media is evolving and society needs to figure out how these tools should be used now and in the future. There is a real risk of a race to the bottom, with hate and the baser nature of human beings taking the lead as users gravitate toward clickbait and gruesome content. However, society seems to have fortunately reached an ethical border with the livestreaming of murder, just as we had reached a border in biotechnology with the cloning of a human being. We need to develop guidelines for how we, as a global society, want to move and operate in the social media space. What kind of ethical principles need to be built into the algorithms and AI that will control our future content and interaction online? How should social media companies evolve in their approaches and business models to take into account the human dimension? We need to shift and develop technical measures that consider the quality, not just quantity, of conversations online, the mental and physical health and age of consumers and the ways in which content is shared and manipulated.

For years, Facebook and other Silicon Valley companies prioritised ‘growth hacking’ by which they meant that the increase in user base for products and platforms overtook every other aspect of the business. At Facebook this was led by Sheryl Sandberg, the company’s COO. This is why the company is facing the issues it is today, and why Mark Zuckerberg’s pivot to the health of conversation and content is significant. It is a major change of course for the company, including the new emphasis on privacy over sharing, a concept scoffed at by Zuckerberg himself in the past. Though the contours of what the company will become are clear, it is unclear what exactly will be done to ensure the quality of conversations and content are engineered to be biased away from the toxic, violent and hateful. But across major platforms including YouTube, Twitter and Facebook’s Instagram, Messenger and WhatsApp platforms, there is a new emphasis on securing user privacy and at the same time engineering ways that also protect users from hate, harm and violence.

Added to this is a new interest, at the operating system level of phones and tablets on both iOS and Android, in ways through which Apple and Google respectively are now keen to ensure users have a good balance of on-screen and off-screen time. This includes logging device and app usage at the OS level, and engineering tweaks on apps like Instagram (at first deeply unpopular within Facebook) to give a visual indication of when new content was over and the user was scrolling through content already seen or engaged with, thus ensuring the user spent less time on the app, not more. Less time on apps meant fewer advertisements seen and fewer interactions with the app or operating system, in turn resulting in fewer monetisable action points, impacting, at scale, profits as well as the harvesting of user-level engagement metrics. So, what appears to be a simple change is actually a major shift for companies that now prioritise the health of users over constant engagement and addiction to apps.

Proposed actions for Social Media companies and other key actors:

  1. Develop joint multi-stakeholder taskforces to consider the big picture of the human dimension of social media and develop ethical principles that could guide Social Media companies and users in the online world.
  2. Social Media companies in cooperation with government, law enforcement and civil society need to reinforce joint SWAT team responses for content that meets certain extreme criteria, e.g. in particular livestreaming of murder or other heinous acts.
  3. Prioritize the development of AI that could better define and distinguish types of content and support the clamping down, in emergency situations, on the connectivity of terrorist content. (The real problem at the moment is the definition of terrorist content. No AI at present can easily distinguish between the media’s coverage of a terrorist incident, which may include graphic violence, and a terrorist group’s promotion of violent ideology, which may include the same or similar graphic violence. AI can also be fooled and, without human review, has led to instances where content documenting human rights abuses in war has been entirely deleted, in effect contributing to the impunity of perpetrators.)
  4. Ensure existing AI and algorithms do not promote terrorist content and extremist views by pushing more such content to users. (YouTube’s recommendation engine / algorithm, widely and increasingly criticized for the promotion over time of increasingly extremist content, is undergoing major overhauls on these lines).
  5. Proactively remove hate speech and repeat offenders. (This is directly linked to how much of human and technical resources can be put in by these companies, and also the challenge of end-to-end encrypted channels / platforms like WhatsApp, where even the company doesn’t easily know the kind of content exchanged in groups.)
  6. Improve review mechanisms and responsiveness.
  7. Reinforce trusted reporting networks that expedite the flagging of content vetted through experienced individuals and institutions.
  8. Improve the reporting mechanisms built into Facebook apps like Facebook Messenger to make it easier and simpler to report violent or hateful content.
  9. “It is important in these difficult times also to remember that just as social media helps extremist ideology take seed and grow, it also helps in healing, empathy, gestures of solidarity, expressions of unity, the design of conciliatory measures and the articulation of grief and sympathy. In the immediate aftermath of the Christchurch attack, a cursory top-level study of the nearly 85,000 tweets generated in the 24 hours after the violence shows a global community outraged or dismayed at terrorism, an outpouring of love, empathy and solidarity, engagement that spans many continents and languages, addressing prominent politicians and journalists, featuring hundreds of smaller communities anchored to individuals based in New Zealand and in a manner overwhelmingly supportive of the Muslim community.” – Sanjana Hattotuwa in Pulse Points

When a law is not the answer

Wonderful news, said all the Sri Lankans. But why Queensland, all the Australians asked. Fifteen years ago, a Rotary World Peace Fellowship award offered a choice of seven universities around the world at which to undertake a Masters in Peace and Conflict Studies. I chose the University of Bradford. I was awarded a place at the University of Queensland, in Brisbane. I didn’t complain. The scholarship was a chance to get out of Sri Lanka and rigorously study what I had till then done on the ground, at a time when violent conflict dynamics were, after some years of relative calm, rising rapidly. My Australian friends, however, were concerned that I would face in Queensland a degree of discrimination and intolerance they said I would never encounter in Sydney or Melbourne. I didn’t know enough to argue and expected the worst. After two years of extensive travel within the state and country, I returned to Sri Lanka having experienced very little along the lines I was warned about. Others, though, at the same time, had a different experience – never physically violent, but far more verbally abusive. For them and me, this othering was at the margins of society. Well over a decade ago and without social media, violent extremism and ideology had to be actively sought out to be engaged with. Racism wasn’t digitally dispersed.

It is with an enduring affection for Australia that I am deeply concerned about disturbing new legislation, passed hurriedly last week, which uses the terrorism in Christchurch to justify overbroad controls of social media. The central focus of my doctoral research at Otago University is technology as both a driver of violence and a deterrent. How, today, social media promotes hate or harm is well known and widely reported. As with any generalisation, though elements of truth exist, the simplification of a complex problem results in illegitimate targets of fear or anger. Social media companies, for their part, have remained unmoved by what those like me have for years warned them about: the abuse of platforms by those who seek to profit from violence. Coherence and consistency in policies that respond to the seed and spread of violence are lacking and resisted. However, significant changes in stance, response and policies are coming. The terrorism in Christchurch has accelerated globally what was sporadically mentioned or implemented with regard to safeguards around the production and promotion of content inciting violence, hate and discrimination. However, we must resist what appear to be simple answers to complex challenges, whether they come from governments or big technology companies.

Violent extremism has many drivers, both visible and hidden. It doesn’t bloom overnight. Social media, inextricably entwined in New Zealand’s socio-political, economic and cultural fabric as it is back home in Sri Lanka, cannot be blamed, blocked or banned in the expectation that everything will be alright thereafter. Driven by understandable concern around the dynamics of how the terrorism in Christchurch spread virally on social media, the Australian legislation – rushed through in just two days without any meaningful public debate, independent scrutiny or critical input – doesn’t address root causes of terrorism, extremism or discrimination.

Amongst other concerns, and though it sounds very good, holding social media companies and content providers criminally liable for content sets a very disturbing template and precedent. American corporate entities are now required to oversee, to a degree that is technically infeasible and humanly impossible, information produced on or spread through their services. This risks the imposition of draconian controls over what’s produced, judged by hidden indicators, with little independent oversight and limited avenues for appeal. As a global precedent, the law is even more harmful, allowing comparatively illiberal governments to portray as the protection of citizens what are essentially parochial laws that stifle democratic dissent.

David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, is also deeply concerned. In an official letter to the Australian Minister of Foreign Affairs, Kaye stresses, amongst other more technical, procedural and legal points, the need for public review and proportionality, international legal obligations on the freedom of expression, and imprecise wording in the law that is entirely removed from how digital content is generated in society today, and by whom. And herein lies the danger for New Zealand too. Politicians, under pressure to respond meaningfully, need to assuage the fears of a grieving country through demonstrable measures. The tendency is to pick an easy target and push through solutions that look and sound strong. The underlying drivers of violence and conflict, however, simmer and fester. Measures taken to control and curtail gun ownership are welcome, and arguably, long overdue. Policymaking around social media, however, is a different problem set that cannot be as easily, or concretely, addressed.

This is not a submission to do nothing. Rather, it cautions against the understandable appeal of following the Australian response and law. Steps around the non-recurrence of domestic terrorism must certainly embrace aspects of social media regulation and related legislation. The public must be involved in this. We know already that social media reflects and refracts – mirroring the values of consumers as well as, in ways academics are still struggling to fully grasp, changing the attitudes and perceptions of users over time. This requires governments to work iteratively with social media companies on checks and balances that systemically decrease violence in all its forms.

Elsewhere in the world, politicians who know the least about social media seek to control it, and those who know more, or better, often abuse it. Kiwis, led by PM Ardern’s government, have a historic opportunity to forge a response to terrorism – relevant and resonant globally – that incorporates how best government can work with technology companies to protect citizens from harm. Australia, with the best of intent, got it very wrong. New Zealand, with a greater calling, must get it right.

###

This article was first published in the Otago Daily Times on 16 April 2019, under the title ‘A Historic Opportunity’.

Principles over promises: Responding to the Christchurch terrorism

Almost exactly a year ago, Facebook was in the news in New Zealand over a row with Privacy Commissioner John Edwards. The heated public exchange between Edwards and the company took place in the context of the Cambridge Analytica scandal, in which the private information of millions of Facebook users was harvested, illicitly, for deeply divisive, partisan and political ends. Edwards accused the company of breaching New Zealand’s Privacy Act. The company responded that it hadn’t and that the Privacy Commissioner had made an overbroad request which could not be serviced. Edwards proceeded to delete his account and warned others in New Zealand that continued use of Facebook could impact their right to privacy under domestic law. Just a few months prior, the COO of Facebook, Sheryl Sandberg, was pictured on Facebook’s official country page with New Zealand PM Jacinda Ardern. The caption of the photo, which captured the two women in an embrace after a formal meeting, flagged efforts the company was making to keep children safe. It is not surprising that Sandberg also wrote the paean to Ardern in last year’s Time 100 list of the most influential people.

The violence on the 15th of March in Christchurch dramatically changed this relationship. In response to the act of terrorism, Facebook announced, for the first time, a ban on “praise, support and representation of white nationalism and separatism on Facebook and Instagram”. Two weeks after the killings in Christchurch, a message by Sandberg was featured on top of Instagram feeds in the country and featured in local media. The message noted that Facebook was “exploring restrictions on who can go Live depending on factors such as prior Community Standard violations” and that the company was “also investing in research to build better technology to quickly identify edited versions of violent videos and images and prevent people from re-sharing these versions.” Additionally, the company was removing content from, and all praise or support of, several hate groups in the country, as well as Australia. Sandberg’s message called the terrorism in Christchurch “an act of pure evil”, echoing verbatim David Coleman, Australia’s immigration minister, in a statement he made after denying entry to far-right commentator Milo Yiannopoulos, who after the attack referred to Muslims as “barbaric” and Islam as an “alien religious culture”. Last week, New Zealand’s Chief Censor, David Shanks, declared the document released by the killer ‘objectionable’, which now makes it an offence to share or even possess it. Following up, authorities also made the possession and distribution of the killer’s live stream video an offence. Facebook, Twitter and Microsoft have all been to New Zealand in the past fortnight, issuing statements, making promises and expressing solidarity. Silicon Valley-based technology companies are in the spotlight, but I wonder, why now? What’s changed?

A report by BuzzFeed News published in June 2017 flagged that, since Facebook Live’s debut in 2015, at least 45 instances of grisly violence including shootings, rape, murders, child abuse and attempted suicides had been broadcast on the platform. That number would be higher now, not including Christchurch. In May 2017, the Founder and CEO of Facebook, Mark Zuckerberg, promised that 3,000 more moderators, in addition to the 4,500 already working, would be added to review live and native video content. Promises to do more or better are what Zuckerberg and Sandberg are very good at making in the aftermath of the increasingly frequent and major privacy, ethics, violence or governance related scandals Facebook finds itself in the middle of. Less apparent and forthcoming, over time, is what the company really does, invests in and builds.

There are also inconsistencies in the company’s responses to platform abuses. In 2017, the live video on Facebook of a man bound, gagged and repeatedly cut with a knife, lasting half an hour, was viewed by 16,000 users. By the time it was taken down, it had spread on YouTube. A company spokesperson at the time was quoted as saying that “in many instances… when people share this type of content, they are doing so to condemn violence or raise awareness about it. In that case, the video would be allowed.” Revealingly, the same claim wasn’t made with the Christchurch killer’s production.

The flipside to this is the use of Facebook’s tools to bear witness to human rights abuse. In 2016, the killing of a young black American, Philando Castile, by police in Minnesota was live-streamed on Facebook by his girlfriend, Diamond Reynolds, who was present with him in the car. The video went viral and helped document police brutality. There is also clear documented evidence of how violence captured from a Palestinian perspective, as well as content on potential war crimes, is at greater risk of removal on social media platforms. In fact, more than 70 civil rights groups wrote to Facebook in 2016, flagging this problem of unilateral removals based on orders generated by repressive regimes, giving perpetrators greater impunity and murderers stronger immunity.

It is axiomatic that deleting videos, banning pages, blocking groups, algorithmic tagging and faster human moderation do not erase the root causes of violent extremism. The use of WhatsApp in India to seed and spread violence is a cautionary tale in how the deletion of content on Facebook’s public platforms may only drive it further underground. The answer is not to weaken or ban encryption. As New Zealand shows us, it is to investigate ways through which democratic values address, concretely and meaningfully, the existential concerns of citizens and communities. This is hard work and beyond the lifespan of any one government. It also cannot be replaced by greater regulation of technology companies and social media. The two go hand in hand, and one is not a substitute for the other. It is here that governments, as well as technology companies, stumble, by responding to violent incidents in ways that don’t fully consider how disparate social media platforms and ideologues corrosively influence and inform each other. Content produced in one region or country can, over time, inspire action and reflection in a very different country or community.

Take an Australian Senator’s response, on Twitter, to the Christchurch terrorism. Though condemned by the Australian PM, the very act of referring to the Senator and what he noted on Twitter promoted the content to different audiences, both nationally and globally. The Twitter account, as well as the Facebook page of the Senator in question, produce and promote an essential ideology indistinguishable from the Christchurch killer’s avowed motivations. It is the normalisation of extremism through the guise of outrage and selective condemnation. What should the response be?

In Myanmar, an independent human rights impact assessment on Facebook, conducted last year, resulted in the company updating policies to “remove misinformation that has the potential to contribute to imminent violence or physical harm”. And yet, it is unclear how what may now be operational in Myanmar is also applied in other contexts, including in First World countries at risk of right-wing extremism.

I wonder, does it take grief and violence on the scale of Christchurch to jolt politicians and technology companies to take action around what was evident, for much longer? And in seeking to capitalise on the international media exposure and attention around an incident in a First World country, are decisions made in or because of New Zealand risking norms around content production, access and archival globally, on social media platforms that are now part of the socio-political, economic and cultural DNA of entire regions? Precisely at a time when any opposition to or critical questioning of decisions taken on behalf of victims and those at risk of violence can generate hostility or pushback, we need to safeguard against good-faith measures that inadvertently risk the very fibre of liberal democracy politicians in New Zealand and technology companies seek to secure. An emphasis on nuance, context, culture and intent must endure.

So must meaningful investment, beyond vacuous promises. In 2016, Zuckerberg called live video “personal and emotional and raw and visceral”. After the Christchurch video’s visceral virality, it is unclear if Sandberg pushed this same line with PM Ardern. In fact, Facebook astonishingly allowed an Islamophobic ad featuring PM Ardern wearing a hijab, which was only taken down after a domestic website’s intervention. Clearly, challenges persist. Social media companies can and must do more, including changing the very business models that have allowed major platforms to grow to a point where they are, essentially, ungovernable.

Grieving, we seek out easy answers. Banning weapons and blocking extremist content helps contain and address immediate concerns. Ideas though are incredibly resilient, and always find a way to new audiences. The longer-term will of the government to address hate groups, violent extremism in all forms and the normalisation of othering, from Maori to Muslim, requires sober reflection and more careful policymaking. What happens in New Zealand is already a template for the world. We must help PM Ardern and technology companies live up to this great responsibility.

###

First published on Scoop NZ on 4 April 2019.