Facebook’s Human Rights Impact Assessment (HRIA) on Sri Lanka: Some brief thoughts

First reading

Facebook today released its Human Rights Impact Assessment (HRIA) for Sri Lanka. At the time of writing, Joshua Brustein, writing for Bloomberg, has one of the first takes on the report, highlighting Facebook’s apology for its role in Sri Lanka’s violence. Commissioned by Facebook, the report was conducted and written by Article One. As the Executive Summary of the HRIA notes,

It is important to note that the findings from this assessment do not include potential impacts after September 2018, when Article One conducted its two field trips to Sri Lanka. While the situation in Sri Lanka has changed significantly since that time, this assessment does not provide an overview of how those changes may have played out on the Facebook platform. Nor does it incorporate any Facebook interventions that may have taken place, on or off-platform, since September 2018.

For full disclosure, I spoke with Article One on 12 September 2018 as a key respondent. The long call outlined a number of significant issues around platform, product and due process. Facebook’s apology for “the very real human rights impacts that resulted” is welcome and goes beyond what the company had admitted to earlier. As lead researcher on the first evidence-based reports on Facebook’s role in inciting racism, Islamophobia, homophobia and other forms of violence, published by the Centre for Policy Alternatives (CPA), I clearly and repeatedly warned the company about both the feared and real impact of toxic content produced for and engaged with on its products, including WhatsApp and Messenger, for years before the riots in March 2018. Neither CPA nor I got any response till 2018. The HRIA is long overdue, but an important recognition of a Rubicon crossed in 2018.

Article One and Facebook flag a number of ways things have changed since 2018, and for the better. This is also welcome news. Much of this was informed, to varying degrees, by strategic input, contextual guidance and grounded insight provided regularly and pro bono to Facebook, including at senior management and policy levels. These interactions since 2018, both virtual and in-person, have covered all major aspects of Facebook’s platform and products, including but not limited to:

- the development and revision of Community Guidelines, and country-specific moderation frameworks;
- desktop and mobile UI/UX design and development, translation services and quality assurance;
- the capture and presentation of coordinated inauthentic behaviour, the study of weaponisation on the platform and the gamification of content;
- introducing senior representatives to senior government officials and leading civil society actors, and escalating issues on behalf of victims;
- responding to ad policy revisions and the roll-out of political ad archives;
- responding to AI and ML development, including the early provisioning of training corpora, input into algorithmic content flagging (especially in Sinhala) and modelling account and actor behaviours;
- electoral integrity and the use of the platform to support democratic electoral processes;
- the development and review of interstitials, and of greater friction in core messaging products;
- human rights norms-based input into the development of guidelines and corporate workflows, and the development of privacy norms;
- the capture, analysis and reporting to the company of sustained attacks against HRDs at risk of imminent harm;
- research on the targeting and harassment of women;
- and overseeing the company’s responses to user-generated reports.

All this is to counter any lingering belief that Zuckerberg’s assertion to Congress in April 2018 (a month after the New York Times ran a top-of-the-fold headline on Sri Lanka’s riots), that AI forms the basis of the company’s fight against hate speech, has been realised to any meaningful degree. The company still relies on humans. More on this later, in relation to a recommendation by Article One. It is, however, a relief that Facebook is, prima facie, taking human rights more seriously in volatile markets like Sri Lanka.

The recommendations

Article One’s Executive Summary is clear on Facebook’s role in seeding, spreading at scale and sustaining the March 2018 riots,

Article One’s assessment showed that the Facebook platform contributed to spreading rumors and hate speech, which may have led to “offline” violence. Indeed, the assessment found that the proliferation of hate speech (e.g., Kill all Muslims, don’t even save an infant; they are dogs) and misinformation (e.g., that a Muslim restaurateur was adding sterilization pills to his customers’ food) may have contributed to unrest and in the case of the restaurateur and others, physical harm.

The report goes on to flag Facebook’s impact(s) on women, the LGBTQ+ community, HRDs and children. On the first point, the insights and recommendations in Opinions, B*tch: Technology Based Violence Against Women in Sri Lanka, published in 2019, remain valid and urgent. Article One’s recommendations cover seven broad areas.

There’s a lot of really good feedback and insight here – which needs to be appreciated, welcomed and underscored.

The most obvious pushback will be around the development of AI to “predict when online hate speech and misinformation may trigger offline unrest and violence”. Here, Article One inhabits a reality very different to, and far removed from, what’s possible and, importantly, what’s desirable. Facebook, in its response, rightly notes that this isn’t possible now and won’t be for a long time. What it doesn’t speak to is the desirability of having AI judge, especially given well-known ML limitations when dealing with limited corpora, specific cultures, context-specific aesthetics, nuanced linguistic usage, sarcasm and irony. In other words, given that Sinhala is spoken only in Sri Lanka, it will be a while before we get to what Article One wants by way of technical proficiency just on the company’s side, without even going into the multivariate, enduringly fluid, wicked problem that is conflict early warning (CEW), a problem-space I’ve reflected on since 2006. I would like Facebook to help HRDs and government identify moments that feature a high risk of kinetic, physical violence. I would caution against any sort of prediction engine, pegged to black-box algorithms, located in a private corporation operating in the US, predominantly geared to enhance shareholder value through greater profit, that determines the likelihood of violence. Just no.
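To make the limited-corpus problem concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the training strings, labels and probes), and it bears no relation to Facebook’s actual classifiers. The point is simply that a model trained on a small corpus has nothing to anchor on when confronted with dog-whistles, sarcasm or romanised Sinhala:

```python
# Toy illustration only: a classifier trained on a tiny corpus, standing in
# for the limited labelled data available for a language like Sinhala.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: overtly hateful vs. benign sentences.
texts = [
    "kill them all",
    "burn their shops down",
    "they are dogs",
    "lovely weather today",
    "congratulations on the win",
    "see you at the market",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = hate, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Dog-whistles, sarcasm and romanised Sinhala share almost no vocabulary
# with the training data, so the model's scores are close to guesswork.
probes = [
    "our guests have overstayed their welcome",  # veiled incitement
    "what wonderful neighbours they make",       # sarcasm
    "un okkoma ballo",                           # romanised Sinhala, invented probe
]
for text in probes:
    p_hate = model.predict_proba([text])[0][1]
    print(f"hate probability {p_hate:.2f} :: {text}")
```

A production system faces the same gap at vastly larger scale, in a script and sociolinguistic context that large pre-trained models barely cover.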

The recommendation to publish data on content moderators is timely, but Article One suggests the disclosure of location, which is unworkable and dangerous. That degree of granularity provides an easy escape for Facebook. HRDs have long since asked for more aggregate data on content moderators, which the HRIA report notes increased to 57 over 2018 (from how many before, it is unclear). It is not known how many are working today.

Article One wants Facebook to provide “the ability to opt out of Facebook-driven curation through an easily accessible function on the platform”. It is unclear what this means. In 2018, many (in the US) didn’t understand how the News Feed worked. By late 2018, the company had tweaked its black-box algorithm to reduce the virality of sensational (and often false) content. In April 2019, the company made it easier for end-users to customise the News Feed. Right now, it is prioritising one category of news, linked to the Coronavirus pandemic. None of these changes has been about opting out of Facebook’s curation, which is akin to asking the News Feed to be more like Twitter’s timeline (the way it was, before Twitter decided to also do a Facebook and screw it up, a move thankfully resisted by users). On Facebook, Article One’s recommendation would lead to an unmanageable volume of content, given the velocity of updates from accounts one is linked to. The recommendation should have been around greater, more easily accessible user-led control of the News Feed and, importantly, transparency around algorithmic choices related to curation. This also goes to the heart of what David Kaye, the current UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, flagged in 2019, calling for the fullest incorporation of the UDHR into content moderation.

Many of the other recommendations are good, but those familiar with users and use-cases will recognise a familiar problem. A lot of these, even when and if implemented by Facebook, don’t carry the necessary guidance in Sinhala (and Tamil) for user adoption and awareness. Discoverability of new or enhanced features enabling greater (end-user) agency remains an enduring issue. More on this later, when dealing with Facebook’s response to the HRIA. ‘Levers of influence’ targeted at government don’t feature in Article One’s report, aside from expanding privacy laws. For example, noting how inextricably entwined Facebook’s platform and product DNA is with SMEs and commercial enterprise, the company could work with government to ensure toxicity, violence and racism (which harm business and investor confidence) are kept out, and that progressive corporate behaviour, including by entities that creatively or aggressively push back on hate, is recognised and rewarded.

There’s also an emphasis on Facebook as platform, instead of looking at Facebook as a concert of products, ranging from Messenger and WhatsApp to Instagram, which is where a lot of the content generation and related conversation has migrated (and increasingly so, since 2018). This brings additional challenges the HRIA doesn’t address, but which were obvious even at the time it was penned (an archive of content inciting hate and violence, including what was exchanged on WhatsApp leading up to and during the March 2018 riots, is here).

Facebook’s response(s) to the HRIA report

We deplore this misuse of our platform. We recognize, and apologize for, the very real human rights impacts that resulted.

There’s a lot of progressive and, I believe, sincere feedback to the HRIA report from Facebook. Media reports will pick up on the most obvious of these points, so I’ll focus on some that are marginal but important. In no particular order,

Nowhere does Facebook mention the significant developments made in text detection within images (including memes) since 2018, when it first announced Rosetta (read the Wired piece and also the academic paper). Given the degree to which memes (including in Sinhala gossip pages) dominate public news and information flows on Facebook in Sri Lanka, the evolution of Rosetta over the past two years is significant, allowing for research like this. And yet, I am unable to find any recent update on Rosetta outlining capabilities around the detection of non-Latin scripts, including Sinhala, and subsequent pipelining into the company’s semantic classifiers.
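For readers unfamiliar with the pattern, a Rosetta-style system broadly does two things: run OCR over an image, then feed the extracted text into downstream semantic classifiers. Here is a minimal, hypothetical sketch of that pipeline using the open-source Tesseract engine (which ships a Sinhala model, ‘sin’), rather than anything Facebook has published; the file name and the classifier stub are stand-ins:

```python
# A minimal, hypothetical sketch of a Rosetta-style pipeline: extract text
# from a meme image, then hand it to a downstream classifier. This uses the
# open-source Tesseract OCR engine (via pytesseract), not Facebook's systems.
# Requires the tesseract binary plus the 'sin' and 'eng' traineddata files.
import pytesseract
from PIL import Image

def extract_meme_text(image_path: str) -> str:
    """OCR the image, trying both Sinhala and English models."""
    image = Image.open(image_path)
    # The '+' syntax chains language models, so code-mixed
    # Sinhala/English memes are covered in a single pass.
    return pytesseract.image_to_string(image, lang="sin+eng")

def classify_hate_speech(text: str) -> float:
    """Placeholder for a semantic classifier returning a hate-speech score.
    In a production pipeline this would be a trained model, which is exactly
    where Sinhala-language capability remains the open question."""
    raise NotImplementedError

if __name__ == "__main__":
    # 'meme.png' is a hypothetical input file.
    print(extract_meme_text("meme.png"))
```

The OCR half of this pattern is largely solved for Sinhala; it is the classifier half, and the pipelining between the two, that Facebook says nothing about.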

Though obviously resonant in and relevant to the HRIA, neither Article One nor Facebook flags how and if AI advances around hate speech detection (also announced today), and community challenges to better train AI classifiers, are relevant in the Sri Lankan context/market. This is classic Facebook behaviour, where one side of the company, or one team, doesn’t quite know what another side or team is up to or doing. I’ve lost count of the number of times I have asked Facebook staff to just email one of their colleagues to find out more about something that impacts their mission and mandate.

Nowhere does Facebook mention how and if measures are being taken to address the abuse of Facebook Live, which, both before and after the Christchurch massacre, remains a problem in Sri Lanka.

From doxxing to abuse, racism and incitement to hate, these videos go viral when launched from the accounts of influencers who are in turn often partial to a political ideology. Live videos thus may not feature killing sprees that trigger algorithmic action, but often feature dog-whistling and veiled but clear references to specific communities, religions, groups and individuals, and in Sinhala seem to escape the degree of platform oversight that this feature, and the content broadcast on it, receives in English.

Facebook doesn’t reveal how many moderators are employed to review user-generated reports or platform-flagged content in Sri Lanka, working in Sinhala and Tamil. The “significant increase” noted in the response is utterly meaningless, because we do not know what the baselines were for 2018, 2019 or even the start of 2020.

It is true that Facebook regularly organises sessions with NGOs and HRDs. The fundamental disconnect, though, is that the company seeks to empower those who fight for human rights while, at the same time and through a different arm of the company, working with political stakeholders and mainstream media to increase their reach and engagement. The disinformation ecology in Sri Lanka has, unequivocally and for years, placed politicians and partisan TV channels as the primary producers of mis/mal/disinformation. What the company in effect does is negate what it helps HRDs and NGOs do on the platform by helping amplify the worst content from the most divisive of accounts. The essential tension here within Facebook, and the violence of these twin imperatives, is a global problem with local implications.

Facebook notes investments to mitigate voter interference. That’s not where the problem is in Sri Lanka. A presentation to the Centre for Monitoring Election Violence (CMEV) in November 2019, at the time of Sri Lanka’s last Presidential Election, and a dedicated Google Doc flagging the (ab)use of Facebook during the election campaign highlight where the problem-space really is today, and where it is heading. This also calls into question Facebook’s political ad archives and ad oversight because, in as much as I have studied it for doctoral research, propaganda generation and spread operate in a black economy entirely independent of, and with scant regard for, Facebook’s increasing oversight and regulation of campaign-authorised ads.

Facebook’s claims about advances around dehumanising speech, harassment and bullying are welcome, if overdue. The broader application of these across products is expected but, more pressingly, even during Covid-19 it is evident that the platform is unable to deal with the volume and velocity of racist content. Two years after the events the HRIA responds to, the same data signatures persist and pulsate across Facebook’s products and platforms. The data signatures of disinformation in Sri Lanka are evolving at pace, and are much more complex than Facebook’s response would have you believe.

Claims about third-party fact-checking in Sri Lanka have to be taken with a grain of salt, quite apart from broader academic and global scepticism around the effectiveness of fact-checking on social media platforms.

The company claims it has raised user awareness around approaches to non-consensual intimate images (NCII), in particular through the Not Without My Consent portal. The portal (https://www.facebook.com/safety/notwithoutmyconsent), Facebook notes in its response to the Sri Lankan HRIA, is available in 50 languages. You would think the company was getting at the availability of this vital content in Sinhala. But no: the Sinhala version of Facebook doesn’t even have this section in the Safety Centre.

Profit, policy, principles

Fundamentally, neither the HRIA nor Facebook touches the thorny, global policy disconnect between everything the company says it does to stop mal/mis/disinformation and, at the same time, the “get out of oversight free” card it offers to some of the largest and most influential producers of it. The internal tension around this controversial policy aside, there is a growing problem-space around the company’s unwillingness to fact-check content by politicians (save for claims around the Coronavirus) and the degree to which, given Twitter’s position on political ads, Silicon Valley companies should be involved in all this. Implicating Facebook’s investments around electoral integrity, the weaponisation of the platform during Covid-19, by amplifying communal, ethno-religious and militaristic frames inimical to constitutional rule, builds on what was done in November 2019 during the Presidential Election. The violent incompatibility between Facebook’s own policies results in a Janus-faced company, simultaneously interested in protecting rights yet allowing the rapid growth of content and engagement that undermines rights. Something has to give.

The tension between profit and principle goes to the heart of Sri Lanka’s HRIA. Discontinued algorithms that favoured rapid growth over everything else created, in Sri Lanka and more globally, Hydra-headed problems the company has only been coming to grips with, and trying to address, since 2016. I am really looking forward to the evolution of what Facebook says is the first iteration of its human rights defenders programme. At the same time, the company’s mixed messaging dilutes trust and erodes expectations. But what gives me hope?

Miranda Sissons, Facebook’s first human rights chief, is less than a year into her job. Sissons, whom I’ve known since before she joined Facebook, is acutely aware of the problem-space, and at a global scale. I’m not envious of the position she holds, or of how much is riding on what she does and gets right. Her role, along with parallel developments in the company, is no less than a late-stage rights-pivot for a company that is, beyond the headlines, still wedded to the growth of shareholder value over principles, and profit over progressive policy-making. That’s an unholy equation, easy for outsiders to critique but harder to change. Perhaps small wins help, akin to those flagged in the HRIA. I just hope Sri Lanka meaningfully benefits not just from what’s promised and proclaimed, but from what’s actually implemented and done well.