Facebook’s Civil Rights Audit: Quick take from outside the US

After an elephantine gestation, Facebook released its civil rights audit on 7th July 2020. The 90-page report, anchored in the main in civil rights issues in the US, is absolutely scathing in its criticism of Facebook. Nothing captured in it is new or surprising to those of us who have flagged these issues for years.

I jotted down some points that sprang to mind while speed-reading the report today, particularly because the scale and scope of the issues it highlights around Facebook’s platform and product (ab)use have more direct, immediate and enduring implications for users globally, including in Sri Lanka.

Adding to a growing body of evidence over the years, the Civil Rights Audit (CRA) makes it abundantly clear that Mark Zuckerberg does not listen to anyone. This is a problem flagged by Steven Levy in both his compelling book and a conversation with Techonomy’s David Kirkpatrick. Levy’s Facebook: The Inside Story, the most insightful (and impartial) capture of Zuckerberg I’ve read to date, underscores what Zuckerberg himself has noted – all key and final decisions around content rest with him, and him alone.

“I have multiple instances in the book where Facebook was about to do something, and it either violated privacy or uncovered a vulnerability, and people close to Mark said ‘Don’t do this,’ or even said ‘This is wrong to do this.’ And Zuckerberg overruled them and did it anyway.” Levy gave the company a chance to address that very issue in his fact-checking process. He explicitly laid out that pattern in Zuckerberg’s behavior, exactly as he said it to us, for the company’s response. And in reply to this question, the public relations people returned him a single word: “Accurate.”

This is a problem. Zuckerberg doesn’t even heed guidance and policies the company has already established around content moderation on its platforms. It is unclear how a company can effectively oversee content if the person in charge, by default and despite internal and external advice, guidance, pushback and input, refuses to budge from bad decisions. As the CRA flags, these decisions, made for expedient or purely domestic (US) considerations, have enduring global implications. This is something a lot of us, including the British government, have noted repeatedly.

As I wrote in Facebook’s Human Rights Impact Assessment (HRIA) on Sri Lanka: Some brief thoughts,

Fundamentally, neither the HRIA nor Facebook touches the thorny, global policy disconnect between everything the company says it does to stop mal/mis/disinformation and at the same time, the “get out of oversight free card” it offers to some of the largest and most influential producers of it. The internal tension around this controversial policy aside, there is a growing problem-space around the company’s unwillingness to fact-check content by politicians (save for claims around Coronavirus) and the degree to which, given Twitter’s position on political ads, Silicon Valley companies should be involved in all this. Implicating Facebook’s investments around electoral integrity, the weaponisation of the platform during Covid-19 – by amplifying communal, ethno-religious and militaristic frames inimical to constitutional rule – builds on what was done in November 2019 during the Presidential Election. The violent incompatibility between Facebook’s own policies results in a Janus-faced company, simultaneously interested in protecting rights, yet allowing for the rapid growth of content and engagement that undermines rights. Something has to give.

Civil rights in the US are, in innumerable ways, inextricably entwined with and connected to human rights everywhere. It is unclear to what degree the company realises this at senior management level, even as staff I interact with more regularly do. In a recent call hosted by the ICT4Peace Foundation, I noted in response to a question by a participant that the central challenge with Silicon Valley companies and their platforms is the significant variance with which they approach human rights standards. What was clearly not even remotely considered by the founder-CEOs at the time of product or platform launch is now integral to the health of the platform, its content, safety, security, civility, electoral integrity and the timbre of democracy.

And yet, the CRA – throughout the report but especially in the first half – suggests that the incorporation of normative civil rights standards and oversight, through staffing, financial and algorithmic resources, rests not with Zuckerberg but with Sheryl Sandberg, the COO of Facebook (not exactly a doyenne of civil rights). The CRA doesn’t make it explicit, but there is an enduring tension – put politely – between one person in charge of the company’s most consequential decisions (disinterested in civil rights) and another person in charge of overseeing civil rights investments. The CRA makes it very clear that recent hires and the onboarding of individuals, while significant developments, do not adequately address the problems the audit highlights. What it doesn’t flag is the enduring fiction or hypocrisy – which, unaddressed, gives the company wiggle-room to defend Zuckerberg – that the COO is capable of addressing issues pegged to, and stemming from, the nescience, naïveté or obstinacy of the CEO.

Reading the CRA, I realised the degree to which (class-action and other) lawsuits in the US have been an engine of policy revision and course-correction. This is a story in and of itself. Facebook is, by all available evidence, unwilling and unable to embrace civil rights fundamentals, including non-discrimination and anti-racism, without being taken to court and settling lawsuits. Significant policy and algorithmic decisions, including course-corrections, have only come about on account of settlements. This means that entities in the Global South, who do not have the legal resources or the financial means to hire lawyers to go up against the company’s own legal defence, are at a significant disadvantage (even compared to US counterparts) in forcing the company to be less discriminatory. In turn, this means the Global South is hostage to (and benefits from) US lawsuits and settlements. Prima facie, these are grounds for greater coordination and collaboration between advocacy and legal teams from the Global South and the likes of the ACLU.

The CRA doesn’t make it explicit, but it is very clear that bad-faith actors, including leading politicians and their proxies, will exploit every loophole and grey area in policy, or variance in implementation, to their advantage. On the very day of the CRA’s release, for example, the company was flagged for running an ad that amplified White Nationalism.

On page 28, the CRA notes that the company is – as part of its civic engagement practices – putting out Election Day reminders.

On Election Day and the days leading up to it, Facebook is reminding people about the election with a notification at the top of their News Feed. These notices also encourage users to make a post about voting and connect them to election information. Facebook is issuing these reminders for all elections that are municipal-wide or higher and cover a population of more than 5,000 (even in years where there are no national elections) and globally for all nationwide elections considered free or partly-free by Freedom House (a trusted third party entity that evaluates elections worldwide).

This is something Facebook first did in Sri Lanka on the day of the 2015 General Election. As I wrote in Social Media and Elections: Sri Lanka’s Parliamentary Election 2015,

For the first time, Facebook encouraged all its users based in Sri Lanka to update their status messages around the election. While opt-in (users could choose to ignore the Facebook prompt) this move by Facebook around voter mobilisation resulted in thousands updating their Facebook status to reflect the fact they were going to vote, had voted and various shades of political opinion. The status message also linked to the Election Department’s official website, which promptly crashed on account of the traffic. 
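Read literally, the eligibility rule the CRA describes reduces to a simple decision function. Here is a minimal sketch; the function and parameter names are mine, not Facebook’s, and the split between US and global elections is my reading of the report’s wording.

```python
# A literal reading of the CRA's eligibility rule for Election Day
# reminders (CRA, p. 28). Function and parameter names are illustrative
# assumptions; the US/global split follows my reading of the report.

def show_election_reminder(is_us_election: bool, scope: str,
                           population_covered: int,
                           freedom_house_rating: str) -> bool:
    """Decide whether an election qualifies for a News Feed reminder."""
    if is_us_election:
        # Municipal-wide or higher, covering more than 5,000 people,
        # even in years with no national election.
        return (scope in ("municipal", "county", "state", "nationwide")
                and population_covered > 5_000)
    # Globally: nationwide elections rated Free or Partly Free
    # by Freedom House.
    return scope == "nationwide" and freedom_house_rating in ("Free", "Partly Free")
```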

There’s a whole section in the CRA on political ads and their oversight. This is obviously a concern for the US moving into the 2020 Presidential Election. What’s baffling, though, is how platform and product abuse to seed and spread propaganda operates independently of the company’s existing policies and oversight capabilities. This propaganda, spread by proxies and through organic posts, is a problem growing at pace in Sri Lanka. I am entirely unconvinced we are the only country to face this. As I noted in Facebook’s Human Rights Impact Assessment (HRIA) on Sri Lanka: Some brief thoughts,

Facebook notes investments to mitigate voter interference. That’s not where the problem is in Sri Lanka. A presentation to the Centre for Monitoring Election Violence (CMEV) in November 2019, at the time of Sri Lanka’s last Presidential Election, and a dedicated Google Doc flagging the (ab)use of Facebook during the election campaign highlight where the problem-space really is today, and where it is heading. This also calls into question Facebook’s political ad archives and ad oversight because, inasmuch as I have studied it for doctoral research, propaganda generation and spread operates in a black economy entirely independent of, and with scant regard for, Facebook’s increasing oversight and regulation of campaign-authorised ads.

Pages 49–52 address harassment, and here again the CRA doesn’t tackle a tension I flagged on Twitter recently. One side of the company tries to address incitement to hate and violence, including by combating harassment. Another side of the company – more powerful – enables actors who either produce or provide a platform for abuse, violence and hate with the tools they need for greater engagement. There is clearly no coordination whatsoever between these disparate teams (in the case of Sri Lanka and, I’m sure, elsewhere too). This leads to the promotion of hate and violence across mainstream media properties the company has invested in to strengthen the reach of content.

This is a toxic economy, created and sustained by the company with the willing participation of journalists who prefer access (and its practical or perceived perks) over critical distance, probing questions and accountability.

The article by Jacob Silverman is essential reading. It is unclear how what the CRA is anchored in can be compatible with the corporate culture captured in the excerpts below, reflecting investments antithetical to the expansion or entrenchment of any rights discourse within the company:

Warzel compares the company’s mentality to that of an intelligence agency. “I have former Facebook sources who will tell me an interesting tip and then lament that they don’t know a single person who could possibly confirm this, even though these people would like to confirm this, because they don’t own a single device that Facebook couldn’t forensically tap into to figure out the source of a leak.”

Facebook hires ex–CIA agents for its security operations, says Newton. (BuzzFeed has also reported on Facebook’s hiring of former intelligence officers.) After he started doing critical reporting on the company, he went through his own information security training.

In 2016, after Nuñez published a Gizmodo article on political bias in Facebook’s trending-topics feature, every one of his Facebook friends who worked at the company was individually called into a room and interrogated by company staff. Private messages between Nuñez and his friends were read back to them.

“It’s really unfortunate because it seems there are employees at Facebook who genuinely have a conscience, a sense of moral and ethical obligations, and want to see the company adhere to that,” Warzel says. “Every big powerful organization leaks, and that’s a way of holding it accountable outside the walls of that company.”

Problems arising from the unwillingness of the company to give out operational details (even at an aggregate level) as they pertain to content moderation, oversight, hate speech detection, content removal and related issues in Sri Lanka have been flagged repeatedly.

This in turn highlights a central issue with the company’s PR machine which, through Zuckerberg, Clegg, Sandberg and others, constantly claims vast improvements in the detection of hate speech and automated content removal (even before user-generated reports). The claims follow a formulaic frame where {x}% of content is now claimed to be taken down or flagged, up from {y}% a few years ago. More granular detail is never given. The company knows full well that content production is increasing exponentially, which means that content in violation of Community Standards and Facebook policies, while perhaps a single-digit percentage, in numeric terms means millions of posts, comments, videos and viral engagements. There’s a parallel with India’s biometric ID system, Aadhaar, where a seemingly very low percentage of errors in reality means millions of citizens are impacted.

According to the study, 0.8 percent, 2.2 percent, and 0.8 percent of Public Distribution System (PDS) beneficiaries in rural Andhra Pradesh, Rajasthan, and West Bengal, respectively, are excluded from their entitlements due to Aadhaar-related factors. This extrapolates to about 2 million individuals a month. 

Via What you don’t know about Aadhaar. Facebook’s claims, which make it look very good on paper, do not take into account the impact of what still gets through. In a country like Sri Lanka, digital hate can lead to very real physical and kinetic consequences.
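To make the arithmetic concrete, here’s a back-of-the-envelope sketch. Every figure in it is an illustrative assumption, not a number Facebook has published; the point is that percentages which look excellent on paper still translate into enormous absolute volumes.

```python
# Back-of-the-envelope arithmetic: why a "low" violation percentage and
# a "high" detection percentage still leave a vast amount of harmful
# content in circulation. All figures are illustrative assumptions.

daily_items = 4_000_000_000   # assumed items of content created per day
violation_rate = 0.005        # assumed 0.5% violates Community Standards
proactive_detection = 0.95    # assumed 95% caught before anyone reports it

violating_items = daily_items * violation_rate
missed_items = violating_items * (1 - proactive_detection)

print(f"Violating items per day: {violating_items:,.0f}")       # 20,000,000
print(f"Missed per day at 95% detection: {missed_items:,.0f}")  # 1,000,000
```

Even with a 95% detection rate, the assumed numbers leave a million violating items a day untouched before a single user report is filed.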

The CRA (on pages 87–88) flags some really interesting developments around the oversight of content on the company’s increasingly end-to-end (E2E) encrypted platforms. AFAIK, these developments have not been revealed elsewhere by the company or in any reporting I’ve seen.

Although WhatsApp is already end-to-end encrypted and Messenger offers an opt-in end-to-end encrypted service, Facebook announced in 2019 that it plans to make its communication services, namely Messenger and Instagram Direct, fully end-to-end encrypted by default. To address concerns about shielding bad actors, Facebook indicates that alongside encryption, it is investing in new features that use advanced technology to help keep people safe without breaking end-to-end encryption and other efforts to facilitate increased reporting from users of harmful behavior/content communicated on encrypted messaging systems. More specifically, Facebook states that it is using data from behavioral signals and user reports to build and train machine-learning models to identify account activity associated with specific harms such as child exploitation, impersonation, and financial scams. When these potentially harmful accounts interact with other users, a notice will surface to educate users on how to spot suspicious behavior and avoid unwanted or potentially harmful interactions so that wrongdoers can be detected and people can be protected even without breaking end-to-end encryption. In addition, Facebook reports that it is improving its reporting options to make them more easily accessible to users by, for example, inserting prompts asking users if they want to report a person or content. Regardless of whether the content is end-to-end encrypted, Facebook permits users to report content that’s harmful or violates Facebook’s policies, and, in doing so, provide Facebook with the content of the messages. In other words, end-to-end encryption means that Facebook cannot proactively access message content on its own, but users are still permitted to voluntarily provide Facebook with encrypted content. This allows Facebook to continue to review and determine whether it is violating and then impose penalties and/or report the matter to law enforcement, if necessary.

This section addresses concerns around the (ab)use of the encrypted WhatsApp (and soon, perhaps, all Facebook products and platforms) to seed or spread hate and violence. Again, this could all be hogwash or, for years to come, more tuned to Western contexts and English-language content than to the myriad languages used elsewhere. Worth probing more and keeping an eye out for.
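For what it’s worth, the approach the CRA describes (classifying accounts from behavioural metadata and user reports, never from message content) might look something like the sketch below. Every signal name, weight and threshold here is an invented assumption for illustration, not anything Facebook has disclosed.

```python
# Hypothetical sketch of harm detection on an E2E encrypted service, per
# the CRA's description: the model sees behavioural metadata and user
# reports, never message content. All signals, weights and thresholds
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int     # very new accounts are riskier
    messages_per_day: float   # bulk messaging suggests spam or scams
    new_contact_ratio: float  # share of messages sent to non-contacts
    user_reports: int         # reports voluntarily filed by recipients

def harm_score(s: AccountSignals) -> float:
    """Combine metadata signals into a risk score in [0, 1]."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.2
    if s.messages_per_day > 500:
        score += 0.3
    score += min(s.new_contact_ratio, 1.0) * 0.2
    score += min(s.user_reports / 10, 1.0) * 0.3
    return min(score, 1.0)

def should_warn_recipients(s: AccountSignals, threshold: float = 0.6) -> bool:
    """Surface the CRA-described notice about suspicious behaviour to
    users who interact with a potentially harmful account."""
    return harm_score(s) >= threshold

suspect = AccountSignals(account_age_days=2, messages_per_day=1200,
                         new_contact_ratio=0.9, user_reports=4)
print(harm_score(suspect), should_warn_recipients(suspect))
```

In production this would presumably be a trained machine-learning model rather than hand-tuned rules, as the CRA states, but the salient design point survives: none of the inputs requires breaking encryption.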

The CRA ends on a note we can all agree with,

The Auditors believe it is imperative that Facebook commit to building upon the foundation it has laid. It is critical that Facebook not only invest in its civil rights leader (and his or her team), in bringing on expertise, and in developing civil rights review processes, but it must also invest in civil rights as a priority. At bottom, all of these people, processes, and structures depend for their effectiveness on civil rights being vigorously embraced and championed by Facebook leadership and being a core value of the company.

But as the authors repeatedly note throughout the report, it is one thing to flag concerns. It is quite another to get Mark Zuckerberg to do anything about them.

Plus ça change, plus c’est la même chose?