I was recently in Myanmar, where at the invitation of the Myanmar Information Management Unit (MIMU), I conducted an informal presentation on hate and dangerous speech monitoring and counter-speech strategies, as well as on social media strategies during and in response to elections.
In a subsequent conversation with someone from the excellent Phandeeyar initiative based in Yangon, we discussed which counter-speech strategies over social media could be most effective in curtailing the rise of expression that inflames communal tensions, where virtual content contributes over time to physical violence.
I noted the following points during the conversation, based on work in Sri Lanka monitoring hate and dangerous speech on Facebook in particular through the Centre for Policy Alternatives (Saving Sunil: A study of dangerous speech around a Facebook page dedicated to Sgt. Sunil Rathnayake, 2015 and Liking violence: A study of hate speech on Facebook in Sri Lanka, 2014).
Demographics are important: Youth (those between 18 and 24 in particular) risk radicalisation upon entering and engaging with online and mobile chat-based fora. To appeal to this segment, iconic figures popular with youth (singers, actors, sportspersons, YouTube producers, hackers, IT industry leaders, young entrepreneurs) are more important to leverage in counter-speech initiatives than, say, expressions from or iconography based around the dhamma. Combined with geo-targeting, those held in high regard by this age group in local communities (ranging from monks at a community temple where this segment has gone for tuition or Sunday school, to local business owners) can also be leveraged. The emphasis is on identifying influencers within that demographic, and further, by geography.
Geo-targeting/geo-fencing: Easily done on Facebook, counter-speech content (ranging from pages to specific posts) can be targeted to specific regions, at specific times, for specific communities. Wide-scatter promotions simply don’t work: they either display on the screens of those already partial to the counter-speech content, or appear only sporadically on the screens of those for whom it is most relevant. The larger an audience’s terrain, the greater the emphasis should be on geo-fencing counter-speech content. For example, during an election, constituencies that have witnessed heightened communal or partisan violence can be targeted well before election day with counter-speech messaging to prevent the spread of rumours and other inflammatory content.
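As a rough illustration of how narrow such targeting can be made, here is a minimal sketch of the kind of targeting specification a promoted post or ad set can carry on Facebook’s Marketing API. The field names mirror that API’s targeting spec, but the city key, radius and age band used here are hypothetical placeholders, not real values:

```python
def build_targeting_spec(city_key, radius_km, age_min, age_max):
    """Build a Marketing API-style targeting spec that geo-fences
    promoted counter-speech content to one city and one age band.
    All values passed in are illustrative, not real identifiers."""
    return {
        "geo_locations": {
            "cities": [
                # `key` is Facebook's opaque city identifier;
                # "HYPOTHETICAL_CITY_KEY" is a placeholder only.
                {"key": city_key, "radius": radius_km,
                 "distance_unit": "kilometer"},
            ],
        },
        "age_min": age_min,  # e.g. 18 -- the youth segment noted above
        "age_max": age_max,  # e.g. 24
    }

spec = build_targeting_spec("HYPOTHETICAL_CITY_KEY", 25, 18, 24)
```

Such a spec would then be attached to the promotion of a counter-speech page or post, so that it surfaces only for the community, place and age group it was crafted for.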
Language: In a multi-lingual country like Sri Lanka, counter-speech is largely ineffective if it isn’t conducted in the language that dangerous and hate speech fora use in their interactions. Hate and dangerous speech on Facebook in Sri Lanka is almost exclusively in Sinhala, and counter-speech initiatives in either English or Tamil have no relevance or traction. Iconic counter-speech examples like Panzagar in Myanmar can be very effective, since they transcend the barriers of language. Short-form video can also be a powerful vector for counter-speech to reach target audiences, without necessarily being anchored to a single language.
Translation: This is not as easy as it sounds. Good translations that communicate ideas and meaning are hard to come by, and good translators (at least in Sri Lanka) are generally over-worked. Idioms, nuances, aphorisms and adages differ between languages, and translators fluent both in the language counter-speech content was originally produced in or for, and in the language into which it will be translated, are very hard to find.
Time: Counter-speech is a long-term process, and timing matters in terms of what is expected as a result. Counter-speech to address and reduce electoral violence requires a different timeline to content that seeks to address deep-rooted communal or religious tension. Project-oriented counter-speech campaigns, often driven by relatively short-term funding opportunities, are frequently too brief for any meaningful impact.
Engagement more than likes: Facebook counter-speech initiatives often erroneously read the number of likes as a measure of popularity and reach. Firstly, Facebook likes are a misleading metric, since they can be very easily manipulated. Secondly, engagement accounts for readers who have commented on articles and/or shared them, with or without comments, on their timelines. As we’ve seen in Sri Lanka, engagement on dangerous and hate speech fora on Facebook is much higher than on counter-speech fora and posts on the same platform, which is revealing.
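One way to operationalise “engagement more than likes” when monitoring pages is to score posts with comments and shares weighted above likes. A minimal sketch follows; the weights are arbitrary assumptions for illustration, not values drawn from the CPA studies:

```python
def engagement_score(likes, comments, shares,
                     w_like=1.0, w_comment=2.0, w_share=3.0):
    """Weighted engagement score for a post. Comments and shares
    count for more than likes, since likes are easily manipulated;
    the specific weights here are illustrative assumptions."""
    return w_like * likes + w_comment * comments + w_share * shares

# A post with modest likes but heavy commenting and sharing can
# outrank a like-heavy post with little discussion around it.
discussed = engagement_score(likes=50, comments=40, shares=60)   # 310.0
like_farm = engagement_score(likes=300, comments=2, shares=1)    # 307.0
```

Ranking monitored posts by such a score, rather than raw likes, gives a truer picture of which content is actually circulating and being argued over.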
Reasons for (social media) engagement: Counter-speech proponents need to do far more, and better, research into why, and at what times, hate and dangerous speech content is produced and received with high levels of engagement. What drives the production cycles? Are there links to key political or cultural events? Is there a connection between the utterances of key individuals and the production of hate speech in online fora? Is there a connection between the speeches of political groups, politicians, religious leaders or other individuals and the engagement online using dangerous speech? Does hate speech increase in the lead up to an election, and if so, at what key points?
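Several of these questions, such as links to key events and spikes in the lead-up to an election, can be probed with simple descriptive statistics over post timestamps. A minimal sketch, assuming we already hold the dates of monitored hate-speech posts and the date of an event of interest:

```python
from collections import Counter
from datetime import date, timedelta

def window_vs_baseline(post_dates, event_day, window_days=7):
    """Compare mean daily hate-speech post volume in the `window_days`
    before `event_day` with the mean over the other observed days.
    Returns (window_mean, baseline_mean); a window mean well above
    the baseline suggests a link between the event and production
    cycles, worth investigating qualitatively."""
    counts = Counter(post_dates)  # posts per calendar day
    window = {event_day - timedelta(days=i)
              for i in range(1, window_days + 1)}
    # Window mean includes zero-count days inside the window.
    in_window = [counts.get(d, 0) for d in window]
    baseline = [counts[d] for d in counts
                if d not in window and d != event_day]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(in_window), mean(baseline)
```

This is deliberately crude: it says nothing about causation, only whether production clusters around an event tightly enough to merit closer study of who posted what, and why.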
Law of diminishing returns: Remember the Kony 2012 video? Remember its follow-up? Love it or hate it, many will remember the short film produced by Invisible Children in 2012, but not its sequel. While the first film is now a study in the generation of viral content online, the lesson is also that there is no guarantee that what worked, even very well, once will generate the same metrics over subsequent attempts. There are some interesting studies on engineering virality for web content, which suggest that,
…if you’re trying to create content that will make a big splash, making the message positive is likely to help, and emotionality is key. Of course, more interesting, practically useful and surprising content is also more likely to go viral.
Counter-speech proponents need to look at this research in greater detail, aiming to create content that doesn’t just go viral once. They should also keep in mind that content addressed to the same demographic will, unless very inventive, generate progressively less interest and interaction over time. Too high a frequency of content production risks the perception of counter-speech as spam, whereas too infrequent production risks ineffective audience engagement. Context is critical to content.
Linked to the discussion above was a question posed to a colleague who attended a workshop on hate and dangerous speech in Brazil recently, held on the margins of the Internet Governance Forum (IGF).
Based on the SL experience, what is the best approach to counter hate speech?
I noted the following,
- Study the generation and spread of hate and dangerous speech by spoilers and other groups who are the lead architects of discord
- Demographics – carefully target those who haven’t yet been radicalized by their entry and participation in known FB groups that incite hate
- Geo-targeting – Locate and address cities, provinces and locations (i.e. Aluthgama) that have historically had a propensity to act on content that is digitally produced and disseminated
- Language – Use effective means through which to craft and communicate counter-speech (i.e. the use of slang and even baila)
- Make sure the counter-speech is localized and appeals to the target audience(s) in terms of optics
- On Facebook, counter-speech pages, groups and accounts must focus on engagement more than likes
- Constantly examine reasons for engagement and try to strengthen known drivers to enhance reach
- Encourage leading social media companies like Facebook to invest more in the algorithmic or machine examination of content posted.