In early April this year, I was invited to deliver a keynote address at the inaugural Build Peace conference, held at MIT Media Lab. My presentation was given, almost to the day, twenty years after the Rwandan genocide. Just before I went on stage, we all observed a minute’s silence to remember the victims.
The genocide wasn’t on my mind when I made the slides for the presentation, but the commemoration brought into sharp focus the thrust of my submission, which was to look ten to fifteen years into the future at how information and communications technologies (ICTs) would feature in both the genesis and inflammation of complex political emergencies (CPEs) – in other words, violent conflict – as well as aid peacebuilding and conflict transformation.
I knew when preparing for the keynote that the previous speakers, discussions and breakout groups would showcase and debate what had happened over the past couple of years around the adoption and adaptation of ICTs for peacebuilding – what was, for most today, the state of the art in using ICTs for conflict transformation. Since before my Masters thesis in 2004, and through related research grounded in my technical input to a real-world, high-level conflict negotiations process, I have submitted that ICTs would invariably change the praxis and theories of conflict mitigation and transformation.
I was, and remain, deeply skeptical about fourth-generation early warning architectures – less around recent and rapid advances in, say, algorithmic filtering and predictive analytics around big data, and more around the enduring and often tragic lack of any political will to act on warnings, which technology has been unable to influence.
My presentation wasn’t about the present. It was about how, with the evolution of technology, violent conflict itself, as well as the frameworks around its transformation, would change. Given the commemoration of the Rwandan genocide, and just as I was ascending the stage to deliver the keynote, I could not help but wonder – for all the apps, services, websites, tools and platforms showcased during the workshop, could any of us be certain that another genocide could be prevented, or acted upon more quickly?
I began by briefly touching on my own experience with technology for peacebuilding, stretching over 12 years, starting with input into the technical architectures of the One-Text negotiations process in Sri Lanka, the setting up of Groundviews, and examples like 30 Years Ago, which uses web media to interrogate the violence around Sri Lanka’s anti-Tamil pogrom in 1983. In outlining my vision for the future I noted two caveats – the future is never really what we imagine it will be, and attempting to envision or outline it requires us to also acknowledge that we are products of our own ‘filter bubbles’, a term derived from Eli Pariser’s book and TED Talk.
I didn’t talk about a single technology by name in my presentation, opting instead to look at ICTs in peacebuilding from a meta or conceptual level. The first concept was that of radical inclusion, where, like it or not, voices hitherto at the margins, on the periphery or violently erased could and would record their stories, and disseminate them to a wider public, through a range of media. I anchored this concept to three examples from Sri Lanka – how citizen journalism and civic media had helped bear witness to and flag the murder of Ganesan Nimalaruban, the abduction, even post-war, of thousands of citizens, largely Tamil, and the plight of IDPs whose lands had been appropriated by the State. Here were stories, I submitted, that wouldn’t have been recorded were it not for a range of advances in technologies to record, disseminate, archive and engage. I argued that whereas in the past a conflict negotiations process was anchored largely to the politics and optics of inclusion, technology’s development would see this focus change to the management of exclusion, in a context where everyone – spoilers included – had a voice as powerful as those of the actors involved in a negotiations or peacebuilding process.
I then looked at the development of algorithmic analysis and modelling of conflict in particular, but society and polity in general, which I called a new math of discrimination. Again drawing from Pariser’s seminal work in this regard, but also looking at access to the web from a rights perspective, I noted that how we experienced the web, and how our actions on the web and Internet were recorded, were governed by architectures of power and control invisible to us, residing not just in (illiberal) governments, but more often in multinational corporations. I posed the question of how, in the future, someone utterly dependent on the web and Internet for basic needs, and discriminated against as a consequence of purchasing, viewing, browsing or downloading habits, could critically question the algorithms behind the targeting.
If anyone doubts the power of algorithms to shape our world, spend fifteen minutes watching Kevin Slavin’s TED Talk. What futures are we allowing corporate math to decide for us?
I then looked at the Internet of Things (IoT), and how what I called an ‘addressable world’ – including both the animate and inanimate – would change our interactions within and between, for example, communities, networks and identity groups. I have for a while followed in earnest the development of the game Watch Dogs, and posited the game’s unique narrative as one that wasn’t entirely removed from a very real future scenario. I asked the question, when does intelligence turn into surveillance? I asked how one could even remotely maintain control over privacy within an ecosystem of competing owners, location sensors, proxy indicators, sentient nodes, ambient observation and pervasive automation.
On Slide 21, I posited key challenges from a rights and ethical perspective around the IoT, anchored to who controlled it. I submitted that no one at the workshop – which included some of the world’s leading minds – and few outside it had even begun to think about the implications the IoT would have for conflict transformation, including, for example, systemic (IP-level) conflict resolution between networks, the possibility of low-level network failure leading to a cascading failure of higher-level essential services, and a new discrimination between those who could afford to tailor the IoT to their needs and those held hostage to it.
I went on to speak about the privatisation of information on the web, including in peacebuilding domains. I noted that current conflict transformation models largely look at conflict through a liberal democratic lens, or some form of an inclusive, participatory, discursive model of systemic transformation, and argued that the corporate ownership of content could mean that, in the future, shareholders of key companies would have more power and control over public domain information than any government or transnational authority like the UN.
In discussions with close friend and tech visionary Ruha Devanesan leading up to the conference, we concurred that it was important to underscore the importance and existence of ICT innovation and intellectual resources within contexts of violent conflict. We were both concerned about the articulation of ICTs for peacebuilding as a new ‘white man’s burden’ – where the West was the sole repository of knowledge, innovation and technologies for conflict transformation.
Here I pitched a novel idea – the setting up of a Peace Tech Corps, on the lines of the hugely influential and valuable Peace Corps. In addition to the focus on ICTs for peacebuilding, I submitted that the enterprise could be a South-South exchange, focusing on the innovation, knowledge resources and experience of those who had lived in, come from and fought against violence, to help others in similar circumstances. In a nod to the compelling iHub concept, I also called for the establishment of tech incubators for peacebuilding.
My submission then focused on how peace negotiations would change as a consequence of the evolution in ICTs. I wondered what, for example, the future of the Chatham House Rule would be in a world of wearable, digestible, biologically implanted, omnipresent computing. Slide 32 reflected peace negotiations as they stand today – closed-door, high-level meetings attended by an elite, from which the majority of society and even polity is excluded. Vint Cerf, just before Build Peace, appeared on a very interesting Google Hangout, and spoke towards the end of a future wherein fridges, connected to the Internet, could be used in an attack against the Bank of America. I went further, and asked what implications a DDoS attack using devices connected to the IoT could have for the information backbone and communications around a complex peace negotiations process. Instead of just leaving it at that, I also asked participants to visualise a future where the first building blocks of inter- and intra-communal understanding could come, at an IP or systemic level, from our fridges and TVs exchanging, respectively, what the ‘other’ community ate and saw. The banality of reality TV and utterly unhealthy food consumption, seen through TVs and home appliances that communicated the very ingredients of life as it were, could help raise awareness of commonalities, shared hopes and desires, through family recipes, common ingredients and bad TV.
Could there be an IP range, for the IoT, dedicated to peace through peaceful means?
In contrast to the negotiations at the UN Security Council, or even the Belfast Agreement at Stormont Castle, I looked at the rapid evolution of life-logging, and how a myriad of such content streams – one, theoretically, for each person on the planet – would fundamentally revise and shape conflict transformation processes. I asked how the realities of violent conflict, streamed live to millions of devices from multiple perspectives across a range of media, could help or hinder a negotiations process. This, I said, was the future of Big Data, going beyond the merely episodic to a continuous, live, ever-increasing stream of feeds whose growth we can already see today.
Fundamentally, I asked, how would technology change the way those born into the world today – digital natives – perceived justice?
I ended by looking at why technology in peacebuilding matters more than marketing spiels, snazzy presentations or visually compelling apps. I noted that the focus and intent of technology matter, and that it must always be directed towards the strengthening of dignity and be used within an ethical framework.
In painting a largely dystopian future, with open challenges to use the evolution of ICTs to strengthen peacebuilding and conflict transformation, I closed by appropriating Browning’s verse, calling upon participants to think beyond what they had seen, debated and imagined over the duration of Build Peace, and to anchor intellectual and technology development to what must be, yet still is not.
My presentation can be downloaded as a high-quality PDF from here, or viewed below.
A video of my submission, recorded live, can be seen below.