In early 2012, Patrick Meier emailed me (and I think a few others) asking the following questions: If you had some of the most cutting-edge software developers at your disposal and funding were not an issue, what major software/computing innovations would have the greatest impact on disaster-affected communities and humanitarian response? What are the most important gaps in humanitarian technology? What software challenges, if any, do you face in your own humanitarian work?
I rediscovered my email to him today, in light of a discussion I had in New York with the United Nations. Enough time has passed to publish what was at the time a bilateral exchange. I wonder how many of these issues remain valid today, and how many will continue to shape the way we work in the years to come.
Machine translation and semantics
https://ict4peace.wordpress.com/2010/03/03/real-time-machine-translation-the-present-and-future/ and https://ict4peace.wordpress.com/2010/03/09/machine-translation-for-peacebuilding-and-conflict-transformation/ demonstrate how far machine translation has come in just a few years. This is especially pertinent because non-English (script and language) data flows during most disasters (political or natural) eclipse the English you and I would be familiar with. During Strong Angel III we were shown a real-time translation of a TV broadcast – https://ict4peace.wordpress.com/2006/08/25/strong-angel-iii-real-time-broadcast-video-translation/ – but the technology still has some way to go before it can pick up the nuances so vital for aid coordination during a crisis. I strongly feel, however, that NLP will play an increasing role in aid systems, and it appears the US government (for parochial reasons) is getting into the act in a big way too – http://www.motherjones.com/politics/2012/04/fbi-twitter-data-mining – along with the phenomenal work (which I believe you’ve covered on your own blog) of Recorded Future (https://www.recordedfuture.com/2012/04/04/rise-of-the-muslim-brotherhood-in-context-of-the-egyptian-revolution/), not so much for their platform as for their underlying analysis engine. And in the field of semantics, platforms from Cognition (http://cognition.com/) to Wolfram Alpha (powering Siri) are changing the way we interact with the web, moving beyond strict Boolean logic. Can these new interfaces be applied to humanitarian platforms?
GAP: The ethics of data

Not enough conversation happens around the ethics of data generation, sharing, use and archival. https://ict4peace.wordpress.com/2007/10/30/humanitarian-information-systems-ethics-information-protection-and-information-dna/ and the more detailed https://ict4peace.wordpress.com/2006/10/15/how-much-information-should-we-share-in-peacebuilding-and-humanitarian-operations/ (with my post-tsunami experience) are early cracks at the problem, and along with more recent and in-depth writing on the use of Big Data (A brief exploration of Open and Big Data: From investigative journalism to humanitarian aid and peacebuilding), deal with an issue I feel is often underplayed. Intertwined with questions of privacy, safety and security, the ethics governing the use of crowd-sourced information are put on hold for what are called more immediate needs – but left unaddressed, they can increase the risks faced by already vulnerable communities. Lives saved during a disaster, ironically through the appropriation of information generated by those very communities, could lead to lives lost to civil strife within repressive regimes.
Augmented reality

Experiments like http://vimeopro.com/msradesignteam/portfolio/video/39564783 are hugely interesting, though field utility is likely five to ten years away. The whole gamut of physical sensors interacting with virtual design elements to influence data representation is a model of thinking that can nevertheless deeply inform the design and deployment of humanitarian aid dashboards. Google’s Project Glass (https://plus.google.com/111626127367496192147/posts) obviously also holds promise at the field level for aid workers unfamiliar with the terrain; it is the most compelling vision to date among the many augmented reality platforms and apps already present and working on Android and iOS. I actually started talking about the use of augmented reality for humanitarian aid six years ago – see https://ict4peace.wordpress.com/2006/11/20/mobile-phones-augmenting-reality/ – which I followed up in 2009 when Layar came to my notice: https://ict4peace.wordpress.com/2009/06/20/layar-augmented-reality-through-mobiles-in-amsterdam/. I don’t know where Nokia’s at; Layar itself has gone through many iterations since.
Grassroots / citizen mapping
It may not be the case in every place and context, but essentially the technologies and tools for citizen mapping will grow. Products like http://www.event38.com/ProductDetails.asp?ProductCode=E382 will increasingly become hobbyist kits, complementing the kind of work done by grassroots mappers around the Gulf Oil Spill. Our view of the world is going to be increasingly plural – no one view will easily dominate another, with the technology and tools to complement, confirm and contradict ground realities in the hands not just of governments but of ordinary people too. Perceptions of places from Kibera to New York will change as a result, and this neo-geography will also inform identity – the sense of location within a society and community. From crisis to governance, these tools will play an increasing role.
Information curation

In a blog post of yours from a while ago, you wanted a red-button application for citizen journalism (http://irevolution.net/2010/05/02/future-of-news/). Now there’s one: http://mashable.com/2012/05/03/instagram-citizen-journalism/. Of course, the Gulf Oil Spill resulted in a similar app – https://ict4peace.wordpress.com/2010/06/02/oil-slick-reporting-through-mobiles/. Along with the likes of Google+ for iOS and Facebook Timeline, platforms like https://wavii.com/ and the really interesting http://bottlenose.com/home mean we are looking at, interestingly, the fracturing of a key need: personal information curation. Really complex algorithms sit behind each of these platforms and apps, and their potential to be deployed in, and adapted for, the peaks of information generation during a crisis is as yet untested.
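To illustrate (not reproduce) the kind of curation these platforms perform, here is a deliberately toy sketch: drop near-duplicate messages, then rank what remains by crisis relevance. The keyword list, the Jaccard duplicate check and the similarity threshold are all my own illustrative assumptions, far simpler than anything the platforms above actually run:

```python
from collections import Counter

# Purely illustrative vocabulary; a real system would learn this, not hard-code it.
CRISIS_KEYWORDS = {"flood", "trapped", "shelter", "water", "injured"}

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def jaccard(a, b):
    """Set similarity of two token lists; a crude near-duplicate check."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def curate(messages, dup_threshold=0.8):
    """Drop near-duplicates, then rank survivors by crisis-keyword density."""
    kept = []
    for msg in messages:
        toks = tokenize(msg)
        if any(jaccard(toks, tokenize(k)) >= dup_threshold for k in kept):
            continue  # near-duplicate of something already kept
        kept.append(msg)

    def salience(msg):
        counts = Counter(tokenize(msg))
        return sum(counts[k] for k in CRISIS_KEYWORDS)

    return sorted(kept, key=salience, reverse=True)

feed = [
    "Family trapped by flood water near the bridge",
    "family trapped by flood water near the bridge!",  # retweet-style duplicate
    "Lovely sunset this evening",
    "Shelter needs drinking water urgently",
]
curated = curate(feed)  # duplicate dropped, crisis reports ranked first
```

Even this toy version shows why the problem is hard: the duplicate check throws away confirmation value, and the keyword list encodes someone's assumptions about what matters during a crisis.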
Visualisation and mobiles
Can we do for missing-persons registries and conflict drivers what http://liveplasma.com does for books and music? Can we deliver remote field intelligence to mobiles, so that what is shown on thin-client apps in the field is geo-fenced, information-rich, bandwidth-frugal, contextual, updated, interactive and accountable? The components – NFC, geo-positioning, data transmission via SMS, smart devices, multiplatform apps – all exist; no one is really putting them together in the same ecosystem to create an HQ-to-remote-aid-worker ERM system of sorts. Technically it can be done; what it needs is political vision and drive.
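As a rough sketch of the geo-fenced, bandwidth-frugal side of this idea – where the report format, field names, radius and 140-character trim are all invented for illustration, not taken from any existing system – an HQ server could filter and slim down field reports before pushing them to a thin client:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_reports(reports, centre, radius_km, max_items=5):
    """Keep only reports inside the fence, nearest first, trimmed for bandwidth."""
    lat, lon = centre
    inside = [r for r in reports
              if haversine_km(lat, lon, r["lat"], r["lon"]) <= radius_km]
    inside.sort(key=lambda r: haversine_km(lat, lon, r["lat"], r["lon"]))
    # A bandwidth-frugal payload: only the fields a thin client needs,
    # with the free text trimmed to SMS-friendly length.
    return [{"id": r["id"], "lat": r["lat"], "lon": r["lon"],
             "text": r["text"][:140]}
            for r in inside[:max_items]]

reports = [
    {"id": 1, "lat": 6.93, "lon": 79.85, "text": "Road blocked near Colombo Fort"},
    {"id": 2, "lat": 7.29, "lon": 80.63, "text": "Shelter open in Kandy town hall"},
]
# An aid worker near Colombo sees only what is within 25 km of them.
nearby = geofence_reports(reports, centre=(6.92, 79.86), radius_km=25)
```

The hard part is not this filtering but everything around it: keeping the fence updated as the worker moves, degrading gracefully to SMS when data coverage drops, and making the trail of what was shown to whom accountable.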
GAP: Archival, both the thinking and the tools
I first wrote about the problems of digital archiving in 2006 – https://ict4peace.wordpress.com/2006/06/30/ted-videos-and-digital-archiving/. The problem is growing, and fast. We have already lost, irrevocably, so much of the data produced during disasters over just the past three to four years. Given the pace at which information generation during and immediately after a disaster is increasing, the sheer technical challenges involved in archiving this information for posterity are significant – never mind the challenges of data governance, use, and archival standards and media.
In 2006 I came up with six mantras – https://ict4peace.wordpress.com/2006/04/30/technology-for-humanitarian-aid-6-mantras/. They remain valid today and will, I submit, also be valuable into the future. And please, more FailFaires – https://ict4peace.wordpress.com/2010/03/25/learning-from-failure-failfare/. The marketing around specific platforms, apps and tools is already nauseating, because so much of it is disconnected from the more humbling ground realities. If we want a better future, let us start with our failures today.