Interesting content on ICT4Peace

The pros and cons of crowdsourcing election monitoring

Sharek’s Katrin Verclas has written a great article looking at the pros and cons of crowdsourcing election monitoring, based on the recent experience in Lebanon.

I agree that crowdsourcing anything leaves much to be desired in terms of accuracy and in terms of producing information fit to feed into critical decision support processes. This is why the ICT4Peace Foundation is working on a crisis information management demonstrator, built on top of Ushahidi, with information qualification routines built in. The tool will not be for the masses, but for agencies with trusted networks of field personnel who will feed in information, while the system itself remains open to social media input that trusted agency personnel can vet. This allows the system to be wholly crowdsourced, à la the Lebanese model, or completely closed to those outside the trust network(s) of an agency or agencies working on a particular issue, in a certain region or towards a shared goal. The design also allows the system to be anything in between these two extremes, so that the key responders to a crisis can determine the best degree of openness. The important point is that even if different international and local agencies took different approaches to how public the system should be (i.e. how far it should extend to untrusted, initially unverified crowdsourced information), the common underlying information management architecture and standards would make for far greater and easier interoperability and information harmonisation.
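The openness spectrum described above can be sketched in code. This is a minimal illustration only, under assumed names (`Report`, `Deployment`, the trusted/crowd distinction as modelled here) — it is not the actual design of the demonstrator:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "anything in between" openness model:
# a deployment accepts reports from trusted field personnel directly,
# and optionally accepts crowd reports that must be vetted before use.

@dataclass
class Report:
    text: str
    source: str             # reporter identifier
    verified: bool = False  # information qualification status

@dataclass
class Deployment:
    trusted_sources: set = field(default_factory=set)
    accept_crowd: bool = True  # False = fully closed to the crowd

    def submit(self, report: Report) -> bool:
        """Accept a report; auto-verify if the source is trusted."""
        if report.source in self.trusted_sources:
            report.verified = True
            return True
        # Crowd input enters unverified, pending vetting by trusted staff
        return self.accept_crowd

    def vet(self, report: Report, vetter: str) -> None:
        """Only trusted personnel may mark crowd reports as verified."""
        if vetter in self.trusted_sources:
            report.verified = True

# Usage: a half-open deployment, between the two extremes
d = Deployment(trusted_sources={"monitor_01"}, accept_crowd=True)
r1 = Report("polling station closed early", "monitor_01")
r2 = Report("rumour of ballot stuffing", "sms_crowd_42")
d.submit(r1)              # trusted source: verified on entry
d.submit(r2)              # accepted, but unverified until vetted
d.vet(r2, "monitor_01")   # trusted personnel qualify the crowd report
```

Setting `accept_crowd=False` gives the fully closed, trust-network-only configuration; an empty `trusted_sources` with `accept_crowd=True` approximates the wholly crowdsourced Lebanese model.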

I’m interested in how Ushahidi’s evolving Swift River concept tackles this problem, which Paul Currion has succinctly and accurately expressed here.

Until there’s a better solution, I’ll still be training election monitors at the Centre for Monitoring Election Violence (CMEV) to enter data into Google Maps that is verified for accuracy in a timely manner. Right now, there’s no escaping the labour required for the task: each location and incident is entered into the map directly, and no automated source from the web is used to populate the maps. This helps us give as close to a real-time image of the ground situation in the lead-up to, and on the day of, an election — more useful, extensive feedback from local media tells us, than a mashup that just puts unverified reports on a map along with other data streams.

10 thoughts on “The pros and cons of crowdsourcing election monitoring”

  1. Thanks for this post, Sanjana.

    I think the pros and cons are perfectly well known and obvious by now, at least those listed in the MobileActive post. Nothing new there. This is why I appreciate your post: you are actively implementing projects and thinking practically about how to address and overcome some of crowdsourcing’s well-known challenges. Thank you.

  2. Hi, Sanjana — thanks for the post. Great that you are taking it all a step further. I disagree with Patrick — the issues with crowdsourcing are generally not acknowledged at all in the mainstream media amid the hype over crowdsourcing, particularly in elections. Patrick might know it all, but that is certainly not reflected in the mainstream press or in blog posts on crowdsourcing.

    Lebanon is interesting precisely because there are two parallel efforts: one using systematic election observation with trained local volunteers using SMS (similar, I think, to what you are thinking about), spearheaded by LADE; the other a crowdsourced system built on Ushahidi (which also aggregates LADE and news reports, incidentally). This allows, for the first time, a side-by-side comparison of the efficacy of each — and crowdsourcing in elections does not come out that well against systematic election observation by thousands of trained volunteers.

  3. Hi Patrick,

    Thanks for your kind comments. We are the *only* election monitoring body in Sri Lanka using Google Maps, Twitter, YouTube and, basically, Web 2.0 tools in pursuit of better governance. It’s a testament, then, to how difficult these tools are to adapt and use, especially for non-technical, non-English-speaking audiences. In this sense, I think we have a long way to go before crowdsourcing even amongst trusted networks is really embedded in the nature of the work you and I are familiar with through examples in the West, where romanic monolingualism plays a huge role in the adaptation and use of these ICTs.

    Dear Katrin,

    I think we all learn from the experience of others. You and Patrick are the shoulders those like us stand on to see farther, and I value both your experience and candour on your respective websites and blogs. I think you’re right – the experience of Lebanon suggests that placing crowdsourced information alongside vetted information can be a useful exercise, but again, these comparisons are contextual. I also wonder what you mean by efficacy: efficacy in reporting, or in addressing the problems flagged on the map?

    Our maps in Sri Lanka bear witness. They are a historical record, for the public and the voter, of the election malpractices of those who invariably end up getting elected to power. See India’s elections, for example, and how many candidates with a criminal record were elected to Parliament. Is this to suggest that Vote India, powered by Ushahidi, was a failure? I think not; but just as with Patrick’s other domain of significant expertise – early warning of conflict – there is still a great divide between knowing and doing something.



  4. Dear Sanjana and Katrin,

    Thanks for your follow up comments.

    Katrin, you’re absolutely right: I’m coming at this from the humanitarian and human rights angle, and these communities are particularly concerned about the potential pitfalls of crowdsourcing. Perhaps these concerns are not as well understood in the field of journalism.

    This does surprise me, however, since the challenges associated with crowdsourcing are largely self-evident and hardly difficult to understand. Obviously crowdsourcing presents a challenge for data validation, since anyone can report whatever they please. These issues are not new; the Wikipedia experiment raised these concerns years ago. And who in the mainstream media hasn’t heard of Wikipedia?

    In sum, crowdsourcing as a methodology is not new and has already been applied at different scales, albeit not widely in the humanitarian and human rights fields. So I’d give the media the benefit of the doubt a bit more, but then again, I’m no expert on media myself.

    This aside, I do find your other point (about there being two parallel efforts that for the first time allow a side-by-side comparison) fascinating. This is exactly the kind of set-up that opens up the possibility of very rich research. I hope you and/or other colleagues will jump on this quickly and produce in-depth research on the comparative case study.

    Dear Sanjana,

    Thanks for your kind words, but I’m no swimmer and my shoulders are unlikely to take the weight of even my little brother.

    “In this sense, I think we have a long way to go before crowdsourcing even amongst trusted networks is really embedded in the nature of the work you and I are familiar with through examples in the West, where romanic monolingualism plays a huge role in the adaptation and use of these ICTs.”

    I would caution against casting aside lessons learned and best practices from the field of conflict early warning simply because they come from projects that originated in the West.

    “Crowdsourcing amongst trusted networks” is absolutely no different to what the conflict early warning field has been doing for the past 20 years. You identify, recruit and train field monitors; formulate indicators; develop a code-book to maximize inter-coder reliability; and analyze the data to provide decision support and guide operational response.
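    The inter-coder reliability mentioned above is typically checked with an agreement statistic such as Cohen’s kappa, which corrects raw agreement between two coders for chance. A minimal sketch, with invented incident labels:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two monitors coding the same six reports against a shared code-book
a = ["violence", "fraud", "fraud", "intimidation", "violence", "fraud"]
b = ["violence", "fraud", "violence", "intimidation", "violence", "fraud"]
kappa = cohens_kappa(a, b)  # 1.0 would indicate perfect agreement
```

    A low kappa signals that the code-book definitions are ambiguous and that monitor training or indicator wording needs revision before the data can support decision-making.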

    There is a large amount of rich and valuable knowledge that was generated in the deployment of these early warning projects across Africa, Asia and South America, so I would not dismiss those lessons learned too quickly.

    The definition of crowdsourcing, strictly speaking, is the outsourcing of a task or tasks to an undefined, generally large group of people or community in the form of an open call. So Ushahidi in Gaza was not a crowdsourcing exercise per se; i.e., “crowdsourcing among trusted networks” is perhaps an oxymoron. The crowd is no longer the crowd, an undefined group of people, but rather a trusted network. I often refer to “UN-sourcing” for information collection systems that draw on UN personnel exclusively.

    So to summarize, “crowdsourcing among trusted networks” is nothing new; as a practice, this activity has been going on at the field level for decades. I would highly recommend the literature on famine early warning systems as an excellent source of rich and valuable information on dos and don’ts vis-à-vis developing a trusted network for the reliable collection of information. In terms of the conflict literature, reading through reports on third- and fourth-generation early warning systems would be worthwhile.

  5. Patrick and Sanjana — yes, we are looking closely at the data from Lebanon to do some in-depth comparisons. There is still some fallout from the election, so it might take a few more days before we have all of the data, but it’ll be fascinating to look at. If you have specific questions you’d like to see addressed, let me know.


  6. Great, great conversation. I would just like to contribute two small points:

    – There is nothing “obvious” about crowdsourcing; I personally find it quite wonderfully strange.

    – Any tool that can be used by an anonymous crowd can also be lovingly used by an elite cadre of trained personnel. Password protection is just a couple of lines of code.
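    The “couple of lines of code” point can be illustrated with a minimal gate in front of a report-submission function. Everything here (the passphrase, the function and variable names) is invented for illustration, not taken from any actual deployment:

```python
import hashlib
import hmac

# Hypothetical: restrict an otherwise-open submission endpoint to a
# trained cadre by checking a shared secret before accepting a report.
SECRET_HASH = hashlib.sha256(b"monitor-passphrase").hexdigest()

def submit_report(text: str, password: str, inbox: list) -> bool:
    """Accept the report only if the password matches the stored hash."""
    offered = hashlib.sha256(password.encode()).hexdigest()
    if not hmac.compare_digest(offered, SECRET_HASH):
        return False  # anonymous crowd: rejected (or routed to vetting)
    inbox.append(text)
    return True

inbox = []
submit_report("ballot box tampering at station 12", "monitor-passphrase", inbox)
submit_report("spam", "wrong-password", inbox)  # rejected
```

    In other words, the choice between an anonymous crowd and a closed trusted network is a deployment decision, not a property of the tool itself.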

  7. @Chris, how is crowdsourcing wonderfully strange? It’s about democratizing information. It may produce strange (and wonderful) results, but I don’t find crowdsourcing itself strange or non-obvious.

    Again, I say this pointing to Wikipedia. At the time it was strange, and, as the founder said, it was something that “ended up working in practice, but we weren’t sure how it was working in theory.”

    There have been so many studies on crowdsourcing vis-à-vis Wikipedia that I do think we understand the pros and cons a lot better than we did. That doesn’t preclude the possibility of wonderfully strange results, but nor is crowdsourcing a black box.

  8. Dear all,

    I wonder if any of the crowdsourced information and resulting datasets provided early warning of the post-election scenario in Iran? To a degree, that’s an unfair question, because the actions of the regime may not be linked to verifiable facts on the ground. The resulting disconnect is a big problem for early warning in general, and not unique to Iran. Nevertheless, did the crowdsourcing *during* the election suggest malpractices that could have supported an analysis that the vote was rigged?

    Following on, were the systems set up for election monitoring useful in the post-election swarming against the regime?

