Information visualisation through Microsoft Photosynth: Potential for human rights documentation?

The video below really says it all. Sadly, Photosynth does not yet run natively on a Mac, but the concept behind this information visualisation is astounding. 

I’ve been following Photosynth’s development for a while (this TED video is a very early version – the programme now has more models and more features) and the potential this already demonstrates to change the way we see and manage digital visual data is quite remarkable. 

Just imagine how useful this technology would be in documenting sites of genocide, human rights violations or simply neighbourhoods, places and communities at risk. A major problem in field-level HR monitoring and violations logging is the lack of precise geographical coordinates for incidents (see earlier post on Geo-location and human rights) as well as, in many instances, a total lack of visual documentation.

A system that integrates Photosynth into its location database could, over time, create powerful visualisations of location data (from maps of the physical environs to complex walk-throughs of incident locations) that help in HR protection, advocacy, activism and even legal proceedings.
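As a rough illustration, the location database such a system rests on could be as simple as two linked tables: incidents pinned to coordinates, and the geotagged photos that document them. A minimal sketch in Python with SQLite follows; every table and column name here is hypothetical.

```python
# Hypothetical schema sketch: incidents with coordinates, plus the
# geotagged photos that document them. All names are illustrative.
import sqlite3

conn = sqlite3.connect("hr_locations.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS incident (
    id          INTEGER PRIMARY KEY,
    description TEXT,
    occurred_on TEXT,   -- ISO 8601 date
    lat         REAL,   -- decimal degrees
    lon         REAL
);
CREATE TABLE IF NOT EXISTS photo (
    id          INTEGER PRIMARY KEY,
    incident_id INTEGER REFERENCES incident(id),
    path        TEXT,   -- file path or photo-sharing URL
    lat         REAL,   -- from the photo's own geotag
    lon         REAL
);
""")
conn.commit()
```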

Using devices like an iPhone (which geotags the photos it takes), the Ricoh 500SE or a product like Eye-Fi Explore, or even by geo-coding photos on Flickr (which works well for batches of older digital photos or those that have been scanned in), you can gather a wealth of image data to buttress other event and process data related to HR abuses, building databases of great depth and scope.
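Whatever the source, the geotag travels inside the image file's EXIF metadata, so harvesting coordinates for a database like the one sketched above is straightforward. Here is a minimal sketch using Python and the Pillow imaging library; the photo path is illustrative, and older cameras may store the GPS values in slightly different forms.

```python
# Minimal sketch: read a photo's GPS geotag out of its EXIF metadata
# with Pillow. The photo path below is illustrative.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_coordinates(path):
    """Return (latitude, longitude) in decimal degrees, or None."""
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    # Find the GPSInfo block among the raw EXIF tags
    gps_raw = next(
        (v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None
    )
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms):
        # EXIF stores degrees, minutes, seconds as three rationals
        d, m, s = (float(x) for x in dms)
        return d + m / 60.0 + s / 3600.0

    lat = to_degrees(gps["GPSLatitude"])
    lon = to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLatitudeRef") == "S":
        lat = -lat
    if gps.get("GPSLongitudeRef") == "W":
        lon = -lon
    return (lat, lon)

print(photo_coordinates("incident_photo.jpg"))
```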

This is something I’ll be both looking at closely and pursuing in my own work.

7 thoughts on “Information visualisation through Microsoft Photosynth: Potential for human rights documentation?”

  1. A while back, I forget where, I read about a prototype mobile phone application which enabled phones to orient themselves to one another using BT/GIS.

    It was touted as some civil-society, peer-to-peer counter-surveillance system for use in collating media taken from multiple sources at a single event. The idea was that the phone would record not just the x-y position but also the rotation, aspect and direction of the camera as it took a photo.

    This would, apparently, attach enough data to each picture to assemble a three-dimensional diorama of an event, similar to Photosynth (a rough sketch of such a pose record appears after these comments).

    Wonder if it ever got off the ground, or died under its own geekery.

  2. Will attempt to find the article. Fascinating idea, though it fails on your lowest-common-denominator-for-mobile-phone-apps test.
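A minimal sketch of the per-photo pose record described in the first comment, with purely illustrative field names:

```python
# Illustrative sketch of a per-photo pose record: position plus
# camera orientation, enough to place each shot in three dimensions.
from dataclasses import dataclass

@dataclass
class CameraPose:
    lat: float        # x-y position, decimal degrees
    lon: float
    heading: float    # compass direction the lens faced, in degrees
    pitch: float      # tilt up or down, in degrees
    roll: float       # rotation about the lens axis, in degrees
    taken_at: str     # ISO 8601 timestamp, to sync multiple sources
```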
