Is technology neutral?


Suzanne Mikawa asks a fundamental question here:

Is technology neutral?

As we have seen, technology can be neutral, and as you mentioned, “technology can act as a catalyst to engender trust” in a post-disaster situation. It is also clear that in many situations technology is certainly not neutral; technology can marginalize people, incite divisiveness and block the flow of knowledge. And by engendering trust, does that mean that technology is neutral? Is technology a neutral “tool” that is designed to carry out the will of the user? What are the repercussions of this debate for post-disaster and humanitarian relief operations? Does an affected population view the technologies and “leave behinds” as neutral? I think this ties back into your question about the social, political and cultural implications of leaving behind technologies. How does one map the consequences?

These questions are valuable probes of some of our assumptions about the use, adoption and promotion of technology for humanitarian aid and peacebuilding.

In The military and the use of technology I highlighted a sentence, from a recent Ars Technica article, that I found problematic. The article stated that technology is morally neutral – it’s the way that it is used that makes the difference. I don’t agree. A Walther PPK would make, I am sure, a pretty good paperweight, but it was made for a specific purpose; it is rather good when used for that purpose, and less elegant when used for others. The utility of driving an M1 Abrams through New York is also highly suspect, and while I could get from New York to London faster, I don’t think the air force is about to offer commercial rates for trans-Atlantic crossings in the JSF fighter when it is launched.

The point is that all technology is shaped by socio-economic, cultural and political imperatives. Brian Martin captures this succinctly:

It is true that many technologies can be used for both good and bad purposes, and for different purposes. But usually neutrality is taken to have a stronger meaning, such as that technologies are equally easy to use for different purposes, which is not helpful when comparing compact disks and cruise missiles. The approach taken here, a standard one in studies of technology, is that technologies are constructed for specific purposes and, as a result, are usually easier to use for those purposes. Users can choose and modify technologies for their own purposes, but are constrained by the physical reality of artifacts and the inertia of associated social systems.

There is a lot of writing about the ways that society influences technologies. On the one side is the view that technologies are autonomous, following their own trajectories. On the other side is the view that technologies are largely determined by their origins and inevitably serve the purposes of their creators. A middle view is that technologies are “shaped” by the social conditions and groups that led to their creation but, once created, they can, within limits, be directly used or modified for other purposes.

As this webpage notes, whether one accepts the neutrality of technology depends on one’s valuing philosophy – whether one tends toward the pragmatic and situational, or the absolute and authoritarian. Those who believe that technology is neutral argue that “guns don’t kill people, people do”, or that a knife can be used to “cook, kill, or cure.” Those who believe the opposite counter with evidence that technology cannot be evaluated in a vacuum and that there are traits common to all technological developments: (1) technological objects are unique; they are designed to function in a particular and limited way, and (2) technological objects are intertwined with their environment; they interact in unique ways with the rest of reality. As Is Technology Neutral? – A Funny Question, an excellent short essay on ICTs, notes at the end:

In other words, it’s the final use of ICT in a specific setting, and ontological perspective that will ultimately decide if technology is neutral or not; it does not make sense to address this question when technology is detached from the context it is meant to be a part of.

I would argue, perhaps controversially, that technology does not have to be neutral to engender trust. Particularly in peace processes and conflict transformation, tags such as neutrality have no meaning, but impartiality does. Impartiality facilitates constructive relationships and progressive dialogue amongst antagonists, and engenders trust. An impartial approach is not necessarily value neutral – I am, for instance, committed to peace through a process of negotiations, and in the design, selection, adoption and application of technology I will use that which I believe will strengthen the capacity of all parties to enter into and stay in a process of negotiations – from technologies that influence the broad context, such as television and radio, to those that facilitate inter-personal communication in secure workspaces, such as Groove Virtual Office.

Many of the essays I’ve read that explore neutrality and technology don’t address impartiality, which I believe is fundamental to building trust in the virtual and physical worlds. Mohamed Wahab began an interesting conversation thread on The Culture of Trust and Technological Applications during Cyberweek 2005, organised by the UMass Center for Information Technology and Dispute Resolution, wherein he states:

I personally think that in ODR people should be reassured and educated not only about the security and integrity of the technology used but the dispute resolution process as well. In some cases, and depending on our definition of ODR, technology may not be entirely integrated or embedded in the process itself but merely an aiding tool. Thus, in such cases the role of the human factor may well be as important as the technology utilized, which requires a higher level of trust that encompass both the technology used and the process utilized.

I believe that trust is mutable – that it is socially determined and deeply rooted in a particular context. In mediation, for instance, some cultures trust a mediator who is perceived to be strictly impartial – i.e. an outsider – while, as Lederach points out, other cultures trust a mediator who is insider-partial. In a related thread at Cyberweek 2005, I made the following observations:

The original topic of this discussion, trust in a networked society, raised some interesting issues for us in the design of our solutions. Was it the platform that created trust? Was it the public stature of the actors who were part of it or seen to be supporting it? Was it the interactions in virtual domains? Was it the exchanges in physical domains that led to trust in virtual domains? Was it “happy accidents” like the tsunami, which, through massive trauma, led to enhanced trust of those who participated in online debates related to power sharing?

Is the idea of security mutable? Put another way, are the ideas of trust and security inherently linked to processes that are rooted in a certain time / place? My thesis in our ongoing work in Sri Lanka with a diverse range of actors (including terrorist groups and / or their proxies) has been to understand and promote new determinants of trust in virtual domains – determinants that are linked to physical symbols, but that recognise that, for trust to be created between previously warring actors, much more than traditional ways of measuring trust needs to be constructed.

Information is indeed the key, as you rightfully point out. In my work towards media reform in Sri Lanka, I’ve developed and promoted Right to Information legislation and created modules / handouts / books for journalists to use RTI legislation in supporting a just and sustainable peace process. Of course, RTI legislation has yet to see the light of day in Sri Lanka. However, transparent and accountable virtual information exchange regimes can support trust building between parties.

We’ve developed tools that allow parties to meta-tag their data and that help them understand how each party came to a decision, the information in support of that decision, and what was left out and why. We’ve also developed comprehensive research spaces for each party to store the information (propaganda) it generates.

Through our ongoing work, we are learning that security and trust change according to: 1) the actor, 2) the context, 3) the age / ethnicity / caste / religion / geo-political location of each actor, and 4) IT literacy.
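
The kind of decision meta-tagging described in these observations can be pictured with a simple record. What follows is only an illustrative sketch under my own assumptions – the field names, the structure and the choice of Python are hypothetical, and not a description of the actual tools:

    # Illustrative sketch only: a record for meta-tagging how a party reached a decision,
    # the information that supported it, and what was left out and why.
    # All names and fields here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SourceItem:
        title: str                    # document, report or statement consulted
        provided_by: str              # which party or third party supplied it
        used_in_decision: bool        # relied upon, or set aside?
        reason_if_excluded: str = ""  # why it was left out, if it was

    @dataclass
    class DecisionRecord:
        party: str                    # the party taking the decision
        decision: str                 # the position or outcome reached
        rationale: str                # how the party explains its reasoning
        sources: List[SourceItem] = field(default_factory=list)

        def excluded_sources(self) -> List[SourceItem]:
            """Return what was left out of the decision, and why."""
            return [s for s in self.sources if not s.used_in_decision]

The structure matters less than the transparency it enables: each party can see not only what another party decided, but also the information it chose to rely on and the information it chose to set aside.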

An impartial approach works in Sri Lanka because of the highly emotive and charged nature of the conflict, where the real or perceived partiality to any one cause or actor can be quite detrimental to efforts to reach out to others. An impartial approach also works in Sri Lanka because of the existence of many truths – multiple and competing truths of different actors and different communities jostle, violently, for primacy, necessitating a mediation that eschews simple solutions and instead brings antagonists into frameworks that help them flesh out a shared vision with mutual compromise and respect. Any technology that aids in this process is useful. It may be the case that in other contexts, technology should aid an insider-partial approach.

Finally, I don’t believe that we can somehow will technology to be neutral. In Writing in pacifism to technology – An impossible vision? I note:

To limit the use of technology to pacifism is, I would argue, to stunt its development. Whether we like it or not, most of the radical advancements in technology come not from R&D into their peaceful uses, but from billions of dollars spent on how we can obliterate “enemies” and “terrorists”. However, the appropriation of tools used in war by those interested in non-violence, conflict resolution and peacebuilding has occurred throughout history.

In our support for peace, we should not dictate how tools are to be used, but demonstrate by example and by our support, how the same tools can be used to bring people together, enhance reconciliation and address the root causes of terrorism.


Part of Colin Rule’s response to this post, on his blog, is worth repeating here:


Technology will be used by the warriors and the peacemakers alike. I like to think of those of us in the ODR field as being in the latter camp, but to deny any part of human nature is folly. I don’t know if I agree with Sanjana when he says, “…most of the radical advancements in technology come not from R&D into their peaceful uses, but from billions of dollars spent on how we can obliterate ‘enemies’…” — I’d suggest that greed outpaces power in its role as an engine in the evolution of ICT — but he is dead on when he forces us to face the true origins of the boxes on our desktops.

Given the complexity of the issues here, I think the best way to proceed is with a deep sense of humility – recognising that technology only plays a supporting role in (socio-political) processes that are engineered by humans. While the technologies for war are evident all around us, those that support peaceful dialogue are less evident, but no less plentiful. Our challenge is to use them, as best we can and as we see fit, to bring about sustainable conflict transformation.

4 thoughts on “Is technology neutral?”

  1. >A Walther PPK would make, I am sure, a pretty good paper-weight, but it was made for a specific purpose and it is rather good when used for its intended purpose, and less elegant when used for others.

    You are getting the wrong idea here. A Walther is made to kill and maim. It can be used to guard against wrong-doers and miscreants, or it can be used to perpetrate murder and robbery.
