Technologies performing remote sensing, crowd mapping, individual identification through facial recognition, and big data analytics have significantly impacted mass atrocity response over the past 15 years. These include smartphone apps; remote sensing platforms such as satellite imagery analysis and surveillance drones; social media; and data aggregation platforms.
Such technologies are primarily adopted due to their low cost relative to analogue interventions, and their ability to be deployed remotely in otherwise inaccessible or insecure environments. The specific applications of these technologies and platforms are diverse and constantly evolving, but can generally be divided into two broad categories:
- Prevention/Response applications seek to create novel situational awareness capacity to protect populations and inform response activities.
- Justice/accountability use cases aim to detect and/or document evidence of alleged crimes for judicial and/or advocacy purposes.
These ICTs are now effectively treated as indispensable force multipliers that supplement or supplant traditional mass atrocity response activities. However, in the absence of validation of these claims, adoption of these technologies can be said to be largely supply-driven.
As ICT use in mass atrocity and human security crisis response has been mainstreamed over the last two decades, so has a set of generalized and hitherto largely unvalidated claims about their effects on the nature and effectiveness of response. These claims constitute technological utopianism—the notion that technological change is inevitable, problem-free, and progressive. Moreover, the adoption of this technology-reliant and remote posture encodes within it the idea that the direct result of deploying these technologies and platforms is the prediction, prevention, and deterrence of mass atrocity related crimes—a form of technological utopianism known as solutionism, which holds that the right technology can solve all of mankind's problems.
Within atrocity response, this approach is exemplified by the much-publicized Eyes on Darfur campaign, where the public viewing of satellite images from Darfur was framed as action in and of itself—the assumption being that simply "knowing about atrocities" is enough to mobilize mass empathy and, as a result, engender political action. Implicit in this is the idea that technology itself can fundamentally alter the calculus of whether and how mass atrocities occur. The adoption of this view by civil society, we argue, means that responders are not simply adopting a set of tools and techniques, but a theory of change built upon a technologically utopian worldview.
Underlying this theory of change is the imbuing of these platforms and technologies with an inherent "ambient protective effect"—i.e. transforming the threat matrix of a particular atrocity-producing environment in a way that improves the human security status of the targeted population. The underlying assumption of this protective effect is that increased volumes of novel and otherwise unobtainable data over a large-scale geographic area or environment may cause one, some, or all of several potential ambient protective effects that will prevent or mitigate the effects of mass atrocities.
Our article argues that the human security community—particularly mass atrocity responders—must come to terms with the fact that there is a difference between knowing about atrocities and doing something about them. Monitoring is a precondition for protection, but it does not have a protective effect in and of itself.
More research is needed to determine the validity of the assumptions encoded into ICT use, and to address their relationship to a growing body of scholarship indicating possible direct and indirect pernicious effects of attempting to project an ambient protective effect through technology. In some cases, these technologies may expose civilians to new, rapidly evolving risks to their human security and mutate the behavior of mass atrocity perpetrators in ways that harm target populations (for example, by providing perpetrators with sitting-duck targets through real-time information about population movements, or about settlements and survivors not harmed in a bombing campaign). To do no harm, we must start by recognizing that the unpredictable knock-on effects of ICT use can cause real harm to civilians (crowd-sourced data, for instance, can be used to foment violence as well as to prevent it), and that the prevailing technological utopianism may prevent responders from noticing.
This post comes from Kristin Bergtora Sandvik and Nathaniel A. Raymond. Kristin is a Research Professor in Humanitarian Studies at the Peace Research Institute Oslo (PRIO) and a professor of Sociology of Law at the University of Oslo. Nathaniel is the Director of the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative. This post was also published on the ATHA blog of the Harvard Humanitarian Initiative.