Tag Archives: technology

From Principle to Practice: Humanitarian Innovation and Experimentation

Humanitarian organizations have an almost impossible task: They must balance the imperative to save lives with the commitment to do no harm. They perform this balancing act amidst chaos, with incredibly high stakes and far fewer resources than they need. It’s no wonder that new technologies that promise to do more with less are so appealing.

By now, we know that technology can introduce bias, insecurity, and failure into systems. We know it is not an unalloyed good. What we often don’t know is how to measure the potential for those harms in the especially fragile contexts where humanitarians work. Without the tools or frameworks to evaluate the credibility of new technologies, it’s hard for humanitarians to know whether they’re having the intended impact and to assess the potential for harm. Introducing untested technologies into unstable environments raises an essential question: When is humanitarian innovation actually human subjects experimentation?

Humanitarians’ use of new technologies (including biometric identification to register refugees for relief, commercial drones to deliver cargo in difficult areas, and big data-fueled algorithms to predict the spread of disease) increasingly looks like the type of experimentation that drove the creation of human subjects research rules in the mid-20th century. In both cases, Western interests used untested approaches on African and Asian populations with limited consent and even less recourse. Today’s digital humanitarians may be innovators, but each new technology raises the specter of new harms, including biasing public resources toward prediction over needs assessment, introducing coordination and practical failures through unique indicators and incompatible databases, and exposing humanitarians and their growing list of partners to significant legal risk.

For example, one popular humanitarian innovation uses big data and algorithms to build predictive epidemiological models. During the 2014 Ebola outbreak in West Africa, a range of humanitarian, academic, and technology organizations called for access to mobile network operators’ databases to track and model the disease. Several organizations got access to those databases—which, it turns out, was both illegal and ineffective. It violated the privacy of millions of people in contravention of domestic regulation, regional conventions, and international law. And Ebola is a hemorrhagic fever, which requires the exchange of bodily fluids to transmit—behavior that isn’t captured in call detail records. More importantly, resources that should have gone into saving lives and building the facilities necessary to treat the disease went instead to technology.
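To make concrete what such modeling involves, here is a minimal sketch of the general approach: an aggregated mobility matrix (the kind of signal derived from call detail records) feeding a simple metapopulation SIR model. Every region name, population, trip count, and rate below is invented for illustration; this is a sketch of the technique, not the model any responder actually ran.

```python
# Minimal sketch of a mobility-driven epidemic model: aggregated
# daily trip counts between regions (the kind of matrix derived
# from call detail records) feed a metapopulation SIR model.
# All names and numbers are invented for illustration.
import numpy as np

regions = ["RegionA", "RegionB", "RegionC"]
pop = np.array([1_000_000, 500_000, 750_000], dtype=float)

# travel[i, j]: hypothetical daily trips from region i to region j.
travel = np.array([[0,    2000, 500],
                   [1500, 0,    800],
                   [400,  900,  0]], dtype=float)

beta, gamma = 0.3, 0.1            # illustrative transmission/recovery rates
S = pop.copy()
I = np.array([0.0, 10.0, 0.0])    # seed ten cases in RegionB
R = np.zeros(3)

for day in range(120):
    frac_inf = I / pop                         # share of each region infected
    incoming = travel.T @ frac_inf             # infected travelers arriving
    outgoing = travel.sum(axis=1) * frac_inf   # infected travelers leaving
    I_eff = I + incoming - outgoing            # effective infectious pressure

    new_inf = beta * S * I_eff / pop           # new infections this day
    new_rec = gamma * I                        # recoveries this day
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

# Cumulative cases per region after 120 simulated days.
print({r: int(c) for r, c in zip(regions, I + R)})
```

Note what the mobility matrix cannot express: whether any of those trips involved the exchange of bodily fluids. A model of this shape assumes transmission follows movement, which is precisely the mismatch with a hemorrhagic fever described above.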

Without functioning infrastructure, institutions, or systems to coordinate communication, technology fails just like anything else. And yet these are exactly the contexts in which humanitarian innovation organizations introduce technology, often without the tools to measure, monitor, or correct the failures that result. In many cases, these failures are endured by populations already under tremendous hardship, with few ways to hold humanitarians accountable.

Humanitarians need both an ethical and evidence-driven human experimentation framework for new technologies. They need a structure parallel to the guidelines created in medicine, which put in place a number of practical, ethical, and legal requirements for developing and applying new scientific advancements to human populations.

The Medical Model

“Human subjects research,” the term of art for human experimentation, comes from medicine, though it is increasingly applied across disciplines. Medicine created some of the first ethical codes in the late 18th and early 19th centuries, but the modern era of human subjects research protections started in the aftermath of World War II, evolving with the Helsinki Declaration (1975), the Belmont Report (1978), and the Common Rule (1981). These rules established proportionality, informed consent, and ongoing due process as conditions of legal human subjects research. Proportionality refers to the idea that an experiment should balance potential harms against the potential benefit to participants. Informed consent requires that subjects understand the context and the process of the experiment before agreeing to participate. And due process, here, refers to a bundle of principles, including assessing subjects’ needs equally, subjects’ ability to withdraw from a study, and the continuous assessment of whether an experiment’s methods remain proportionate to its potential outcomes.

These standards defined the practice of human subjects research for much of the rest of the world and are essential for protecting populations from mistreatment by experimenters who undervalue their well-being. But they come from the medical industry, which relies on established infrastructure that less-defined fields, such as technology and humanitarianism, lack, and that limits their applicability.

The medical community’s human subjects research rules clearly differentiate between research and practice based on the intention of the researcher or practitioner. If the goal is to learn, an intervention is research. If the goal is to help the subject, it’s practice. Because it comes from science, human subjects research law doesn’t contemplate that an activity would use a method without researching it first. The distinction between research and practice has always been controversial, but it gets especially blurry when applied to humanitarian innovation, where the intention is both to learn and to help affected populations.

The Belmont Report, a summary of ethical principles and guidelines for human subjects research, defines practice as interventions “designed solely to enhance the well-being of a client or patient and that have a reasonable expectation of success” (emphasis added). This differs from humanitarian practice in two major ways: First, there is no direct fiduciary relationship between humanitarians and those they serve, so humanitarians may prioritize groups or collective well-being over the interests of individuals. Second, humanitarians have no way to evaluate the reasonableness of their expectation of success. In other words, the assumptions embedded in human subjects research protections don’t clearly map onto the relationships or activities involved in humanitarian response. As a result, these conventions offer humanitarian organizations neither clear guidance nor the types of protections that exist for well-regulated industrial experimentation.

In addition, human subjects research rules are set up so that interventions are judged on their potential for impact. Essentially, the higher the potential impact on human lives, the more important it is to obtain informed consent, to have ethical review, and to let subjects extricate themselves from the experiment. Unfortunately, in humanitarian response, the impacts are always high, and it’s almost impossible to isolate the effects generated by a single technology or intervention. Even where establishing consent is possible, disasters don’t lend themselves to consent frameworks, because refusing to participate can mean refusing life-saving assistance. In law, consent agreements made under such take-it-or-leave-it, life-threatening circumstances resemble contracts of adhesion and may not be valid. The result is that humanitarian innovation faces fundamental challenges in knowing how to deploy ethical experimentation frameworks and in implementing the protections they require.

First Steps

The good news is that existing legal and ethical frameworks lay a strong foundation. As Jacob Metcalf and Kate Crawford lay out in a 2016 paper, there are significant enough similarities between biomedical and big data research to develop new human subjects research rules. This January, the United States expanded the purview of the Common Rule to govern human subjects research funded by 16 federal departments and agencies. Despite their gaps, human subjects research laws go a long way toward establishing legally significant requirements for consent, proportionality, and due process—even if they don’t yet directly address humanitarian organizations.

Human rights-based approaches such as the Harvard Humanitarian Initiative’s Signal Code go further, adapting human rights to digital humanitarian practice. But, like most rights frameworks, they rely on public infrastructure to be ratified, harmonized, and operationalized. There are proactive efforts to set industry-focused standards and guidelines, such as the World Humanitarian Summit’s Principles for Ethical Humanitarian Innovation and the Digital Impact Alliance’s Principles for Digital Development. And, of course, there are technology-centric efforts beginning to establish ethical use standards for specific technologies—like biometric identification, drones, and big data—that offer specific guidance but embed incentives that may not be relevant in the humanitarian context.

That said, principles aren’t enough—we’re now getting to the hard part: building systems that actualize and operationalize our values. We don’t need to decide the boundaries of innovation or humanitarianism as industries to begin developing standards of practice. We don’t need to ratify an international convention on technology use to begin improving procurement requirements, developing common indicators of success for technology use, or establishing research centers capable of testing for applicability of new approaches to difficult and unstable environments. A wide range of industries are beginning to invest in legal, organizational, and technological approaches to building trust—all of which offer additional, practical steps forward.

For humanitarians, as always, the stakes are high. The mandate to intervene comes with the responsibility to know how to do better. Humanitarians hold themselves and their work to a higher standard than almost any other field in the world. They must now apply the same rigor to the technologies and tools they use.


This post originally appeared on the blog of Stanford Social Innovation Review.

Humanitarian experimentation

Humanitarian actors, faced with ongoing conflict, epidemics, famine and a range of natural disasters, are increasingly being asked to do more with less. The international community’s commitment of resources has not kept pace with these expectations or with the growing number of crises around the world. Some humanitarian organizations are trying to bridge this disparity by adopting new technologies—a practice often referred to as humanitarian innovation. This blog post, building on a recent article in the ICRC Review, asserts that humanitarian innovation is often human experimentation without accountability, which may both cause harm and violate some of humanitarians’ most basic principles.

While many elements of humanitarian action are uncertain, there is a clear difference between using proven approaches to respond in new contexts and using wholly experimental approaches on populations at the height of their vulnerability. This is also not the first generation of humanitarian organizations to test new technologies or approaches in the midst of disaster. Our article draws upon three timely examples of humanitarian innovations, which are expanding into the mainstream of humanitarian practice without clear assessments of potential benefits or harms.

Cargo drones, for one, have been presented as a means to deliver assistance to places that aid agencies otherwise find difficult, and sometimes impossible, to reach. Biometrics is another example: it is said to speed up cumbersome registration processes, thereby allowing faster access to aid for people in need (who can only receive assistance upon registration). And, in the response to the 2014 outbreak of Ebola in West Africa, data modelling was seen as a way to track and anticipate the spread of the disease. In each of these cases, technologies with great promise were deployed in ways that risked, distorted and/or damaged the relationships between survivors and responders.

These examples illustrate the need for investment in ethics and evidence regarding the development and application of new technologies in humanitarian response. It is incumbent on humanitarian actors to understand both the opportunities posed by new technologies and the potential harms they may present—not only during the response, but long after the emergency ends. The balance is between, on the one hand, working to identify new and ‘innovative’ ways of addressing some of the challenges that humanitarian actors confront and, on the other, introducing new technological ‘solutions’ in ways that resemble ‘humanitarian experimentation’ (as explained in the article). The latter carries with it the potential for various forms of harm. This risk of harm extends not only to those whom humanitarian actors are tasked to protect, but also to humanitarian actors themselves, in the form of legal liability, loss of credibility and operational inefficiency. Without open and transparent validation, it is impossible to know whether humanitarian innovations are solutions, or threats in themselves. Aid agencies must not only be extremely attentive to this balance, but should also do their utmost to avoid harmful outcomes.

Framing aid projects as ‘innovative’, rather than ‘experimental’, avoids the explicit acknowledgment that these tools are untested, understating the risks these approaches may pose and sidestepping the extensive body of laws that regulate human trials. Facing enormous pressure to act and to ‘do something’ in view of contemporary humanitarian crises, a specific logic seems to have gained prominence in the humanitarian community, a logic that conflicts with the risk-taking standards that prevail under normal circumstances. The use of untested approaches in uncertain and challenging humanitarian contexts introduces risks that do not necessarily bolster humanitarian principles. In fact, it may even conflict with the otherwise widely adhered-to Do No Harm principle. Failing to test these technologies, or even to acknowledge explicitly that they are untested, prior to deployment raises significant questions about both the ethics and the evidence requirements implicit in the unique license afforded to humanitarian responders.

In Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation, we contextualize humanitarian experimentation—providing a history, examples of current practice, a taxonomy of potential harms and an analysis against the core principles of the humanitarian enterprise.

***

Kristin Bergtora Sandvik, SJD Harvard Law School, is a Research Professor at the Peace Research Institute Oslo and a Professor of Sociology of Law at the University of Oslo. Her widely published socio-legal research focuses on technology and innovation, forced displacement and the struggle for accountability in humanitarian action. Most recently, Sandvik co-edited UNHCR and the Struggle for Accountability (Routledge, 2016), with Katja Lindskov Jacobsen, and The Good Drone (Routledge, 2017).

Katja Lindskov Jacobsen, PhD International Relations Lancaster University, is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. She is an international authority on humanitarian biometrics and its security dimensions and is the author of The Politics of Humanitarian Technology (Routledge, 2015). Her research has also appeared in Citizenship Studies, Security Dialogue, Journal of Intervention & Statebuilding, and African Security Review, among others.

Sean Martin McDonald, JD/MA American University, is the CEO of FrontlineSMS and a Fellow at Stanford’s Digital Civil Society Lab. He is the author of Ebola: A Big Data Disaster, a legal analysis of the way that humanitarian responders use data during crises. His work focuses on building agency at the intersection of digital spaces, using technology, law and civic trusts.

Unpacking the Myth of ICT’s Protective Effect in Mass Atrocity Response

Information Communication Technologies (ICTs) are now a standard part of the mass atrocity responder’s toolkit, employed for evidence collection and research by NGOs, governments, and the private sector. One of the more notable justifications for their use has been to supplement or improve the protection of vulnerable populations. In a new article published in the Genocide Studies and Prevention Journal, we argue that there is little evidence that ICTs have such a protective effect in mass atrocity producing environments, an effect we label the Protective or Preventative Effect (PPE). This blog post argues that the mass atrocity community needs to engage more critically with the widespread perception that ICTs have innate protective effects in mass atrocity response. More testing and validation of potential harms is necessary to ensure that civilians on the ground are not negatively affected by ICTs. Risks to individuals and communities include, for example, the theft, appropriation and distortion of personal data; the geotracking of ground movements; and the surveillance of speech, communication, movements and transactions through hand-held devices.

Technologies performing remote sensing, crowd mapping, individual identification through facial recognition and big data analytics have significantly impacted mass atrocity response over the past 15 years. These include smartphone apps, remote sensing platforms such as satellite imagery analysis and surveillance drones, social media and data aggregation platforms.

Such technologies are primarily adopted for their low cost relative to analogue interventions, and for their ability to be remotely deployed in otherwise inaccessible or insecure environments. The specific applications of these technologies and platforms are diverse and constantly evolving, but can generally be divided into two broad categories:

  • Prevention/Response applications seek to create novel situational awareness capacity to protect populations and inform response activities.
  • Justice/accountability use-cases aim to detect and/or document evidence of alleged crimes for judicial and/or advocacy purposes.

These ICTs are now effectively treated as indispensable force multipliers that supplement or supplant traditional mass atrocity response activities. However, in the absence of validation of these claims, adoption of these technologies can be said to be largely supply-driven.

As ICT use in mass atrocity and human security crisis response has been mainstreamed over the last two decades, so has a set of generalized and hitherto largely unvalidated claims about their effects on the nature and effectiveness of response. These claims constitute technological utopianism—the notion that technological change is inevitable, problem-free, and progressive. Moreover, the adoption of this technology-reliant and remote posture encodes within it the idea that the direct result of deploying these technologies and platforms is the prediction, prevention, and deterrence of mass atrocity related crimes—a form of technological utopianism known as solutionism, which holds that the right technology can solve all of mankind’s problems.

Within atrocity response, this approach is exemplified by the much-publicized Eyes on Darfur campaign, where the public viewing of satellite images from Darfur was framed as action in and of itself—the assumption being that simply “knowing about atrocities” is enough to mobilize mass empathy and as a result engender political action. Implicit in this is the idea that technology itself can fundamentally alter the calculus of whether and how mass atrocities occur.  The adoption of this view by civil society, we argue, means that responders are not simply adopting a set of tools and techniques, but a theory of change, built upon a technologically utopian worldview.

Underlying this theory of change is the imbuing of these platforms and technologies with an inherent “ambient protective effect”—i.e. the transformation of the threat matrix of a particular atrocity producing environment in a way that improves the human security status of the targeted population. The underlying assumption is that increased volumes of novel and otherwise unobtainable data over a large-scale geographic area or environment may produce one, some, or all of several potential ambient protective effects that prevent or mitigate mass atrocities.

Our article argues that the human security community—particularly mass atrocity responders—must come to terms with the fact that there is a difference between knowing about atrocities and doing something about them. Monitoring is a precondition for protection, but it does not have a protective effect in and of itself.

More research is needed to determine the validity of the assumptions encoded into ICT use, and to address their relationship to a growing body of scholarship indicating possible direct and indirect pernicious effects of attempting to project a PPE through technology. In some cases, these effects may expose civilians to new, rapidly evolving risks to their human security and mutate the behavior of mass atrocity perpetrators in ways that harm target populations (for example, by providing perpetrators with easy targets through real-time information about population movements, or about settlements and survivors not yet harmed in a bombing campaign). To do no harm to civilians, we must start by recognizing that the unpredictable knock-on effects of ICT use can cause real harm to civilians—for example, crowd-sourced data can be used to foment violence as well as to prevent it—and that the prevailing technological utopianism may prevent responders from noticing.

This post comes from Kristin Bergtora Sandvik and Nathaniel A. Raymond. Kristin is a Research Professor in Humanitarian Studies at the Peace Research Institute Oslo (PRIO) and a professor of Sociology of Law at the University of Oslo. Nathaniel is the Director of the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative. This post was also published on the ATHA blog of the Harvard Humanitarian Initiative.

Conundrums in the Embrace of the Private Sector

The humanitarian sector faces an unprecedented number of crises globally. The growing operational and financial deficit in the capacity of governments and humanitarian organizations to respond has led to calls for changes in the way such crises are understood and managed. This involves a strong focus on cooperation and partnerships with the private sector. A large part of the allure is the notion that public-private partnerships will make humanitarian response faster by entrenching market-oriented rationalities, thus enhancing effectiveness. This is also how the private sector presents itself:

One should never underestimate the power of private companies who offer aid. Companies are almost always focused on efficiency, good negotiation, building their reputation (their brand) and getting things done on time and on budget (Narfeldt 2007).

Here, I will try to complicate this narrative by pointing out some conundrums in the vigorous humanitarian embrace of the private sector.

Back in 2007, Binder and Witte noted the emergence of a new form of engagement through partnerships between companies and traditional humanitarian actors, often based on a desire to demonstrate corporate social responsibility (CSR) and to motivate employees. In parallel, they observed that the War on Terror had enlarged the scope of traditional work with a role for commercial players to provide relief services. Today, these trends continue as public-private partnerships have emerged as a (donor) preferred humanitarian strategy to increase efficiency and accountability (see for example Drummond and Crawford 2014), goals that to some degree seem to merge as efficiency has become an important way of demonstrating accountability. The rationale for a greater inclusion of the private sector in humanitarian action is that partners can contribute to humanitarian solutions with different expertise and resources. Private companies are profit-driven and thus incentivized to comply with the specific deliverables and time frames set out in contracts. Donors are attracted to low overhead and lesser need for constant engagement and monitoring. Moreover, the private sector owns much of the infrastructure on which information and communication technologies are based.

The objections to private sector engagements are well-known and predictable. The outsourcing of humanitarian action has been criticized by commentators pointing to the loss of ground truth, and to the often poor quality resulting from private actors’ lack of understanding of humanitarian action, contextual knowledge, and crisis management skills. It is argued that companies are, by their very nature, mainly interested in “brand, employee motivation and doing more business” (Wassenhove 2014). Intensified private sector engagement thus leads to a marketization of humanitarian values (Weiss 2013) where “the humanitarian ethos is gradually eroded” (Xaba 2014).

In the following, I will instead question the idea of efficacy by challenging some of the assumptions underlying the turn to the private sector. I consider how the call for intensified cooperation overlooks persistent tensions inherent in the humanitarian market and in actors’ rationalities. I also identify what seems to be a fairly prevalent sentiment, namely, the assumption that such cooperation may serve the double objective of delivering humanitarians from the much-loathed Results-Based Management (RBM) regime while simultaneously delivering aid more effectively.

The first difficulty is structural: the turn to business cooperation is informed by the notion that the humanitarian market is inherently efficient and effective because it is a regular market. However, as noted by Binder and Witte, the humanitarian market may be characterized as a “quasi-market,” which exhibits an indirect producer–consumer relationship. In the market for humanitarian relief, the consumer (i.e. the aid recipient) neither purchases nor pays for the delivered service. Aid agencies are the producers, donors the buyers, and aid recipients the consumers. As a result, the market is loaded with asymmetries and uncertainties: Donors have difficulty determining whether the services they pay for are indeed adequately delivered, while recipients have few means of effectively making complaints or airing grievances. Nielsen and Santos (2013) note, for example, the often unanticipated and inappropriate delivery of equipment, as well as personnel. In a trenchant critique, Krause (2014) describes this as a market where agencies produce projects for a quasi-market in which institutional donors are the consumers and populations in need are part of the product being packaged and sold by relief organizations.

Interestingly, the currently most successful technology-based humanitarian endeavor is also a concerted attempt to remedy the quasi-status of the humanitarian market: over the last decade, the international development community has invested heavily in the so-called financial inclusion agenda, aiming to make poor people less aid-dependent; this is sometimes labelled ‘resilience through asset creation.’ The partnership between the World Food Programme and MasterCard, for example, uses “digital innovation to help people around the world to break the cycle of hunger and poverty.” For the World Food Programme, this is part of a broader strategy to move away from food aid and to improve food security through cash assets. As I have noted elsewhere, the underlying rationale is that access to financial services such as credit and savings will “create sizeable welfare benefits” as beneficiaries of aid are drawn further into the market economy as customers. The goal of implementing “cost-effective” electronic payment programs is to help beneficiaries “save money, improve efficiencies and prevent fraud.” The belief is that cash can ‘go where people cannot’ and provide them with choice. However, while these strategies are motivated explicitly by the desire to turn the beneficiary more directly into a customer, the accountability regime constructed around these systems remains directed upwards to donors.

The second assumption to be examined is that of shared motivation and shared values, going beyond disapproving criticisms of ‘neoliberal governance strategies.’ I think it is important to recognize that the call for intensified private sector collaboration masks a rather thin shared understanding of both the nature of humanitarian work and the competence, presence, and relevance of the private sector, and that this impinges on how the collaboration plays out. Binder and Witte observed that past attempts to pursue partnerships with corporate actors have often been frustrated because agencies have been unclear about the intended outcomes of the partnership, or have viewed it as a way of developing a long-term funding arrangement. According to Nielsen (2014), private-humanitarian collaboration is currently characterized by underlying disagreement about what constitutes ‘meaningful’ innovation, and how that impinges on responsible innovation and on accountability and CSR more broadly; there is a sense that the humanitarian customer often “does not know what s/he wants.” The private sector actor is frustrated about having to take all the risk in the development of products, while humanitarians fret about taking on future risks, as they will be the ones to face public condemnation and donor criticism if the product fails to aid beneficiaries in the field. Mays et al. (2012) identify a mismatch between humanitarian and business systems, leading to a clash between entrepreneurial values and the humanitarian imperative to save lives and alleviate suffering. This resonates with my own observations: humanitarians complain about being offered inadequate or unfeasible solutions; about being used as stepping stones to access the greater UN market; or simply about differences in rationality, where the private sector partner frames the transaction commercially by ‘thinking money’ and the humanitarian partner by ‘activity on the ground.’

Finally, the earlier push for business management approaches to humanitarian action was the result of a drive for greater accountability and a need to professionalize humanitarian work. Perhaps the most significant import was Results-Based Management (RBM), a management strategy “focusing on performance and achievements of outputs, outcome and impact,” which provides a framework and tools not only for planning activities, but also for risk management, performance monitoring, and evaluation. Over time, humanitarians have become exasperated with the RBM rationale, partly because it is sometimes seen to be contrary to principled humanitarian assistance, and more often because RBM and the results agenda engender a type of bureaucratization in which humanitarians feel that they are “performing monitoring” instead of monitoring performance (a phrase borrowed from Welle 2014).

While some humanitarians now strive for a shift towards systems accountability (where they would be held to account for their responsibility to maintain functional and workable supply chains or information sharing systems, not specifically demarcated deliverables), others see the private sector as the solution to the RBM straitjacket. There seems to have emerged a notion that increased private sector involvement may in fact allow humanitarians to kill two birds with one stone. Much of the attraction of partnerships and outsourcing to the private sector seems to be that RBM obligations can be offloaded to these actors, through subcontracting and outsourcing arrangements that detail deliverables and outcomes. Hence, the private sector is envisioned both as faster at delivering RBM-like outputs — now imagined as a separate objective for humanitarian actors — and as quicker to deliver humanitarian response.

***

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), was originally posted on the website of the Advanced Training Program on Humanitarian Action (ATHA).

Humanitarian innovation, humanitarian renewal?

The continued evolution of the humanitarian innovation concept needs a critical engagement with how this agenda interacts with previous and contemporary attempts to improve humanitarian action.

Accountability and transparency have been central to discussions of humanitarian action over the past two decades. Yet these issues appear generally to be given scant attention in the discourse around humanitarian innovation. The humanitarian innovation agenda is becoming a self-contained field with its own discourse and its own set of experts, institutions and projects – and even a definitive founding moment, namely 2009, when the ALNAP study on innovation in humanitarian action was published.[1] While attempts to develop a critical humanitarian innovation discourse have borrowed extensively from critical discussions on innovation in development studies, humanitarianism is not development done in a hurry but has its own distinct challenges, objectives and methodologies.

I will focus here on concrete material innovations, most commonly referred to as ‘humanitarian technology’. Discussions on such humanitarian innovations regularly acknowledge the need to avoid both fetishising novelty in itself and attributing inherently transformative qualities to technology rather than seeing how technology may fit into and build upon refugees’ existing resources.

Renewing humanitarianism

While it is obvious that internal and external reflections on a humanitarian industry and a humanitarian ethos in need of improvement are much older pursuits, I will start – as most scholars in humanitarian studies do today – with the mid-1990s and the ‘Goma moment’. To recover from the moral and operational failures of the response to the Rwanda genocide and the ensuing crisis in the Great Lakes region of Africa, humanitarianism turned to human rights-based approaches (HRBA) to become more ethical, to move from charitable action to social contract. Yet HRBA always suffered from an intrinsic lack of clarity of meaning, as well as from the problem that states are the obliged parties under international human rights law, a particular problem in the context of displacement, whether internal or across borders.

A decade or so later, in the aftermath of the 2004 Indian Ocean tsunami and in the face of accusations about poor governance, insufficient coordination, incompetence and waste, the humanitarian enterprise embarked on institutional reform to become better. Responses were to be maximised through Humanitarian Coordinators, funding was to become more efficient through Central Emergency Response Funds and, most importantly in the everyday life of humanitarian practitioners, the Cluster approach allocated areas of responsibility to the largest humanitarian actors.

The need for greater accountability and transparency was a driver for both HRBA (with its moral intricacies) and humanitarian reform (with its bureaucratic complexities). What is now happening with accountability and transparency within the technological-innovation-as-renewal paradigm?

If Rwanda and the Indian Ocean tsunami were the events ushering in HRBA and humanitarian reform, Haiti was the much heralded game-changer for technology whose use there (despite many practical problems and malfunctioning solutions) is generally assessed as positive.[2] In the years since, a host of new technology actors, initiatives, technical platforms and methodologies has emerged. New communications technology, biometrics, cash cards, drones and 3D printing have all captured the humanitarian imagination.

Thinking about problems and difficulties is often framed in terms of finding technical solutions, obtaining sufficient funding to move from pilot phases to scale, etc. However, as ideas about progress and inevitability dominate the field, the technology is seen not as something we use to get closer to a better humanitarianism but something which, once deployed, is itself a better, more accountable and transparent humanitarianism.

So institutionalised have transparency and accountability become that they have now vanished off the critical radar and become part of the taken-for-granted discursive and institutional framework. Accountability and transparency are assumed to be automatically produced simply by the act of adopting and deploying new technology. (Interestingly, the third tenet usually listed with accountability and transparency, efficiency, is also a basic assumption of this agenda.)

Accountability, participation and transparency

A 2013 report published by UN OCHA, Humanitarianism in the Network Age, argues that “everyone agrees that technology has changed how people interact and how power is distributed”.[3] While technology has undoubtedly altered human interaction, an assumption that proliferating innovative humanitarian technology unveils power, redistributes power or empowers needs to be subjected to scrutiny.

The classic issues in humanitarian accountability – to whom it is owed and by whom, how it can be achieved and, most crucially, what would count as substantively meaningful accountability – remain acutely difficult to answer. These issues also remain political issues which cannot be solved only with new technical solutions emphasising functionality and affordability; we cannot innovate ourselves out of the accountability problem, in the same way as technology cannot be seen as an empty shell waiting to be filled with (humanitarian) meaning.

This speaks particularly to the quest for participation of those in need of humanitarian protection and assistance, “helping people find innovative ways to help themselves”. In practice, we know that humanitarians arrive late in the field – they are not (at least not outside their own communications) the first responders. Affected individuals, their neighbours and communities are. Yet we should be concerned if the engagement with technological innovation also becomes a way of pushing the resilience agenda further in the direction of making those in need more responsible than well-paid humanitarian actors for providing humanitarian aid.

The arrival of the private sector as a fully respectable partner in humanitarian action is in principle a necessary and desirable development. Nevertheless, while expressing distaste for the involvement of the private sector in humanitarian response is passé, talk of the importance of local markets and of ‘local innovation’, ‘indigenous innovation’ or ‘bottom-up innovation’ inevitably raises the questions: does ‘local’ include the private sector as well as those in humanitarian need, and what does each want out of the partnership?

The current drive towards open data – and the belief in the emancipatory potential of open data access – means that transparency is a highly relevant theme on the humanitarian innovation agenda. Yet, on a pragmatic level, in an avalanche of information, it is difficult to see what is not there, particularly for individuals in crisis with limited access to information technology or with limited (computer) literacy.

Accountability and transparency thus seem to be missing in the implementation of the humanitarian innovation agenda, although innovation should be a means to enhance these objectives (among others) to produce a better humanitarianism.

Conclusions

First, we must beware of the assumption of automatic progress. We may be able to innovate ourselves out of a few traditional challenges and difficulties but most will remain, and additionally there will be new challenges resulting from the new technology.

Second, innovation looked at as a process appears suspiciously like the reforms of yesteryear. What, for example, is the difference between ‘bottom-up innovation’ and the ‘local knowledge’ valued in previous efforts to ensure participation? And are the paradigm shifts of innovation really much different from the moral improvement agenda of approaches such as human rights-based humanitarian aid?

Third, the increasingly self-referential humanitarian innovation discourse itself warrants scrutiny. With almost no talk of justice, social transformation or redistribution of power, we are left with a humanitarianism where inclusion is about access to markets, and empowerment is about making beneficiaries more self-reliant and about putting the label ‘humanitarian’ onto the customer concept in innovation theory.


***

[1] www.alnap.org/resource/9207
[2] See the IFRC World Disasters Report 2013 on Technology and Humanitarian Innovation.
www.ifrc.org/publications-and-reports/world-disasters-report/world-disasters-report-2013/
[3] www.unocha.org/hina



***

This blog is based on Kristin B. Sandvik’s article, ‘Humanitarian innovation, humanitarian renewal?’, published in a special Forced Migration Review supplement on ‘Innovation and refugees’.

A Humanitarian Technology Policy Agenda for 2016

The World Humanitarian Summit in 2016 will feature transformation through innovation as a key theme. Leading up to the summit, OCHA has voiced the need to “identify and implement … positions that address operational challenges and opportunities” (OCHA 2013) relating to the use of information technology, big data and innovations in humanitarian action.

In this blog post we sketch out several areas in need of further research over the next two years, to provide policymakers, humanitarian actors and other stakeholders with up-to-date and relevant research and knowledge.

1.    Empowerment and Accountability

  • Pivoting humanitarian action: Maximizing user-benefit from technology

Affected populations are the primary responders in disasters and conflict zones, and actively use information technology to self-organize, spread information about their condition, call for aid, communicate with humanitarian actors, and demand accountability. New technologies also have the potential to put responders at the center of the entire life cycle of humanitarian action – from needs assessment and information gathering, to analysis, coordination, support, monitoring and evaluation.  It is crucial that member states, humanitarian organizations and volunteer & technical communities (V&TCs) improve their actions to take advantage of this opportunity. The 2016 Summit should strengthen the end-user perspective in the development of guidelines for V&TCs.

  • The changing meanings of accountability

Increasingly over the last 20 years, the humanitarian community has focused on issues of agency accountability and the professionalization of humanitarian action, vis-à-vis donors as well as beneficiaries. However, the technological revolution in humanitarian action and the increasingly central role of large telecom and tech companies make it necessary to broaden the focus of accountability considerations. For example, OCHA is now considering developing guidelines for how formal humanitarian organizations and V&TCs should cooperate with these companies. Leading up to the 2016 Summit, there is a need for more reflection and research on how technology can be used to enhance accountability in humanitarian action for all parties, including new actors.


2.    The role of aggregated data

Data collection and aggregated data have come to occupy an increasingly important role in humanitarian action. As illustrated by the 2013 World Disasters Report, big data and remote sensing capabilities provide an unprecedented opportunity to access contextual information about impending and ongoing humanitarian crises. Notable initiatives such as UN Global Pulse suggest that the development of rigorous information management systems may lead to feasible mechanisms for forecasting and preventing crises. Particular attention should be paid to three issue areas:

  • Veracity and validity

Multiple data transactions and increased complexity in data structures increase the potential for error in humanitarian data entry and interpretation. Data that is collected or generated through digital or mobile mechanisms will often pose challenges, especially regarding verification. Although significant work is underway to establish software and procedures to verify data, understanding the limits on the veracity and validity of humanitarian data will be critical.

  • Identity and anonymity

As humanitarian data is aggregated and made public, the chances of re-identification of individuals and groups increase at an unknown rate. This phenomenon, known as the mosaic effect, is widely recognized but little understood (a minimal illustration follows this list). There is little understanding of the dangers that shared anonymized data would pose in a humanitarian context, where data may be limited but the potential damage of re-identification would be quite extreme.

  • Agency and (dis)empowerment

The aggregation of humanitarian data from multiple data streams and sources decreases the likelihood that the individuals and groups reflected in that data will be aware of, and able to influence, the way in which that data is used. This capacity, sometimes referred to as informational self-determination, is difficult to preserve in digital and mobile data collection generally, but its loss is especially problematic in humanitarian contexts, where the risks associated with personal information are particularly grave.
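To illustrate the mosaic effect described under “Identity and anonymity” above, here is a minimal, hypothetical sketch: two small datasets, each arguably harmless on its own, re-identify named individuals once joined on shared quasi-identifiers. All records, column names and values are invented.

```python
# Hypothetical illustration of the mosaic effect: two datasets,
# each "anonymous" alone, re-identify people once joined on
# quasi-identifiers. All records are invented.
import pandas as pd

# An "anonymized" aid-distribution extract: no names, but it
# retains demographic quasi-identifiers.
aid = pd.DataFrame({
    "camp":       ["Zone1", "Zone1", "Zone2"],
    "birth_year": [1984, 1984, 1990],
    "sex":        ["F", "M", "F"],
    "received":   ["cash", "food", "cash"],
})

# A public or leaked roster with names and the same attributes.
roster = pd.DataFrame({
    "name":       ["Amina K.", "Josef B.", "Mary T."],
    "camp":       ["Zone1", "Zone1", "Zone2"],
    "birth_year": [1984, 1984, 1990],
    "sex":        ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers links each "anonymous"
# aid record back to a named individual:
# Amina K. -> cash, Josef B. -> food, Mary T. -> cash
linked = aid.merge(roster, on=["camp", "birth_year", "sex"])
print(linked[["name", "received"]])
```

The join succeeds because the combination of camp, birth year and sex is unique per person in these toy tables; real datasets are larger, but the mechanism is the same, and every newly published dataset adds potential join keys.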


3.    Enabling and regulating V&TCs

Remote volunteer and technical communities (V&TCs) now play an increasingly important role in humanitarian contexts – generating, aggregating, classifying and even analyzing data, in parallel to, or sometimes in collaboration with, more established actors and multilateral initiatives. They increasingly enjoy formalized relationships with traditional humanitarian actors, processing and generating information in support of humanitarian interventions. Yet the individual volunteers participating in such initiatives are often less equipped than traditional humanitarian actors to deal with the ethical, privacy and security issues surrounding their activities, although some work is underway. Although in many ways the contribution of V&TCs represents a paradigm shift in humanitarian action, the digital and volunteering revolution has also brought new concerns regarding knowledge and understanding of core humanitarian principles and tasks, such as ‘do no harm’ and humanity, neutrality and impartiality.

In considering the above issues, attention should be paid to inherent trade-offs and the need to balance competing values, including the following two:

  • Data responsibility vs. efficiency. There is an inherent tension between efficiency and data responsibility in humanitarian interventions. Generally, protecting the privacy of vulnerable groups and individuals requires the allocation of time and resources—to conduct risk assessments, to engage and secure informed consent, and to implement information security protocols. In humanitarian contexts, the imperative to act quickly and decisively may often run counter to more measured actions intended to mitigate informational risks to privacy and security.
  • Western values vs. global standards. It has also been argued that privacy is a Western preoccupation, without any real relevance to victims of a humanitarian crisis facing much more immediate and pressing threats. This argument highlights the important tension between mitigating informational risks to privacy and security and the need to expedite humanitarian aid efficiently and effectively. It does not, however, account for the concrete risks posed to individual and collective security by irresponsible data management.

This is our modest contribution to an agenda for research and policy development for humanitarian technology. We would like to join forces with other actors interested in these challenges to contribute to a necessary debate on a number of issues that touch upon some of the core principles for humanitarian action. The ambition is to strengthen humanitarian action in an innovative and accountable manner, making us better equipped to help people in need in the future.

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), Christopher Wilson (The Engine Room) and John Karlsrud (NUPI), was originally posted on the website of the Advanced Training on Humanitarian Action Project (ATHA).

The Rise of the Humanitarian Drone: Giving Content to an Emerging Concept

Written by

Kristin Bergtora Sandvik, who directs the Norwegian Center for Humanitarian Studies (and sits on the Advisory Board of the Humanitarian UAV Network, UAViators), just co-authored this important study on the growing role of UAVs or drones in the humanitarian space. Kristin and her co-author Kjersti Lohne consider the mainstreaming of UAVs as a technology transfer from the global battlefield. “Just as drones have rapidly become intrinsic to modern warfare, it appears that they will increasingly find their place as part of the humanitarian governance apparatus.” The co-authors highlight the opportunities that drones offer for humanitarian assistance and explore how the notion of the humanitarian UAV will change humanitarian practices.

Kristin and Kjersti are particularly interested in two types of discourse around the use of UAVs in humanitarian settings. The first relates to the technical and logistical functions that UAVs might potentially fulfill in humanitarian operations. The second relates to the discourse around ethical uses of UAVs. The co-authors “analyze these two types of discourse” along with “their broader implications for humanitarian action.” They make two assumptions prior to carrying out their analysis. First, technologies change the balance of power (institutional power). Second, “although UAV technology may still be relatively primitive, it will evolve and proliferate as a technological paradigm.” To this end, the authors assume that the use of UAVs will “permeate the humanitarian field, and that the drones will be operated not only by states or intergovernmental actors, but also by NGOs.”

The study recognizes that the concept of the “humanitarian drone” is a useful one for military vendors who are urgently looking for other markets given continuing cuts in the US defense budget. “As the UAV industry tries to influence regulators and politicians […] by promoting the UAV as a humanitarian technology,” the co-authors warn that the humanitarian enterprise “risks becoming an important co-constructor of the UAV industry’s moral-economy narrative.” They stress the need for more research on the political economy of the humanitarian UAV.

That being said, while defense contractors are promoting their large surveillance drones for use in humanitarian settings, “a different group of actors—who might be seen as a new breed of ‘techie humanitarians’—have entered the race. Their aim is to develop small drones to conduct SAR [search and rescue] or to provide data about emergencies, as part of the growing field of crisis mapping.” This “micro-UAV” space is the one promoted by the Humanitarian UAV Network (UAViators), not only for imaging but for multi-sensing and payload delivery. Indeed, as “the functions of UAV technologies evolve from relief-site monitoring to carrying cargo, enabling UAVs to participate more directly in field operations, ‘civil UAV technologies will be able to aid considerably in human relief […].”

As UAVs continue collecting more information on disasters and the impact of humanitarian assistance, they will be “part of the ongoing humanitarian challenge of securing, making sense of and maintaining Big Data, as well as developing processes for leveraging credible and actionable information in a reasonable amount of time. At the same time, the humanitarian enterprise is gradually becoming concerned about the privacy implications of surveillance, and the possible costs of witnessing.” This is an area that the Humanitarian UAV Network is very much concerned about, so I hope that Kristin will continue to push for this given that she is also on the Advisory Board of UAViators.

In conclusion, the authors believe that the “focus on weaponized drones fails to capture the transformative potential of humanitarian drones and their possible impact on humanitarian action, and the associated pitfalls.” They also emphasize that “the notion of the humanitarian drone is still an immature concept, forming around an immature technology. It is unclear whether the integration of drones into humanitarian action will be cost-effective, ethical or feasible.” I agree with this but only in part since Kristin and Kjersti do not include small or micro-UAVs in their study. The latter are already being integrated in a cost-effective & ethical manner, which is in line with the Humanitarian UAV Network’s mission.

More research is needed on the role of small-UAVs in the humanitarian space and in particular on the new actors deploying them: from citizen journalists and local, grassroots communities to international humanitarian organizations & national NGOs. Another area ripe for research is the resulting “Big Data” that is likely to be generated by these new data collection technologies.

Note: This blog, written by Patrick Meier (PhD), was originally posted on the website of iRevolution.

ICCM – The Annual Gathering of a Global Digital Village

Guro Åsveen is a master’s student at the University of Stavanger, Department of Societal Safety Science. In the spring of 2014 she will be writing her thesis on humanitarian technology and emergency management in Kenya.



On November 8 this year, one of the most powerful storms ever recorded, typhoon Yolanda (Haiyan), struck the Philippines in a mass of rain, wind and destruction. Reflecting on this ongoing crisis and on the role of technology in humanitarian response, crisis mappers from across the world recently gathered in Nairobi for the annual International Conference of Crisis Mapping (ICCM)[1].

The ICCM 2013 was the fifth conference since the start-up in 2009. Patrick Meier, co-founder of the Crisis Mappers network, gave the opening speech. Commenting on the value of partnerships, Meier cited an old African saying, “It takes a village”, implying that when people work together they can make anything happen. He asked: How can the crisis mapping community best contribute to saving lives in a crisis situation?

Towards a more digitalized response

In the Philippines and elsewhere, the affected communities are undoubtedly the most important part of the response village. When disaster strikes, members of the local communities immediately start to organize help for their friends and neighbours, using the resources already in place. In the crisis literature, this acute phase is known as “the golden hour”, when the chances of saving lives are greatest. The long-standing myths that portray victims of disasters as dysfunctional and helpless are thus proven incorrect. In fact, one study found that nine out of ten lives saved in a crisis are due to local and non-professional helpers[2].

Nonetheless, even if there is no replacement for crucial peer-to-peer assistance during a crisis, the offering of help does not, and should not, stop at the local or even national level. The crisis mappers take a dual approach: on the one hand they seek to engage with other NGOs and traditional humanitarians, while on the other they speak directly to locals on the ground. Through technology and crisis mapping, the volunteer and technical communities (V&TCs) offer tools through which crisis-affected populations can communicate their needs. In practice this means monitoring social media and reading SMS and e-mails from victims during a crisis.
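As a concrete illustration of what that monitoring can involve, here is a minimal, hypothetical sketch of the kind of triage a crisis-mapping deployment performs on incoming text reports. The categories, keywords and messages are all invented; real deployments combine far richer classification with human review.

```python
# Hypothetical sketch of crisis-mapping triage: route incoming
# text reports into the categories a deployment might map.
# Keywords and messages are invented for illustration.

CATEGORIES = {
    "medical":    ["injured", "medicine", "hospital", "clinic"],
    "shelter":    ["roof", "homeless", "evacuated", "shelter"],
    "food_water": ["hungry", "food", "water", "thirsty"],
}

def triage(message: str) -> list[str]:
    """Return every category whose keywords appear in the message."""
    text = message.lower()
    hits = [cat for cat, words in CATEGORIES.items()
            if any(word in text for word in words)]
    return hits or ["needs_review"]  # ambiguous reports go to human review

reports = [
    "Family of 6 evacuated, no shelter since the storm",
    "Need medicine for injured neighbour, clinic destroyed",
    "Road to the village is blocked",
]
for msg in reports:
    print(triage(msg), "<-", msg)
```

Keyword matching like this is deliberately crude; as the “ten per cent technology” refrain discussed below suggests, the bulk of the work lies in the human verification and judgement applied to whatever the filter lets through.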

Serving as an example of a formal partnership between a mapping community and the traditional humanitarian sector, the Digital Humanitarian Network (DHN) was asked to make a map for UN OCHA as part of the preparations for the Yolanda response operation[3]. For OCHA, which holds the difficult task of coordinating international efforts, digital mapping has meant getting access to real-time data and needs assessments without having to be physically present in the affected communities. Although it is debatable whether this off-site positioning is in fact advantageous for information and disaster management, many nevertheless highlight the potential of new technology to bring about alternative solutions to logistical challenges, thereby enabling a more rapid disaster response.

Technology in and out of Africa

Looking at the history of crisis mapping on the African continent, one of the most influential platforms for sharing digital information had its starting point in the aftermath of the 2007 Kenyan presidential election: "Ushahidi", meaning "testimony". The name reflects the role of the citizens and volunteers who gave their testimony of post-electoral violence by sending SMS and posting online what they saw and experienced during that time. Ushahidi has since developed into an innovative and influential digital community to which people can turn either to receive or to share information. Another platform, "Uchaguzi", was launched in preparation for the new election in the spring of 2013, and through extensive mapping of the situation in different parts of Kenya, history was successfully prevented from repeating itself[4].

Another Kenyan mapping project worth mentioning is MapKibera. Kibera, located in Nairobi, is the largest slum in Eastern Africa; with a population of approximately one million inhabitants, it is a prominent part of the city. Mapping is used to identify crime hotspots and as a strategy to empower and build resilience among those most vulnerable. MapKibera is in many ways a great example of how making maps can help bring change to a community: before the project, Kibera did not appear on any maps and was therefore invisible to anyone outside the slum[5].

10 per cent technology, 90 per cent human

One thing we tend to forget when talking about mapping and humanitarian technology is that, however effective these tools may be, they are useless without someone to gather the information, verify it and visualize it for the public or the intended user. The Crisis Mappers network has over 6,000 members from 169 different countries, and the Standby Task Force (SBTF) has approximately a thousand members from 70 different countries. With their variety of nationalities and professional backgrounds, these members constitute a considerable human resource. Crisis mapping, as was stated several times throughout the conference, is only ten per cent about the technology; the rest depends on human effort and judgement.

Concerning the human role in technology, one of the main challenges discussed at the ICCM was how to deal with Big Data. Some challenged the terminology, arguing that there are too many myths and unnecessary concerns attached to the concept of "Big Data". They argued that for most people working with information technology on a daily basis, data is still data: every bit of information speaks to its original source, and that does not change just because more data is shared in a larger format. In short, if the format is too large for us to handle, the problem is not the data but the format.

Others find the biggest challenge to be the gathering of data and how we choose between relevant and irrelevant information. If we do not establish what types of questions are absolutely necessary to ask in a crisis situation, and if we cannot agree on any standards, we may in the future face an escalating problem of information overload and ownership issues related to highly sensitive and/or unverified information.

Many questions remain unanswered: Is there a need to professionalize the crisis mapping community? Should it act as a fully independent actor, or instead work to fulfil the needs of the traditional humanitarian sector? Should the main focus be on entering into formal relationships with already established partners, or more directly on supporting disaster-prone communities and peer-to-peer engagement? Is it possible to make the technology available to a broader audience and thereby narrow the digital divide? Will we be able to use the technology in prevention and disaster risk reduction? How can crisis map technologists support open data while at the same time respecting information that is private or confidential? Should unverified data be published, and on whose authority? Can contributors of information give or withhold consent on their own behalf, or are they simply left having to trust others to make those choices for them?

These are all high-priority questions in the "new age" of humanitarianism. Considering that crisis mapping is still an emerging field, it may take a while for it to find its role and place in the world of humanitarian affairs. The value of partnerships may prove key in coming to terms with both professionalized, traditional response organizations and communities such as the slum inhabitants of Nairobi. In either case, technology, people and collaboration remain equally central to humanitarian efforts.

 


[1] To read more about the conference and the Crisis Mappers network, visit http://crisismappers.net

[2] Cited in the IFRC's World Disasters Report 2013. The full report can be downloaded from http://worlddisastersreport.org

[3] Study the map and read more about the Yolanda response online: http://digitalhumanitarians.com/profiles/blogs/yolanda

[4] Omeneya, R. (2013): Uchaguzi Monitoring and Evaluation Report. Nairobi: iHub Research

[5] To visit the MapKibera website, go to http://mapkibera.org

Killer Robots: the Future of War?

Written by

In September 2013, PRIO and the Norwegian Centre for Humanitarian Studies hosted the breakfast seminar "Killer Robots: the Future of War?". The goal of the seminar was to contribute to the public debate on autonomous weapons and to identify key ethical and legal concerns relating to robotic weapon platforms. The event was chaired by Kristin B. Sandvik (PRIO), and the panellists were Alexander Harang (Director, Fredslaget), Kjetil Mujezinovic Larsen (Professor of Law, Norwegian Centre for Human Rights, UiO) and Tobias Mahler (Postdoctoral Fellow, Norwegian Research Center for Computers and Law, UiO). Based on the panel discussion, the following highlights the prospects of banning autonomous weapons, as well as the legal and ethical challenges raised by current technological developments.

Killer robots and the case against them

As a result of technological advancement, autonomous weapon platforms, or so-called lethal autonomous robots (LARs), may well be on the horizon of future wars. This development, however, raises legal and ethical concerns that need discussion and assessment. Chairing the seminar, Kristin Bergtora Sandvik highlights that such perspectives are absent from current political debates in Norway, and points out that "autonomous weapons might not be at your doorstep tomorrow or next week, but they might be around next month, and we think that it is important that we begin thinking about this, begin understanding what this is actually about, and what the complications are for the future of war."

Killer robots are defined as weapon systems that identify and attack targets without direct human control. As outlined in the Human Rights Watch report Losing Humanity, unmanned robotic weapons can be divided into three categories. First, human-controlled systems, or human-in-the-loop systems, are weapon systems that can independently perform tasks delegated to them, but with a human in the loop; this category constitutes the currently available LAR technology. Second, human-supervised systems, or human-on-the-loop systems, are weapon systems that can conduct targeting processes independently, but remain under the real-time supervision of a human operator who can override their automatic decisions. Third, fully autonomous systems, or human-out-of-the-loop systems, are weapon systems that can search for, identify, select and attack targets without any human control.

Alexander Harang highlights four particular concerns regarding such weapon systems. Firstly, killer robots may lower the threshold for armed conflict; as Harang emphasizes, "it is easier to kill with a joystick than a knife". Secondly, the development, deployment and use of armed autonomous unmanned systems should be prohibited, as machines should not be allowed to make the decision to kill people. Thirdly, the range and deployment of weapons carried by unmanned systems is threatening to other states and should therefore be limited. Fourthly, the arming of unmanned weapon platforms with nuclear weapons should be banned.

In response to these challenges, the Campaign to Stop Killer Robots urgently calls upon the international community to establish an arms control regime to reduce the threat posed by robotic systems. More specifically, the Campaign calls for an international agreement prohibiting fully autonomous weapon platforms. The Campaign is an international coalition of 43 NGOs based in 20 countries, supported by eight international organisations as well as a range of scientists, Nobel laureates and regional and national NGOs. The Campaign has already served as a forum for high-level discussion: so far, 24 states at the UN Human Rights Council have participated in talks, and the Campaign brought its demands further at the 2013 meeting of the Convention on Certain Conventional Weapons (CCW), where more than 20 state representatives participated. Harang emphasizes that "the window of opportunity is open now, and [the issue] should be addressed before the military industrial complex proceeds with further development of these weapon systems."

Finally, Harang notes the difficulties in establishing clear patterns of accountability in war. Who is responsible when a robot kills on the battlefield? Who is accountable in the event of a malfunction in which an innocent civilian is killed? In legal terms, it is unclear where responsibility and accountability lie, and whether this is somewhere in the military chain of command or with the software developer. One thing is certain: the robot cannot be held accountable or be prosecuted if international humanitarian law (IHL) is violated.

 

The legal conundrum

Although unmanned robotic technology is developing rapidly, the laws governing these matters are evolving slowly. In the legal context it is important to assess how autonomous weapon systems conform to existing legislation, be it international humanitarian law, human rights law or general international law. Harang emphasizes that this technology also challenges arms control regimes and the existing disarmament machinery. In particular, it raises concerns with regard to humanitarian law, which requires distinction between civilians and combatants in war. Addressing such legal concerns, Kjetil Mujezinovic Larsen reflects on how fully autonomous weapons can be discussed in light of existing international humanitarian law. Larsen sets out some legal premises for discussing whether such weapons are already illegal and whether or not they should be banned.

Under IHL, autonomous weapon platforms can be either inherently unlawful or potentially unlawful. Such weapons can then be evaluated against two particular principles of IHL, namely those of proportionality and distinction. Inherently unlawful weapons are always prohibited; other weapons are lawful, but might be used in an unlawful manner. Where do autonomous weapons fit?

Larsen explains that unlawful weapons are weapons that, by construction, cause superfluous injury or unnecessary suffering, such as chemical and biological weapons. As codified under IHL, such weapons are unlawful with regard to the principle of proportionality, for the protection of combatants. This prohibition does not immediately apply to autonomous weapons, because it is concerned with the effect of a weapon on the targeted individual, not with the manner of engagement, and the concern with autonomous weapons lies precisely in the way they are deployed. If, however, autonomous weapons were used to deploy chemical, biological or nuclear weapons, then they would clearly be unlawful.

Furthermore, as outlined in IHL, any armed attack must be directed at a military target. This is to ensure that the attack distinguishes between civilians and combatants. If a weapon is incapable of making that discrimination, it is inherently unlawful. Because robots are unable to discriminate between civilians and combatants, using them would imply uncontrollable effects. Such weapons are thus incapable of complying with the principle of distinction, which is fundamental to international humanitarian law.

Human Rights Watch's Losing Humanity report states that "An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear to be incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity". However, as Christof Heyns states in his report to the Human Rights Council, "it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements [.]"

As Larsen highlights, the question of compliance is highly controversial in the legal sphere. From one legal viewpoint, the threshold for prohibiting weapons is rather high: hard-core IHL lawyers will say that a prohibition applies only if there are no circumstances whatsoever under which an autonomous weapon can be used lawfully. For example, there are defensive autonomous weapons programmed to destroy incoming missiles, and autonomous weapons are also used to target military objectives in remote areas where there is no civilian involvement. Under these circumstances, autonomous weapons do not face the problem of distinction and discrimination. However, the presumption of civilian status in IHL holds that in case of doubt as to whether an individual is a combatant or a civilian, he or she should be treated as a civilian. Will technology be able to make such assessments and take precautions to avoid civilian casualties? How can an autonomous weapon be capable of doubt, and act on doubt?

In addition to such legal concerns, Larsen also discusses a range of ethical and societal concerns. Some argue that autonomous weapons will make it easier to wage war, because there is less risk of death and injury to one's own soldiers. Such technology could also make it easier for authoritarian leaders to suppress their own people, because the risk of a military coup is reduced. Furthermore, using autonomous weapons increases the distance between the soldier and the battlefield, rendering human emotions and ethical considerations irrelevant. The nature of warfare would change, as robots cannot show compassion or mercy.

On the other hand, some scholars argue that such weapons may be advantageous in terms of IHL. Soldiers, under psychological pressure and steered by emotions, can choose to disobey IHL. An autonomous weapon would have no reason or capacity to snap, and robots may achieve military goals with less violence. This rests on the fact that soldiers may kill in order to avoid being killed; as robots would not face such a dilemma, it could be easier for them to capture, rather than kill, the enemy.

Potentially, autonomous weapons could make the use of violence more precise, leading to less damage and risk for civilians; this, however, would require substantial development of the software. Throughout history, weapons have always been passive tools that humans actively manipulate to achieve a certain purpose. Larsen suggests that if active manipulation is taken out of the equation, perhaps autonomous weapons cannot be considered weapons in the IHL sense, and perhaps IHL as such is insufficient to resolve the legal disputes about LARs. This would call for the establishment of new laws and regulations to address the issue of accountability. Alternatively, a ban could resolve the dispute over their unlawfulness by constituting them as inherently unlawful. Regardless, Larsen emphasizes the urgent need for a comprehensive and clear legal framework, particularly given the rapid technological development in this field. Larsen also notes that lawyers have to defer to technology experts on the question of whether such technology can comply with current legal frameworks.

 

Technological determinism?

Given the pace of technological advancement, Tobias Mahler argues that it is realistic to expect automated and autonomous technology to be implemented in all spheres of society in the near future. In this context, how realistic is a ban on killer robots? Mahler views the chances as slim, and foresees a technological domino effect: once some states acquire autonomous robots, other states are expected to follow. From a technological and military perspective, the incentives for doing so are fairly strong.

In addition to the conventional features of LARs, such as surveillance equipment, robustness and versatility, robots can also be programmed to communicate with each other. This would mean programming different vehicles to share and exploit the information they collect, advancing the strategic approach to finding and attacking targets. Such machine-to-machine communication is already used in civilian technology such as autonomous vehicles, and is also assumed to be in use in the military complex. Such development and advancement of military technology is not presented to the public, for strategic and security reasons. The technological opportunities LARs offer the military sector are thus immense.

Mahler emphasizes that although the military hardware may look frightening, the real threat lies in the software algorithms determining the decisions that are made. It is the software that controls the hardware and makes decisions concerning human lives. Robots rely on human specifications of what to do, given through software, and because there are limits to what programmers can specify, software development is prone to shortcomings and challenges. How do we deal with the artificial intelligence of autonomous robots?

Software malfunctions, as well as hacking, are problems in all spheres where technology is used, and in a future pervaded by technology any device could potentially harm civilians. In this context, Mahler suggests that there is still no full clarity as to what a killer robot is. Questioning the relative lethality of autonomous weapons, he suggests that "in 20 years, when everything will be autonomous, you might be killed by a door." However, he points out that the concerns related to autonomous weapon systems should not be ignored or avoided; his argument simply shows that such challenges are present in both the civilian and the military context. Nevertheless, it remains unclear who the responsible party would be when killer robots are used.

Other concerns raised by Mahler regard whether LAR technology differs from other types of weapon technology and may change the nature of war. In a war situation, would soldiers prefer to be attacked by another soldier or by a killer robot? How will the dehumanization of war affect soldiers and the public? Is it correct to assume that soldiers would prefer to fight other soldiers? A soldier in a combat situation can make ethical judgements and show mercy, unlike a robot; however, there is little evidence to suggest that mercy is commonly shown by soldiers. On the other hand, governments could gain great public support by promoting LARs as a means of limiting the loss of soldiers. As Mahler states, "people are really concerned about loss of lives of their soldiers, and if there is any way to protect them, then one might go that way."

One of the questions that remains unanswered is whether software developers are able to write software sufficiently advanced for autonomous war machines. One way of dealing with such concerns would be to develop robots that comply with IHL. Mahler ponders whether a pre-emptive ban may already be too late in light of current technological development; perhaps the aim should instead be to regulate robots and artificial intelligence so that they comply with current legislation.

In this regard, Mahler points out the need to further develop the current conceptual framework of war and the law of armed conflict, as the concepts currently used in IHL may be insufficient for the future of war. For instance, in a situation where robots are fighting robots, who is considered a combatant under IHL? Is it the software programmer, or the president who decided to send out the killer robot? Future technology could perhaps distinguish between civilians and combatants using face recognition or iris scans. For now, however, this issue remains unresolved.

Regardless of technological inevitability, further discussion of this issue is necessary. Legal, ethical and societal challenges must be identified, and the means to solve them must be specified. Addressing these issues is important in order to curb unintended humanitarian consequences in the future. Perhaps those consequences can be avoided through a ban on LAR systems, or perhaps the current concepts of IHL need to be broadened in order to tackle legal shortcomings. Maybe software developers will one day be able to write programs that comply with IHL. In any case, it is important to discuss and address these issues based on the knowledge and tools we presently have. The future of war is not yet determined.

Literature:

United Nations General Assembly – Human Rights Council (2013) “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns”. Available at http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Human Rights Watch (2012) "Losing Humanity: The Case against Killer Robots". Available at http://www.hrw.org/node/111291/section/1

Campaign to Stop Killer Robots (2013) "Who we are". Available at http://www.stopkillerrobots.org/coalition

The complete video of the "Killer Robots: the Future of War?" seminar is available here.
