
From Principle to Practice: Humanitarian Innovation and Experimentation


Humanitarian organizations have an almost impossible task: They must balance the imperative to save lives with the commitment to do no harm. They perform this balancing act amidst chaos, with incredibly high stakes and far fewer resources than they need. It’s no wonder that new technologies that promise to do more with less are so appealing.

By now, we know that technology can introduce bias, insecurity, and failure into systems. We know it is not an unalloyed good. What we often don’t know is how to measure the potential for those harms in the especially fragile contexts where humanitarians work. Without the tools or frameworks to evaluate the credibility of new technologies, it’s hard for humanitarians to know whether they’re having the intended impact and to assess the potential for harm. Introducing untested technologies into unstable environments raises an essential question: When is humanitarian innovation actually human subjects experimentation?

Humanitarians’ use of new technologies (including biometric identification to register refugees for relief, commercial drones to deliver cargo in difficult areas, and big data-fueled algorithms to predict the spread of disease) increasingly looks like the type of experimentation that drove the creation of human subjects research rules in the mid-20th century. In both eras, Western interests used untested approaches on African and Asian populations with limited consent and even less recourse. Today’s digital humanitarians may be innovators, but each new technology raises the specter of new harms, including biasing the allocation of public resources by prioritizing predictions over needs assessments, introducing coordination and practical failures through unique indicators and incompatible databases, and exposing both humanitarians and their growing list of partners to significant legal risks.

For example, one popular humanitarian innovation uses big data and algorithms to build predictive epidemiological models. In the immediate aftermath of the 2014 Ebola outbreak in West Africa, a range of humanitarian, academic, and technology organizations called for access to mobile network operators’ databases to track and model the disease. Several organizations got access to those databases—which, it turns out, was both illegal and ineffective. It violated the privacy of millions of people in contravention of domestic regulation, regional conventions, and international law. And Ebola is a hemorrhagic fever, which requires the exchange of bodily fluids to transmit—behavior that isn’t captured in call detail records. More importantly, the resources that should have gone into saving lives and building the facilities necessary to treat the disease instead went to technology.
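To make concrete why call detail records are a poor proxy for transmission, consider what such a record actually contains. The sketch below is purely illustrative: the field names are assumptions about a generic record format, not any operator’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """Illustrative fields of a generic call detail record (CDR).

    Field names are hypothetical; real operator schemas vary.
    """
    caller_id: str        # anonymized subscriber identifier
    callee_id: str        # anonymized counterpart identifier
    started_at: datetime  # when the call or SMS was placed
    duration_s: int       # call length in seconds (0 for an SMS)
    cell_tower_id: str    # tower that handled the call: coarse location only

# Everything a CDR encodes is communication metadata and coarse location.
# Nothing records physical proximity, let alone the exchange of bodily
# fluids that Ebola transmission requires, so models built on CDRs can at
# best approximate population movement, not contact or transmission events.
example = CallDetailRecord(
    caller_id="A123",
    callee_id="B456",
    started_at=datetime(2014, 10, 1, 14, 30),
    duration_s=75,
    cell_tower_id="TOWER-0042",
)
print(example)
```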

Without functioning infrastructure, institutions, or systems to coordinate communication, technology fails just like anything else. And yet these are exactly the contexts in which humanitarian innovation organizations introduce technology, often without the tools to measure, monitor, or correct the failures that result. In many cases, these failures are endured by populations already under tremendous hardship, with few ways to hold humanitarians accountable.

Humanitarians need both an ethical and evidence-driven human experimentation framework for new technologies. They need a structure parallel to the guidelines created in medicine, which put in place a number of practical, ethical, and legal requirements for developing and applying new scientific advancements to human populations.

The Medical Model

“Human subjects research,” the term of art for human experimentation, comes from medicine, though it is increasingly applied across disciplines. Medicine created some of the first ethical codes in the late 18th and early 19th centuries, but the modern era of human subjects research protections started in the aftermath of World War II, evolving with the Helsinki Declaration (1975), the Belmont Report (1978), and the Common Rule (1981). These rules established proportionality, informed consent, and ongoing due process as conditions of legal human subjects research. Proportionality refers to the idea that an experiment should balance the potential harms with the potential benefit to participants. Informed consent requires that subjects understand the context and the process of the experiment prior to agreeing to participate. And due process, here, refers to a bundle of principles, including assessing subjects’ needs equally, preserving subjects’ ability to withdraw from a study, and continuously assessing whether an experiment’s methods are proportionate to its potential outcomes.

These standards defined the practice of human subjects research for much of the rest of the world and are essential for protecting populations from mistreatment by experimenters who undervalue their well-being. But they come from the medical industry, which relies on established infrastructure that less-defined fields, such as technology and humanitarianism, lack, and that gap limits their applicability.

The medical community’s human subjects research rules clearly differentiate between research and practice based on the intention of the researcher or practitioner. If the goal is to learn, an intervention is research. If the goal is to help the subject, it’s practice. Because it comes from science, human subjects research law doesn’t contemplate that an activity would use a method without researching it first. The distinction between research and practice has always been controversial, but it gets especially blurry when applied to humanitarian innovation, where the intention is both to learn and to help affected populations.

The Belmont Report, a summary of ethical principles and guidelines for human subjects research, defines practice as interventions “designed solely to enhance the well-being of a client or patient and that have a reasonable expectation of success” (emphasis added). This differs from humanitarian practice in two major ways: First, there is no direct fiduciary relationship between humanitarians and those they serve, so humanitarians may prioritize groups or collective well-being over the interests of individuals. Second, humanitarians have no way to evaluate the reasonableness of their expectation of success. In other words, the assumptions embedded in human subjects research protections don’t clearly map to the relationships or activities involved in humanitarian response. As a result, these conventions offer humanitarian organizations neither clear guidance nor the types of protections that exist for well-regulated industrial experimentation.

In addition, human subjects research rules are set up so that interventions are judged on their potential for impact. Essentially, the higher the potential impact on human lives, the more important it is to obtain informed consent, conduct ethical review, and allow subjects to extricate themselves from the experiment. Unfortunately, in humanitarian response, the impacts are always high, and it’s almost impossible to isolate the effects generated by a single technology or intervention. Even where establishing consent is possible, disasters don’t lend themselves to consent frameworks, because refusing to participate can mean refusing life-saving assistance. In law, consent agreements made under life-threatening circumstances are called contracts of adhesion and aren’t valid. The result is that humanitarian innovation faces fundamental challenges both in knowing how to deploy ethical experimentation frameworks and in implementing the protections they require.

First Steps

The good news is that existing legal and ethical frameworks lay a strong foundation. As Jacob Metcalf and Kate Crawford lay out in a 2016 paper, there are significant enough similarities between biomedical and big data research to develop new human subjects research rules. This January, the United States expanded the purview of the Common Rule to govern human subjects research funded by 16 federal departments and agencies. Despite their gaps, human subjects research laws go a long way toward establishing legally significant requirements for consent, proportionality, and due process—even if they don’t yet directly address humanitarian organizations.

Human rights-based approaches such as the Harvard Humanitarian Initiative’s Signal Code go further, adapting human rights to digital humanitarian practice. But, like most rights frameworks, they rely on public infrastructure to ratify, harmonize, and operationalize them. There are proactive efforts to set industry-focused standards and guidelines, such as the World Humanitarian Summit’s Principles for Ethical Humanitarian Innovation and the Digital Impact Alliance’s Principles for Digital Development. And, of course, there are technology-centric efforts beginning to establish ethical use standards for specific technologies—like biometric identification, drones, and big data—that offer specific guidance but include incentives that may not be relevant in the humanitarian context.

That said, principles aren’t enough—we’re now getting to the hard part: building systems that actualize and operationalize our values. We don’t need to decide the boundaries of innovation or humanitarianism as industries to begin developing standards of practice. We don’t need to ratify an international convention on technology use to begin improving procurement requirements, developing common indicators of success for technology use, or establishing research centers capable of testing the applicability of new approaches in difficult and unstable environments. A wide range of industries are beginning to invest in legal, organizational, and technological approaches to building trust—all of which offer additional, practical steps forward.

For humanitarians, as always, the stakes are high. The mandate to intervene comes with the responsibility to know how to do better. Humanitarians hold themselves and their work to a higher standard than almost any other field in the world. They must now apply the same rigor to the technologies and tools they use.


This post originally appeared on the blog of Stanford Social Innovation Review.

Humanitarian experimentation


Humanitarian actors, faced with ongoing conflict, epidemics, famine and a range of natural disasters, are increasingly being asked to do more with less. The international community’s commitment of resources has not kept pace with these expectations or with the growing number of crises around the world. Some humanitarian organizations are trying to bridge this disparity by adopting new technologies—a practice often referred to as humanitarian innovation. This blog post, building on a recent article in the ICRC Review, asserts that humanitarian innovation is often human experimentation without accountability, which may both cause harm and violate some of humanitarians’ most basic principles.

While many elements of humanitarian action are uncertain, there is a clear difference between using proven approaches to respond in new contexts and using wholly experimental approaches on populations at the height of their vulnerability. This is also not the first generation of humanitarian organizations to test new technologies or approaches in the midst of disaster. Our article draws upon three timely examples of humanitarian innovations, which are expanding into the mainstream of humanitarian practice without clear assessments of potential benefits or harms.

Cargo drones, for one, have been presented as a means to help deliver assistance to places that aid agencies otherwise find difficult, and sometimes impossible, to reach. Biometrics is another example: it is said to speed up cumbersome registration processes, thereby allowing faster access to aid for people in need (who can only receive assistance upon registration). And, in the response to the 2014 outbreak of Ebola in West Africa, data modelling based on mobile phone records was seen as a way to track and predict the spread of the disease. In each of these cases, technologies with great promise were deployed in ways that risked, distorted and/or damaged the relationships between survivors and responders.

These examples illustrate the need for investment in ethics and in evidence about the impact of developing and applying new technologies in humanitarian response. It is incumbent on humanitarian actors to understand both the opportunities that new technologies offer and the potential harms they may present—not only during the response, but long after the emergency ends. The balance is between, on the one hand, working to identify new and ‘innovative’ ways of addressing some of the challenges that humanitarian actors confront and, on the other hand, the risk of introducing new technological ‘solutions’ in ways that resemble ‘humanitarian experimentation’ (as explained in the article). The latter carries with it the potential for various forms of harm, not only to those whom humanitarian actors are tasked to protect, but also to humanitarian actors themselves, in the form of legal liability, loss of credibility and operational inefficiency. Without open and transparent validation, it is impossible to know whether humanitarian innovations are solutions or threats in themselves. Aid agencies must not only be extremely attentive to this balance, but should also do their utmost to avoid harmful outcomes.

Framing aid projects as ‘innovative’, rather than ‘experimental’, avoids explicit acknowledgment that these tools are untested, understating the risks these approaches may pose and sidestepping the extensive body of laws that regulate human trials. Facing enormous pressure to act and ‘do something’ in view of contemporary humanitarian crises, a specific logic seems to have gained prominence in the humanitarian community, a logic that conflicts with the risk-taking standards that prevail under normal circumstances. The use of untested approaches in uncertain and challenging humanitarian contexts introduces risks that do not necessarily serve humanitarian principles. In fact, it may even conflict with the otherwise widely adhered-to Do No Harm principle. Failing to test these technologies, or even to explicitly acknowledge that they are untested, prior to deployment raises significant questions about both the ethics and the evidence requirements implicit in the unique license afforded to humanitarian responders.

In Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation, we contextualize humanitarian experimentation—providing a history, examples of current practice, a taxonomy of potential harms and an analysis against the core principles of the humanitarian enterprise.

***

Kristin Bergtora Sandvik, SJD Harvard Law School, is a Research Professor at the Peace Research Institute Oslo and a Professor of Sociology of Law at the University of Oslo. Her widely published socio-legal research focuses on technology and innovation, forced displacement and the struggle for accountability in humanitarian action. Most recently, Sandvik co-edited UNHCR and the Struggle for Accountability (Routledge, 2016), with Katja Lindskov Jacobsen, and The Good Drone (Routledge, 2017).

Katja Lindskov Jacobsen, PhD International Relations Lancaster University, is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. She is an international authority on the issue of humanitarian biometrics and security dimensions and is the author of The Politics of Humanitarian Technology (Routledge, 2015). Her research has also appeared in Citizenship Studies, Security Dialogue, Journal of Intervention & Statebuilding, and African Security Review, among others.

Sean Martin McDonald, JD/MA American University, is the CEO of FrontlineSMS and a Fellow at Stanford’s Digital Civil Society Lab. He is the author of Ebola: A Big Data Disaster, a legal analysis of the way that humanitarian responders use data during crises. His work focuses on building agency at the intersection of digital spaces, using technology, law and civic trusts.
