
From Principle to Practice: Humanitarian Innovation and Experimentation

Written by

Humanitarian organizations have an almost impossible task: They must balance the imperative to save lives with the commitment to do no harm. They perform this balancing act amidst chaos, with incredibly high stakes and far fewer resources than they need. It’s no wonder that new technologies that promise to do more with less are so appealing.

By now, we know that technology can introduce bias, insecurity, and failure into systems. We know it is not an unalloyed good. What we often don’t know is how to measure the potential for those harms in the especially fragile contexts where humanitarians work. Without the tools or frameworks to evaluate the credibility of new technologies, it’s hard for humanitarians to know whether they’re having the intended impact and to assess the potential for harm. Introducing untested technologies into unstable environments raises an essential question: When is humanitarian innovation actually human subjects experimentation?

Humanitarians’ use of new technologies (including biometric identification to register refugees for relief, commercial drones to deliver cargo in difficult areas, and big data-fueled algorithms to predict the spread of disease) increasingly looks like the type of experimentation that drove the creation of human subjects research rules in the mid-20th century. In both cases, Western interests used untested approaches on African and Asian populations with limited consent and even less recourse. Today’s digital humanitarians may be innovators, but each new technology raises the specter of new harms, including skewing public resources toward predictions rather than needs assessments, introducing coordination and practical failures through unique indicators and incompatible databases, and exposing both humanitarians and their growing list of partners to significant legal risk.

For example, one popular humanitarian innovation uses big data and algorithms to build predictive epidemiological models. In the immediate aftermath of the late 2014 Ebola outbreak in West Africa, a range of humanitarian, academic, and technology organizations called for access to mobile network operators’ databases to track and model the disease. Several organizations got access to those databases—which, it turns out, was both illegal and ineffective. It violated the privacy of millions of people in contravention of domestic regulation, regional conventions, and international law. Ebola is a hemorrhagic fever that requires the exchange of bodily fluids to transmit—behavior that isn’t captured in call detail records. More importantly, the resources that should have gone into saving lives and building the facilities necessary to treat the disease instead went to technology.

Without functioning infrastructure, institutions, or systems to coordinate communication, technology fails just like anything else. And yet these are exactly the contexts in which humanitarian innovation organizations introduce technology, often without the tools to measure, monitor, or correct the failures that result. In many cases, these failures are endured by populations already under tremendous hardship, with few ways to hold humanitarians accountable.

Humanitarians need both an ethical and evidence-driven human experimentation framework for new technologies. They need a structure parallel to the guidelines created in medicine, which put in place a number of practical, ethical, and legal requirements for developing and applying new scientific advancements to human populations.

The Medical Model

“Human subjects research,” the term of art for human experimentation, comes from medicine, though it is increasingly applied across disciplines. Medicine created some of the first ethical codes in the late 18th and early 19th centuries, but the modern era of human subjects research protections started in the aftermath of World War II, evolving with the Helsinki Declaration (1975), the Belmont Report (1978), and the Common Rule (1981). These rules established proportionality, informed consent, and ongoing due process as conditions of legal human subjects research. Proportionality refers to the idea that an experiment should balance the potential harms with the potential benefits to participants. Informed consent requires that subjects understand the context and process of the experiment prior to agreeing to participate. And due process, here, refers to a bundle of principles, including the “equal” assessment of subjects’ needs, subjects’ ability to withdraw from a study, and the continuous assessment of whether an experiment’s methods remain in balance with its potential outcomes.

These standards defined the practice of human subjects research for much of the rest of the world and are essential for protecting populations from mistreatment by experimenters who undervalue their well-being. But they come from the medical industry, which relies on established infrastructure that less-defined industries, such as technology and humanitarianism, lack; that gap limits their applicability.

The medical community’s human subjects research rules clearly differentiate between research and practice based on the intention of the researcher or practitioner. If the goal is to learn, an intervention is research. If the goal is to help the subject, it’s practice. Because it comes from science, human subjects research law doesn’t contemplate that an activity would use a method without researching it first. The distinction between research and practice has always been controversial, but it gets especially blurry when applied to humanitarian innovation, where the intention is both to learn and to help affected populations.

The Belmont Report, a summary of ethical principles and guidelines for human subjects research, defines practice as interventions that are “designed solely to enhance the well-being of a client or patient and that have a reasonable expectation of success” (emphasis added). This differs from humanitarian practice in two major ways: First, there is no direct fiduciary relationship between humanitarians and those they serve, and so humanitarians may prioritize groups or collective well-being over the interests of individuals. Second, humanitarians have no way to evaluate the reasonableness of their expectation of success. In other words, the assumptions embedded in human subjects research protections don’t clearly map to the relationships or activities involved in humanitarian response. As a result, these conventions offer humanitarian organizations neither clear guidance nor the types of protections that exist for well-regulated industrial experimentation.

In addition, human subjects research rules are set up so that interventions are judged on their potential for impact. Essentially, the higher the potential impact on human lives, the more important it is to obtain informed consent, conduct ethical review, and allow subjects to withdraw from the experiment. Unfortunately, in humanitarian response, the impacts are always high, and it’s almost impossible to isolate the effects generated by a single technology or intervention. Even where establishing consent is possible, disasters don’t lend themselves to consent frameworks, because refusing to participate can mean refusing life-saving assistance. In law, consent agreements made under life-threatening circumstances are called contracts of adhesion and aren’t valid. The result is that humanitarian innovation faces fundamental challenges in knowing how to deploy ethical experimentation frameworks and in implementing the protections they require.

First Steps

The good news is that existing legal and ethical frameworks lay a strong foundation. As Jacob Metcalf and Kate Crawford lay out in a 2016 paper, there are significant enough similarities between biomedical and big data research to develop new human subjects research rules. This January, the United States expanded the purview of the Common Rule to govern human subjects research funded by 16 federal departments and agencies. Despite their gaps, human subjects research laws go a long way toward establishing legally significant requirements for consent, proportionality, and due process—even if they don’t yet directly address humanitarian organizations.

Human rights-based approaches such as the Harvard Humanitarian Initiative’s Signal Code go further, adapting human rights to digital humanitarian practice. But, like most rights frameworks, they rely on public infrastructure to ratify, harmonize, and operationalize them. There are proactive efforts to set industry-focused standards and guidelines, such as the World Humanitarian Summit’s Principles for Ethical Humanitarian Innovation and the Digital Impact Alliance’s Principles for Digital Development. And, of course, there are technology-centric efforts beginning to establish ethical use standards for specific technologies—like biometric identification, drones, and big data—that offer specific guidance but include incentives that may not be relevant in the humanitarian context.

That said, principles aren’t enough—we’re now getting to the hard part: building systems that actualize and operationalize our values. We don’t need to decide the boundaries of innovation or humanitarianism as industries to begin developing standards of practice. We don’t need to ratify an international convention on technology use to begin improving procurement requirements, developing common indicators of success for technology use, or establishing research centers capable of testing for applicability of new approaches to difficult and unstable environments. A wide range of industries are beginning to invest in legal, organizational, and technological approaches to building trust—all of which offer additional, practical steps forward.

For humanitarians, as always, the stakes are high. The mandate to intervene comes with the responsibility to know how to do better. Humanitarians hold themselves and their work to a higher standard than almost any other field in the world. They must now apply the same rigor to the technologies and tools they use.


This post originally appeared on the blog of Stanford Social Innovation Review.

New PRIO Policy Brief – Colombia as a Case for Rights-Based Approaches

Given the transient nature of humanitarian assistance, durable solutions for forced displacement and exit strategies for humanitarian actors require careful engagement with a host state. This highlights a central challenge for the humanitarian sector: how to relate to states? Drawing on a case study of the Norwegian Refugee Council’s use of rights-based approaches (RBA) in Colombia, Julieta Lemaitre (Universidad de los Andes) suggests in a new PRIO Policy Brief that the importance of RBA in humanitarian aid lies in fostering the mutual dependency between beneficiaries and states. In this way, the humanitarian actor’s work focuses on enhancing state capacity to provide for its citizens, as well as on supporting and empowering community engagement. This is particularly relevant when humanitarian aid is provided in the context of relatively strong states that themselves have the capacity to provide assistance.

You can read the policy brief here.

End impunity! Reducing conflict-related sexual violence to a problem of law

Written by

In our recent article, End impunity! Reducing conflict-related sexual violence to a problem of law, we question the taken-for-granted center-stage position of international criminal justice in international policy responses to conflict-related sexual violence. We address how central policy and advocacy actors explain such violence and its consequences for targeted individuals in order to promote and strengthen the fight against impunity. With the help of analytical tools provided by framing theory, we show how the UN Security Council and Human Rights Watch construct a simplistic understanding of conflict-related sexual violence, including a clear and short causal chain and checkbox solutions, in order to get their message and call for action across to wider audiences and constituencies. This narrowing down of complexity serves important purposes, in that it creates opportunities for action in a field within which ‘the urge to do something’ has taken a particularly strong hold.

However, by framing conflict-related sexual violence as first and foremost a criminal – and individualized – act, the multilayered, complex, social, and collective harm that it also constitutes is increasingly peeled away from understandings of the problem. This narrative about conflict-related sexual violence and its solution resonates and gains support because of its simplicity. It reduces sexual violence to clear-cut categories of rational, individual, and evil perpetrators and powerless, broken victims – ideal causality on the one hand, massive suffering in need of legal catharsis on the other; in short, to a problem against which something can be done. The individualization of guilt corresponds poorly, however, to the collective crime and the structural explanations that academic theories about conflict-related sexual violence underscore. Thus, the cost of the simplistic narrative is that the phenomenological understanding is separated from its enabling social structures, including the collective out of which the phenomenon arises. Moreover, the deterrence rationale upon which the call for criminal prosecutions is based carries limited empirical weight.

We therefore call for a more precise recognition of what criminal law can and cannot do about conflict-related sexual violence, and hold that the problem with the focus on ending impunity is not that it is an irrelevant task, but that it is not the solution its proponents claim it to be. Paralleling domestic criticism of carceral feminism, we see a need for greater attention to the political, economic, and gendered inequalities and structures within which sexual violence takes place. Conflict-related sexual violence is indeed part of a repertoire of illegitimate warfare, and a reaction to the chaotic, desperate, and demoralizing experiences that war brings with it, but it is also the result of gendered hierarchies, subordination, and poverty, and of a continuum of violence that cuts across war and peace.

It is important to recognize the narrative processes at work that keep favoring criminal law – and to question whose voices and what stories matter, what reality “fits,” and what complexities are lost. This is important not because criminal law is inherently bad – but because conflict-related sexual violence is not a problem that can be exclusively solved in the court room.

This post first appeared on the blog of Law & Society Review.

Humanitarian experimentation

Written by Kristin Bergtora Sandvik, Katja Lindskov Jacobsen and Sean Martin McDonald

Humanitarian actors, faced with ongoing conflict, epidemics, famine and a range of natural disasters, are increasingly being asked to do more with less. The international community’s commitment of resources has not kept pace with expectations or with the growing crises around the world. Some humanitarian organizations are trying to bridge this disparity by adopting new technologies—a practice often referred to as humanitarian innovation. This blog post, building on a recent article in the ICRC Review, asserts that humanitarian innovation is often human experimentation without accountability, which may both cause harm and violate some of humanitarians’ most basic principles.

While many elements of humanitarian action are uncertain, there is a clear difference between using proven approaches to respond in new contexts and using wholly experimental approaches on populations at the height of their vulnerability. This is also not the first generation of humanitarian organizations to test new technologies or approaches in the midst of disaster. Our article draws upon three timely examples of humanitarian innovations, which are expanding into the mainstream of humanitarian practice without clear assessments of potential benefits or harms.

Cargo drones, for one, have been presented as a means to help deliver assistance to places that aid agencies otherwise find difficult, and sometimes impossible, to reach. Biometrics is another example: it is said to speed up cumbersome registration processes, thereby allowing faster access to aid for people in need (who can only receive assistance upon registration). And, in the 2014 outbreak of Ebola in West Africa, data modelling was seen as a way to help the response. In each of these cases, technologies with great promise were deployed in ways that put at risk, distorted and/or damaged the relationships between survivors and responders.

These examples illustrate the need for investment in ethics and evidence around the development and application of new technologies in humanitarian response. It is incumbent on humanitarian actors to understand both the opportunities presented by new technologies and the potential harms they may pose—not only during the response, but long after the emergency ends. The balance is between, on the one hand, working to identify new and ‘innovative’ ways of addressing some of the challenges that humanitarian actors confront and, on the other hand, the risk of introducing new technological ‘solutions’ in ways that resemble ‘humanitarian experimentation’ (as explained in the article). The latter carries with it the potential for various forms of harm. This risk of harm extends not only to those whom humanitarian actors are tasked to protect, but also to humanitarian actors themselves, in the form of legal liability, loss of credibility and operational inefficiency. Without open and transparent validation, it is impossible to know whether humanitarian innovations are solutions or threats in themselves. Aid agencies must not only be extremely attentive to this balance, but should also do their utmost to avoid harmful outcomes.

Framing aid projects as ‘innovative’, rather than ‘experimental’, avoids explicitly acknowledging that these tools are untested, understating the risks these approaches may pose and sidestepping the extensive body of law that regulates human trials. Facing enormous pressure to act and ‘do something’ in view of contemporary humanitarian crises, a specific logic seems to have gained prominence in the humanitarian community, a logic that conflicts with the risk-taking standards that prevail under normal circumstances. The use of untested approaches in uncertain and challenging humanitarian contexts creates risks that do not necessarily bolster humanitarian principles. In fact, it may even conflict with the otherwise widely adhered-to Do No Harm principle. Failing to test these technologies, or even to explicitly acknowledge that they are untested, prior to deployment raises significant questions about both the ethics and the evidence requirements implicit in the unique license afforded to humanitarian responders.

In Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation, we contextualize humanitarian experimentation—providing a history, examples of current practice, a taxonomy of potential harms and an analysis against the core principles of the humanitarian enterprise.

***

Kristin Bergtora Sandvik, SJD Harvard Law School, is a Research Professor at the Peace Research Institute Oslo and a Professor of Sociology of Law at the University of Oslo. Her widely published socio-legal research focuses on technology and innovation, forced displacement and the struggle for accountability in humanitarian action. Most recently, Sandvik co-edited UNHCR and the Struggle for Accountability (Routledge, 2016), with Katja Lindskov Jacobsen, and The Good Drone (Routledge, 2017).

Katja Lindskov Jacobsen, PhD International Relations Lancaster University, is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. She is an international authority on humanitarian biometrics and its security dimensions and is the author of The Politics of Humanitarian Technology (Routledge, 2015). Her research has also appeared in Citizenship Studies, Security Dialogue, Journal of Intervention & Statebuilding, and African Security Review, among others.

Sean Martin McDonald, JD/MA American University, is the CEO of FrontlineSMS and a Fellow at Stanford’s Digital Civil Society Lab. He is the author of Ebola: A Big Data Disaster, a legal analysis of the way that humanitarian responders use data during crises. His work focuses on building agency at the intersection of digital spaces, using technology, law and civic trusts.

Call for Papers: Oslo Migration Conference 2018

The Interdisciplinary Conference on Migration: Vulnerability, Protection, and Agency, to be held in Oslo 24-25 May 2018, has just announced its call for papers, due 15 December.

Under the auspices of the University of Oslo, Faculty of Law, the conference is organized by the research group on International Law and Governance in collaboration with the research group on Human Rights, Armed Conflict, and Law of Peace and Security at the University of Oslo, and Peace Research Institute Oslo (PRIO).

Moving beyond the abyss

Migration presents a challenge to practitioners, policymakers and academics alike, given the rise of extra-territorial approaches, increased reliance on technology, and the weakening of accountability for violations of rights.

We call for papers proposing how to move beyond the abyss, welcoming perspectives from law and the social sciences (including geography, anthropology, sociology, criminology, and international relations). Interdisciplinary approaches are encouraged. We welcome proposals from scholars, policymakers, and practitioners at different stages of their careers, including PhD candidates, post-docs, and professors. Proposals for a poster session will also be evaluated.

Topics may include: 

  • Accountability mechanisms for extra-territorial action by States, IOs, NGOs, and Corporate Actors
  • Towards de-construction of walls (literal and metaphorical)
  • New interpretations for protection standards across or beyond normative regimes

Read more here.

Sandvik, Jacobsen and McDonald publish new article in International Review of the Red Cross

In their newly published article Do no harm: A taxonomy of the challenges of humanitarian experimentation in the International Review of the Red Cross, Kristin Bergtora Sandvik (PRIO/UiO), Katja Lindskov Jacobsen (University of Copenhagen) and Sean Martin McDonald (FrontlineSMS) explore the notion of ‘humanitarian experimentation’. Whether through innovation or uncertain contexts, managing risk is a core component of the humanitarian initiative—but, as they note, not all risk is created equal. There is a stark ethical and practical difference between managing risk and introducing it, which other fields mitigate through experimentation rules and regulation. The article identifies and historically contextualizes the concept of humanitarian experimentation, which is increasingly pertinent as a range of humanitarian sub-fields embark on projects of digitization and privatization. This trend is illustrated through three contemporary examples of humanitarian innovation (biometrics, data modeling, cargo drones), with reference to critical questions about adherence to the humanitarian ‘do no harm’ imperative. The article outlines a broad taxonomy of harms, intended to serve as the starting point for a more comprehensive conversation about humanitarian action and the ethics of experimentation.

You can access the article here.

Sandvik moderates ‘Aleppo: The Fall’ panel

On 14 October, Kristin Bergtora Sandvik (PRIO/UiO) moderated the panel ‘Aleppo: The Fall’, hosted by SPACE: Syrian Peace Action Centre at Litteraturhuset in Oslo.

In 2016, one of the worst human tragedies took place in Aleppo. After a few months of siege and indiscriminate shelling, tens of thousands of people were evicted from the city. Beyond the horrific scenes of bombardment and forced mass eviction, little reflection has followed on how and why these violations happened and what the implications are for the present and future Syria.

Why did Aleppo fall? Who is responsible, and how can they be held accountable? What was the role of the local armed factions in Aleppo? Who was negotiating on behalf of the civilians? Who was forced to leave eastern Aleppo, and who was allowed to return after the fall? What is happening in Aleppo today? What are the protection needs of civilians living in Aleppo under Assad?

In answering these questions, Lina Shamy gave personal testimony of living under Aleppo’s siege before she was forced to leave with the last buses in 2016; Dr. Mohamad Katoub addressed the inhumane situation under the siege and put it into the context of the use of siege as a war tactic against civilians in many other locations around Syria; finally, Karam Nachar reflected on the meaning and implications of Aleppo’s catastrophe, ending with an outlook on the future of an increasingly fragmented country.

Lemaitre publishes new article in Third World Quarterly

In her newly published article Humanitarian aid and host state capacity: the challenges of the Norwegian Refugee Council in Colombia in Third World Quarterly, Julieta Lemaitre (Universidad de Los Andes) investigates how humanitarian actors can operate within a host state with significant subnational variations in willingness and capacity to meet its obligations. This is an issue of pressing importance, given the expansion of humanitarian aid to middle-income countries with growing state capacity but persistent infrastructural weakness in their peripheries. Lemaitre’s article illustrates the challenges and potential of engaging these states through a case study of the Norwegian Refugee Council (NRC) in Colombia. It describes how the NRC has located its offices in peripheral areas, and how its activities have fostered the rule of law, successfully using rights-based approaches to strengthen subnational state institutions, activate and mobilize citizen demands, and bridge national and subnational administrations. She concludes that these activities, carried out by officers with extensive practical knowledge and local trust networks, can open the way for durable solutions to humanitarian crises, but can also provoke backlash from subnational actors.

You can access the article here.

Jumbert discussed the situation in Libya on Ekko radio program

In light of new reports from MSF on the dire situation in migration detention centres in Libya, Karine Nordstrand (president, MSF Norway) and Maria Gabrielsen Jumbert (PRIO) were interviewed about the situation in Libya, the EU’s policies to fund and train Libyan coast guards, and the general trends in the number of migrants and refugees seeking to cross the Mediterranean to reach Europe.

Listen to their interview from 14 September on NRK Radio’s program Ekko here (in Norwegian).

Unpacking the Myth of ICT’s Protective Effect in Mass Atrocity Response

Written by Kristin Bergtora Sandvik and Nathaniel A. Raymond

Information Communication Technologies (ICTs) are now a standard part of the mass atrocity responder’s toolkit, employed for evidence collection and research by NGOs, governments, and the private sector. One of the more notable justifications for their use has been to supplement or improve the protection of vulnerable populations. In a new article published in the Genocide Studies and Prevention Journal, we argue that there is little evidence that ICTs have such a protective effect in mass atrocity-producing environments, an effect we have labeled the Protective or Preventative Effect (PPE). This blog post argues that the mass atrocity community needs to engage more critically with the widespread perception that ICTs have innate protective effects in mass atrocity response. More testing and validation of potential harms is necessary to ensure that civilians on the ground are not negatively affected by ICTs. Risks to individuals and communities include, for example, the theft, appropriation, and distortion of personal data; the geotracking of ground movements; and the surveillance of speech, communication, movements, and transactions through hand-held devices.

Technologies performing remote sensing, crowd mapping, individual identification through facial recognition and big data analytics have significantly impacted mass atrocity response over the past 15 years. These include smartphone apps, remote sensing platforms such as satellite imagery analysis and surveillance drones, social media and data aggregation platforms.

Such technologies are primarily adopted because of their low cost relative to analogue interventions and their ability to be remotely deployed in otherwise inaccessible or insecure environments. The specific applications of these technologies and platforms are diverse and constantly evolving, but can generally be divided into two broad categories:

  • Prevention/Response applications seek to create novel situational awareness capacity to protect populations and inform response activities.
  • Justice/accountability use-cases aim to detect and/or document evidence of alleged crimes for judicial and/or advocacy purposes.

These ICTs are now effectively treated as indispensable force multipliers that supplement or supplant traditional mass atrocity response activities. However, in the absence of validation of these claims, adoption of these technologies can be said to be largely supply-driven.

As ICT use in mass atrocity and human security crisis response has been mainstreamed over the last two decades, so has a set of generalized and hitherto largely unvalidated claims about their effects on the nature and effectiveness of response. These claims constitute technological utopianism—the notion that technological change is inevitable, problem-free, and progressive. Moreover, the adoption of this technology-reliant and remote posture encodes within it the idea that the direct result of deploying these technologies and platforms is the prediction, prevention, and deterrence of mass atrocity-related crimes—a form of technological utopianism known as solutionism, which holds that the right technology can solve all of mankind’s problems.

Within atrocity response, this approach is exemplified by the much-publicized Eyes on Darfur campaign, where the public viewing of satellite images from Darfur was framed as action in and of itself—the assumption being that simply “knowing about atrocities” is enough to mobilize mass empathy and as a result engender political action. Implicit in this is the idea that technology itself can fundamentally alter the calculus of whether and how mass atrocities occur.  The adoption of this view by civil society, we argue, means that responders are not simply adopting a set of tools and techniques, but a theory of change, built upon a technologically utopian worldview.

Underlying this theory of change is the imbuing of these platforms and technologies with an inherent “ambient protective effect”—that is, transforming the threat matrix of a particular atrocity-producing environment in a way that improves the human security status of the targeted population. The underlying assumption is that increased volumes of novel and otherwise unobtainable data about a large-scale geographic area or environment may produce one, some, or all of several potential ambient protective effects that prevent or mitigate mass atrocities.

Our article argues that the human security community—particularly mass atrocity responders—must come to terms with the fact that there is a difference between knowing about atrocities and doing something about them. Monitoring is a precondition for protection, but it does not have a protective effect in and of itself.

More research is needed to determine the validity of the assumptions encoded into ICT use, and to address their relationship to a growing body of scholarship indicating possible direct and indirect pernicious effects of attempting to project a PPE through technology. In some cases, these technologies may be exposing civilians to new, rapidly evolving risks to their human security and mutating the behavior of mass atrocity perpetrators in ways that harm target populations (for example, by providing perpetrators with real-time information about population movements, or about settlements and survivors not yet harmed in a bombing campaign, effectively turning them into sitting-duck targets). To do no harm, we must start by recognizing that the unpredictable knock-on effects of ICT use can cause real harm to civilians—for example, crowd-sourced data can be used to foment violence as well as to prevent it—and that the prevailing technological utopianism may prevent responders from noticing.

This post comes from Kristin Bergtora Sandvik and Nathaniel A. Raymond. Kristin is a Research Professor in Humanitarian Studies at the Peace Research Institute Oslo (PRIO) and a Professor of Sociology of Law at the University of Oslo. Nathaniel is the Director of the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative. This post was also published on the ATHA blog of the Harvard Humanitarian Initiative.
