New Directions in Humanitarian Governance: Technology, Juridification and Criminalization

This text first appeared on Global Policy and is re-posted here. Kristin Bergtora Sandvik and Dennis Dijkzeul reflect on some of the new directions in humanitarian governance and the ambiguity of some of the principal techniques.

A member of the European Union assessment team disembarks a UN peacekeeping helicopter in Petit-Goâve, Haiti. 20/Jan/2010. UN Photo/Logan Abassi. www.un.org/av/photo/

According to an influential conception, humanitarian governance entails ‘the increasingly organized and internationalized attempt to save the lives, enhance the welfare, and reduce the suffering of the world’s most vulnerable populations.’ The actors involved in humanitarian governance include affected populations, civil society, host governments, the military, the private sector, international organisations and NGOs, and donors. Much of this governance is associated with the intended as well as the unintended consequences of humanitarian action.

In particular, these unintended consequences have brought about a quest for institutional or moral improvement of humanitarian action. Presented as progress narratives, these initiatives – or techniques – range from efforts to enhance accountability, for example through legalization, to offering better technological solutions. In recent years, however, the techniques of humanitarian governance have increasingly also been incorporated into narratives of decline, in which attempts to govern humanitarianism are seen to hinder humanitarian access, hamper aid delivery and undermine the humanitarian principles of humanity, impartiality, neutrality, and independence. This blog post reflects on some of the new directions in humanitarian governance and the ambiguity of some of the principal techniques of such governance.

The Governance Techniques

Accountability to improve behavior. Starting from the mid-1990s, a number of sector-wide transparency and accountability initiatives (e.g., SPHERE, the Humanitarian Accountability Partnership (HAP) International, People in Aid, Groupe URD’s Compass, and more recently the Core Humanitarian Standard) have influenced humanitarian organisations. Criticism has been directed at ‘the accountability industry’ for emphasizing standardization and technocratization, which hide the actual politics, and for prioritizing upwards accountability to donors at the expense of true, participatory accountability processes with communities in crisis. Still, the quest for accountability remains a core normative ambition and shapes attempts to govern in the humanitarian arena.

As part of this, humanitarians are increasingly ‘code of conducted up’, in particular with respect to intimate personal relationships and financial transparency. What would previously be deemed either private behavior – such as substance abuse – or individual moral and personal failure – such as buying sex – is increasingly construed as a risk-generating activity threatening specific operations, organisational reputations, and the legitimacy of the sector itself. Despite the Oxfam sex scandal, there is not sufficient evidence – or a concerted push to establish such evidence – on whether the humanitarian sector is currently doing better in terms of its accountability.

The technological turn. Moreover, the ongoing digitization and datafication of humanitarian action have become central techniques of humanitarian governance, and increasingly shape our understanding of and response to emergencies. Digitization is dramatically changing the way aid agencies provide assistance, from blockchain technology used to deliver cash transfers to biometrics – iris scans and fingerprinting – used to register beneficiaries and track assistance. This has led to faster information exchanges and greater transparency about what is happening on the ground. At the same time, the integration of information technology has enabled an increasing degree of remote management, which has changed the dynamic between communities in crisis, responders, regional offices, and headquarters.

The technologization of humanitarian space has also brought about a much closer relationship with the private sector: big tech outfits as well as small startups. These actors often have limited experience with and knowledge of the ends and objectives of the humanitarian sector, while pursuing their own financial objectives with respect to the commodification and use of data. In addition, the attendant security challenges are slowly receiving more attention. Spyware is being deployed by governments and warlords to surveil humanitarian officials and civilians. Data collected by humanitarian organizations may be stolen and misused by the same actors. Indifference, incompetence and bad planning might result in data breaches.

Juridification. Humanitarian governance is increasingly undertaken through law and law-like language as actors are held accountable through legal or quasi-legal mechanisms. One important trend is the evolving body of international disaster response law (IDRL), which aims to eliminate bureaucratic barriers to the entry of relief personnel, goods and equipment and to the operation of relief programmes, as well as to address regulatory failures to monitor and correct problems of quality and coordination in disasters.

A different kind of legalisation is taking place through the evolution and institutionalization of a legal standard for a ‘duty of care’ for humanitarian staff. The 2015 Steve Dennis versus the Norwegian Refugee Council case in the Oslo District Court has shifted the conceptualisation of the duty of care standard for humanitarian staff from a good practice standard in human resource management to a standard considered from and articulated through the language of law and liability. Although it is positive that humanitarian organizations need to work out the operational details of their duty of care, this can also lead to risk avoidance or an increase in bureaucracy.

There is also an increasingly frequent assertion that ‘humanitarianism is being criminalized’ (here, here, here or here). According to the humanitarian narrative of ‘the criminalization of humanitarian space’, such criminalization can hamper access to affected communities and compromise the ability of humanitarian actors to deliver principled aid to fulfill the humanitarian imperative of assisting according to need. This includes the prohibition of material support for terrorism, which was extended to include humanitarian advocacy in the 2010 US Supreme Court decision Holder v. Humanitarian Law Project, and the use of the US False Claims Act to go after humanitarian NGOs operating in the occupied Palestinian territories. Based on complaints from a private individual, in 2017 and 2018 the American University of Beirut (AUB) and Norwegian People’s Aid (NPA) reached costly settlements with the US government. Oxfam is currently facing a $160 million legal threat under the False Claims Act. Several more cases are under seal.

In parallel, there has been a broad trend towards the criminal prosecution of volunteer workers who have offered material support or protection – such as housing, transportation, food, education or rescue – to asylum seekers and refugees (here, here or here). Humanitarian work is here being construed as human smuggling or trafficking. At the same time, some types of criminalization are viewed as beneficial to ensure that humanitarians do no harm to beneficiaries or each other, for example with respect to sexual harassment and sexual violence.

Conclusion

This blog post draws on our introduction to a 2019 special issue on humanitarian governance, “A world in turmoil: governing risk, establishing order in humanitarian crises”, published by Disasters. As discussed in the introduction and further analyzed in this blog post, it is ironic that the quest to deal with the unintended consequences of humanitarian action has unintended effects as well. First, the initiatives listed above are often difficult to implement. Second, they also bear the risk of technocratization: these techniques are not neutral; they may hamper participation and obscure power politics. As illustrated by criminalization, some governance attempts can even contribute to a shrinking of humanitarian space. Third, they can lead to a lack of respect for the humanitarian principles, so that the protection of people in need is not well ensured.

Protecting children’s digital bodies through rights

This text first appeared on Open Global Rights and is re-posted here.

Kristin Bergtora Sandvik is a socio-legal scholar with a particular interest in the politics of innovation and technology in the humanitarian space. She is a research professor in humanitarian studies at PRIO, and a professor in the Department of Criminology and Sociology of Law at the University of Oslo.

Children are becoming the objects of a multitude of monitoring devices—what are the possible negative ramifications in low resource contexts and fragile settings?

The recent incident of a UNHCR official tweeting a photo of an Iraqi refugee girl holding a piece of paper with all her personal data, including family composition and location, is remarkable for two reasons. First, because of the stunning indifference and perhaps also ignorance displayed by a high-ranking UN communications official with respect to a child’s personal data. However, the more notable aspect of this incident has been the widespread condemnation of the tweet (since deleted) and its sender, and her explanation that the photo was “six years old”. While public criticism has focused on the power gap between humanitarians and refugees and the precarious situation of Iraqi refugees, this incident is noteworthy because it marks the emergence of a new figure in international aid and global governance: that of children’s digital bodies.

Because children are dependent, what technology promises most of all is almost unlimited care and control: directly by parents, and indirectly by marketing agencies and tech companies building consumer profiles. As explained by Deborah Lupton, in the political economy of the global North (and, I would add, the global East), children are becoming the objects of a multitude of monitoring devices that generate detailed data about them. What are the possible negative ramifications in low-resource contexts and fragile settings characterized by deep-seated oversight and accountability deficits?

The rise of experimental practices: Ed. Tech, babies and biometrics

There is a long history of problematic educational transplants in aid contexts, from the dumping of used textbooks to culturally or linguistically inappropriate material. The history of tech-dumping in disasters is much more recent, but also problematically involves large-scale testing of educational technology platforms. While practitioners complain about relevance, lack of participatory engagement and questionable operability in the emergency context, the ethical aspects of educational technology (Ed. Tech) and data extraction—and how the collection of data from children and youth constitutes part of the merging of aid and surveillance capitalism—are little discussed.

Another recent trend concerns infant biometric identification to help boost vaccination rates. Hundreds of thousands of children die annually due to preventable diseases, many because of inconsistencies in the provision of vaccine programs. Biometric identification is thus intended to link children with their medical records and overcome the logistical challenges of paper-based systems. Trials are now ongoing or planned for India, Bangladesh and Tanzania. While there are still technical challenges in accurately capturing the biometric data of infants, new biometric techniques capture fingers, eyes, faces, ears and feet. In addition to vaccines, uses for child biometrics include combatting aid fraud, identifying missing children and combatting identity theft.

In aid, data is increasingly extracted from children through the miniaturization and personalization of ICT. Infant and child biometrics are often coupled with tracking devices in the form of wristbands, necklaces, earpieces, and other devices which users carry for extended periods of time.

Across the board, technology initiatives directed at children are usually presented as progress narratives, with little concern for unintended consequences. In the economy of suffering, children and infants are always the most deserving individuals, and life-saving interventions are hard to argue against. Similarly, the urgency of saving children functions as a call to action that affords aid and private sector actors room to maneuver with respect to testing and experimentation. At the same time, the mix of gadget distribution and data harvesting inevitably becomes part of a global data economy, in which patterns of structural inequality are reproduced and exacerbated.

Children’s digital bodies

Despite the massive technologization of aid targeting children, little critical attention has so far been paid to the production of children’s digital bodies in aid. The use of digital technologies creates corresponding “digital bodies”—images, information, biometrics, and other data stored in digital space—that represent the physical bodies of populations affected by conflict and natural hazards, but over which these populations have little say or control. These “digital bodies” co-constitute our personalities, relationships, legal and social personas—and today they have immense bearing on our rights and privileges as individuals and citizens. What is really different about children’s digital bodies? What is the specific nature of risk and harm these bodies might incur?

In a non-aid context, critical data researchers and privacy advocates are only just beginning to direct attention to these practices, in particular to the array of specific harms children may encounter, including but not limited to the erosion of privacy.

The question of testing unfinished products on children is deeply contentious: the possibility that unsafe products may be trialed in fragile and low-resource settings under different requirements than those posed by rich countries is highly problematic. On the other hand, parachuting and transplanting digital devices from the global North and East to the global South without any understanding of local needs, context and adaptation practices is—based on the history of technological imperialism—ineffective, disempowering, a misuse of resources and, at worst, could further destabilize fragile school systems.

Very often, in aid tech targeting children, the potential for digital risk and harm for children is ignored or made invisible. Risk is framed as an issue of data security, malfunction and human manipulation of data. Children—especially in low-resource settings—have few opportunities to challenge the knowledge generated through algorithms. They also have scant techno-legal consciousness with respect to how their personal data is being exploited, commodified and used for decisions about their future access to resources, such as healthcare, education, insurance, welfare, employment, and so on. There is the obvious risk of armed actors and other malicious actors accessing and exploiting data; but there are also issues connected to wearables, tablets and phones being used as listening devices useful for surveilling the child’s relatives and carers. It is incumbent on aid actors to understand both the opportunities posed by new technologies and the potential harms they may present—not only during the response, but long after the emergency ends.

Conclusion: time to turn to the CRC!

The mainstreaming of the combination of surveillance and data extraction from children now taking place in aid, ranging from education technology to infant biometrics, means that critical discussions of the ethical and legal implications for children’s digital bodies are becoming a burning issue.

The do no harm principle is a key ethical guidepost across the fields of development, humanitarianism and global health. The examples above illustrate the need for investment in ethics and evidence on the impact of the development and application of new technologies in low-resource and fragile settings. Practitioners and academics need to be alert to how the framing of structural problems shifts towards problematizations that are amenable to technological innovation and intervention and that serve the interests of technology stakeholders. But is that enough?

The 1989 Convention on the Rights of the Child (CRC) represented a watershed moment in thinking about children’s rights to integrity, to be heard and to protection of their physical bodies. Article 3.1 demands that “In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration.” The time has now come to articulate and integrate an understanding of children’s digital bodies in international aid within this normative framework.

What Can Data Governance Learn from Humanitarians?

Sean McDonald argues that the humanitarian sector has much to offer the technology industry, and explores the relationship between the two. This article first appeared on Centre for International Governance Innovation, and is reposted here.

About the author: Sean Martin McDonald is the co-founder of Digital Public, which builds legal trusts to protect and govern digital assets. Sean’s research focuses on civic data trusts as vehicles that embed public interest governance into digital relationships and markets.

World Food Programme (WFP) aid arrives in Aslam, Hajjah, Yemen. The programme recently accused the government of redirecting aid to fund the war and insisted that aid recipients participate in a biometric identity-tracking system, sparking a data governance standoff. (AP Photo/Hammadi Issa)

Over the summer, the World Food Programme (WFP) — the world’s largest humanitarian organization — got into a pitched standoff with Yemen’s Houthi government over, on the surface, data governance. That standoff stopped food aid to 850,000 people for more than two months during the world’s worst humanitarian crisis. Essentially, the WFP accused the Houthi government of redirecting aid to fund the war and insisted that aid recipients participate in a biometric identity-tracking system. The government responded by accusing the WFP of being a front for intelligence operations; this was opportune, given the recent controversy over the WFP’s relationship with Palantir. In the end, the parties agreed to use the WFP’s fingerprint-based biometric identity system, despite reported flaws. The dispute, of course, wasn’t just about data — it was about power, trust and the licence to operate.

While they may seem worlds apart, the humanitarian sector has much to offer to the technology industry. One of the things humanitarians and technologists have in common is an extraordinary power to operate. For humanitarians, power takes the form of an internationally agreed-upon right to intervene in conflicts – for some, with legal immunity. And technology companies have the ability to project themselves into global markets without the need for traditional government approval.

In one sense, they’re opposites. Humanitarians have had to meticulously negotiate the conditions of their access to conflict zones, based on non-intervention principles, the terms of host country agreements with governments and, increasingly, data-sharing agreements. In contrast, technology companies have mostly enjoyed the freedom to operate globally without much negotiation, taxation or regulation of any type. But in recent years, as illustrated by the WFP example, humanitarian organizations have started to face the political and regulatory implications of collecting, using, storing, sharing and deleting data. Technology companies, it seems, are following the same path; they face significant public pushback from nearly every corner of the world, from international standards bodies and antitrust investigations to privacy fines and class action lawsuits.

Humanitarian organizations have considerable history and experience negotiating for the licence to operate in political and unstable contexts – which should inform the people and companies designing data governance systems. Here are five places to start:

Licence to Operate

Humanitarians and technology companies can, and sometimes do, operate in places where the government is actively resistant to their presence. While the stakes are often lower for technology companies, the costs involved in negotiating licence to operate country-by-country, and the technical complexity of maintaining product offerings compatible with divergent political contexts, are high. As a result, most technology companies launch offerings, and then react to, or defend against governmental and public concerns. That approach is decidedly opportunist, sacrificing long-term goodwill for short-term gains. Humanitarian organizations have extensive debates around their right to access affected populations, and under what conditions they earn that mandate. One thing humanitarians can teach technology companies is the importance of contextual negotiations and compromise to improve medium-term sustainability and long-term growth.

The Political Complexity of Neutrality

The technology industry has become a popular political scapegoat, often coming under fire for all kinds of bias. Technology companies arbitrate complex social, commercial and political processes, some without any dedicated operational infrastructure. The larger companies have built trust and safety teams, content moderation units of varying types, and online dispute resolution systems — all of which are designed to help users solve problems related to platforms’ core functions. Each of these approaches has grown significantly in recent years, but largely to mitigate damage created by the technology sector itself – and often without transparency or the ability to shape rules.

Humanitarian organizations, in contrast, are defined by their commitment to several core, apolitical principles: humanity, neutrality, impartiality, independence and the commitment to do no harm. The major humanitarian organizations have built organizations and reputations for upholding those values, often amid violent conflict, that scale globally. The technology industry — in particular those seeking the licence to provide public digital services or to govern public data — has a significant amount to learn from the organizational structure of complex humanitarian operations.

Federation

Federation is an organizational structure that manages common infrastructure and operational hierarchies. Federation is second nature to technology companies when it comes to code, but they are just learning how to federate and devolve their organizational structures. Humanitarian organizations have been working through devolved, federated organizational structures for decades — the International Federation of Red Cross and Red Crescent Societies, for example. There is a natural and well-documented tension between independence and upholding common standards across networks, especially in technology systems. Yet humanitarian organizations have built federated organizations that enable them to operate globally, while availing themselves of the two most important aspects of building trust: investment in local capacity and accountability.

Localization

In addition to negotiating a licence to operate with governments, humanitarian organizations often invest in domestic response capacity, and in recent years, localization has become a driving strategic imperative. Humanitarians increasingly realize they need to offer value beyond direct emergency aid, in order to foster more durable solutions and earn the trust of communities. Technology companies often make their products available internationally — and they often invest in countries where they maintain a physical presence, but they rarely set up a presence for the purposes of investing in local communities or in ways that extend beyond their business interests. Technology organizations looking to build trust and public approval in the ways they govern data could learn from the humanitarian sector’s investments in local capacity, resilience and independence.

Accountability

While the humanitarian sector faces a lot of controversy over accountability, its typical operating practice is to engage in direct negotiations with local parties, which is different from technology companies, which generally start with one set of terms they apply globally. The default terms of the technology industry’s cardinal data governance contracts — terms of service agreements and privacy policies — enable them to unilaterally change the terms of the agreement. It’s impossible to rely on the terms of a contract that can change at the whim of one party – or when the underlying company goes bankrupt or gets acquired. The actors within the technology industry seeking public trust in the way they manage data can learn from the humanitarian sector about the need for credible parity between negotiating parties and distributed accountability.

The good news is that the humanitarian sector and the technology industry are well on their way to forming deep alliances; the heads of several major humanitarian organizations have placed private sector coordination and co-creation at the centre of their strategies. The World Economic Forum is laying the foundation for private companies to participate in international governance bodies. And, private foundations and investors increasingly play a role in shaping response efforts. 

Unfortunately, these relationships may be a double-edged sword. Technology companies can take advantage of humanitarian organizations’ unique licence to operate in order to work in regulated spaces, test new products without repercussions and even justify the creation of invasive surveillance. This new generation of relationships between humanitarian organizations and technology companies offers opportunities for each group to learn from the other’s structural solutions to shared problems of trust, neutrality and global scale. Let’s hope that the technology industry chooses to learn from the organizations that have spent the last century building, testing and scaling organizational structures to deliver the best of humanity.

New article: Digital communication technologies in humanitarian and pandemic response

In their newly published article, The new informatics of pandemic response: humanitarian technology, efficiency, and the subtle retreat of national agency, in the Journal of International Humanitarian Action, Christopher Wilson and Maria Gabrielsen Jumbert review empirical uses of communications technology in humanitarian and pandemic response, the 2014 Ebola response in particular, and propose a three-part conceptual model for the new informatics of pandemic response.

Digital communication technologies play an increasingly prominent role in humanitarian operations and in response to international pandemics specifically. A burgeoning body of scholarship on the topic displays high expectations for such tools to increase the efficiency of pandemic response. The model proposed in this article distinguishes between the use of digital communication tools for diagnostic, risk communication, and coordination activities, and highlights how the influx of novel actors and tendencies towards digital and operational convergence risk shifting humanitarian action and decision-making outside national authorities’ spheres of influence in pandemic response. This risk exacerbates a fundamental tension between the humanitarian promise of new technologies and the fundamental norm that international humanitarian response should complement and give primacy to the role of national authorities when possible. The article closes with recommendations for ensuring the inclusion of roles and agency for national authorities in technology-supported communication processes for pandemic response.

The article can be read here: https://jhumanitarianaction.springeropen.com/articles/10.1186/s41018-018-0036-5

From Principle to Practice: Humanitarian Innovation and Experimentation

Humanitarian organizations have an almost impossible task: They must balance the imperative to save lives with the commitment to do no harm. They perform this balancing act amidst chaos, with incredibly high stakes and far fewer resources than they need. It’s no wonder that new technologies that promise to do more with less are so appealing.

By now, we know that technology can introduce bias, insecurity, and failure into systems. We know it is not an unalloyed good. What we often don’t know is how to measure the potential for those harms in the especially fragile contexts where humanitarians work. Without the tools or frameworks to evaluate the credibility of new technologies, it’s hard for humanitarians to know whether they’re having the intended impact and to assess the potential for harm. Introducing untested technologies into unstable environments raises an essential question: When is humanitarian innovation actually human subjects experimentation?

Humanitarians’ use of new technologies (including biometric identification to register refugees for relief, commercial drones to deliver cargo in difficult areas, and big data-fueled algorithms to predict the spread of disease) increasingly looks like the type of experimentation that drove the creation of human subjects research rules in the mid-20th century. In both cases, Western interests used untested approaches on African and Asian populations with limited consent and even less recourse. Today’s digital humanitarians may be innovators, but each new technology raises the specter of new harms, including biasing public resources with predictions over needs assessment, introducing coordination and practical failures through unique indicators and incompatible databases, and creating significant legal risks for both humanitarians and their growing list of partners.

For example, one popular humanitarian innovation uses big data and algorithms to build predictive epidemiological models. In the immediate aftermath of the late 2014 Ebola outbreak in West Africa, a range of humanitarian, academic, and technology organizations called for access to mobile network operators’ databases to track and model the disease. Several organizations got access to those databases—which, it turns out, was both illegal and ineffective. It violated the privacy of millions of people in contravention of domestic regulation, regional conventions, and international law. Ebola is a hemorrhagic fever, which requires the exchange of fluids to transmit—a behavior that isn’t represented in call detail records. More importantly, the resources that should have gone into saving lives and building the facilities necessary to treat the disease instead went to technology.

Without functioning infrastructure, institutions, or systems to coordinate communication, technology fails just like anything else. And yet these are exactly the contexts in which humanitarian innovation organizations introduce technology, often without the tools to measure, monitor, or correct the failures that result. In many cases, these failures are endured by populations already under tremendous hardship, with few ways to hold humanitarians accountable.

Humanitarians need both an ethical and evidence-driven human experimentation framework for new technologies. They need a structure parallel to the guidelines created in medicine, which put in place a number of practical, ethical, and legal requirements for developing and applying new scientific advancements to human populations.

The Medical Model

“Human subjects research,” the term of art for human experimentation, comes from medicine, though it is increasingly applied across disciplines. Medicine created some of the first ethical codes in the late 18th and early 19th centuries, but the modern era of human subject research protections started in the aftermath of World War II, evolving with the Helsinki Declaration (1975), the Belmont Report (1978), and the Common Rule (1981). These rules established proportionality, informed consent, and ongoing due process as conditions of legal human subjects research. Proportionality refers to the idea that an experiment should balance the potential harms with the potential benefit to participants. Informed consent in human subjects research requires that subjects understand the context and the process of the experiment prior to agreeing to participate. And due process, here, refers to a bundle of principles, including assessing subjects’ need “equally,” subjects’ ability to quit a study, and the continuous assessment of whether an experiment balances methods with the potential outcomes.

These standards defined the practice of human subjects research for much of the rest of the world and are essential for protecting populations from mistreatment by experimenters who undervalue their well-being. But they come from the medical industry, which relies on established infrastructure that less-defined industries, such as technology and humanitarianism, lack—a gap that limits their applicability.

The medical community’s human subjects research rules clearly differentiate between research and practice based on the intention of the researcher or practitioner. If the goal is to learn, an intervention is research. If the goal is to help the subject, it’s practice. Because it comes from science, human subjects research law doesn’t contemplate that an activity would use a method without researching it first. The distinction between research and practice has always been controversial, but it gets especially blurry when applied to humanitarian innovation, where the intention is both to learn and to help affected populations.

The Belmont Report, a summary of ethical principles and guidelines for human subjects research, defines practice as interventions “designed solely to enhance the well-being of a client or patient and that have a reasonable expectation of success” (emphasis added). This differs from humanitarian practice in two major ways: First, there is no direct fiduciary relationship between humanitarians and those they serve, and so humanitarians may prioritize groups or collective well-being over the interests of individuals. Second, humanitarians have no way to evaluate the reasonableness of their expectation of success. In other words, the assumptions embedded in human subjects research protections don’t clearly map onto the relationships or activities involved in humanitarian response. As a result, these conventions offer humanitarian organizations neither clear guidance nor the types of protections that exist for well-regulated industrial experimentation.

In addition, human subjects research rules are set up so that interventions are judged on their potential for impact. Essentially, the higher the potential impact on human lives, the more important it is to obtain informed consent, conduct ethical review, and allow subjects to extricate themselves from the experiment. Unfortunately, in humanitarian response, the impacts are always high, and it’s almost impossible to isolate the effects generated by a single technology or intervention. Even where establishing consent is possible, disasters don’t lend themselves to consent frameworks, because refusing to participate can mean refusing life-saving assistance. In contract law, one-sided, take-it-or-leave-it agreements are known as contracts of adhesion, and consent extracted under life-threatening circumstances is unlikely to be held valid. The result is that humanitarian innovation faces fundamental challenges both in knowing how to deploy ethical experimentation frameworks and in implementing the protections they require.

First Steps

The good news is that existing legal and ethical frameworks lay a strong foundation. As Jacob Metcalf and Kate Crawford lay out in a 2016 paper, there are significant enough similarities between biomedical and big data research to develop new human subjects research rules. This January, the United States expanded the purview of the Common Rule to govern human subjects research funded by 16 federal departments and agencies. Despite their gaps, human subjects research laws go a long way toward establishing legally significant requirements for consent, proportionality, and due process—even if they don’t yet directly address humanitarian organizations.

Human rights-based approaches such as the Harvard Humanitarian Initiative’s Signal Code go further, adapting human rights to digital humanitarian practice. But, like most rights frameworks, it relies on public infrastructure to be ratified, harmonized, and operationalized. There are proactive efforts to set industry-focused standards and guidelines, such as the World Humanitarian Summit’s Principles for Ethical Humanitarian Innovation and the Digital Impact Alliance’s Principles for Digital Development. And, of course, there are technology-centric efforts beginning to establish ethical use standards for specific technologies—like biometric identification, drones, and big data—that offer specific guidance but embed incentives that may not be relevant in the humanitarian context.

That said, principles aren’t enough—we’re now getting to the hard part: building systems that actualize and operationalize our values. We don’t need to decide the boundaries of innovation or humanitarianism as industries to begin developing standards of practice. We don’t need to ratify an international convention on technology use to begin improving procurement requirements, developing common indicators of success for technology use, or establishing research centers capable of testing the applicability of new approaches to difficult and unstable environments. A wide range of industries are beginning to invest in legal, organizational, and technological approaches to building trust—all of which offer additional, practical steps forward.

For humanitarians, as always, the stakes are high. The mandate to intervene comes with the responsibility to know how to do better. Humanitarians hold themselves and their work to a higher standard than almost any other field in the world. They must now apply the same rigor to the technologies and tools they use.


This post originally appeared on the blog of Stanford Social Innovation Review.

Humanitarian experimentation

Written by

Humanitarian actors, faced with ongoing conflict, epidemics, famine and a range of natural disasters, are increasingly being asked to do more with less. The international community’s commitment of resources has not kept pace with these expectations or with the growing number of crises around the world. Some humanitarian organizations are trying to bridge this disparity by adopting new technologies—a practice often referred to as humanitarian innovation. This blog post, building on a recent article in the ICRC Review, asserts that humanitarian innovation is often human experimentation without accountability, which may both cause harm and violate some of humanitarians’ most basic principles.

While many elements of humanitarian action are uncertain, there is a clear difference between using proven approaches to respond in new contexts and using wholly experimental approaches on populations at the height of their vulnerability. This is also not the first generation of humanitarian organizations to test new technologies or approaches in the midst of disaster. Our article draws upon three timely examples of humanitarian innovations, which are expanding into the mainstream of humanitarian practice without clear assessments of potential benefits or harms.

Cargo drones, for one, have been presented as a means of delivering assistance to places that aid agencies otherwise find difficult, and sometimes impossible, to reach. Biometrics is another example: it is said to speed up cumbersome registration processes, thereby allowing faster access to aid for people in need (who can only receive assistance upon registration). And, in responding to the 2014 outbreak of Ebola in West Africa, data modelling was seen as a way to track and predict the spread of the disease. In each of these cases, technologies with great promise were deployed in ways that put at risk, distorted and/or damaged the relationships between survivors and responders.

These examples illustrate the need for investment in ethics and in evidence on the impact of developing and applying new technologies in humanitarian response. It is incumbent on humanitarian actors to understand both the opportunities new technologies present and the potential harms they may cause—not only during the response, but long after the emergency ends. The balance lies between, on the one hand, identifying new and ‘innovative’ ways of addressing some of the challenges that humanitarian actors confront and, on the other, the risk of introducing new technological ‘solutions’ in ways that resemble ‘humanitarian experimentation’ (as explained in the article). The latter carries the potential for various forms of harm—not only to those whom humanitarian actors are tasked to protect, but also to humanitarian actors themselves, in the form of legal liability, loss of credibility and operational inefficiency. Without open and transparent validation, it is impossible to know whether humanitarian innovations are solutions or threats in themselves. Aid agencies must not only be extremely attentive to this balance, but also do their utmost to avoid harmful outcomes.

Framing aid projects as ‘innovative’ rather than ‘experimental’ avoids explicit acknowledgment that these tools are untested, understating the risks these approaches may pose and sidestepping the extensive body of law that regulates human trials. Facing enormous pressure to act and ‘do something’ in view of contemporary humanitarian crises, a specific logic seems to have gained prominence in the humanitarian community—one that conflicts with the risk-taking standards that prevail under normal circumstances. Using untested approaches in uncertain and challenging humanitarian contexts introduces risks that do not necessarily serve humanitarian principles; indeed, it may conflict with the otherwise widely adhered-to Do No Harm principle. Failing to test these technologies, or even to acknowledge explicitly that they are untested, prior to deployment raises significant questions about both the ethics and the evidence requirements implicit in the unique license afforded to humanitarian responders.

In Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation, we contextualize humanitarian experimentation—providing a history, examples of current practice, a taxonomy of potential harms and an analysis against the core principles of the humanitarian enterprise.

***

Kristin Bergtora Sandvik, SJD Harvard Law School, is a Research Professor at the Peace Research Institute Oslo and a Professor of Sociology of Law at the University of Oslo. Her widely published socio-legal research focuses on technology and innovation, forced displacement and the struggle for accountability in humanitarian action. Most recently, Sandvik co-edited UNHCR and the Struggle for Accountability (Routledge, 2016), with Katja Lindskov Jacobsen, and The Good Drone (Routledge, 2017).

Katja Lindskov Jacobsen, PhD International Relations Lancaster University, is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. She is an international authority on the issue of humanitarian biometrics and security dimensions and is the author of The Politics of Humanitarian Technology (Routledge, 2015). Her research has also appeared in Citizenship Studies, Security Dialogue, Journal of Intervention & Statebuilding, and African Security Review, among others.

Sean Martin McDonald, JD/MA American University, is the CEO of FrontlineSMS and a Fellow at Stanford’s Digital Civil Society Lab. He is the author of Ebola: A Big Data Disaster, a legal analysis of the way that humanitarian responders use data during crises. His work focuses on building agency at the intersection of digital spaces, using technology, law and civic trusts.

Unpacking the Myth of ICT’s Protective Effect in Mass Atrocity Response

Written by

Information Communication Technologies (ICTs) are now a standard part of the mass atrocity responder’s toolkit, employed for evidence collection and research by NGOs, governments, and the private sector. One of the more notable justifications for their use has been to supplement or improve the protection of vulnerable populations. In a new article published in the Genocide Studies and Prevention Journal, we argue that there is little evidence for this protective effect of ICTs in mass atrocity producing environments, which we have labeled the Protective or Preventative Effect (PPE). This blog post argues that the mass atrocity community needs to engage more critically with the widespread perception that ICTs have innate protective effects in mass atrocity response. More testing and validation of potential harms is necessary to ensure that civilians on the ground are not negatively affected by ICTs. Risks to individuals and communities include, for example, the theft, appropriation and distortion of personal data; the geotracking of movements; and the surveillance of speech, communication and transactions through hand-held devices.

Technologies performing remote sensing, crowd mapping, individual identification through facial recognition and big data analytics have significantly impacted mass atrocity response over the past 15 years. These include smartphone apps, remote sensing platforms such as satellite imagery analysis and surveillance drones, social media and data aggregation platforms.

Such technologies are primarily adopted due to their low cost relative to analogue interventions, and their ability to be deployed remotely in otherwise inaccessible or insecure environments. The specific applications of these technologies and platforms are diverse and constantly evolving, but can generally be divided into two broad categories:

  • Prevention/Response applications seek to create novel situational awareness capacity to protect populations and inform response activities.
  • Justice/accountability use-cases aim to detect and/or document evidence of alleged crimes for judicial and/or advocacy purposes.

These ICTs are now effectively treated as indispensable force multipliers that supplement or supplant traditional mass atrocity response activities. However, in the absence of validation of these claims, adoption of these technologies can be said to be largely supply-driven.

As ICT use in mass atrocity and human security crisis response has been mainstreamed over the last two decades, so has a set of generalized and hitherto largely unvalidated claims about their effects on the nature and effectiveness of response. These claims constitute technological utopianism—the notion that technological change is inevitable, problem-free, and progressive. Moreover, the adoption of this technology-reliant and remote posture encodes within it the idea that deploying these technologies and platforms directly results in the prediction, prevention, and deterrence of mass atrocity related crimes—a form of technological utopianism known as solutionism, which holds that the right technology can solve all of humankind’s problems.

Within atrocity response, this approach is exemplified by the much-publicized Eyes on Darfur campaign, where the public viewing of satellite images from Darfur was framed as action in and of itself—the assumption being that simply “knowing about atrocities” is enough to mobilize mass empathy and as a result engender political action. Implicit in this is the idea that technology itself can fundamentally alter the calculus of whether and how mass atrocities occur.  The adoption of this view by civil society, we argue, means that responders are not simply adopting a set of tools and techniques, but a theory of change, built upon a technologically utopian worldview.

Underlying this theory of change is the imbuing of these platforms and technologies with an inherent “ambient protective effect”—that is, the capacity to transform the threat matrix of a particular atrocity-producing environment in a way that improves the human security status of the targeted population. The underlying assumption is that increased volumes of novel and otherwise unobtainable data about a large-scale geographic area or environment will produce one, some, or all of several potential ambient protective effects that prevent or mitigate mass atrocities.

Our article argues that the human security community—particularly mass atrocity responders—must come to terms with the fact that there is a difference between knowing about atrocities and doing something about them. Monitoring is a precondition for protection, but it does not have a protective effect in and of itself.

More research is needed to determine the validity of the assumptions encoded into ICT use, and to address their relationship to a growing body of scholarship indicating possible direct and indirect pernicious effects of attempting to project a PPE through technology. In some cases, ICTs may be exposing civilians to new, rapidly evolving risks to their human security and mutating the behavior of mass atrocity perpetrators in ways that harm target populations—for example, by providing perpetrators with sitting-duck targets through real-time information about population movements, or about settlements and survivors left unharmed by a bombing campaign. To do no harm to civilians, we must start by recognizing that the unpredictable knock-on effects of ICT use can cause real harm—crowd-sourced data can be used to foment violence as well as to prevent it—and that the prevailing technological utopianism may prevent responders from noticing.

This post comes from Kristin Bergtora Sandvik and Nathaniel A. Raymond. Kristin is a Research Professor in Humanitarian Studies at the Peace Research Institute Oslo (PRIO) and a professor of  Sociology of Law  at the University of Oslo. Nathaniel is the Director of the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative. This post was also published on the ATHA blog of the Harvard Humanitarian Initiative.

Conundrums in the Embrace of the Private Sector

Written by

The humanitarian sector faces an unprecedented number of crises globally. The growing operational and financial deficit in the capacity of governments and humanitarian organizations to respond has led to calls for changes in the way such crises are understood and managed.  This involves a strong focus on cooperation and partnerships with the private sector.  A large part of the allure is the notion that private-public partnerships will make humanitarian response faster by entrenching market-oriented rationalities, thus enhancing effectiveness. This is also how the private sector presents itself:

One should never underestimate the power of private companies who offer aid. Companies are almost always focused on efficiency, good negotiation, building their reputation (their brand) and getting things done on time and on budget (Narfeldt 2007).

Here, I will try to complicate this narrative by pointing out some conundrums in the vigorous humanitarian embrace of the private sector.

Back in 2007, Binder and Witte noted the emergence of a new form of engagement through partnerships between companies and traditional humanitarian actors, often based on a desire to demonstrate corporate social responsibility (CSR) and to motivate employees. In parallel, they observed that the War on Terror had enlarged the scope of traditional humanitarian work, creating a role for commercial players in providing relief services. Today, these trends continue as public-private partnerships have emerged as a (donor) preferred humanitarian strategy to increase efficiency and accountability (see for example Drummond and Crawford 2014)—goals that to some degree seem to merge, as efficiency has become an important way of demonstrating accountability. The rationale for a greater inclusion of the private sector in humanitarian action is that partners can contribute to humanitarian solutions with different expertise and resources. Private companies are profit-driven and thus incentivized to comply with the specific deliverables and time frames set out in contracts. Donors are attracted to low overhead and a lesser need for constant engagement and monitoring. Moreover, the private sector owns much of the infrastructure on which information and communication technologies are based.

The objections to private sector engagement are well-known and predictable. The outsourcing of humanitarian action has been criticized by commentators pointing to the loss of ground truth, and to the often poor quality of services resulting from private actors’ lack of understanding of humanitarian action, contextual knowledge, and crisis management skills. It is argued that companies are, by their very nature, mainly interested in “brand, employee motivation and doing more business” (Wassenhove 2014). Intensified private sector engagement thus leads to a marketization of humanitarian values (Weiss 2013) in which “the humanitarian ethos is gradually eroded” (Xaba 2014).

In the following, I will instead question the idea of efficacy by challenging some of the assumptions underlying the turn to the private sector. I consider how the call for intensified cooperation overlooks persistent tensions inherent in the humanitarian market and in actors’ rationalities. I also identify what seems to be a fairly prevalent sentiment, namely, the assumption that such cooperation may serve the double objective of delivering humanitarians from the much-loathed Results-Based Management (RBM) regime while simultaneously delivering aid more effectively.

The first difficulty is structural: the turn to business cooperation is informed by the notion that the humanitarian market is inherently efficient and effective because it is a regular market. However, as noted by Binder and Witte, the humanitarian market may be characterized as a “quasi-market,” which exhibits an indirect producer–consumer relationship. In the market for humanitarian relief, the consumer (i.e. the aid recipient) neither purchases nor pays for the delivered service. Aid agencies are the producers, donors the buyers, and aid recipients the consumers. As a result, the market is loaded with asymmetries and uncertainties: Donors have difficulty determining whether the services they pay for are indeed adequately delivered, while recipients have few means of effectively making complaints or airing grievances. Nielsen and Santos (2013) note, for example, the often unanticipated and inappropriate delivery of equipment, as well as personnel. In a trenchant critique, Krause (2014) describes this as a market where agencies produce projects for a quasi-market in which institutional donors are the consumers and populations in need are part of the product being packaged and sold by relief organizations.

Interestingly, the currently most successful technology-based humanitarian endeavor is also a concerted attempt to remedy the quasi-status of the humanitarian market: Over the last decade, the international development community has invested heavily in the so-called financial inclusion agenda, aiming to make poor people less aid-dependent; this is sometimes labelled ‘resilience through asset creation.’ The partnership between the World Food Programme and MasterCard, for example, uses “digital innovation to help people around the world to break the cycle of hunger and poverty.” For the World Food Programme, this is part of a broader strategy to move away from food aid and to improve food security through cash assets. As I have noted elsewhere, the underlying rationale is that access to financial services such as credit and savings will “create sizeable welfare benefits” as beneficiaries of aid are drawn further into the market economy as customers. The goal of implementing “cost-effective” electronic payment programs is to help beneficiaries “save money, improve efficiencies and prevent fraud.” The belief is that cash can ‘go where people cannot’ and provide them with choice. However, while these strategies are motivated explicitly by the desire to turn the beneficiary more directly into a customer, the accountability regime constructed around these systems remains directed upwards to donors.

The second assumption to be examined is that of shared motivation and shared values, going beyond disapproving criticisms of ‘neoliberal governance strategies.’ I think it is important to recognize that the call for intensified private sector collaboration masks a rather thin shared understanding of both the nature of humanitarian work and the competence, presence, and relevance of the private sector, and that this impinges on how the collaboration plays out. Binder and Witte observed that past attempts to pursue partnerships with corporate actors have often been frustrated, as agencies have been unclear about the intended outcomes of the partnership, or have viewed it as a way of developing a long-term funding arrangement. According to Nielsen (2014), private-humanitarian collaboration is currently characterized by underlying disagreement about what constitutes ‘meaningful’ innovation, and about how that impinges on responsible innovation and on accountability and CSR more broadly; there is a sense that the humanitarian customer often “does not know what s/he wants.” The private sector actor is frustrated about having to take all the risk in the development of products, while humanitarians fret about taking on future risks, as they will be the ones to face public condemnation and donor criticism if the product fails to aid beneficiaries in the field. Mays et al. (2012) identify a mismatch between humanitarian and business systems, leading to a clash between entrepreneurial values and the humanitarian imperative to save lives and alleviate suffering. This resonates with my own observations, as humanitarians complain about being offered inadequate or unfeasible solutions; about being used as stepping stones to the greater UN market; or simply about differences in rationality, where the private sector partner frames the transaction commercially by ‘thinking money’ and the humanitarian partner by ‘activity on the ground.’

Finally, the erstwhile turn to business management approaches in humanitarian action grew out of a push for greater accountability and a need to professionalize humanitarian work. Perhaps the most significant import was Results-Based Management (RBM), a management strategy “focusing on performance and achievements of outputs, outcome and impact,” which provides a framework and tools not only for planning activities, but also for risk management, performance monitoring, and evaluation. Over time, humanitarians have become exasperated and frustrated with the RBM rationale, both because it is sometimes seen as contrary to principled humanitarian assistance, and more often because RBM and the results agenda engender a type of bureaucratization in which humanitarians feel that they are “performing monitoring” instead of monitoring performance (a formulation borrowed from Welle 2014).

While some humanitarians now strive for a shift towards systems accountability (where they will be held to account for their responsibility to maintain functional and workable supply chains or information-sharing systems, rather than for specifically demarcated deliverables), others see the private sector as the solution to the RBM straitjacket. A notion seems to have emerged that increased private sector involvement may in fact allow humanitarians to kill two birds with one stone. Much of the attraction of partnerships and outsourcing seems to be that RBM obligations can be offloaded to private actors through subcontracting that details deliverables and outcomes. Hence, the private sector is envisioned both to be faster at delivering RBM-like outputs—now imagined as a separate objective for humanitarian actors—and quicker to deliver humanitarian response.

***

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), was originally posted on the website of the Advanced Training Program on Humanitarian Action (ATHA).

Humanitarian innovation, humanitarian renewal?

Written by

The continued evolution of the humanitarian innovation concept needs a critical engagement with how this agenda interacts with previous and contemporary attempts to improve humanitarian action.

Accountability and transparency have been central to discussions of humanitarian action over the past two decades. Yet these issues appear generally to be given scant attention in the discourse around humanitarian innovation. The humanitarian innovation agenda is becoming a self-contained field with its own discourse and its own set of experts, institutions and projects – and even a definitive founding moment, namely 2009, when the ALNAP study on innovation in humanitarian action was published.[1] While attempts to develop a critical humanitarian innovation discourse have borrowed extensively from critical discussions on innovation in development studies, humanitarianism is not development done in a hurry but has its own distinct challenges, objectives and methodologies.

I will focus here on concrete material innovations, most commonly referred to as ‘humanitarian technology’. Discussions on such humanitarian innovations regularly acknowledge the need to avoid both fetishising novelty in itself and attributing inherently transformative qualities to technology rather than seeing how technology may fit into and build upon refugees’ existing resources.

Renewing humanitarianism

While it is obvious that internal and external reflections on a humanitarian industry and a humanitarian ethos in need of improvement are much older pursuits, I will start – as most scholars in humanitarian studies do today – with the mid-1990s and the ‘Goma-moment’. To recover from the moral and operational failures of the response to the Rwanda genocide and the ensuing crisis in the Great Lakes region of Africa, humanitarianism turned to human rights based approaches (HRBA) to become more ethical, to move from charitable action to social contract. Yet HRBA always suffered from an intrinsic lack of clarity of meaning as well as the problem of states being the obliged parties under international human rights, a particular problem in the context of displacement, whether internal or across borders.

A decade or so later, in the aftermath of the 2004 Indian Ocean tsunami and in the face of accusations about poor governance, insufficient coordination, incompetence and waste, the humanitarian enterprise embarked on institutional reform to become better. Responses were to be maximised through Humanitarian Coordinators, funding was to become more efficient through Central Emergency Response Funds and, most importantly in the everyday life of humanitarian practitioners, the Cluster approach allocated areas of responsibility to the largest humanitarian actors.

The need for greater accountability and transparency was a driver of both HRBA (with its moral intricacies) and humanitarian reform (with its bureaucratic complexities). What is now happening with accountability and transparency within the technological-innovation-as-renewal paradigm?

If Rwanda and the Indian Ocean tsunami were the events ushering in HRBA and humanitarian reform, Haiti was the much heralded game-changer for technology whose use there (despite many practical problems and malfunctioning solutions) is generally assessed as positive.[2] In the years since, a host of new technology actors, initiatives, technical platforms and methodologies has emerged. New communications technology, biometrics, cash cards, drones and 3D printing have all captured the humanitarian imagination.

Thinking about problems and difficulties is often framed in terms of finding technical solutions, obtaining sufficient funding to move from pilot phase to scale, and so on. However, as ideas about progress and inevitability dominate the field, technology is seen not as something we use to get closer to a better humanitarianism but as something which, once deployed, is itself a better, more accountable and transparent humanitarianism.

So institutionalised have transparency and accountability become that they have now vanished off the critical radar and become part of the taken-for-granted discursive and institutional framework. Accountability and transparency are assumed to be automatically produced simply by the act of adopting and deploying new technology. (Interestingly, the third tenet usually listed with accountability and transparency, efficiency, is also a basic assumption of this agenda.)

Accountability, participation and transparency

A 2013 report published by UN OCHA, Humanitarianism in the Network Age, argues that “everyone agrees that technology has changed how people interact and how power is distributed”.[3] While technology has undoubtedly altered human interaction, an assumption that proliferating innovative humanitarian technology unveils power, redistributes power or empowers needs to be subjected to scrutiny.

The classic issues in humanitarian accountability – to whom it is owed and by whom, how it can be achieved and, most crucially, what would count as substantively meaningful accountability – remain acutely difficult to answer. These issues also remain political issues which cannot be solved only with new technical solutions emphasising functionality and affordability; we cannot innovate ourselves out of the accountability problem, in the same way as technology cannot be seen as an empty shell waiting to be filled with (humanitarian) meaning.

This speaks particularly to the quest for participation of those in need of humanitarian protection and assistance, “helping people find innovative ways to help themselves”. In practice, we know that humanitarians arrive late in the field – they are not (at least not outside their own communications) the first responders. Affected individuals, their neighbours and communities are. Yet we should be concerned if the engagement with technological innovation also becomes a way of pushing the resilience agenda further in the direction of making those in need more responsible than well-paid humanitarian actors for providing humanitarian aid.

The arrival of the private sector as a fully respectable partner in humanitarian action is in principle a necessary and desirable development. Nevertheless, while expressing distaste for the involvement of the private sector in humanitarian response is passé, talk of the importance of local markets and of ‘local innovation’, ‘indigenous innovation’ or ‘bottom-up innovation’ inevitably raises the questions: is the private sector, alongside those in humanitarian need, one of the local participants, and what does it want out of the partnership?

The current drive towards open data – and the belief in the emancipatory potential of open data access – means that transparency is a highly relevant theme on the humanitarian innovation agenda. Yet, on a pragmatic level, in an avalanche of information, it is difficult to see what is not there, particularly for individuals in crisis with limited access to information technology or with limited (computer) literacy.

Accountability and transparency thus seem to be missing in the implementation of the humanitarian innovation agenda, although innovation should be a means to enhance these objectives (among others) to produce a better humanitarianism.

Conclusions

First, we must beware of the assumption of automatic progress. We may be able to innovate ourselves out of a few traditional challenges and difficulties but most will remain, and additionally there will be new challenges resulting from the new technology.

Second, innovation looked at as a process appears suspiciously like the reforms of yesteryear. What, for example, is the difference between ‘bottom-up innovation’ and the ‘local knowledge’ valued in previous efforts to ensure participation? And are the paradigm shifts of innovation really much different from the moral improvement agenda of approaches such as human rights-based humanitarian aid?

Third, the increasingly self-referential humanitarian innovation discourse itself warrants scrutiny. With almost no talk of justice, social transformation or redistribution of power, we are left with a humanitarianism where inclusion is about access to markets, and empowerment is about making beneficiaries more self-reliant and about putting the label ‘humanitarian’ onto the customer concept in innovation theory.

 

***

[1] www.alnap.org/resource/9207
[2] See the IFRC World Disasters Report 2013 on Technology and Humanitarian Innovation.
www.ifrc.org/publications-and-reports/world-disasters-report/world-disasters-report-2013/
[3] www.unocha.org/hina

 


***

This blog is based on Kristin B. Sandvik’s article, ‘Humanitarian innovation, humanitarian renewal?’, published in a special Forced Migration Review supplement on ‘Innovation and refugees’.

A Humanitarian Technology Policy Agenda for 2016

Written by

The World Humanitarian Summit in 2016 will feature transformation through innovation as a key theme. Leading up to the summit, OCHA has voiced the need to “identify and implement … positions that address operational challenges and opportunities” (OCHA 2013) relating to the use of information technology, big data and innovations in humanitarian action.

In this blog post we sketch out four areas in need of further research over the next two years to provide policymakers, humanitarian actors and other stakeholders with up to date and relevant research and knowledge.

1.    Empowerment and Accountability

  • Pivoting humanitarian action: Maximizing user-benefit from technology

Affected populations are the primary responders in disasters and conflict zones, and actively use information technology to self-organize, spread information about their condition, call for aid, communicate with humanitarian actors, and demand accountability. New technologies also have the potential to put these first responders at the center of the entire life cycle of humanitarian action – from needs assessment and information gathering to analysis, coordination, support, monitoring and evaluation. It is crucial that member states, humanitarian organizations and volunteer & technical communities (V&TCs) adapt their practices to take advantage of this opportunity. The 2016 Summit should strengthen the end-user perspective in the development of guidelines for V&TCs.

  • The changing meanings of accountability

Over the last 20 years, the humanitarian community has increasingly focused on agency accountability and the professionalization of humanitarian action, vis-à-vis donors as well as beneficiaries. However, the technological revolution in humanitarian action and the increasingly central role of large telecom and tech companies make it necessary to broaden the focus of accountability considerations. For example, OCHA is now considering developing guidelines for how formal humanitarian organizations and V&TCs should cooperate with these companies. Leading up to the 2016 Summit, there is a need for more reflection and research on how technology can be used to enhance accountability in humanitarian action for all parties, including new actors.


2.    The role of aggregated data

Data collection and the importance of aggregated data have come to occupy an important role in humanitarian action. As illustrated by the 2013 World Disasters Report, big data and remote sensing capabilities provide an unprecedented opportunity to access contextual information about pending and ongoing humanitarian crises. Many notable initiatives such as the UN Global Pulse suggest that the development of rigorous information management systems may lead to feasible mechanisms for forecasting and preventing crises. Particular attention should be paid to three issue areas:

  • Veracity and validity

Multiple data transactions and increased complexity in data structures raise the potential for error in humanitarian data entry and interpretation. Data that is collected or generated through digital or mobile mechanisms will often pose challenges, especially regarding verification. Although significant work is underway to establish software and procedures for verifying such data, understanding the limits to the veracity and validity of humanitarian data will be critical.
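To make the verification problem concrete, the sketch below shows the kind of automated validity check that such software might perform on digitally collected survey records. The field names, rules and records are invented for illustration, not drawn from any actual humanitarian system.

```python
# Hypothetical validity checks for digitally collected survey records.
# All field names and plausibility rules are invented for this sketch.

def validate_record(record):
    """Return a list of validity problems found in one data record."""
    problems = []
    if not record.get("location"):
        problems.append("missing location")
    age = record.get("age")
    if age is None or not (0 <= age <= 120):
        problems.append("implausible or missing age")
    if record.get("household_size", 0) < 1:
        problems.append("implausible household size")
    return problems

records = [
    {"location": "Camp A", "age": 34, "household_size": 5},
    {"location": "", "age": 240, "household_size": 0},  # data-entry errors
]

for r in records:
    print(r, "->", validate_record(r) or "ok")
```

Checks like these catch only internal inconsistencies; they cannot establish that a record is true, which is why verification remains a hard problem rather than a solved one.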

  • Identity and anonymity

As humanitarian data is aggregated and made public, the chances of re-identification of individuals and groups increase at an unknown rate. This phenomenon, known as the mosaic effect, is widely recognized but little understood. There is little understanding of the dangers that sharing anonymized data would pose in a humanitarian context, where data may be limited but the potential damage of re-identification would be extreme.
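The mosaic effect can be illustrated with a toy example: two datasets that are each anonymous on their own can re-identify people once joined on shared quasi-identifiers (here, district and birth year). All names and records below are invented for this sketch.

```python
# Hypothetical illustration of the mosaic effect: joining an
# "anonymized" dataset with a separately published register
# re-attaches identities. All data here is invented.

aid_records = [  # anonymized aid distribution log (no names)
    {"district": "North", "birth_year": 1984, "aid_type": "cash"},
    {"district": "South", "birth_year": 1990, "aid_type": "food"},
]

public_register = [  # separately published register (has names)
    {"name": "A. Example", "district": "North", "birth_year": 1984},
    {"name": "B. Sample", "district": "South", "birth_year": 1990},
]

# Joining on the shared quasi-identifiers (district, birth year)
# re-identifies the supposedly anonymous aid recipients.
reidentified = []
for rec in aid_records:
    for person in public_register:
        if (person["district"], person["birth_year"]) == \
           (rec["district"], rec["birth_year"]):
            reidentified.append(
                {"name": person["name"], "aid_type": rec["aid_type"]})

print(reidentified)
```

In real settings the join keys are rarely this clean, but the underlying risk is the same: each released dataset adds tiles to the mosaic.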

  • Agency and (dis)empowerment

The aggregation of humanitarian data from multiple streams and sources decreases the likelihood that the individuals and groups reflected in that data will be aware of, and able to influence, the way in which it is used. Upholding the principle at stake here, sometimes referred to as informational self-determination, is a challenge in digital and mobile data collection generally, but it is especially problematic in humanitarian contexts, where the risks associated with personal information are particularly grave.


3.    Enabling and regulating V&TCs

Remote volunteer and technical communities (V&TCs) now play an increasingly important role in humanitarian contexts – generating, aggregating, classifying and even analyzing data, in parallel to, or sometimes in collaboration with, more established actors and multilateral initiatives. They increasingly enjoy formalized relationships with traditional humanitarian actors, processing and generating information in support of humanitarian interventions. Yet individual volunteers participating in such initiatives are often less well equipped than traditional humanitarian actors to deal with the ethical, privacy and security issues surrounding their activities, although some work is underway. While the contribution of V&TCs in many ways represents a paradigm shift in humanitarian action, the digital and volunteering revolution has also brought new concerns regarding knowledge and understanding of core humanitarian principles and tasks, such as ‘do no harm’, humanity, neutrality and impartiality.

In considering the above issues, attention should be paid to inherent trade-offs and the need to balance competing values, including the following two:

  • Data responsibility vs. efficiency. There is an inherent tension between efficiency and data responsibility in humanitarian interventions. Generally, protecting the privacy of vulnerable groups and individuals will require the allocation of time and resources: to conduct risk assessments, to engage and secure informed consent, and to implement information security protocols. In humanitarian contexts, the imperative to act quickly and decisively may often run counter to more measured actions intended to mitigate informational risks to privacy and security.
  • Western values vs. global standards. It has also been argued that privacy is a Western preoccupation with little real relevance to victims of a humanitarian crisis, who face far more immediate and pressing threats. This argument highlights the important tension between mitigating informational risks to privacy and security and the need to deliver humanitarian aid efficiently and effectively. It does not, however, account for the concrete risks that irresponsible data management poses to individual and collective security.

This is our modest contribution to an agenda for research and policy development for humanitarian technology. We would like to join forces with other actors interested in these challenges to contribute to a necessary debate on a number of issues that touch upon some of the core principles for humanitarian action. The ambition is to strengthen humanitarian action in an innovative and accountable manner, making us better equipped to help people in need in the future.

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), Christopher Wilson (The Engine Room) and John Karlsrud (NUPI), was originally posted on the website of the Advanced Training on Humanitarian Action Project (ATHA).

Norwegian Centre for Humanitarian Studies
Contact: Centre Director Maria Gabrielsen Jumbert margab@prio.org, PRIO, PO Box 9229 Grønland, NO-0134 Oslo, Norway