Contingency planning in the Digital Age: Biometric data of Afghans must be reconsidered

This blog was first published on the Peace Research Institute Oslo (PRIO) blog and is available here.

The situation in Afghanistan changes by the minute. In this blog post, we want to call attention to a largely overlooked issue: the protection of Afghan refugees and other Afghans who have been registered biometrically by humanitarian or military agencies. Biometrics have been collected from various parts of the Afghan population, for different purposes and with different technical approaches, and recent events teach us a vital lesson: both the humanitarian and the military approach come with significant risks and unintended consequences.

Image: Cpl. Reece Lodder/United States Marine Corps

Normally, humanitarian biometrics and military biometrics are considered separate spheres. Yet, as we show in this piece, looking at military and humanitarian biometric systems in parallel gives a strong indication that the use of biometrics in intervention contexts calls for reconsideration. Neither anonymized nor identifiable biometric data is a ‘solution’ but rather comes with distinct risks and challenges.

Afghanistan, UNHCR and biometrics: risks of wrongfully denying refugees assistance

As embassies in Afghanistan are being evacuated and employees of international humanitarian agencies wonder how much longer they will be able to work, contingency plans are drawn up: Will there be population movements, will there be camps for IDPs in Afghanistan or for refugees across borders? How will they be registered? How will they be housed? Contingency planning will help save lives.

Future planning must learn from experiences of the past. In the case of Afghanistan those are dire. More than forty years ago, on Christmas Eve in 1979, the Soviet Army invaded the country. Afghans began fleeing and sought refuge across nearby borders. Numbers swelled progressively. A decade later there were more than five million refugees in Pakistan and Iran. The departure of Soviet troops was followed by continued civil war and the reign of the Taliban from 1996 to 2001.

A US-led coalition of Western powers dislodged the Taliban regime after the 9/11 attacks in New York. This was the starting point for the international community to invest in the return of Afghan refugees to their home country. The UN Refugee Agency (UNHCR) was tasked with organizing the return and found itself facing several challenges, including limited financial and technical capacities, and problems linked to the sheer number of persons to be repatriated. While UNHCR had started to develop and operate large-scale automated registration systems as early as the 1990s, these were not yet sufficiently advanced to deal with several million people. At the time, nobody had such systems. Registration was eventually outsourced to Pakistan’s National Database and Registration Authority (NADRA). Refugees were given Proof of Registration (PoR) cards issued by the Pakistani government. Another problem was the integrity of the voluntary return programme. Donors provided funds to UNHCR for the agency to disburse significant cash grants to Afghan refugees as incentives to return. But how could they ensure that nobody would come forward more than once to claim the allowance? There was no precedent. UNHCR was charting new territory and testing new approaches.

A ‘solution’ was offered by an American tech company: biometrics. A stand-alone system was set up. The iris patterns of every returning refugee above the age of 12 – later the age of 6 – were scanned and stored in a biometrics database. To protect the privacy of these individuals, each refugee’s iris image was stored anonymously. The belief was that the novel biometric system would comply with data privacy standards if the iris images were stored anonymously. The system operates ‘one to many’ matches, meaning that one iris image is matched against the numerous iris images stored in the database to search for a potential match. This means that a returning refugee would only receive a cash grant if their iris could not be found in UNHCR’s new biometric database. If the iris was found in the database, this was taken to mean that the person had already received a cash grant earlier, and repatriation assistance was thus denied. Millions of Afghans received their allowances and left images of their irises in UNHCR’s biometric database. Today, the total number of images in this database stands at well above four million.
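To make the ‘one to many’ decision rule concrete, it can be sketched as follows. This is purely an illustrative sketch, not UNHCR’s actual implementation: the function names, the bit-comparison matcher, and the threshold are all hypothetical stand-ins for whatever proprietary matching system was used.

```python
# Illustrative sketch of a one-to-many (1:N) deduplication check.
# iris_similarity() and the threshold are hypothetical stand-ins.

def iris_similarity(template_a, template_b):
    """Hypothetical matcher: fraction of matching bits between two codes."""
    matches = sum(a == b for a, b in zip(template_a, template_b))
    return matches / len(template_a)

def grant_decision(new_scan, database, threshold=0.9):
    """Return True (pay the cash grant) only if no stored iris matches."""
    for stored in database:                    # one scan compared against many
        if iris_similarity(new_scan, stored) >= threshold:
            return False                       # match found -> assistance denied
    return True                                # no match -> treated as first-time returnee

db = ["1010", "1111"]
print(grant_decision("1010", db))  # match in database -> grant denied
print(grant_decision("0101", db))  # no match -> grant paid
```

Because the stored records are anonymous, a denial produced by this logic cannot be traced back to a specific earlier registration and contested, which is the core problem discussed below.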

Fifteen years after its introduction, it was discovered that while this novel system was designed with the good intention of providing privacy protection to iris-registered returnees, it had unintentionally opened a Pandora’s box: automated decision-making without any possibility of recourse. If the biometric recognition system produced a false positive match (i.e., mistakenly matching the iris scan of a new returnee with one already registered in UNHCR’s database) – which statistically is possible, even likely – there was no way this returnee could prove that the machine’s match was in fact wrong. Since all scans are stored anonymously, a person cannot prove that the iris in UNHCR’s database belongs to someone else, even though that is a likely scenario for a system being tested on an unprecedented scale. Likewise, no UNHCR staff can overturn the decision of the machine. Thus, by default, the machine is always ‘right’. This logic risks turning the intended aim of privacy protection into a problem, namely denying assistance to returnees who rightfully claim repatriation cash grants from UNHCR.
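The claim that false positives are "statistically possible, even likely" follows from simple arithmetic: even a tiny per-comparison false match rate compounds when one scan is searched against millions of records. The per-comparison rate used below is an assumed illustrative figure, not a measured property of the actual system; the database size is the round figure cited above.

```python
# Why a 1:N search against millions of records makes false matches likely.
# The per-comparison false match rate (FMR) is an assumed illustrative value.

per_comparison_fmr = 1e-6        # assumption: one false match per million comparisons
database_size = 4_000_000        # roughly the database size cited above

# Probability that at least one of N independent comparisons falsely matches:
p_false_hit = 1 - (1 - per_comparison_fmr) ** database_size
print(f"{p_false_hit:.0%}")      # roughly 98% chance of at least one spurious match
```

Under these assumptions, a genuinely new returnee would face a very high chance of colliding with some anonymous record in the database, with no mechanism to contest the result.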

This system has never been replicated elsewhere. UNHCR has modernized its global registration systems in recent years and continues to use iris scanning and other biometric identifiers. In its current system, each image is linked to a person and can be checked in case of doubt. Such system designs are, however, not unproblematic either.

US military and identifiable biometrics in the hands of the Taliban: risk of reprisals

As embassies in Afghanistan are being evacuated, not only have many vulnerable individuals been left behind, but so have biometric identification devices. Indeed, not only UNHCR but also the US military has been collecting biometrics, though from very different parts of the Afghan population. This includes biometric data (i) from Afghans who have worked with coalition forces and (ii) from individuals encountered ‘in the field’. In both cases biometrics were used, for example, by the US military to check the identity of these individuals against biometrics stored in the US DoD’s Biometric-Enabled Watchlist, which contains biometrics from wanted terrorists, among others. As news has circulated about the Taliban getting their hands on biometric collection and identification devices left behind, and on the sensitive biometric data that these devices contain, an assessment of the situation, the risks, and the lessons is called for. What will the Taliban do with this data and with these devices? Will they, for example, use it to check whether an individual has collaborated with coalition forces?

If that is the case, it could have detrimental repercussions for anyone identified biometrically by the Taliban. The Taliban regime of course cannot check the iris scans and fingerprints of all individuals throughout Afghanistan. Yet, as we have seen in many other contexts, including humanitarian access, biometric checks could be introduced by the Taliban when Afghans, for example, cross a checkpoint moving from one region to another, or request access to hospitals or other government assistance. Would someone then decide not to go to hospital for fear of being identified by the Taliban as a friend of their coalition enemy? Or, as Welton Chang, chief technology officer at Human Rights First, noted, the biometric databases and equipment in Afghanistan that the Taliban now likely have access to could also be used “to create a new class structure – job applicants would have their bio-data compared to the database, and jobs could be denied on the basis of having connections to the former government or security forces.”

There are many worst-case scenarios to think through and to do our utmost to avoid, and there are many actors who should see this as a call to revise their approaches to the collection and storage of biometric data. Besides the two examples above, it is worth adding that, as part of its migration management projects worldwide, IOM has in recent years supported the Population Registration Department within the Ministry of Interior Affairs (MoIA) in the digitalization of paper-based ID cards (“Tazkiras”). The main objectives of the project are to accelerate the identity verification process and to establish an identity verification platform. Once operational, the platform can be accessed by external government entities dependent on identity verification for the provision of services. Since 2018, more than two million Afghan citizens have been issued a Tazkira smartcard, which is linked to a biometric database. The IOM project also supports the Document Examination Laboratory under the Criminal Investigation Department of MoIA in upgrading its systems and knowledge base on document examination.

What should be done? Access denied or data deleted

While this blog post cannot possibly address all the various cases that involve biometrics in Afghanistan, it seems that the diverse range of actors that have collected biometric data from Afghans over the past 20 years need to undertake an urgent risk assessment, ideally in a collective and collaborative manner. On that basis, a realistic mitigation plan should be developed. How can access, for example, be denied or data deleted?

We do not know what will happen next in Afghanistan. Should the situation develop in a way that sees a new wave of refugees into Pakistan, UNHCR’s stand-alone iris system would lose its relevance, because the new refugees could well be those four million who returned during the past 19 years, and whose biometric data UNHCR has already processed once before and keeps in its database. In such a scenario, the database would serve no purpose and preparations should be made to destroy it in line with the Right to be Forgotten. Indeed, there is consensus among many human and digital rights specialists that individuals have the right to have private information removed from Internet searches and other directories and databases under certain circumstances. The concept of the Right to be Forgotten has been put into practice in several jurisdictions, including the EU. Biometric data is considered a special category of particularly sensitive data, whether it is stored anonymously or not. As opposed to ID cards and passports, a biometric identity cannot be erased: you will always carry your fingerprint and iris. In fact, the main legal basis for the processing of sensitive personal data is the explicit informed consent of the concerned individual.

Another lesson for future reference should be the understanding that neither anonymized nor unanonymized biometric data provides easy technological solutions. None of the above approaches can be replicated in future wars or interventions without serious reconsideration, including questions about whether and why the data is needed and careful attention to whether it should be deleted. Hence, this is the moment for UNHCR, as the global protection agency, to review and share the lessons of this project. It is time to show respect for the digital rights of those who have certainly never consented to their biometric data being maintained in a database beyond the point of usefulness.

One advantage of seeing this humanitarian biometric system in parallel with US military use, and other uses of biometrics in Afghanistan, is that together these examples powerfully illustrate some of the many challenges confronting the at times stubborn belief in biometrics as a solution, making those challenges visible from many different ‘user’ perspectives. Anonymous data is not a solution (as per the UNHCR example), nor is unanonymized data (as per the US military example). What should we do then? What do both ‘failures’ mean for how to think about the use of biometrics in future interventions, humanitarian and military?

Having stored this data for almost two decades, and now concluding that this effort was potentially not just useless but, more seriously, risked producing additional insecurity – e.g. for Afghans wrongfully denied humanitarian assistance – should prompt a reconsideration of the taken-for-granted assumption that the more biometrics are collected from refugees, the better. This should be a starting point for reviewing both the risks of data traceable to individuals and those of anonymous data. So far we have paid attention to refugee digital bodies and digital dead bodies – but what about abandoned digital bodies?

Katja Lindskov Jacobsen holds a PhD in International Relations from Lancaster University. She is a Senior Researcher at the University of Copenhagen, Department of Political Science, Centre for Military Studies. Her research focuses on security and intervention. 

Karl Steinacker is an expert on issues relating to forced migration, humanitarian aid and digital identity and trust. He has worked in the aid and development industry for more than 30 years, including four different UN agencies and the German Humanitarian Aid. As a manager and diplomat of the UNHCR, he was for several years in charge of registration, biometrics, and the digital identity of refugees. He currently works as Digital Advisor for the International Civil Society Centre (ICSC).

Humanitarian biometrics in Yemen: The complex politics of humanitarian technology

The introduction of biometrics in Yemen is a prime example of challenges related to the use of biometric solutions in humanitarian contexts. The complexity of the situation in Yemen needs to be acknowledged by policy makers and other stakeholders involved in the humanitarian crisis unfolding in the country.

The humanitarian crisis in Yemen

Yemen is experiencing a humanitarian catastrophe. Currently, a majority of Yemenis, more than 24 million people – 80 percent of the population – are in need of humanitarian assistance to cover their basic needs. According to the UN, more than 16 million of those face crisis levels of food insecurity and, of those, 3.5 million women and children require acute treatment for malnutrition. A child dies every 10 minutes from diseases, such as measles and diphtheria, that could easily be prevented, leading UN Secretary-General António Guterres to describe childhood in Yemen as a special kind of hell.

This humanitarian catastrophe is man-made. The truism that reality is complex should not be used to detract from this simple but unpleasant fact. The catastrophe in Yemen has developed to its current unfathomable level because of choices that have allowed it to continue and deteriorate. Some of these have been deliberate whereas others have been accidental or the result of decisions with seemingly unintended side effects.

Cutting aid is a death sentence

The international community has struggled to find effective strategies for alleviating the suffering of ordinary Yemenis. Simultaneously, belligerents on the ground repeatedly demonstrate blatant disregard for the lives of the people they purport to defend and represent. The lack of trustworthy data and the absence of simple solutions can lead to resignation. The most recent UN donor conference had an aid goal of $3.85 billion but only $1.7 billion was pledged, meaning that as of April 2021 aid agencies are only reaching half of the 16 million people targeted for food assistance every month. Clearly, a lack of engagement with Yemen has direct implications for the thousands of men, women and children that suffer the consequences of this conflict every day.

Challenging context for humanitarian work

Humanitarian aid agencies point to Yemen as a complex and challenging context for humanitarian work. They face bureaucratic and political obstacles and restrictions on movement that limit access to beneficiaries, as well as difficulties in reaching parts of Yemen due to the dispersion of settlements, and weak infrastructure that has deteriorated further during the conflict. Further, the highly unstable security situation impedes effective humanitarian assistance delivery. Finally, there is a lack of reliable data, making it difficult for aid agencies to properly track and document both the needs and the effects of aid. This is only exacerbated by the conflicting parties’ lack of transparency and accountability. In Yemen, humanitarian aid is big business.

Biometric-based humanitarian responses

As explored in the policy brief Piloting Humanitarian Biometrics in Yemen: Aid Transparency versus Violation of Privacy?, the World Food Programme (WFP) has developed a digital assistance platform, SCOPE, to manage the registration of and provision of humanitarian assistance and entitlements for over 50 million beneficiaries worldwide. In Yemen, the WFP has applied a mobile Vulnerability Analysis and Mapping approach to conduct remote phone-based data collection and food-security monitoring and has implemented a Commodity Voucher system as a transfer mechanism for beneficiaries. In the government-controlled areas in the south of Yemen, the WFP has registered more than 1.6 million beneficiaries to date, but the Houthi authorities in the north of Yemen have been slow to accept the roll-out of biometric registration.

The WFP has argued that the introduction of a biometric registration system would help prevent diversion and ensure that food reaches those who need it most. Biometrics is envisioned to simplify the registration and identification of beneficiaries, as many Yemenis do not have identification documents. Moreover, as explored further in the aforementioned policy brief, biometric data is more reliable than paper documents, which can be stolen or manipulated. The WFP also accentuates that biometric registration has the potential to reduce fraud by increasing the traceability of assistance. Biometric registration also supports a high degree of versatility, allowing relevant services to be quickly adjusted in a volatile environment where conflict might force families to relocate on short notice.

Humanitarian biometrics in Yemen: A complex case

The use of biometrics in Yemen is a prime example of the challenges related to the use of biometric solutions in humanitarian contexts. These challenges are inherently political and highlight the potential clash between values and objectives. The WFP maintains that biometric registration is necessary to prevent fraud and ensure effective aid distribution, whereas the Houthis accuse the WFP of violating Yemeni law by demanding control over biometric data. The Houthis allege that the WFP is not neutral and is a potential front for intelligence operations. The Houthis’ allegations were given credence by the recent controversy surrounding the WFP’s partnership with the data-analytics firm Palantir, which underscores the need for greater attention to responsible data management in the humanitarian sector. Distressed civilian Yemenis, in dire need of humanitarian assistance, are caught in the middle.

What is this “middle”? The use of a biometric system, while having commendable intentions, creates new problems beyond the political disputes on the ground. The use of personal data of vulnerable people in a highly contested conflict further exposes local communities to risks. The problems raised by the expansive collection of personal data include theft, interception, and the unintended or unaccountable exchange of private data; in the contentious Yemeni context, such a breach of privacy may be a matter of life and death. Yet the scale of the humanitarian crisis means that effective distribution of humanitarian aid is, quite literally, also a matter of life and death. In a situation where the humanitarian effort is underfunded, it is paramount to ensure effective, transparent, and accountable aid distribution.

The Yemeni case analysed in the policy brief points to the broader problems associated with reliance on new technology-based solutions to complex problems. The complexity of the situation illustrated in this case needs to be acknowledged by policy makers and other stakeholders involved in the humanitarian crisis unfolding in the country. While the potential for digital and new technology-based innovation to contribute to alleviating human suffering should be explored, the wider societal and political implications need to be considered by those involved in these processes.

New MidEast policy brief on humanitarian biometrics in Yemen

In this latest Peace Research Institute Oslo (PRIO) Middle East Centre policy brief, Piloting Humanitarian Biometrics in Yemen: Aid Transparency versus Violation of Privacy?, Maria-Louise Clausen addresses the challenges of using biometrics for the World Food Programme’s aid distribution in Yemen. It highlights the need for balanced approaches that counter fraud and the diversion of humanitarian aid while also safeguarding the privacy of beneficiaries.

Humanitarian work is under pressure from donors to prove efficiency, cut costs, and strengthen accountability. To this end, biometric data – such as fingerprints or iris scans – is increasingly used to register and identify beneficiaries in food assistance, refugee identity management, and cash assistance. The World Food Programme (WFP) is at the forefront of this development, but in Yemen, its roll-out of biometric registration has been met with resistance from the Houthi authorities in the north. The Houthis accuse the WFP of not being neutral and of violating Yemeni law by wanting control over biometric data.

In response, the WFP has on several occasions scaled back its humanitarian assistance. The WFP maintains that biometric registration is necessary to prevent fraud and ensure effective aid distribution, and emphasizes that beneficiary data is held in a secure system. Additionally, biometric registration is formally voluntary, but critics question whether the requirement of informed consent is meaningfully upheld when acceptance of biometric registration is a prerequisite for access to life-saving food and medical treatment.

The power struggle between the Houthis and the WFP shows how aid is politicized and weaponized. Civilian Yemenis are caught in the middle. The policy brief points out that the dilemmas related to efficient and transparent aid distribution are genuine, but that the introduction of biometrics can impose additional risks on the already most vulnerable. While biometric registration can prevent fraud and ensure effective aid distribution, more attention to the consequences of biometric registration for the beneficiaries is required.

The policy brief reflects findings from the project “Biometrics and the humanitarian intervention in Yemen”, which is supported by the Norwegian Centre for Humanitarian Studies (NCHS) and the PRIO Middle East Centre.

You can download the full MidEast Policy Brief here.

This article was originally published by the PRIO Middle East Centre.

Humanitarian experimentation

Humanitarian actors, faced with ongoing conflict, epidemics, famine and a range of natural disasters, are increasingly being asked to do more with less. The international community’s commitment of resources has not kept pace with expectations or with the growing crises around the world. Some humanitarian organizations are trying to bridge this disparity by adopting new technologies – a practice often referred to as humanitarian innovation. This blog post, building on a recent article in the ICRC Review, asserts that humanitarian innovation is often human experimentation without accountability, which may both cause harm and violate some of humanitarians’ most basic principles.

While many elements of humanitarian action are uncertain, there is a clear difference between using proven approaches to respond in new contexts and using wholly experimental approaches on populations at the height of their vulnerability. This is also not the first generation of humanitarian organizations to test new technologies or approaches in the midst of disaster. Our article draws upon three timely examples of humanitarian innovations, which are expanding into the mainstream of humanitarian practice without clear assessments of potential benefits or harms.

Cargo drones, for one, have been presented as a means to help deliver assistance to places that aid agencies otherwise find difficult, and sometimes impossible, to reach. Biometrics is another example. It is said to speed up cumbersome registration processes, thereby allowing faster access to aid for people in need (who can only receive assistance upon registration). And, in the case of the 2014 outbreak of Ebola in West Africa, data modelling was seen as a way to guide the response. In each of these cases, technologies with great promise were deployed in ways that risked, distorted and/or damaged the relationships between survivors and responders.

These examples illustrate the need for investment in ethics and evidence on the impact of the development and application of new technologies in humanitarian response. It is incumbent on humanitarian actors to understand both the opportunities posed by new technologies and the potential harms they may present – not only during the response, but long after the emergency ends. This balance is between, on the one hand, working to identify new and ‘innovative’ ways of addressing some of the challenges that humanitarian actors confront and, on the other hand, the risk of introducing new technological ‘solutions’ in ways that resemble ‘humanitarian experimentation’ (as explained in the article). The latter carries with it the potential for various forms of harm. This risk of harm is not only to those that humanitarian actors are tasked to protect, but also to humanitarian actors themselves, in the form of legal liability, loss of credibility and operational inefficiency. Without open and transparent validation, it is impossible to know whether humanitarian innovations are solutions, or threats themselves. Aid agencies must not only be extremely attentive to this balance, but should also do their utmost to avoid a harmful outcome.

Framing aid projects as ‘innovative’, rather than ‘experimental’, avoids the explicit acknowledgment that these tools are untested, understating the risks these approaches may pose and sidestepping the extensive body of laws that regulate human trials. Facing enormous pressure to act and ‘do something’ in view of contemporary humanitarian crises, a specific logic seems to have gained prominence in the humanitarian community, a logic that conflicts with the risk-taking standards that prevail under normal circumstances. The use of untested approaches in uncertain and challenging humanitarian contexts creates risks that do not necessarily serve humanitarian principles. In fact, they may even conflict with the otherwise widely adhered-to Do No Harm principle. Failing to test these technologies, or even explicitly acknowledge that they are untested, prior to deployment raises significant questions about both the ethics and evidence requirements implicit in the unique license afforded to humanitarian responders.

In Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation, we contextualize humanitarian experimentation—providing a history, examples of current practice, a taxonomy of potential harms and an analysis against the core principles of the humanitarian enterprise.


Kristin Bergtora Sandvik, SJD Harvard Law School, is a Research Professor at the Peace Research Institute Oslo and a Professor of Sociology of Law at the University of Oslo. Her widely published socio-legal research focuses on technology and innovation, forced displacement and the struggle for accountability in humanitarian action. Most recently, Sandvik co-edited UNHCR and the Struggle for Accountability (Routledge, 2016), with Katja Lindskov Jacobsen, and The Good Drone (Routledge, 2017).

Katja Lindskov Jacobsen, PhD International Relations Lancaster University, is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. She is an international authority on the issue of humanitarian biometrics and security dimensions and is the author of The Politics of Humanitarian Technology (Routledge, 2015). Her research has also appeared in Citizenship Studies, Security Dialogue, Journal of Intervention & Statebuilding, and African Security Review, among others.

Sean Martin McDonald, JD/MA American University, is the CEO of FrontlineSMS and a Fellow at Stanford’s Digital Civil Society Lab. He is the author of Ebola: A Big Data Disaster, a legal analysis of the way that humanitarian responders use data during crises. His work focuses on building agency at the intersection of digital spaces, using technology, law and civic trusts.