
Humanitarian biometrics in Yemen: The complex politics of humanitarian technology


The introduction of biometrics in Yemen is a prime example of challenges related to the use of biometric solutions in humanitarian contexts. The complexity of the situation in Yemen needs to be acknowledged by policy makers and other stakeholders involved in the humanitarian crisis unfolding in the country.

The humanitarian crisis in Yemen

Yemen is experiencing a humanitarian catastrophe. Currently, more than 24 million Yemenis – 80 percent of the population – need humanitarian assistance to cover their basic needs. According to the UN, more than 16 million of those face crisis levels of food insecurity and, of those, 3.5 million women and children require acute treatment for malnutrition. A child dies every 10 minutes from diseases, such as measles and diphtheria, that could easily be prevented, leading UN Secretary-General António Guterres to describe childhood in Yemen as "a special kind of hell".

This humanitarian catastrophe is man-made. The truism that reality is complex should not be used to detract from this simple but unpleasant fact. The catastrophe in Yemen has developed to its current unfathomable level because of choices that have allowed it to continue and deteriorate. Some of these have been deliberate whereas others have been accidental or the result of decisions with seemingly unintended side effects.

Cutting aid is a death sentence

The international community has struggled to find effective strategies for alleviating the suffering of ordinary Yemenis. Simultaneously, belligerents on the ground repeatedly demonstrate blatant disregard for the lives of the people they purport to defend and represent. The lack of trustworthy data and the absence of simple solutions can lead to resignation. The most recent UN donor conference set an aid goal of $3.85 billion, but only $1.7 billion was pledged, meaning that as of April 2021 aid agencies are reaching only half of the 16 million people targeted for food assistance every month. Clearly, a lack of engagement with Yemen has direct implications for the thousands of men, women and children who suffer the consequences of this conflict every day.

Challenging context for humanitarian work

Humanitarian aid agencies point to Yemen as a complex and challenging context for humanitarian work. They face bureaucratic and political obstacles and restrictions on movement that limit access to beneficiaries, as well as difficulties in reaching parts of Yemen due to the dispersion of settlements and weak infrastructure that has deteriorated further during the conflict. Further, the highly unstable security situation impedes the effective delivery of humanitarian assistance. Finally, there is a lack of reliable data, making it difficult for aid agencies to properly track and document both the needs and the effects of aid. This is only exacerbated by the conflicting parties' lack of transparency and accountability. In Yemen, humanitarian aid is big business.

Biometric-based humanitarian responses

As explored in the policy brief Piloting Humanitarian Biometrics in Yemen: Aid Transparency versus Violation of Privacy?, the World Food Programme (WFP) has developed a digital assistance platform, SCOPE, to manage the registration of beneficiaries and the provision of humanitarian assistance and entitlements for over 50 million people worldwide. In Yemen, the WFP has applied a mobile Vulnerability Analysis and Mapping approach to conduct remote phone-based data collection and food-security monitoring, and has implemented a Commodity Voucher system as a transfer mechanism for beneficiaries. In the government-controlled areas in the south of Yemen, the WFP has registered more than 1.6 million beneficiaries to date, but the Houthi authorities in the north have been slow to accept the roll-out of biometric registration.

The WFP has argued that the introduction of a biometric registration system would help prevent diversion and ensure that food reaches those who need it most. Biometrics is envisioned to simplify the registration and identification of beneficiaries, as many Yemenis do not have identification documents. Moreover, as explored further in the aforementioned policy brief, biometric data is more reliable than paper documents, which can be stolen or manipulated. The WFP also emphasises that biometric registration has the potential to reduce fraud by increasing the traceability of assistance. Biometric registration likewise supports a high degree of versatility and the ability to quickly adjust relevant services in a volatile environment where conflict might force families to relocate on short notice.

Humanitarian biometrics in Yemen: A complex case

The use of biometrics in Yemen is a prime example of the challenges related to the use of biometric solutions in humanitarian contexts. These challenges are inherently political and highlight a potential clash between values and objectives. The WFP maintains that biometric registration is necessary to prevent fraud and ensure effective aid distribution, whereas the Houthis accuse the WFP of violating Yemeni law by demanding control over biometric data. The Houthis allege that the WFP is not neutral and is a potential front for intelligence operations. These allegations were given credence by the recent controversy surrounding the WFP's partnership with the data analytics firm Palantir, a controversy that underscores the need for greater attention to responsible data management in the humanitarian sector. Distressed civilian Yemenis, in dire need of humanitarian assistance, are caught in the middle.

What is this "middle"? The use of a biometric system, however commendable its intentions, creates new problems beyond the political disputes on the ground. The use of personal data of vulnerable people in a highly contested conflict further exposes local communities to risk. The problems raised by the expansive collection of personal data include theft, interception, and unintended or unaccountable exchange of private data; in the contentious Yemeni context, such a breach of privacy may be a matter of life and death. Yet the scale of the humanitarian crisis means that effective distribution of humanitarian aid is, quite literally, also a matter of life and death. In a situation where the humanitarian effort is underfunded, it is paramount to ensure effective, transparent, and accountable aid distribution.

The Yemeni case analysed in the policy brief points to the broader problems associated with reliance on new technology-based solutions to complex problems. The complexity of the situation illustrated in this case needs to be acknowledged by policy makers and other stakeholders involved in the humanitarian crisis unfolding in the country. While the potential for digital and new technology-based innovation to contribute to alleviating human suffering should be explored, the wider societal and political implications need to be considered by those involved in these processes.

The datafication of refugee protection in and beyond the Middle East: A case for digital refugee lawyering


In February and March 2021, I organised a two-part workshop in which academics, activists, lawyers and NGO-workers were invited to (re)think how digital technologies interact with refugee protection, specifically in the Middle East. Refugee protection – the right to be protected from persecution and the right to make claims to these rights in another country – is increasingly data-driven protection. The increased pluralisation and privatisation of migration management interact with widespread experimental deployment of humanitarian technology. In regard to border and migration governance, governments and UN agencies are developing emerging digital technologies in ways that are ‘dangerous and discriminatory’.

Discussions of digital rights of refugees are key, because getting their privacy wrong can have disastrous consequences. Digital technologies also interact with refugee law, for instance by reconstituting what counts as legal knowledge. And the same technologies – biometric information and automated technologies – are also increasingly used for pre-emptive border controls further narrowing the right to seek refuge and future rights of refugees. Here I consider some important concerns and potential directions for doing differently, derived from the workshop, before I make a case for digital refugee lawyering.  

Concerns about data-driven refugee protection

The workshop’s geographical focus relates to the relatively large presence of refugee populations in Middle Eastern protection contexts and the complex legal interplay pertaining to the roles (and immunity) that International Organisations have taken on regarding refugee rights, in interaction with governments, private entities, implementing partners and donors. Limited regulation, combined with dwindling funding and the push for efficiency and ‘objectivity’ by external stakeholders, has contributed to experimental technology use, such as iris-scanning technologies, automated vulnerability assessments and cash assistance via blockchain technology. Humanitarian operations in Jordan and Lebanon are known for innovation and the datafication of relief. Geographical areas that receive less humanitarian and academic attention are perhaps also prime locations for technological experimentation.

Recently, there has been more attention to data protection in humanitarian settings. International organisations have developed their own data protection policies. But problems such as the limited information provided to data subjects, widespread (meta)data sharing and the permanence of data persist, as does the presumption that a digital identity will translate into a legal identity. Concerns about the use of data beyond its original purposes, cyber (in)security, and the tendency of algorithms to entrench structural inequalities also remain.

The increased usage of ‘new’ technologies can obscure the fact that technologies have long been used in refugee management, and have often simultaneously imposed control. For instance, physical copies of UNHCR’s Refugee Status Determination handbook were never made accessible, out of concern that refugees would use them to ‘game’ the system. The current emphasis on data extraction and biometrics closely resembles colonial governance and its racialised exceptionalism. And some refugee communities have longstanding histories of being experimented on.

What is new is the persistence of data, their accessibility over distance and the ability to continuously reassemble them. Technologies can turn urban refugee settings into camp-like environments by installing modes of surveillance and control. Digital transformations are not confined to refugee governance. But experimentation in humanitarian settings often provides normative and scientific affirmation for technology-driven measures and relates to larger macro-political developments, including anti-migration tendencies and bio-tracing efforts to control Covid-19.

Greater and inclusive techno-legal consciousness

The involvement of the private sector and big tech often creates opacity. Across the board, there is a need for greater techno-legal consciousness and more knowledge of the back-end of technological infrastructure: how data can be (mis)used, exploited and misappropriated, and how the activities of private partners – including but going beyond Palantir, IrisGuard and Accenture – oscillate between border control and humanitarian operations. Such private partnerships raise questions about the normative frameworks used within UN organisations. Committed humanitarian operations might be dedicated to not sharing data, but it is questionable whether the third parties involved will uphold the same standards.

This is not an argument for more handbooks, for there is often a gap between guidelines produced in Brussels or Geneva and the actual data practices of humanitarian workers, and more handbooks can easily result in more work pressure in the ‘field’. Persisting hierarchical work cultures, fear that admitting mistakes will result in losses (of jobs or funding) and the need to tell success stories continue to make learning from the past difficult.

Academics, activists, affected populations, the tech community, practitioners, and policymakers ought to join their efforts. This includes being mindful of the politics of translation, language and accessibility of knowledge. Concerned populations are actively involved in negotiating safety, including with respect to the use of their data, but meaningful consent and access to the necessary information are often lacking. From the outset, people on the move, trusted local researchers and communities already working on these topics ought to be involved in discussions on digital rights. In the tech community, emphasis is often put on removing biases, whereas in refugee law personal information and characteristics are crucial to determining the credibility of a claim. Such differences need to be recognised and addressed.

Implementing partners headquartered in the EEA have, since 2018, been required to follow the General Data Protection Regulation (GDPR). The GDPR also applies to personal data collected from people beyond Europe; it does not apply to International Organisations. Workshop participants noted that the GDPR did not result in substantive changes in how data is collected, stored, and processed: other NGOs, not bound by the GDPR, would simply be asked to do the work. Many countries across the globe have their own national frameworks for data protection, but these are not always enforced as strictly as the GDPR. Data protection policies can also be (mis)used for government control.

Donors tend to push for efficiency and a logic of audit but have rather minimal requirements for data protection in technology-oriented programmes. And claims about the functionality of technologies in humanitarian relief are hardly ever questioned or evaluated. It is therefore noteworthy that in April 2021 a Member of the European Parliament asked why the EU, by funding the WFP’s and UNHCR’s biometric identity systems for refugee registration in Jordan, was approving standards that within the EU would be deemed ethically unacceptable. This question will hopefully be taken forward.

A case for digital refugee lawyering

Discussions on rights easily turn into discursive dances around responsibilities and sovereignty that bear little relation to realities on the ground. The concept of digital refugee lawyering I put forward therefore treats digital rights as a negotiated practice. It not only considers how technologies interact with the already precarious access to rights that is a reality for many forced and illegalised migrants worldwide. It also explores how to ensure that – given legal marginalisation and the (lack of) rule of law – people seeking protection, and the persons working to aid their access to rights, can draw safely upon the potential of digital connectivity. How technologies operate and interact with social relationships relates to matters of access, power, and privilege. Procedures adopted to curtail Covid-19 may aggravate the risks refugees face, and much of UNHCR’s processing is now done remotely. Legal aid is following this development. This only makes discussions on how to act collectively and locally in favour of the digital rights of refugees and other (illegalised) migrants more pertinent.

TikTok and the War on Data: Great Power Rivalry and Digital Body Counts


This post first appeared on Global Policy, and is re-posted here. You may access the original post by clicking this link. Kristin Bergtora Sandvik (SJD Harvard Law School) is a Professor of Sociology of Law at the University of Oslo and a Research Professor in humanitarian studies at the Peace Research Institute Oslo. Katja Lindskov Jacobsen is a Senior Researcher at Copenhagen University, Department of Political Science, Centre for Military Studies. In this post, the authors explore how a tech-reliant humanitarian sector increasingly finds itself implicated in a global War on Data.

“Personal data” by Natana Elginting via Freepik

In 1971, the US declared a War on Drugs. In 2001, it began a still-ongoing War on Terror. In 2020, it initiated a global War on Data to ‘combat’ the malicious collection of US citizens’ personal data. This is the first time that America has gone to war for its population’s digital bodies. While this represents a further shift of the battlefield into the domains of cyberspace and trade, this war too is likely to entail significant human suffering. This blog post thinks through the consequences for humanitarian aid, problematizing the notion of ‘digital body counts’.

In 1971, US President Richard Nixon declared a War on Drugs (WoD) to eradicate the supply of and demand for illegal narcotics. This global campaign consisted of drug prohibition, military aid and military interventions. The toll of this war – both human and financial – has been enormous, costing billions of dollars and taking thousands of lives annually.

In 2001, US President George W. Bush began a War on Terror (WoT) in response to the 9/11 attacks. This global military campaign has led to between 480,000 and 507,000 deaths in the major theatres of war (Afghanistan, Pakistan, Iraq) alone. As the security and reliability of data links improved, the WoT evolved into an increasingly remote form of warfare using armed drones. This also entailed an extension of the battlefield, illustrated by the rising number of US drone strikes in Somalia. Adding to the violence was the merger of the WoD and WoT, as illustrated by the Colombian example.

Almost 20 years on, in 2020, President Trump is now launching a ‘War on Data’ (WoDa). “At the president’s direction, we have taken significant action to combat China’s malicious collection of American citizens’ personal data”, Commerce Secretary Wilbur Ross stated in September 2020. The rationale he gave is familiar, namely that the US wants to promote “our national values, democratic rules-based norms, and aggressive enforcement of US laws and regulations.” While the other wars have depended on military tools from the outset, at present the WoDa ostensibly relies on a weaponization of trade policy, commerce and regulation – and the expansion of military logic to these domains.

While tensions around global technology hegemony have been building for years, we suggest that the official ‘launch’ of this campaign was the US extradition request for, and subsequent detention of, Huawei CFO Meng Wanzhou in Vancouver in December 2018, on charges of fraud and conspiracy to circumvent US sanctions on Iran. In 2020, this campaign – also labelled a “Tech Cold War” or a new “World War over technology” – has been ramped up, focusing on restricting technology flows to China, revamping global technology supply chains, barring certain industry actors from infrastructure projects (such as Huawei’s 5G network) and attempting to curtail US users’ access to Chinese digital goods and platforms like TikTok and WeChat.

This is also the first time that America is going to war for its population’s digital bodies. ‘Digital bodies’ are images, information, biometrics, and other digitalized data that represent the physical bodies of populations, but over which they have little control or say. Issues of control and say are particularly pertinent where such data is stored in databases to which access may be granted (e.g. via more or less public data sharing agreements), forced (e.g. hacking), or occur as a byproduct of accidental leaks.

While the WoDa at this point in time has a clearly designated enemy – China – it is likely to significantly affect civilian populations – and their digital bodies – globally, with particular significance for populations already affected by the WoD and WoT. This is because digital data gathering has been particularly intense as a counterterror technology, collecting enormous amounts of digital footprints on terror suspects, including where the grounds for suspicion may be weak at best (we return to this below). This requires us to think through the notion of digital body counts – not as a measure of disappeared US platform users and dead accounts, but as a critical human security issue. Crucially, we are concerned that the new WoDa is also likely to lead to bad humanitarian outcomes and real body counts. In the following, we identify three humanitarian aspects of the WoDa.

The first concerns sovereignty and the data colonization of war-affected civilians. Protecting sovereignty is increasingly about protecting the sovereignty of digital bodies – though this is a type of protection principle the US systematically ‘violates’ with respect to the ‘digital bodies’ of other states’ populations. Arguably, a precursor to the emergent WoDa is the extent to which the US has collected enormous amounts of biometric data on civilian populations in war zones, because the digital registration net is cast as widely as possible, and how this data is kept by the US even after its ‘wars’ in foreign places officially end (as in the case of Iraq). According to Spencer Ackerman, the US military compiled biometric data on “3 million Iraqis”, which the US holds onto even though troops have come home and the War in Iraq has officially ended. Similarly, UNHCR is an example of an important humanitarian actor that increasingly collects biometric data on the subjects it assists. In view of that, and related to the point about military biometrics moving beyond and being maintained beyond the battlefield, it is for example noteworthy that UNHCR has an agreement with the Department of Homeland Security to provide biometric data on all candidates for resettlement – data which is kept even when no resettlement takes place.

The second concerns great power rivalry spilling over into humanitarian technology. Proxy wars are a staple of US global military campaigns, and humanitarians often deal with their consequences. In the WoDa, the risk is that humanitarian space itself becomes the site of a proxy war. The issue of control over data and digital bodies, and the US aim of preventing China from collecting digital data on its citizens, could have implications for humanitarian operations that increasingly rely on the collection of digital data. From a humanitarian perspective, biometric registration of beneficiaries helps agencies overcome ‘double registration’ challenges, as well as challenges related to providing donors with reliable numbers. Now, on the one hand, we have seen institutions like the ICRC publish a Biometrics Policy that limits data storage. Yet, on the other hand, other humanitarian actors keep adding to the list of challenges that biometrics can help overcome – contactless biometrics as a response to “the risk of COVID-19” being but one example. The latter development calls for attention to the question of whose technology (US? Chinese? Other?) should be used for data gathering and storage in an increasingly technology-reliant and data-enthusiastic humanitarian domain. And what will be the consequences (for credibility, funding, acceptance, neutrality) of the increasing tension over the trustworthiness of the sector’s digital infrastructure?

The third concerns a broadening of the lawfare paradigm and its negative effects. The humanitarian sector is increasingly embedded in a lawfare paradigm, in which US counter-terrorism measures and ‘material support to terror’ provisions are being globalized through strategic litigation against humanitarian actors in domestic courts, coupled with blacklisting of the same actors in global banking systems. What we now see on the horizon is the possible extension of this type of lawfare to treat civil society’s procurement of Chinese-produced commercial off-the-shelf hardware, platforms and networks as a US national security issue. This would greatly raise the stakes for humanitarian actors and significantly impact their ability to provide aid.

Our initial takeaway is as follows: for people in war zones, the onset of a WoDa has potentially serious security implications for communities and aid actors. This is not about abstract notions of digital bodies and virtual body counts, nor is it a question only of risks to privacy and data protection. This is about life and death. As a tech-reliant humanitarian sector inevitably finds itself implicated in the WoDa, it risks becoming more than involuntarily entangled: the digital bodies that humanitarian practice produces may increasingly become targets in this war; the concept and moral imperative of humanitarian aid is put into play; and access to humanitarian assistance might be compromised as the sector gets further enmeshed in these great power rivalries.