
Protecting children’s digital bodies through rights


This text first appeared on Open Global Rights and is re-posted here.

Kristin Bergtora Sandvik is a socio-legal scholar with a particular interest in the politics of innovation and technology in the humanitarian space. She is a research professor in humanitarian studies at PRIO, and a professor in the Department of Criminology and Sociology of Law at the University of Oslo.

Children are becoming the objects of a multitude of monitoring devices—what are the possible negative ramifications in low resource contexts and fragile settings?

The recent incident of a UNHCR official tweeting a photo of an Iraqi refugee girl holding a piece of paper with all her personal data, including family composition and location, is remarkable for two reasons. The first is the stunning indifference, and perhaps also ignorance, displayed by a high-ranking UN communications official with respect to a child’s personal data. The more notable aspect of the incident, however, has been the widespread condemnation of the tweet (since deleted) and its sender, and her explanation that the photo was “six years old”. While public criticism has focused on the power gap between humanitarians and refugees and the precarious situation of Iraqi refugees, the incident is noteworthy because it marks the emergence of a new figure in international aid and global governance: that of children’s digital bodies.

Because children are dependent, what technology promises most of all is almost unlimited care and control: directly by parents, and indirectly by marketing agencies and tech companies building consumer profiles. As explained by Deborah Lupton, in the political economy of the global North (and, I would add, the global East), children are becoming the objects of a multitude of monitoring devices that generate detailed data about them. What are the possible negative ramifications in low-resource contexts and fragile settings characterized by deep-seated oversight and accountability deficits?

The rise of experimental practices: Ed. Tech, babies and biometrics

There is a long history of problematic educational transplants in aid contexts, from the dumping of used textbooks to culturally or linguistically inappropriate material. The history of tech-dumping in disasters is much more recent, but it too problematically involves large-scale testing of educational technology platforms. While practitioners complain about relevance, lack of participatory engagement and questionable operability in emergency contexts, the ethical aspects of educational technology (Ed. Tech), data extraction—and how the collection of data from children and youth constitutes part of the merging of aid and surveillance capitalism—are little discussed.

Another recent trend concerns infant biometric identification to help boost vaccination rates. Hundreds of thousands of children die annually from preventable diseases, many because of inconsistencies in the provision of vaccine programs. Biometric identification is thus intended to link children with their medical records and overcome the logistical challenges of paper-based systems. Trials are now ongoing or planned in India, Bangladesh and Tanzania. While accurately capturing the biometric data of infants still poses technical challenges, new biometric techniques capture fingers, eyes, faces, ears and feet. Beyond vaccines, uses for child biometrics include combatting aid fraud, identifying missing children and preventing identity theft.

In aid, data is increasingly extracted from children through the miniaturization and personalization of ICT. Infant and child biometrics are often coupled with tracking devices in the form of wristbands, necklaces, earpieces and other devices that users carry for extended periods of time.

Across the board, technology initiatives directed at children are usually presented as progress narratives, with little concern for unintended consequences. In the economy of suffering, children and infants are always the most deserving individuals, and life-saving interventions are hard to argue against. Similarly, the urgency of saving children functions as a call to action that affords aid and private sector actors room to maneuver with respect to testing and experimentation. At the same time, the mix of gadget distribution and data harvesting inevitably becomes part of a global data economy, in which patterns of structural inequality are reproduced and exacerbated.

Children’s digital bodies

Despite the massive technologization of aid targeting children, so far, no critical thinking has gone into considering the production of children’s digital bodies in aid. The use of digital technologies creates corresponding “digital bodies”—images, information, biometrics, and other data stored in digital space—that represent the physical bodies of populations affected by conflict and natural hazards, but over which these populations have little say or control. These “digital bodies” co-constitute our personalities, relationships, legal and social personas—and today they have immense bearing on our rights and privileges as individuals and citizens. What is really different about children’s digital bodies? What is the specific nature of risk and harm these bodies might incur?

In non-aid contexts, critical data researchers and privacy advocates are only just beginning to direct attention to these practices, and in particular to the array of specific harms children may incur, including but not limited to the erosion of privacy.

The question of testing unfinished products on children is deeply contentious: the possibility that unsafe products may be trialed in fragile and low-resource settings under different requirements than those imposed by rich countries is highly problematic. On the other hand, parachuting and transplanting digital devices from the global North and East to the global South without any understanding of local needs, context and adaptation practices is—based on the history of technological imperialism—ineffective, disempowering and a misuse of resources and, at worst, could further destabilize fragile school systems.

In aid tech targeting children, the potential for digital risk and harm is very often ignored or rendered invisible. Risk is framed as an issue of data security, malfunction and human manipulation of data. Children—especially in low-resource settings—have few opportunities to challenge the knowledge generated through algorithms. They also have scant techno-legal consciousness with respect to how their personal data is being exploited, commodified and used for decisions about their future access to resources such as healthcare, education, insurance, welfare and employment. There is the obvious risk of armed actors and other malicious actors accessing and exploiting data; but there are also issues connected to wearables, tablets and phones being used as listening devices for surveilling the child’s relatives and carers. It is incumbent on aid actors to understand both the opportunities posed by new technologies and the potential harms they may present—not only during the response, but long after the emergency ends.

Conclusion: time to turn to the CRC!

The mainstreaming of combined surveillance and data extraction from children now taking place in aid, ranging from education technology to infant biometrics, means that critical discussion of the ethical and legal implications for children’s digital bodies is becoming a burning issue.

The do-no-harm principle is a key ethical guidepost across the fields of development, humanitarianism and global health. The examples above illustrate the need for investment in ethics, and in evidence on the impact of developing and applying new technologies in low-resource and fragile settings. Practitioners and academics need to be alert to how the framing of structural problems shifts toward problematizations amenable to technological innovation and intervention, and to the interests of technology stakeholders. But is that enough?

The 1989 Convention on the Rights of the Child (CRC) represented a watershed moment in thinking about children’s rights to integrity, to be heard and to protection of their physical bodies. Article 3.1 demands that “In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration.” The time has now come to articulate and integrate an understanding of children’s digital bodies in international aid within this normative framework.

Safeguarding: good intentions, difficult process


This post first appeared on the ALNAP blog and is re-posted here.

In front of a church in Port-au-Prince, Haiti. Image by Wenche Hauge / PRIO

In the wake of the scandal in Haiti revolving around sexual misconduct by Oxfam staff in the aftermath of the 2010 earthquake, the aid sector is now engaging in ‘safeguarding’ exercises. While initially based on a UK legal definition that applied to vulnerable adults and children, safeguarding has acquired a broader meaning, which includes all actions by aid actors to protect staff from harm (abuse, sexual harassment and violence) and to ensure staff do not harm beneficiaries. However, despite good intentions, I suggest that the safeguarding response has some problematic qualities which need to be discussed. Here I will focus on two:

Formulating inclusive and informed safeguarding

First, as we move from arguments for the legitimacy of safeguarding initiatives to a discussion of the legitimacy of how they are implemented, there has been vocal concern about a lack of inclusivity. Critics have noted that a ‘safeguarding industry was hatched, and experts magically appeared and promises of change were made’, with little attention to local and national contexts or participation.

These types of objections speak to the sector’s long-standing struggle with bottom-up accountability. The view that safeguarding is yet another Western-centric practice, and frustrated complaints about the absence of meaningful field participation and local consultations when formulating safeguarding approaches, need to be taken seriously and addressed carefully – with the cognisance that the underlying issues of discontent go far beyond safeguarding.

However, I think we need to be clear that technical and ‘programming’ conversations around safeguarding also expose difficult and normally ‘hidden’ contestations over privilege, power and race – contestations in which the long-standing struggles of women of colour in aid crash head-on into the whiteness of the Me Too movement, the whiteness of ‘humanitarian feminism’ and the whiteness of the sector more generally. Here I think the sector – including reform-minded individuals – could be more honest about who is around the table and why, and display a greater willingness to engage: this type of conversation is and will be uncomfortable – but if we want to go anywhere with safeguarding, so be it.

Establishing clarity not de facto criminalisation

The second issue pertains to the inherent vagueness and malleability of the concept. While problems in the sector are frequently attributed to the lack of a clear definition of an emerging challenge, something else seems to be at play here. At its core, the idea of safeguarding is to reinforce the humanitarian imperative to Do No Harm by preventing ‘sexual abuse and exploitation’. Humanitarians have long been concerned about this and have tried to do something about it. For decades, sexual exploitation has been considered the worst possible behaviour humanitarian workers can be guilty of, but it has perhaps been less clear what constitutes exploitation and in which relationships it takes place.

Previously, too many behaviours and relationships were left out of the equation of behavioural mores in the sector – but are we on the road to leaving too many in today? Is safeguarding at risk of becoming a moral Trojan horse that implants new social and political struggles into the humanitarian space?

I am thinking here particularly of transactional sex. The interpretation of what safeguarding means is also shaped by changing cultural perceptions of transactional sex and prostitution, primarily in the Global North. While the Me Too campaign is very recent, it links up with a longer-standing trend in big donor countries: the de facto criminalisation of prostitution by criminalising the buyer. Whereas Codes of Conduct have been promoted as a key mechanism for governing the sexual behaviour of humanitarian workers, the act of buying sex is increasingly construed, legally and ideologically, as a criminal practice.

In my view, this is possibly the most difficult field of social practice covered by safeguarding, and where it is vital to think carefully so that one can navigate the fine line between justifiable moral censure and moralistic outrage. Is moralistic outrage necessarily a bad thing? The view appears to be emerging that paying for sex, anywhere and at any time, is incompatible with being a ‘good’ humanitarian worker and dependable employee; the distinction between paying for sex and exploiting someone for sex is being erased.

While buying sex in the 1980s, for example, appears to have been a fairly common practice in the aid world (broadly defined), much of the moral indignation previously linked to prostitution and aid was linked to the HIV/AIDS epidemic and the fact that buying sex helped spread the epidemic at home and abroad. Today, in such donor countries as Canada, France, Iceland, Ireland, Norway and Sweden, buying sex is illegal and is punished with fines or prison sentences. At the same time, criminalisation remains extremely controversial, and the extent of this controversy is perhaps getting lost as the abolitionist approach travels to the humanitarian space.

Global prostitution activism has long been an ideological battlefield, with a seemingly unbridgeable abyss between those who see prostitution as violence against women and those who want it regulated as work, regardless of gender. What are the costs and trade-offs of transporting this battlefield into humanitarian practice? While I am not aware of any comprehensive effort to track the consequences of criminalisation for sex workers, new research indicates that vulnerable women in prostitution become more vulnerable through criminalisation in the Global North.

Thus, when trying to gauge an appropriate scope for the idea of safeguarding, I think it is necessary to reflect on the usefulness (and normative appropriateness) of maintaining a strong conceptual distinction between procuring sexual services from individuals receiving aid or falling under protection mandates, and procuring them from sex workers who are neither recipients of aid nor in a position of vulnerability in a specific humanitarian field setting.

It is now widely recognised that buying sex in emergencies rests on deep power differentials, is fundamentally unacceptable and as such threatens the legitimacy of the sector. This recognition is long overdue, and its emergence should be seen as progress. However, it does not imply that safeguarding practices should be used as a vehicle for criminalising buyers and abolishing prostitution going forward.