AI in aid: Framing conversations on humanitarian policy


This blog post was originally published on the Global Policy Journal.

Introduction

At present, no major donor country engages extensively with artificial intelligence in its humanitarian policies or in its strategic thinking on aid. This blog looks at the development of humanitarian strategies in important donor countries and at how policy conversations around AI in aid can be framed. To illustrate what this conversation could look like, we use the ongoing Norwegian humanitarian strategy process as an example. While our observations are tentative, we aim to contribute to emergent discussions in the field of humanitarian policymaking.

The rise of AI, and more recently generative AI, as a core issue in global governance has enormous import for the humanitarian sector. AI may be used to sharpen reporting tools, plan humanitarian programs, distribute aid, map needs and more. The specific challenges that can arise in the humanitarian sector relate to the protection of vulnerable people, the protection of their personal data, and decisions affecting human lives being made by tools that rely on predictions and ‘most likely’ scenarios. They also relate to the power differentials and lack of democratic accountability inherent in humanitarian governance. At the same time, because the level of unmet needs is unprecedented and increasing, there is also a need to think about how the potential of AI can be harnessed for humanitarian action. At this stage, the humanitarian sector has more questions than answers about the potential of AI and its inherent risks. In assessing risk, harm and opportunity, we take the need for responsible AI, including a concern with risk, bias and meaningful accountability, as our point of departure.

We begin by describing the role and import of humanitarian strategies before outlining the regulatory context for strategic thinking on humanitarian AI. Thereafter, we articulate a set of questions to guide discussions around strategy development. Finally, we identify a set of AI-focused initiatives aimed at ‘making aid fit for purpose’ and enhancing humanitarian accountability and effectiveness, drawing on values and priorities in existing humanitarian strategy.

Crafting humanitarian strategies: What is the place of AI?

Humanitarian strategies are policy documents developed by donor countries to guide their humanitarian responses. Generally formulated to last for a given time period, these strategies aim to help decision makers and bureaucrats in the everyday management of funding and projects, and to offer guidance for humanitarian actors when new challenges and dilemmas arise. Norway’s current humanitarian strategy was launched in 2018 and expires at the end of 2023. The new strategy is currently being developed. Following a similar timeline, the German humanitarian strategy is being updated, while the UK Government launched its Humanitarian framework in November 2022, ‘outlining how the UK will deliver its humanitarian commitments’. Overall, the strategies are remarkably similar: they share both a commitment to humanitarian principles and a self-understanding as important donors at a time of increasingly complex challenges. At present, as several of these strategies are up for revision, a central question is how to meaningfully include policies on fast-moving AI.

Norway has long maintained a firm ambition of being an important development aid and humanitarian donor. It generally finds itself among the biggest OECD DAC contributors, well above the UN-set target of 0.7% of GNI – although some variation has followed shifting governments. In 2021, Norway was the 10th largest donor, and the second largest as a share of its economy, with 0.93% of GNI spent on ODA. Norway launched its first humanitarian strategy in 2008. Since then, key priorities have been the effective protection of civilians (PoC), women’s rights and the struggle against sexual and gender-based violence, and, increasingly, a focus on climate-induced humanitarian crises. The current strategy emphasises the ‘power of action’ and the need for ‘holistic efforts.’

In the humanitarian community, Norway has a reputation for being a ‘flexible’ donor and sees this as a strength. Yet a new paradox has arisen: while the current humanitarian strategy emphasises that ‘digital advances and the use of data are creating new challenges’ as well as opportunities, there is no governmental pronouncement on AI in aid, neither in the humanitarian strategy nor in policy statements. Going forward, as challenges relating to AI become mainstreamed across humanitarian funding, programming and practice, flexible donorship must be coupled with coherent and careful thinking on the role and place of AI. Hence our wish is to help frame a conversation on AI in humanitarian policy.

What is the regulatory context? AI in existing and emerging policy and law

The question of how to relate to AI is already central in various national strategic documents and new legislation, and reviewing what is being done in these areas is an important starting point for Norway’s reflection on its new humanitarian strategy. We argue that policy development must be cognisant of five structuring factors:

  • Factor 1: Global values for AI governance are emerging. While the United States, China and others are competing for the role of global lead on AI, there are also rapid developments in global norm articulation, diffusion and uptake. International consensus documents are often characterised by a broad scope and high-level principles. For example, the OECD Principles on Artificial Intelligence promote AI that is innovative and trustworthy and that respects human rights and democratic values. They state that AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. Similarly broad is the UNESCO Recommendation on the Ethics of Artificial Intelligence, which is focused on core values related to ‘the good of humanity, individuals, societies and the environment’. Which values to include and emphasise in a humanitarian strategy on AI should be the subject of inclusive deliberation in the sector.

 

  • Factor 2: Existing governmental AI policies provide limited guidance. Many humanitarian donor countries have adopted AI strategies. For example, France launched its work on a national AI strategy in 2017, while the Norwegian strategy is from 2020 and the UK strategy is from 2021. Across these strategies, foreign policy priorities and objectives are incorporated to a very limited degree. Conversely, foreign policy does not reflect or promote national strategic AI goals to any significant degree. With respect to the specific humanitarian context, there is very little to emulate: for example, while the US AI Strategy focuses on the promotion of international collaborations in AI R&D to address global challenges and emphasises the need to develop trustworthy AI, it fails to consider civil unrest, emergencies and armed conflict.

 

  • Factor 3: Evolving EU legislation will provide a comprehensive but not exhaustive regulatory context with respect to high-risk AI. In the EU context, a key emphasis is placed on developing trustworthy AI, combined with excellence, in order “to boost research and industrial capacity while ensuring safety and fundamental rights”. Under the draft EU AI Act, all artificial intelligence would be classified under four levels of risk, from minimal to unacceptable. Technology deemed to be an unacceptable risk – such as systems that judge people based on behavior, known as “social scoring”, as well as predictive policing tools – would be banned, and AI focused on children, other vulnerable populations, and hiring practices would face tougher scrutiny (see the illustrative sketch after this list). Human rights advocates have lauded this development but criticised the missed opportunity to increase protection mechanisms by empowering people affected by AI, notably migrants, refugees and asylum seekers. The timeline for the AI Act is 2024-2025.

 

  • Factor 4: Humanitarian AI is inherently entangled with broader global governance considerations. The problems arising from disasters and violence, and the strategies to mitigate humanitarian suffering – including principled policies on the humanitarian imperative and principles – distinguish humanitarian AI from broader global governance considerations. Yet it is difficult to isolate developments in this sector from developments in other sectors. The implications of the use of AI are already cross-sectoral and need to be understood as such: new practices in the field of migration and border control, or arms trade and control, can be expected to have an impact on the humanitarian sector, and vice versa. This also poses challenges for humanitarian policies on AI.

 

  • Factor 5: Technological advances are extremely rapid. Is a more profound rethink of policy making, including humanitarian policy making, needed? As noted by Scott et al. (2018), the toolbox includes policy making, public diplomacy, bilateral and multilateral engagement, action through international and treaty organisations, convenings and partnerships, grant-making, and information gathering and analysis. They argue that “While the existing toolkit can get us started, this pragmatic approach does not preclude thinking about more drastic changes that the technological changes might require for our foreign policy institutions and instruments”.
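
To make the tiered logic in Factor 3 concrete, here is a minimal sketch – our own illustration, not the Act’s legal tests – of how a humanitarian organisation might triage its AI systems against the draft Act’s four risk tiers. The categories, field names and triage rules below are hypothetical assumptions for illustration only.

```python
# Illustrative only: hypothetical triage of humanitarian AI systems against
# the draft EU AI Act's four risk tiers. The rules below are our assumptions,
# not the Act's actual legal tests.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. social scoring, predictive policing)"
    HIGH = "tougher scrutiny (e.g. systems affecting vulnerable populations)"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

def triage(system: dict) -> RiskTier:
    """Assign a hypothetical system description to a risk tier."""
    if system.get("social_scoring") or system.get("predictive_policing"):
        return RiskTier.UNACCEPTABLE
    if system.get("affects_vulnerable_groups"):   # e.g. children, refugees
        return RiskTier.HIGH
    if system.get("interacts_with_people"):       # e.g. an aid-information chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A needs-mapping model that profiles displaced households would land in the
# high-risk tier under these illustrative rules.
print(triage({"affects_vulnerable_groups": True}).name)  # HIGH
```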

 

Strategic questions for framing and situating AI in humanitarian policy

Despite the fragmented regulatory context, national interest will be salient for any humanitarian AI policy. Here we offer a set of (non-exhaustive) questions of relevance for the Norwegian policy context. Other humanitarian strategy processes may have different concerns:

  • Question 1: What are Norwegian geopolitical and regional interests with respect to AI? Norway should ask what the implications of various forms of AI engagement are for its broader geopolitical and regional interests. Importantly, a coherent national MFA/institutional understanding of, and approach to, AI problem areas in global governance is needed to include and integrate AI in humanitarian policymaking.

 

  • Question 2: Is AI a public good in Norway’s humanitarian policy? Despite Norway’s support of the Digital Public Goods Alliance, the current Norwegian strategy on AI does not treat AI as a public good. In line with evolving national interest priorities on AI, the new humanitarian strategy needs to consider whether a choice must be made between focusing on regional bloc priorities and treating AI as a public good.

 

  • Question 3: What will ‘meaningful’ humanitarian accountability entail in the new strategy? There is a need to reflect on how Norway as a donor can support meaningful renditions of AI accountability, including the role and import of transparency, explicability and digital literacy. At the heart of this is the need to be able to trace the sources and processes underpinning key decisions, and the ability to identify, address and remedy unethical and illegal experimentation, misuse and harmful decisions. Burgeoning propositions for ‘AI for good’ include ‘participatory AI for humanitarian innovation’ and industry-driven initiatives. In place of buzzwords, careful policy innovation on humanitarian accountability is needed. For example, drawing on UNHCR’s work, Norwegian humanitarian policy could support an extension of the “society-in-the-loop” algorithm concept – embedding the general will into an algorithmic social contract – whereby both humanitarian responders and affected populations understand and oversee the algorithmic decision-making that affects them (a schematic sketch follows this list).

 

  • Question 4: How will ‘Norwegian values’ and the ‘Norwegian model’ feature in relation to a policy on humanitarian AI? In policy documents, Norway’s humanitarian engagement is framed with reference to innate values that are ‘particularly Norwegian.’ These are presented as intrinsically linked to wider notions of the social-democratic nation state, and to progress achievable through the rule of law, social democracy, equality, sameness, redistribution, citizen participation and high levels of trust – trust that lowers transaction costs, which can be barriers to efficiency, and facilitates government interventions. The ‘Norwegian model’ entails close cooperation between the Norwegian state and humanitarian NGOs to fulfill these values. How should this be reflected in future uses of (or cautions against) AI? In particular, should Norway fundamentally re-evaluate the emphasis on trust in the context of humanitarian AI?
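
To illustrate the ‘society-in-the-loop’ concept raised in Question 3, the sketch below shows one conceivable oversight gate: an algorithmic recommendation takes effect only once both humanitarian responders and representatives of affected populations have reviewed and approved it. All names and data structures are hypothetical; this is a conceptual sketch of the idea, not a description of any existing system.

```python
# Conceptual sketch of a "society-in-the-loop" decision gate. Everything here
# is hypothetical: a model's recommendation only takes effect once both
# oversight constituencies have inspected and approved it.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str        # e.g. "prioritise district A for food distribution"
    rationale: str     # human-readable explanation of the model output
    approvals: list = field(default_factory=list)

def review(rec: Recommendation, reviewer: str, approves: bool) -> None:
    """Record a constituency's judgement; withholding approval blocks the action."""
    if approves:
        rec.approvals.append(reviewer)

def can_execute(rec: Recommendation) -> bool:
    # The gate: both humanitarian responders and the affected community
    # must have signed off before the recommendation is acted upon.
    return {"responders", "affected_community"} <= set(rec.approvals)

rec = Recommendation(
    action="prioritise district A for food distribution",
    rationale="model predicts highest unmet needs from survey and satellite data",
)
review(rec, "responders", approves=True)
review(rec, "affected_community", approves=True)
print(can_execute(rec))  # True only when both constituencies approve
```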

Concluding thoughts: Operationalising AI in humanitarian policy?

In this final part of the blog, we offer some thoughts on operationalising AI in humanitarian policy, with a view to enhancing humanitarian accountability and effectiveness. Here we seek to widen the scope beyond the Norwegian context.

  • Suggestion 1: Support regulation, standard setting and legal accountability. Donor countries can provide funding and political support for the development and adoption of standards related to artificial intelligence in the humanitarian sector. This includes the everyday impact of AI on the administration, management and evaluation of humanitarian programs. The standards should incorporate emergent EU regulation and human rights norms while foregrounding best practice in the aid sector.

 

  • Suggestion 2: Support capacity building in humanitarian governance. A lack of knowledge and capacity to apply big data and analytics in operational settings contributes to their slow institutional uptake within the humanitarian sector. Similarly, capacity building is needed to grapple with the impact of AI, including risk and harm, but also to harness the potential of AI to make the sector more ‘fit for purpose.’ To be meaningful, capacity building efforts and support must be directed not only at donors and international humanitarian actors but also towards host governments, bureaucracies, fieldworkers, civil society and communities in crisis.