While there is hope that generative artificial intelligence (AI), in the form of technologies like ChatGPT and Bard, can enhance the efficiency of humanitarian aid and emergency relief efforts, the humanitarian sector remains hesitant.
In a recent op-ed for Panorama, Peace Research Institute Oslo (PRIO) researchers call on development agencies, civil society and humanitarian workers to be more actively engaged in conversations about generative AI in humanitarian assistance.
Authors Kristin Bergtora Sandvik, Kristoffer Lidén and Maria Gabrielsen Jumbert put forward three considerations for Norwegian actors in relation to AI and the humanitarian sector:
- Game-changing potential: Generative AI stands as a potential game-changer, but its precise influence on humanitarian operations remains uncertain. Humanitarian actors need to appreciate not only what opportunities AI offers to improve aid but also the risks it introduces, requiring careful assessment and management.
- Experimental in nature: The technology remains largely experimental and unregulated, amid ongoing global legislative developments. There is a risk that humanitarian governance and administrative structures also become part of this experiment; discussions around humanitarian accountability must therefore focus not only on controlling the technology, but also on how AI influences humanitarian governance models.
- Lack of strategic focus: Enhanced strategic thinking about AI's application in humanitarian work is required, and involving humanitarian organisations in this process is critical. Norway's new humanitarian strategy should recognise AI as both a foreign policy challenge and a humanitarian resource.
The op-ed advocates for a thoughtful, values-driven approach to constructing a humanitarian AI strategy, firmly rooted in the sector’s mission and fundamental principles, such as doing no harm, impartiality, non-discrimination and humanity.
Read the full article in Norwegian here.
The following blog series also explores AI in the humanitarian sector: