
Humanitarian innovation, humanitarian renewal?


The continued evolution of the humanitarian innovation concept requires critical engagement with how this agenda interacts with previous and contemporary attempts to improve humanitarian action.

Accountability and transparency have been central to discussions of humanitarian action over the past two decades. Yet these issues appear generally to be given scant attention in the discourse around humanitarian innovation. The humanitarian innovation agenda is becoming a self-contained field with its own discourse and its own set of experts, institutions and projects – and even a definitive founding moment, namely 2009, when the ALNAP study on innovation in humanitarian action was published.[1] While attempts to develop a critical humanitarian innovation discourse have borrowed extensively from critical discussions on innovation in development studies, humanitarianism is not development done in a hurry but has its own distinct challenges, objectives and methodologies.

I will focus here on concrete material innovations, most commonly referred to as ‘humanitarian technology’. Discussions on such humanitarian innovations regularly acknowledge the need to avoid both fetishising novelty in itself and attributing inherently transformative qualities to technology rather than seeing how technology may fit into and build upon refugees’ existing resources.

Renewing humanitarianism

While it is obvious that internal and external reflections on a humanitarian industry and a humanitarian ethos in need of improvement are much older pursuits, I will start – as most scholars in humanitarian studies do today – with the mid-1990s and the ‘Goma-moment’. To recover from the moral and operational failures of the response to the Rwanda genocide and the ensuing crisis in the Great Lakes region of Africa, humanitarianism turned to human rights based approaches (HRBA) to become more ethical, to move from charitable action to social contract. Yet HRBA always suffered from an intrinsic lack of clarity of meaning as well as from the problem of states being the obliged parties under international human rights law – a particular problem in the context of displacement, whether internal or across borders.

A decade or so later, in the aftermath of the 2004 Indian Ocean tsunami and in the face of accusations about poor governance, insufficient coordination, incompetence and waste, the humanitarian enterprise embarked on institutional reform to become better. Responses were to be maximised through Humanitarian Coordinators, funding was to become more efficient through the Central Emergency Response Fund and, most importantly in the everyday life of humanitarian practitioners, the Cluster approach allocated areas of responsibility to the largest humanitarian actors.

The need for greater accountability and transparency was a driver for both HRBA (with its moral intricacies) and humanitarian reform (with its bureaucratic complexities). What is now happening with accountability and transparency within the technological-innovation-as-renewal paradigm?

If Rwanda and the Indian Ocean tsunami were the events ushering in HRBA and humanitarian reform, Haiti was the much heralded game-changer for technology whose use there (despite many practical problems and malfunctioning solutions) is generally assessed as positive.[2] In the years since, a host of new technology actors, initiatives, technical platforms and methodologies has emerged. New communications technology, biometrics, cash cards, drones and 3D printing have all captured the humanitarian imagination.

Thinking about problems and difficulties is often framed in terms of finding technical solutions, obtaining sufficient funding to move from pilot phases to scale, etc. However, as ideas about progress and inevitability dominate the field, the technology is seen not as something we use to get closer to a better humanitarianism but something which, once deployed, is itself a better, more accountable and transparent humanitarianism.

So institutionalised have transparency and accountability become that they have now vanished off the critical radar and become part of the taken-for-granted discursive and institutional framework. Accountability and transparency are assumed to be automatically produced simply by the act of adopting and deploying new technology. (Interestingly, the third tenet usually listed with accountability and transparency, efficiency, is also a basic assumption of this agenda.)

Accountability, participation and transparency

A 2013 report published by UN OCHA, Humanitarianism in the Network Age, argues that “everyone agrees that technology has changed how people interact and how power is distributed”.[3] While technology has undoubtedly altered human interaction, an assumption that proliferating innovative humanitarian technology unveils power, redistributes power or empowers needs to be subjected to scrutiny.

The classic issues in humanitarian accountability – to whom it is owed and by whom, how it can be achieved and, most crucially, what would count as substantively meaningful accountability – remain acutely difficult to answer. These issues also remain political issues which cannot be solved only with new technical solutions emphasising functionality and affordability; we cannot innovate ourselves out of the accountability problem, in the same way as technology cannot be seen as an empty shell waiting to be filled with (humanitarian) meaning.

This speaks particularly to the quest for participation of those in need of humanitarian protection and assistance, “helping people find innovative ways to help themselves”. In practice, we know that humanitarians arrive late in the field – they are not (at least not outside their own communications) the first responders. Affected individuals, their neighbours and communities are. Yet we should be concerned if the engagement with technological innovation also becomes a way of pushing the resilience agenda further in the direction of making those in need more responsible than well-paid humanitarian actors for providing humanitarian aid.

The arrival of the private sector as fully respectable partners in humanitarian action is in principle a necessary and desirable development. Nevertheless, while expressing distaste for the involvement of the private sector in humanitarian response is passé, talk of the importance of local markets and of ‘local innovation’, ‘indigenous innovation’ or ‘bottom-up innovation’ inevitably begs the question: is the private sector one of the local participants, alongside those in humanitarian need, and what does it want out of the partnership?

The current drive towards open data – and the belief in the emancipatory potential of open data access – means that transparency is a highly relevant theme on the humanitarian innovation agenda. Yet, on a pragmatic level, in an avalanche of information, it is difficult to see what is not there, particularly for individuals in crisis with limited access to information technology or with limited (computer) literacy.

Accountability and transparency thus seem to be missing in the implementation of the humanitarian innovation agenda, although innovation should be a means to enhance these objectives (among others) to produce a better humanitarianism.

Conclusions

First, we must beware of the assumption of automatic progress. We may be able to innovate ourselves out of a few traditional challenges and difficulties but most will remain, and additionally there will be new challenges resulting from the new technology.

Second, innovation looked at as a process appears suspiciously like the reforms of yesteryear. What, for example, is the difference between ‘bottom-up innovation’ and the ‘local knowledge’ valued in previous efforts to ensure participation? And are the paradigm shifts of innovation really much different from the moral improvement agenda of approaches such as human rights-based humanitarian aid?

Third, the increasingly self-referential humanitarian innovation discourse itself warrants scrutiny. With almost no talk of justice, social transformation or redistribution of power, we are left with a humanitarianism where inclusion is about access to markets, and empowerment is about making beneficiaries more self-reliant and about putting the label ‘humanitarian’ onto the customer concept in innovation theory.

 

***

[1] www.alnap.org/resource/9207
[2] See the IFRC World Disasters Report 2013 on Technology and Humanitarian Innovation.
www.ifrc.org/publications-and-reports/world-disasters-report/world-disasters-report-2013/
[3] www.unocha.org/hina

 


***

This blog is based on Kristin B. Sandvik’s article, ‘Humanitarian innovation, humanitarian renewal?’, published in a special Forced Migration Review supplement on ‘Innovation and refugees’.

A Humanitarian Technology Policy Agenda for 2016

Written by

The World Humanitarian Summit in 2016 will feature transformation through innovation as a key theme. Leading up to the summit, OCHA has voiced the need to “identify and implement….positions that address operational challenges and opportunities” (OCHA 2013) relating to the use of information technology, big data and innovations in humanitarian action.

In this blog post we sketch out four areas in need of further research over the next two years, in order to provide policymakers, humanitarian actors and other stakeholders with up-to-date and relevant research and knowledge.

1.    Empowerment and Accountability

  • Pivoting humanitarian action: Maximizing user-benefit from technology

Affected populations are the primary responders in disasters and conflict zones, and actively use information technology to self-organize, spread information about their condition, call for aid, communicate with humanitarian actors, and demand accountability. New technologies also have the potential to put responders at the center of the entire life cycle of humanitarian action – from needs assessment and information gathering, to analysis, coordination, support, monitoring and evaluation.  It is crucial that member states, humanitarian organizations and volunteer & technical communities (V&TCs) improve their actions to take advantage of this opportunity. The 2016 Summit should strengthen the end-user perspective in the development of guidelines for V&TCs.

  • The changing meanings of accountability

Over the last 20 years, the humanitarian community has increasingly focused on issues of agency accountability and the professionalization of humanitarian action, vis-à-vis donors as well as beneficiaries. However, the technological revolution in humanitarian action and the increasingly central role of large telecom and tech companies make it necessary to broaden the focus of accountability considerations. For example, OCHA is now considering developing guidelines for how formal humanitarian organizations and V&TCs should cooperate with these companies. Leading up to the 2016 Summit, there is a need for more reflection and research on how technology can be used to enhance accountability in humanitarian action for all parties, including new actors.


2.    The role of aggregated data

Data collection and the importance of aggregated data have come to occupy an important role in humanitarian action. As illustrated by the 2013 World Disasters Report, big data and remote sensing capabilities provide an unprecedented opportunity to access contextual information about pending and ongoing humanitarian crises. Many notable initiatives such as the UN Global Pulse suggest that the development of rigorous information management systems may lead to feasible mechanisms for forecasting and preventing crises. Particular attention should be paid to three issue areas:

  • Veracity and validity

Multiple data transactions and increased complexity in data structures increase the potential for error in humanitarian data entry and interpretation. Data that is collected or generated through digital or mobile mechanisms will often pose challenges, especially regarding verification. Although significant work is underway to establish software and procedures to verify such data, understanding the limits on the veracity and validity of humanitarian data will be critical.
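To make the verification problem concrete, here is a minimal sketch of one heuristic commonly discussed in crowdsourced-data workflows: treating a claim as credible only once several independent sources corroborate it. The report fields, values and threshold below are invented for illustration and do not represent any agency's actual procedure.

```python
from collections import defaultdict

# Hypothetical incoming reports as (source_id, location, reported_need).
# All field names, values and the threshold are illustrative assumptions.
reports = [
    ("sms-001", "Tacloban", "medical"),
    ("twitter-17", "Tacloban", "medical"),
    ("sms-042", "Tacloban", "medical"),
    ("sms-007", "Ormoc", "water"),
]

MIN_INDEPENDENT_SOURCES = 2  # assumed corroboration threshold

def corroborated(reports, min_sources=MIN_INDEPENDENT_SOURCES):
    """Keep only claims that at least `min_sources` distinct sources agree on."""
    sources_per_claim = defaultdict(set)
    for source, location, need in reports:
        sources_per_claim[(location, need)].add(source)
    return {claim: srcs for claim, srcs in sources_per_claim.items()
            if len(srcs) >= min_sources}

print(corroborated(reports))
# e.g. {('Tacloban', 'medical'): {'sms-001', 'twitter-17', 'sms-042'}}
```

Even a simple rule like this illustrates the trade-off: a higher threshold filters out error and manipulation, but also discards genuine reports from places where few people can send messages.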

  • Identity and anonymity

As humanitarian data is aggregated and made public, the chances of re-identification of individuals and groups increase at an unknown rate. This phenomenon, known as the mosaic effect, is widely recognized but little understood. There is little understanding of the dangers that sharing anonymized data would pose in a humanitarian context, where data may be limited but the potential damage of re-identification would be extreme.
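A toy example (all data invented) may help illustrate the mosaic effect: neither dataset below identifies anyone on its own, but joining them on shared quasi-identifiers can single out an individual.

```python
# Two "anonymous" datasets, each seemingly harmless on its own.
# All records, field names and values are invented for illustration.
aid_log = [  # aid distribution log, no names
    {"camp": "Zone B", "age_band": "30-39", "language": "Tagalog", "aid": "cash card"},
    {"camp": "Zone B", "age_band": "20-29", "language": "Cebuano", "aid": "shelter kit"},
]
survey = [  # public survey with coarse demographics, no names
    {"camp": "Zone B", "age_band": "30-39", "language": "Tagalog", "household": "HH-114"},
]

QUASI_IDS = ("camp", "age_band", "language")  # shared quasi-identifiers

def mosaic_join(a_rows, b_rows, keys=QUASI_IDS):
    """Pair up records that share all quasi-identifier values."""
    index = {tuple(row[k] for k in keys): row for row in b_rows}
    return [(row, index[tuple(row[k] for k in keys)])
            for row in a_rows if tuple(row[k] for k in keys) in index]

for aid_row, survey_row in mosaic_join(aid_log, survey):
    # A unique match re-links household HH-114 to a specific aid record.
    print(survey_row["household"], "received", aid_row["aid"])
```

The more datasets are released, the more such joins become possible, which is one reason the rate of re-identification is so hard to predict.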

  • Agency and (dis)empowerment

The aggregation of humanitarian data from multiple data streams and sources decreases the likelihood that the individuals and groups reflected in that data will be aware of, and able to influence, the way in which that data is used. This erosion of what is sometimes referred to as informational self-determination is a challenge in digital and mobile data collection generally, but it is especially problematic in humanitarian contexts, where the risks associated with personal information are particularly grave.


3.    Enabling and regulating V&TCs

Remote volunteer and technical communities (V&TCs) now play an increasingly important role in humanitarian contexts – generating, aggregating, classifying and even analyzing data, in parallel to, or sometimes in collaboration with, more established actors and multilateral initiatives. They increasingly enjoy formalized relationships with traditional humanitarian actors, processing and generating information in support of humanitarian interventions. Yet individual volunteers participating in such initiatives are often less equipped than traditional humanitarian actors to deal with the ethical, privacy and security issues surrounding their activities, although some work is underway. Although in many ways the contribution of V&TCs represents a paradigm shift in humanitarian action, the digital and volunteering revolution has also brought new concerns with regard to the knowledge and understanding of core humanitarian principles and tasks, such as ‘do no harm’ and humanity, neutrality and impartiality.

In considering the above issues, attention should be paid to inherent trade-offs and the need to balance competing values, including the following two:

  • Data responsibility vs. efficiency. There is an inherent tension between efficiency and data responsibility in humanitarian interventions. Generally, protecting the privacy of vulnerable groups and individuals will require the allocation of time and resources – to conduct risk assessments, to engage and secure informed consent, and to implement information security protocols. In humanitarian contexts, the imperative to act quickly and decisively may often run counter to more measured actions intended to mitigate informational risks to privacy and security.
  • Western values vs. global standards. It has also been argued that privacy is a Western preoccupation, without any real relevance to victims of a humanitarian crisis facing much more immediate and pressing threats. This argument highlights the important tension between mitigating informational risks to privacy and security, and the need to efficiently and effectively expedite humanitarian aid. It does not account for the concrete risks posed to individual and collective security by irresponsible data management, however.

This is our modest contribution to an agenda for research and policy development for humanitarian technology. We would like to join forces with other actors interested in these challenges to contribute to a necessary debate on a number of issues that touch upon some of the core principles for humanitarian action. The ambition is to strengthen humanitarian action in an innovative and accountable manner, making us better equipped to help people in need in the future.

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), Christopher Wilson (The Engine Room) and John Karlsrud (NUPI), was originally posted on the website of the Advanced Training on Humanitarian Action Project (ATHA).

The Rise of the Humanitarian Drone: Giving Content to an Emerging Concept


Kristin Bergtora Sandvik, who directs the Norwegian Centre for Humanitarian Studies (and sits on the Advisory Board of the Humanitarian UAV Network, UAViators), just co-authored this important study on the growing role of UAVs or drones in the humanitarian space. Kristin and fellow co-author Kjersti Lohne consider the mainstreaming of UAVs as a technology transfer from the global battlefield. “Just as drones have rapidly become intrinsic to modern warfare, it appears that they will increasingly find their place as part of the humanitarian governance apparatus.” The co-authors highlight the opportunities that drones offer for humanitarian assistance and explore how the notion of the humanitarian UAV will change humanitarian practices.


Kristin and Kjersti are particularly interested in two types of discourse around the use of UAVs in humanitarian settings. The first relates to the technical and logistical functions that UAVs might potentially fulfill in humanitarian operations. The second relates to the discourse around ethical uses of UAVs. The co-authors “analyze these two types of discourse” along with “their broader implications for humanitarian action.” They make the following two assumptions prior to carrying out their analysis. First, technologies change the balance of power (institutional power). Second, “although UAV technology may still be relatively primitive, it will evolve and proliferate as a technological paradigm.” To this end, the authors assume that the use of UAVs will “permeate the humanitarian field, and that the drones will be operated not only by states or intergovernmental actors, but also by NGOs.”

The study recognizes that the concept of the “humanitarian drone” is a useful one for military vendors who are urgently looking for other markets given continuing cuts in the US defense budget. “As the UAV industry tries to influence regulators and politicians […] by promoting the UAV as a humanitarian technology,” the co-authors warn that the humanitarian enterprise “risks becoming an important co-constructor of the UAV industry’s moral-economy narrative.” They stress the need for more research on the political economy of the humanitarian UAV.

That being said, while defense contractors are promoting their large surveillance drones for use in humanitarian settings, “a different group of actors—who might be seen as a new breed of ‘techie humanitarians’—have entered the race. Their aim is to develop small drones to conduct SAR [search and rescue] or to provide data about emergencies, as part of the growing field of crisis mapping.” This “micro-UAV” space is the one promoted by the Humanitarian UAV Network (UAViators), not only for imaging but for multi-sensing and payload delivery. Indeed, as “the functions of UAV technologies evolve from relief-site monitoring to carrying cargo, enabling UAVs to participate more directly in field operations, ‘civil UAV technologies will be able to aid considerably in human relief […].”


As UAVs continue collecting more information on disasters and the impact of humanitarian assistance, they will be “part of the ongoing humanitarian challenge of securing, making sense of and maintaining Big Data, as well as developing processes for leveraging credible and actionable information in a reasonable amount of time. At the same time, the humanitarian enterprise is gradually becoming concerned about the privacy implications of surveillance, and the possible costs of witnessing.” This is an area that the Humanitarian UAV Network is very much concerned about, so I hope that Kristin will continue to push for this given that she is also on the Advisory Board of UAViators.

In conclusion, the authors believe that the “focus on weaponized drones fails to capture the transformative potential of humanitarian drones and their possible impact on humanitarian action, and the associated pitfalls.” They also emphasize that “the notion of the humanitarian drone is still an immature concept, forming around an immature technology. It is unclear whether the integration of drones into humanitarian action will be cost-effective, ethical or feasible.” I agree with this but only in part since Kristin and Kjersti do not include small or micro-UAVs in their study. The latter are already being integrated in a cost-effective & ethical manner, which is in line with the Humanitarian UAV Network’s mission.


More research is needed on the role of small-UAVs in the humanitarian space and in particular on the new actors deploying them: from citizen journalists and local, grassroots communities to international humanitarian organizations & national NGOs. Another area ripe for research is the resulting “Big Data” that is likely to be generated by these new data collection technologies.

Note: This blog, written by Patrick Meier (PhD), was originally posted on the website of iRevolution.

ICCM – The Annual Gathering of a Global Digital Village


Guro Åsveen is a master’s student at the University of Stavanger’s Department of Societal Safety Science. In the spring of 2014 she will be writing her thesis on humanitarian technology and emergency management in Kenya.

 


On November 8 this year, one of the most powerful storms ever recorded, Typhoon Yolanda (Haiyan), struck the Philippines in a mass of rain, wind and destruction. Reflecting on this ongoing crisis and on the role of technology in humanitarian response situations, crisis mappers from across the world recently gathered in Nairobi for the annual International Conference of Crisis Mapping (ICCM)[1].

The ICCM 2013 was the fifth conference since the start-up in 2009. Patrick Meier, co-founder of the Crisis Mappers network, gave the opening speech. Commenting on the value of partnerships, Meier cited an old African saying, “It takes a village”, implying that when people work together they can make anything happen. He asked: how can the crisis mapping community best contribute to saving lives in a crisis situation?

Towards a more digitalized response

In the Philippines and elsewhere, the affected communities are undoubtedly the most important part of the response village. When disaster strikes, members of the local communities immediately start to organize help for their friends and neighbours, using the resources already in place. In the crisis literature, this acute phase is known as “the golden hour”, when the chances of saving lives are the greatest. The long-standing myths that portray victims of disasters as dysfunctional and helpless are thus proven incorrect. In fact, one study found that nine out of ten lives saved in a crisis are due to local and non-professional helpers[2].

Nonetheless, even if there is no replacement for crucial peer-to-peer assistance during a crisis, the offering of help should not, and does not, stop at the local or even national level. As for the crisis mappers, they have a dual approach: while on the one hand seeking to engage with other NGOs and traditional humanitarians, they are also speaking directly to locals on the ground. With the use of technology and crisis mapping, the volunteer and technical communities (V&TCs) offer tools through which crisis-affected populations can communicate their needs. In practice this means monitoring social media and reading SMS and e-mails from victims during a crisis.
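As a rough sketch of what such monitoring involves (the categories and keywords below are invented, not any platform's actual taxonomy), incoming messages can be triaged into need categories before being verified and placed on a map:

```python
# Minimal keyword triage of incoming messages, in the spirit of the
# crisis-mapping workflow described above. Categories and keywords
# are illustrative assumptions only.
CATEGORIES = {
    "medical": ("injured", "medicine", "hospital"),
    "water": ("water", "thirsty"),
    "shelter": ("roof", "shelter", "homeless"),
}

def triage(message):
    """Return every need category whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(word in text for word in words)]

print(triage("Roof gone, two injured, need medicine"))  # ['medical', 'shelter']
```

Real platforms add human review on top of such automatic filtering, which is precisely why, as discussed below, crisis mapping remains mostly a human rather than a technical effort.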

Serving as an example of a formal partnership between a mapping community and the traditional humanitarian sector, the Digital Humanitarian Network (DHN) was requested to make a map for UN-OCHA as part of the preparations for the Yolanda response operation[3]. For OCHA, which holds the difficult task of coordinating international efforts, digital mapping has meant getting access to real-time data and needs assessments without having to be physically present in the affected communities. Although it is debatable whether this off-site positioning is in fact advantageous when dealing with information and disaster management, many nevertheless highlight the potential for new technology to bring about alternative solutions to logistical challenges, thereby enabling a more rapid disaster response.

Technology in and out of Africa

When looking at the history of crisis mapping on the African continent, one of the most influential platforms for sharing digital information had its starting point in the aftermath of the 2007 Kenyan presidential election: “Ushahidi”, meaning “testimony”. The name reflects the role of the citizens and volunteers who gave their testimony of post-electoral violence by sending SMS and posting online what they saw and experienced during that time. It has since developed into an innovative and influential digital community to which people can turn either to receive or to share information. Another platform, “Uchaguzi”, was launched in preparation for a new election in the spring of 2013, and through extensive mapping of the situation in different parts of Kenya, history was successfully prevented from repeating itself[4].

Another Kenyan mapping project worth mentioning is MapKibera. Kibera, located in Nairobi, is the largest slum in Eastern Africa, and with a population of approximately one million inhabitants it is a prominent part of the city. Mapping is utilized in the search for hotspots of crime and also as a strategy to empower and build resilience among those most vulnerable. MapKibera is in many ways a great example of how making maps can help bring change to a community. Before this project, Kibera did not appear on any maps and was therefore invisible to anyone outside the slum[5].

10 per cent technology, 90 per cent human

One thing we tend to forget when talking about mapping and humanitarian technology is that although these may serve as effective tools, they are useless without someone to gather the information, verify it and visualize it for the public or the intended user. The Crisis Mappers network has over 6,000 members from 169 different countries, and the Standby Task Force (SBTF) has approximately a thousand members from 70 different countries. With a variety of nationalities and professional backgrounds, these members constitute a considerable human resource. Crisis mapping, as was stated several times throughout the conference, is only ten per cent about the technology; the rest depends on human effort and judgement.

Concerning human partaking in technology, one of the main challenges discussed at the ICCM was how to deal with Big Data. Some challenged the terminology, arguing that there are too many myths and unnecessary concerns related to the concept of “Big Data”. They argued: for most people working with information technology on a daily basis, data is still data; every bit and piece of information speaks to its original source, which does not change just because more data is shared in a larger format. In conclusion, if the format is too large for us to handle, then the problem is not the data but the format.

Others find the biggest challenge to be the gathering of data and how we choose between relevant and irrelevant information. If we do not qualify what types of questions are absolutely necessary to ask in a crisis situation, and if we cannot agree on any standards, we may in future face an escalating problem with information overload and ownership issues related to extra-sensitive and/or unverified information.

Many questions stand unanswered: Is there a need to professionalize the crisis mapping community? Should it be acting as a fully independent actor, or instead work to fulfil the needs of the traditional humanitarian sector? Should the main focus be on entering into formal relationships with already established partners, or more directly on supporting disaster-prone communities and peer-to-peer engagement? Is it possible to make the technology available to a broader audience and thereby decrease the digital divide? Will we be able to use the technology in prevention and disaster risk reduction? How can crisis map technologists balance the support for open data and at the same time respect information that is private or confidential? Should unverified data be published and on whose command? Can contributors of information give or withhold consent on their own behalf or are they simply left with having to trust others to do the picking for them?

These are all high-priority questions in the “new age” of humanitarianism. Considering that crisis mapping is still an emerging field, it may take a while for it to find its role and place in the world of humanitarian affairs. The value of partnerships may be key when coming to terms both with the professionalized and traditional response organizations and with the slum inhabitants of Nairobi. In either case, technology, people and collaboration remain equally central to humanitarian efforts.

 


[1] To read more about the conference and the Crisis Mappers network, visit http://crisismappers.net

[2] Cited in the IFRC’s World Disasters Report 2013. The full report can be downloaded from http://worlddisastersreport.org

[3] Study the map and read more about the Yolanda response on-line: http://digitalhumanitarians.com/profiles/blogs/yolanda

[4] Omeneya, R. (2013): Uchaguzi Monitoring and Evaluation Report. Nairobi: iHub Research

[5] For visiting the MapKibera website, go to http://mapkibera.org

Killer Robots: the Future of War?


In September 2013, PRIO and the Norwegian Centre for Humanitarian Studies hosted the breakfast seminar “Killer Robots: the Future of War?”. The goal of the seminar was to contribute to the public debate on autonomous weapons and to identify key ethical and legal concerns relating to robotic weapon platforms. The event was chaired by Kristin B. Sandvik (PRIO), and the panellists were Alexander Harang (Director, Fredslaget), Kjetil Mujezinovic Larsen (Professor of Law, Norwegian Centre for Human Rights, UiO) and Tobias Mahler (Postdoctoral Fellow, Norwegian Research Center for Computers and Law, UiO). Drawing on the panel discussion, the following highlights the prospects for banning autonomous weapons, as well as the legal and ethical challenges raised by current technological development.

Killer robots and the case against them

As a result of technological advancement, autonomous weapon platforms – so-called lethal autonomous robots (LARs) – may well be on the horizon of future wars. Such development, however, raises legal and ethical concerns that need discussion and assessment. Chairing the seminar, Kristin Bergtora Sandvik highlights that such perspectives are absent from current political debates in Norway, and points out that “autonomous weapons might not be at your doorstep tomorrow or next week, but they might be around next month, and we think that it is important that we begin thinking about this, begin understanding what this is actually about, and what the complications are for the future of war.”

Killer robots are defined as weapon systems that identify and attack without any direct human control. As outlined in the Human Rights Watch Losing Humanity report, unmanned robotic weapons can be divided into three categories. First, human-controlled systems, or human-in-the-loop systems, are weapon systems that can perform tasks delegated to them independently, but where humans remain in the loop. This category constitutes the currently available LAR technology. Second, human-supervised systems, or human-on-the-loop systems, are weapon systems that can conduct targeting processes independently, but theoretically remain under the real-time supervision of a human operator who can override these automatic decisions. Third, fully autonomous systems, or human-out-of-the-loop systems, are weapon systems that can search for, identify, select and attack targets without any human control.

Alexander Harang highlights four particular issues concerning such weapon systems. Firstly, killer robots may potentially lower the threshold for armed conflict. As Harang emphasizes, “it is easier to kill with a joystick than a knife”. Secondly, the development, deployment and use of armed autonomous unmanned systems should be prohibited, as machines should not be allowed to make the decision to kill people. Thirdly, the range and deployment of weapons carried by unmanned systems is threatening to other states and should therefore be limited. Fourthly, the arming of unmanned weapon platforms with nuclear weapons should be banned.

As a response to these challenges, the Campaign to Stop Killer Robots urgently calls upon the international community to establish an arms control regime to reduce the threat posed by robotic systems. More specifically, the Campaign calls for an international agreement to prohibit fully autonomous weapon platforms. The Campaign is an international coalition of 43 NGOs based in twenty countries, supported by eight international organisations, a range of scientists, Nobel laureates and regional and national NGOs. The Campaign has already served as a forum for high-level discussion. So far, 24 states at the UN Human Rights Council have participated in talks. The Campaign has also taken these demands further at the 2013 meeting of the Convention on Certain Conventional Weapons (CCW), where more than 20 state representatives participated. Harang emphasizes that “the window of opportunity is open now, and [the issue] should be addressed before the military industrial complex proceeds with further development of these weapon systems.”

Finally, Harang notes the difficulties in establishing clear patterns of accountability in war. Who is responsible when a robot kills on the battlefield? Who is accountable in the event of a malfunction in which an innocent civilian is killed? In legal terms, it is unclear where responsibility and accountability lie, and whether this is somewhere in the military chain of command or with the software developer. One thing is certain: the robot cannot be held accountable or be prosecuted if IHL is violated.

 

The legal conundrum

Although unmanned robotic technology is developing rapidly, the laws which govern these matters are evolving slowly. In the legal context it is important to assess how autonomous weapon systems conform to existing legislation, be it international humanitarian law, human rights law or general international law. Harang emphasizes that this technology also challenges arms control regimes and the existing disarmament machinery. In particular, it raises concerns with regard to humanitarian law, which requires distinction between civilians and combatants in war. Addressing such legal concerns, Kjetil Mujezinovic Larsen reflects on how fully autonomous weapons can be discussed in light of existing international humanitarian law. Larsen sets out some legal premises for discussing whether such weapons are already illegal and whether they should be banned.

Under IHL, autonomous weapon platforms can be either inherently unlawful or potentially unlawful. Such weapons can then be evaluated against two particular principles of IHL, namely proportionality and distinction. Inherently unlawful weapons are always prohibited; other weapons are lawful but might be used in an unlawful manner. Where do autonomous weapons fit?

Larsen explains that unlawful weapons are weapons that, by design, cause superfluous injury or unnecessary suffering, such as chemical and biological weapons. As codified under IHL, such weapons are unlawful with regard to the principle of proportionality, for the protection of combatants. This prohibition does not immediately apply to autonomous weapons, because it is concerned with the effect of the weapons on the targeted individual, not with the manner of engagement. The concern with autonomous weapons lies precisely in the way they are deployed. So, if autonomous weapons were used to deploy chemical, biological or nuclear weapons, they would clearly be unlawful.

Furthermore, as outlined in IHL, any armed attack must be directed at a military target. This is to ensure that the attack distinguishes between civilians and combatants. If a weapon is incapable of making that discrimination, it is inherently unlawful. Due to the inability of robots to discriminate between civilians and combatants, using them would imply uncontrollable effects. Thus, such weapons are incapable of complying with the principle of distinction, which is fundamental in international humanitarian law.

The Human Rights Watch’s Losing Humanity Report states that “An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear to be incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity”. However, as Christof Heyns states in his report to the Human Rights Council “it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements [.]”

As Larsen highlights, the question of compliance is a big controversy in the legal sphere. From one legal viewpoint, the threshold for prohibiting weapons is rather high. Hard-core IHL lawyers will say that prohibition will only apply if there are no circumstances whatsoever in which an autonomous weapon can be used lawfully. For example, there are defensive autonomous weapons that are programmed to destroy incoming missiles. Autonomous weapons are also used to target military objectives in remote areas where there is no civilian involvement. Under these circumstances, autonomous weapons do not face the problem of distinction and discrimination. However, the presumption of civilian status in IHL states that in case of doubt as to whether an individual is a combatant or a civilian, he or she should be treated as a civilian. Will technology be able to make such assessments and take precautions to avoid civilian casualties? How can an autonomous weapon be capable of doubt, and act on doubt?

In addition to such legal concerns, Larsen also discusses a range of ethical and societal concerns. Some argue that autonomous weapons will make it easier to wage war, because there is less risk of death and injury to one’s own soldiers. Such technology could also make it easier for authoritarian leaders to suppress their own people, because the risk of a military coup is reduced. Furthermore, using autonomous weapons increases the distance between the soldier and the battlefield, and makes human emotions and ethical considerations irrelevant. The nature of warfare would change, as robots cannot show compassion or mercy.

On the other hand, some scholars argue that such weapons may be advantageous in terms of IHL. Soldiers, under psychological pressure and steered by emotions, can choose to disobey IHL. An autonomous weapon would not have the reason or capacity to snap, and robots may achieve military goals with less violence. This is based on the argument that soldiers can kill in order to avoid being killed; as robots would not be subject to such a dilemma, it could be easier for them to capture rather than kill the enemy.

Potentially, autonomous weapons could make the use of violence more precise, leading to less damage and risk for civilians. This, however, requires substantial development of software. Throughout history, weapons have been passive tools that humans have actively manipulated to achieve a certain purpose. Larsen suggests that if active manipulation is taken out of the equation, perhaps autonomous weapons cannot be considered weapons in the IHL sense. Perhaps IHL as such is insufficient to resolve the legal disputes about LARs. This would call for the establishment of new laws and regulations to address the issue of accountability. Alternatively, a ban could resolve the dispute over the level of unlawfulness by designating such weapons as inherently unlawful. Regardless, Larsen emphasizes the urgent need for a comprehensive and clear legal framework, particularly given the rapid technological development in this field. Larsen also notes that lawyers have to defer to technology experts to determine whether such technology can comply with current legal frameworks.

 

Technological determinism?

Due to technological advancement, Tobias Mahler argues that it is realistic to expect automated and autonomous technology to be implemented in all spheres of society in the near future. In this context, how realistic is a ban on killer robots? Mahler considers the chances slim, and foresees a technological domino effect: once some states acquire autonomous robots, other states can be expected to follow. From a technological and military perspective, the incentives for doing so are fairly strong.

In addition to the conventional features of LARs, such as surveillance equipment, robustness and versatility, robots can also be programmed to communicate with each other. This would imply programming different vehicles to share and exploit the information they collect, advancing the strategic approach to finding and attacking targets. Such communication between machines is already used in civilian technology such as autonomous vehicles, and is also assumed to be in use in the military complex. The development and advancement of military technology of this kind is not disclosed to the public, due to strategic and security considerations. Thus, the technological opportunities of LARs are immense for the military sector.

Mahler emphasizes that although the military hardware may look frightening, the real threat lies in the algorithms of the software determining the decisions that are made. It is the software that controls the hardware and makes decisions concerning human lives. Robots rely on human specifications on what to do through software. Due to limitations of what programmers can specify, software development is prone to shortcomings and challenges. How do we deal with the artificial intelligence of autonomous robots?

Software malfunctions as well as hacking are problems in all spheres where technology is used. In a future permeated by technology, any device could cause potential harm to civilians. In this context, Mahler suggests that there is still no full clarity about what a killer robot is. Questioning the relative lethality of autonomous weapons, he suggests that “in 20 years, when everything will be autonomous, you might be killed by a door.” However, he points out that the concerns related to autonomous weapon systems should not be ignored or avoided; this argument simply shows that such challenges are present in both the civilian and the military context. Nevertheless, it is unclear who the responsible party would be when killer robots are used.

Other concerns raised by Mahler regard whether LAR technology differs from other types of weapon technology and may change the nature of war. In a war situation, would soldiers prefer to be attacked by another soldier, or by a killer robot? How will the dehumanization of war impact soldiers and the public? Is it correct to assume that soldiers would prefer to fight other soldiers? A soldier in a combat situation could make an ethical judgement and show mercy, contrary to robots. However, there is not much evidence to suggest that mercy is commonly shown among soldiers. On the other hand, governments could gain great public support by promoting LARs as a means of limiting the loss of soldiers. As Mahler states, “people are really concerned about loss of lives of their soldiers, and if there is any way to protect them, then one might go that way.”

One of the questions that remains unanswered is whether software developers are able to write software sufficiently advanced for autonomous war machines. One way of dealing with such concerns would be to develop robots that comply with IHL. Mahler ponders whether a pre-emptive ban may already be too late in light of current technological development. Perhaps the aim should instead be to regulate robots and artificial intelligence so that they comply with current legislation.

In this regard, Mahler points out the need for further development of the current conceptual framework of war and the law of armed conflict. The concepts currently used in IHL may be insufficient for the future of war. For instance, in a situation where robots are fighting robots, who is considered a combatant under IHL? Is it the software programmer, or the president who decided to send out the killer robot? Future technology might be able to distinguish between civilians and combatants using face recognition or iris scans. For now, however, this issue remains unresolved.

Regardless of technological inevitability, further discussion of this issue is necessary. Legal, ethical and societal challenges must be identified, and the means to solve them must be specified. Addressing these issues is important in order to curb unintended humanitarian consequences in the future. Perhaps such consequences can be avoided through a ban on LAR systems, or perhaps current concepts of IHL need to be broadened in order to tackle legal shortcomings. Maybe software developers will one day be able to write programs that comply with IHL. Nevertheless, it is important to discuss and address these issues based on the knowledge and tools we have in place today. The future of war is still not determined.

Literature:

United Nations General Assembly – Human Rights Council (2013) “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns”. Available at http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Human Rights Watch (2012) “Losing Humanity Report”. Available at http://www.hrw.org/node/111291/section/1

Campaign to Stop Killer Robots (2013) “Who we are”. Available at http://www.stopkillerrobots.org/coalition

The complete video of the “Killer Robots: the future of War?” seminar is available here.