Tag Archives: robot technology

Fighting the War with the Ebola Drone

Written by

A particularly interesting and puzzling corner of the War on Ebola imaginary is inhabited by the triad consisting of Ebola, humanitarian governance, and unmanned technology, drones more precisely. Out of this triad has emerged what will here be called "the Ebola Drone". The Ebola Drone has materialized from a confluence of ideas about the relationship between diseases and (inter)national security; the means and ends of effective aid delivery; and the potentiality of drones to "be good".

The Ebola Drone is imagined to be able to do many things, including seeing, sensing and shooting Ebola-infected individuals to protect Western health workers participating in the War on Ebola. At the same time, the Ebola Drone reflects the efforts made by the drone industry and the drone DIY movement to reshape the public notion of drones as spy or killer drones: the Ebola Drone is designated as a humanitarian drone; it can carry medication and other aid where health workers cannot go, due to "insecurity" or bad roads. This latter idea is, not coincidentally, also feeding into the current private-sector frenzy to identify and promote credible and publicly acceptable uses of small cargo-carrying drones.

Mike Crang and Stephen Graham refer to such narratives as “technological fantasies” that position emergent technological systems as necessary — and effective — responses to dire threats. They note that such narratives are not just instrumental devices designed to achieve desired ends; they also actively shape the larger security cultures and afford them influence. Carved out from mainstream media as well as the more obscure parts of the blogosphere, this is precisely the type of work the multiple Ebola Drone narratives appear to be doing.

Back in September, Ebola was framed by President Obama as an issue of national security (complete with a parallel manufacturing of Ebola terrorism scares) and by the UN as a threat to international peace and security. With the deployment of AFRICOM, the military medical response at play since September has been patterned on the modus operandi of the War on Terror. According to division spokesman Lt. Col. Brian DeSantis:

“Our job is to build Ebola treatment units and train health care workers. There is no mission for us to handle infected people, human remains or medical waste… We will have our own facility separate from the population where we will handle force protection and life support, similar to our facilities in Iraq or Afghanistan.”

There has also been some generalized optimism about the potential of robot technology to serve as force protection and force multiplier in the War on Ebola. The answers to questions of how US health workers can help West Africa while minimizing risk to themselves (and their country) include suggestions for “mortuary robots” to deal with the “Ebola burial problem”: the Robokiyu Rescue Robot has a pair of giant claws to pull the injured or the dead onto a slide to move them away. Another idea is to use robots for crowd control to protect the physical security of hospital staff in the case of a riot.

Then there is the Ebola Drone. There are creative proposals for using the Ebola Drone for reconnaissance, intelligence gathering and surveillance, premised on the idea that it is possible and meaningful to try to "see" Ebola from a distance so as to identify infected and thus potentially threatening individuals. One commentator proposes that drone reconnaissance could enable the military to look for "what's happening in this village? Any signs of illness? [How] are people fleeing?". Another commentator suggests that if Global Hawks were based at the US drone base in Niger, they could easily fly over Liberia, providing surveillance which "could help the fight against Ebola by looking for unusual human behavior, like a sudden vehicle exodus or overcrowded hospitals, which might give away an outbreak before it's reported".

Elaborate scenarios are devised to prove the value of the Ebola Drone in producing ground truth: "Someone's sick, they call a cab to take them to the hospital, they may be shedding the virus [via fluids] in the cab. They reach the hospital and there's no beds; then they go home and they've contaminated these cabs. It's the sort of subtle clue you can catch from space, with enough time, patience and, most importantly, attention. That's where drones come in, which could provide more eyes on potential hotspots." No longer just an eye in the sky, but a militarized medical eye in the sky.

A different proposal for detecting sick locals is to use thermal imagery. In a discussion on DIY Drones, one user wonders if UAVs could be used to detect people with Ebola: "people who have Ebola have an increased temperature as it is one of the symptoms and from what I have seen on News most of the checking at airports is done by individuals with infra-red thermometer. The UAV could highlight individuals who might have symptoms and they could be isolated or given treatment." Of course, even if infrared imaging could effectively detect fever through layers of cloth and sweat, it could not detect the cause of the fever.

Most remarkable, however, is the very aspirational rhetoric on the cargo potentiality of the Ebola Drone to drop off medication, food and water to Ebola-affected populations. In testimony before Congress about the Defense Department's efforts to contain the Ebola outbreak, Assistant Secretary of Defense Michael Lumpkin reiterated: "I traveled to the region thinking we faced a healthcare crisis with a logistics challenge. In reality, we face a logistics crisis focused on a healthcare challenge."

The call for drones to carry medicine to crisis-hit or generally inaccessible areas (which unfailingly have been imagined to be Somewhere in Africa) is not new. In 2012, Jack Chow pondered the potential of "predators for peace" to deliver HIV/AIDS medication. According to Chow, cargo drones could be a "game changer" for delivering aid, eliminating or reducing the corruption, theft and insecurity (as well as the consequences of difficult weather conditions and problems caused by disasters) that frequently undermine the delivery of aid.

Conversely, the manufacturers of smaller cargo drones for civil airspace heavily emphasize their potential humanitarian use: AERMATICA, an Italian UAV manufacturer, has suggested that "Civil UAV technologies will be able to aid considerably in human relief operations", evolving from relief-site monitoring tasks to a more incisive participation in on-field operations through the use of cargo drones. Part of a broader movement of Silicon Valley UAV entrepreneurs, the startup Matternet describes plans to create "the next paradigm for transportation" of goods and medicines to remote settlements through a network of unmanned aerial vehicles, while another startup, ARIA (Autonomous Roadless Intelligent Arrays), wants to provide rural Africa with a humanitarian drone skyway network that can help launch "a new strategy of fighting poverty from the air". There is the MedicAir Courier UAV from BFA Systems, and countless other examples. While DHL, Google and Amazon have joined the race to develop cargo drones, the amateur hour is far from over, and neither is the struggle for access to airspace and popular legitimacy.

The Ebola Drone is imagined as a useful way to carry what doesn't exist either here or there: an effective and available cure for Ebola. According to one commentator, "a flying drone can prove useful to send medical supplies to remote (dangerous) locations. It would act as a simple way to either stop or slow down the spread of the Ebola virus" and be a "safer alternative than people travelling to dangerous areas just to deliver materials." Yet it is unclear how the drone pilot would identify the individuals, communities or health facilities that were to receive and distribute this medication.

The Ebola Drone can also mediate closed airspace: "surely the United States can use them to bring protective medical gear to hospitals in countries like Liberia or Sierra Leone. Closed borders to commercial air traffic are no barriers to drones." Finally, the Ebola Drone is also tasked with the old jobs of bringing hope and dropping pamphlets to suffering peoples, as if despair and ignorance were behind the whole epidemic: "Drones also can bring hope and, say, by pamphlets deliver valuable information to West Africans". As "knowledge can combat disease and the fear that precedes" it, these pamphlets are supposed to inform people of how to protect themselves, how to discern the signs of sickness, and how to treat the stricken or safely dispose of the dead.

Existing technology has very limited cargo-carrying capacity and can fly only for a short time. As pertinently observed by Timothy Luege, the problem is the lack of a “possible scenario in the current Ebola crisis in which you can’t deliver something more efficiently with a motorbike within the area that the drone can cover”. According to Luege, this builds on a misdiagnosis of Ebola as a problem of delivering drugs to remote areas (as we know, the current Ebola outbreak is so serious because it is urban in nature).

Finally, understanding Ebola as a "supply chain challenge" also engages the classic technology-transfer argument, whereby military technology is better and its re-use for civilian purposes is both responsible and economical: given the region's bad roads and shortage of trucks, civilian drone technology cannot deliver the "tons of aid" needed. Hence "military-grade drones" are the answer. Part of the appeal of drones is their ability to undertake 'dull, dirty, and dangerous' military jobs, and some of the dullest, dirtiest and most dangerous work is related to supplying troops. The Kaman K-Max has been "extraordinarily successful at delivering supplies to American troops in remote parts of Afghanistan" and "could easily be repurposed to deliver humanitarian aid" (from 2011, the manufacturer of the K-Max began foreseeing its migration into civilian use, explicitly including humanitarian relief). It could solve problems related to infrastructure and crime and enable more remote management, which would reduce the number of personnel needed on the ground in remote regions. The not-unexpected second part of this argument is that the US already owns the K-Max, which is just sitting idle in storage.

In the end, then, it seems the Ebola Drone is mostly a set of imaginations about extended uses of military drones, whereby some drones do good to make many drones look better. Imagined for deployment in the War on Ebola, it is endowed with the potentiality for being surgically precise, avoiding the burden of having boots on the ground and allowing for remote management. Meanwhile, West Africans are strangely absent from the technoscape created by the Ebola Drone imaginary: it is a technoscape inhabited only by Western actors, who possess hardware, technical skills and the know-how of crisis management. The locals seem to be dead, infected or potentially infected. They are allotted roles as threat subjects (the Ebola terrorist scenario) or victims (in a humanitarian crisis), but either way as individuals and communities mostly void of agency. We should remember that while this resonates with the rationales underlying the militarized approach to Ebola, and the determinist views of technology accompanying it, on a different level the militarized approach is also a response to a lack of knowledge about how to deal effectively with a disease emerging from structural injustice, a post-conflict context and "culture". Just as drones can't clean up combat, no Ebola Drone can ever "combat" disease.

Note: This blog, written by Kristin Bergtora Sandvik (PRIO), was originally posted on the blog of Mats Utas, Associate Professor in Cultural Anthropology at the Nordic Africa Institute.

The promise and perils of ‘disaster drones’

Written by

The dire humanitarian consequences of the use of unmanned aerial vehicles (UAVs, or drones) in conflict have become all too familiar. In contrast, there has been much less public discussion about the potential humanitarian uses of drones. So-called ‘disaster drones’ offer humanitarian agencies a range of possibilities in relation to crisis mapping, search and rescue and (some way off in the future) cargo transport and relief drops.

How can the humanitarian community benefit from the technological advances that UAVs and other unmanned or automated platforms offer without giving further legitimacy to a UAV industry looking for civilian applications for drones developed for military purposes? Are there particular ethical, legal and financial implications with respect to procuring disaster drones? This article gives an overview of current and foreseeable uses of disaster drones and ‘(ro)bots without borders’, highlighting the need for a more thorough understanding of the commercial logic underpinning the transfer of technology from the military to the civilian and humanitarian fields, and the systematic attempts being made by the UAV industry to rebrand itself as a humanitarian actor. It also shares insights from a recent workshop on the potential role of drones in Red Cross search and rescue operations, and concludes by linking the issue of the disaster drone to broader questions regarding humanitarian technology.

Available at: Sandvik, Kristin Bergtora & Lohne, Kjersti (2013) "The promise and perils of 'disaster drones'", Humanitarian Exchange Magazine (58). ISSN 1472-4847.

Killer Robots: the Future of War?

Written by

In September 2013, PRIO and the Norwegian Centre for Humanitarian Studies hosted the breakfast seminar “Killer Robots: the Future of War?”. The goal of the seminar was to contribute to the public debate on autonomous weapons, and identify key ethical and legal concerns relating to robotic weapon platforms. The event was chaired by Kristin B. Sandvik (PRIO), and the panellists were Alexander Harang (Director, Fredslaget), Kjetil Mujezinovic Larsen (Professor of Law, Norwegian Centre for Human Rights, UiO) and Tobias Mahler (Postdoctoral Fellow, Norwegian Research Center for Computers and Law, UiO). Based on the panel discussion, the following highlights the prospects of banning autonomous weapons and legal and ethical challenges in light of current technological development.

Killer robots and the case against them

As a result of technological advancement, autonomous weapon platforms, or so-called lethal autonomous robots (LARs), may well be on the horizon of future wars. This development, however, raises legal and ethical concerns that need discussion and assessment. Chairing the seminar, Kristin Bergtora Sandvik highlights that such perspectives are absent from current political debates in Norway, and points out that "autonomous weapons might not be at your doorstep tomorrow or next week, but they might be around next month, and we think that it is important that we begin thinking about this, begin understanding what this is actually about, and what the complications are for the future of war."

Killer robots are defined as weapon systems that identify and attack without any direct human control. As outlined in the Human Rights Watch Losing Humanity report, unmanned robotic weapons can be divided into three categories. First, human-controlled systems, or human-in-the-loop systems, are weapon systems that can perform tasks delegated to them independently, but with a human in the loop; this category constitutes currently available LAR technology. Second, human-supervised systems, or human-on-the-loop systems, are weapon systems that can conduct targeting processes independently, but remain under the real-time supervision of a human operator who can override their automatic decisions. Third, fully autonomous systems, or human-out-of-the-loop systems, are weapon systems that can search for, identify, select and attack targets without any human control.

Alexander Harang raises four particular points about such weapon systems. Firstly, killer robots may lower the threshold for armed conflict; as Harang emphasizes, "it is easier to kill with a joystick than a knife". Secondly, the development, deployment and use of armed autonomous unmanned systems should be prohibited, as machines should not be allowed to make the decision to kill people. Thirdly, the range and deployment of weapons carried by unmanned systems is threatening to other states and should therefore be limited. Fourthly, the arming of unmanned weapon platforms with nuclear weapons should be banned.

As a response to these challenges, the Campaign to Stop Killer Robots urgently calls upon the international community to establish an arms control regime to reduce the threat posed by robotic systems. More specifically, the Campaign calls for an international agreement to prohibit fully autonomous weapon platforms. The Campaign is an international coalition of 43 NGOs based in 20 countries, supported by eight international organisations as well as a range of scientists, Nobel laureates and regional and national NGOs. The Campaign has already served as a forum for high-level discussion: so far, 24 states have participated in talks at the UN Human Rights Council, and the Campaign also pressed its demands at the 2013 meeting on the Convention on Certain Conventional Weapons (CCW), where more than 20 state representatives participated. Harang emphasizes that "the window of opportunity is open now, and [the issue] should be addressed before the military industrial complex proceeds with further development of these weapon systems."

Finally, Harang notes the difficulties in establishing clear patterns of accountability in war. Who is responsible when a robot kills on the battlefield? Who is accountable in the event of a malfunction in which an innocent civilian is killed? In legal terms, it is unclear where responsibility and accountability lie: somewhere in the military chain of command, or with the software developer. One thing is certain: the robot itself cannot be held accountable or be prosecuted if IHL is violated.

 

The legal conundrum

Although unmanned robotic technology is developing rapidly, the law governing it evolves slowly. In the legal context it is important to assess how autonomous weapon systems conform to existing law, be it international humanitarian law, human rights law or general international law. Harang emphasizes that this technology also challenges arms control regimes and the existing disarmament machinery. In particular, it raises concerns with regard to humanitarian law, which requires distinction between civilians and combatants in war. Addressing such legal concerns, Kjetil Mujezinovic Larsen reflects on how fully autonomous weapons can be discussed in light of existing international humanitarian law, setting out some legal premises for discussing whether such weapons are already illegal and whether they should be banned.

Under IHL, autonomous weapon platforms can be either inherently unlawful or potentially unlawful, and can be evaluated against two particular principles of IHL, namely proportionality and distinction. Inherently unlawful weapons are always prohibited; other weapons are lawful in themselves but might be used in an unlawful manner. Where do autonomous weapons fit?

Larsen explains that inherently unlawful weapons are weapons that, by construct, cause superfluous injury or unnecessary suffering, such as chemical and biological weapons. As codified under IHL, such weapons are unlawful with regard to the principle of proportionality, for the protection of combatants. This prohibition does not immediately apply to autonomous weapons, because it is concerned with the effect of the weapons on the targeted individual, not with the manner of engagement; the concern with autonomous weapons lies precisely in the way they are deployed. So, if autonomous weapons were used to deploy chemical, biological or nuclear weapons, they would clearly be unlawful.

Furthermore, as outlined in IHL, any armed attack must be directed at a military target, to ensure that the attack distinguishes between civilians and combatants. If a weapon is incapable of making that discrimination, it is inherently unlawful. Due to the inability of robots to discriminate between civilians and combatants, using them would imply uncontrollable effects. Such weapons are thus incapable of complying with the principle of distinction, which is fundamental in international humanitarian law.

The Human Rights Watch Losing Humanity report states that "An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear to be incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity". Similarly, as Christof Heyns states in his report to the Human Rights Council, "it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements […]".

As Larsen highlights, the question of compliance is highly controversial in the legal sphere. From one legal viewpoint, the threshold for prohibiting weapons is rather high: hard-core IHL lawyers will say that a prohibition applies only if there are no circumstances whatsoever in which an autonomous weapon can be used lawfully. For example, there are defensive autonomous weapons programmed to destroy incoming missiles, and autonomous weapons are also used to target military objectives in remote areas where no civilians are involved. Under these circumstances, autonomous weapons do not face the problem of distinction and discrimination. However, the presumption of civilian status in IHL states that in case of doubt as to whether an individual is a combatant or a civilian, he or she should be treated as a civilian. Will technology be able to make such assessments and take precautions to avoid civilian casualties? How can an autonomous weapon be capable of doubt, and act on doubt?

In addition to such legal concerns, Larsen also discusses a range of ethical and societal concerns. Some argue that autonomous weapons will make it easier to wage war, because there is less risk of death and injury to one's own soldiers. Such technology can also make it easier for authoritarian leaders to suppress their own people, because the risk of a military coup is reduced. Furthermore, using autonomous weapons increases the distance between the soldier and the battlefield, rendering human emotions and ethical considerations irrelevant. The nature of warfare would change, as robots cannot show compassion or mercy.

On the other hand, some scholars argue that such weapons may be advantageous in terms of IHL. Soldiers, under psychological pressure and steered by emotions, can choose to disobey IHL; an autonomous weapon would have no reason or capacity to snap, and robots may achieve military goals with less violence. Soldiers may kill in order to avoid being killed, and as robots would not be subject to that dilemma, it could be easier for them to capture rather than kill the enemy.

Potentially, autonomous weapons can make the use of violence more precise, leading to less damage and risk for civilians. This, however, requires substantial development of software. Throughout history, weapons have been passive tools that humans actively manipulate to achieve a certain purpose. Larsen suggests that if active manipulation is taken out of the equation, perhaps autonomous weapons cannot be considered weapons in the IHL sense, and perhaps IHL as such is insufficient to resolve the legal disputes about LARs. This would call for the establishment of new laws and regulations addressing the issue of accountability. Alternatively, a ban could resolve the dispute over the degree of unlawfulness by constituting such weapons as inherently unlawful. Regardless, Larsen emphasizes the urgent need for a comprehensive and clear legal framework, particularly given the rapid technological development in this field. Larsen also notes that lawyers have to defer to technology experts on whether such technology can comply with current legal frameworks.

 

Technological determinism?

Given the pace of technological advancement, Tobias Mahler argues that it is realistic to expect automated and autonomous technology to be implemented in all spheres of society in the near future. In this context, how realistic is a ban on killer robots? Mahler views the chances as slim, and foresees a technological domino effect: once some states acquire autonomous robots, other states are expected to follow. From a technological and military perspective, the incentives for doing so are fairly strong.

In addition to the conventional features of LARs, such as surveillance equipment, robustness and versatility, robots can also be programmed to communicate with each other. This would imply programming different vehicles to share and exploit the information they collect, advancing the strategic approach to finding and attacking targets. Such machine-to-machine communication is already used in civilian technology such as autonomous vehicles, and is assumed to be in use in the military complex as well. Such development and advancement of military technology is not presented to the public, due to strategic and security considerations. The technological opportunities LARs offer the military sector are thus immense.

Mahler emphasizes that although the military hardware may look frightening, the real threat lies in the algorithms of the software determining the decisions that are made. It is the software that controls the hardware and makes decisions concerning human lives. Robots rely on human specifications, given through software, of what to do. Due to limitations in what programmers can specify, software development is prone to shortcomings and challenges. How do we deal with the artificial intelligence of autonomous robots?

Software malfunctions as well as hacking are problems in all spheres where technology is used, and in a future permeated by technology any device could cause potential harm to civilians. In this context, Mahler suggests that there is still no full clarity as to what a killer robot is. Questioning the relative lethality of autonomous weapons, he suggests that "in 20 years, when everything will be autonomous, you might be killed by a door." However, he points out that the concerns related to autonomous weapon systems should not be ignored or avoided simply because such challenges are present in both the civilian and the military context. Moreover, it remains unclear who the responsible party would be when killer robots are used.

Other concerns raised by Mahler regard whether LAR technology differs from other types of weapon technology and may change the nature of war. In a war situation, would soldiers prefer to be attacked by another soldier, or by a killer robot? How will the dehumanization of war affect soldiers and the public? Is it correct to assume that soldiers would prefer to fight other soldiers? A soldier in a combat situation could make an ethical judgment and show mercy, contrary to a robot; however, there is not much evidence to suggest that mercy is commonly shown among soldiers. On the other hand, governments could gain great public support by promoting LARs as a means of limiting the loss of soldiers. As Mahler states, "people are really concerned about loss of lives of their soldiers, and if there is any way to protect them, then one might go that way."

One of the questions that remain unanswered is whether software developers are able to write software sufficiently advanced for autonomous war machines. One way of dealing with such concerns would be to develop robots that comply with IHL. Mahler ponders whether a pre-emptive ban may already be too late in light of current technological development; perhaps the aim should instead be to regulate robots and artificial intelligence so that they comply with current legislation.

In this regard, Mahler points out the need for further development of the current conceptual framework of war and the law of armed conflict, as the concepts currently used in IHL may be insufficient for the future of war. For instance, in a situation where robots are fighting robots, who is considered a combatant under IHL? The software programmer, or the president who decided to send out the killer robot? Future technology could perhaps distinguish between civilians and combatants using face recognition or iris scans; for now, however, this issue remains unresolved.

Regardless of technological inevitability, further discussion of this issue is necessary. Legal, ethical and societal challenges must be identified, and the means to solve them must be specified. Addressing these issues is important in order to curb unintended humanitarian consequences in the future. Perhaps these consequences can be avoided through a ban on LAR systems, or perhaps the current concepts of IHL need to be broadened in order to tackle legal shortcomings. Maybe software developers will one day be able to write programs that comply with IHL. Either way, it is important to discuss and address these issues based on present knowledge and the tools we have in place. The future of war is still not determined.

Literature:

United Nations General Assembly – Human Rights Council (2013) “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns”. Available at http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Human Rights Watch (2012) “Losing Humanity Report”. Available at http://www.hrw.org/node/111291/section/1

Campaign to Stop Killer Robots (2013) “Who we are”. Available at http://www.stopkillerrobots.org/coalition

The complete video of the “Killer Robots: the future of War?” seminar is available here.