By Sahana Bhagat
Technological advances are rapidly outpacing our ability to reflect and decide whether a
particular new technology is one that serves the public good. Technologists, entrepreneurs,
policymakers, ethicists, and legal scholars from Microsoft to MIT are now openly questioning
how artificial intelligence and what is called “machine learning” can be designed and
regulated to ensure such automated systems do no harm, from entrenching racial stereotypes
or other forms of discrimination in insurance, criminal justice, and healthcare, to many other
domains.
Nowhere is the dystopian vision of machines increasingly taking over human agency more
frightening than in the current research on lethal autonomous weapons systems, known as
“LAWS”. LAWS are weapons that utilize artificial intelligence to locate, identify, and attack
targets without human intervention. Dubbed ‘killer robots,’ these technologies, their critics
argue, lack human morality and judgement; critics also point out the danger of assuming that the
automated exercise of lethal force is more ‘objective’ than human reasoning.
While it is generally assumed that lethal autonomous weapons systems have not yet been
deployed, some existing weapons systems, particularly defensive weapons, share
some of the same characteristics. A Turkish state-owned defense company, STM, recently
unveiled a “kamikaze drone” complete with facial recognition technology. Increasing military
investment in artificial intelligence, and what are known as “loitering munitions” (weapons
systems that can “loiter” in a target area for some time before automatically identifying a target
and striking) could make LAWS a reality within the next few years.
Those who advocate for the development of LAWS cite several advantages. Because
autonomous weapons lack a ‘control-and-communication link’ between system and operator,
they are seen as more secure, i.e. less vulnerable to interception and attack. Proponents
also point out that, in addition to being more secure, autonomous weapons can act without the
delay between a command from the operator and its interpretation and execution by the system.
Countering critics’ concerns about their use, proponents argue that because these systems do not
feel fear, they are capable of making more rational decisions than human combatants. The
argument here is that systems will not react to a threat with an intense need for self-preservation,
and will therefore be less violent and show greater restraint than a soldier.
The weaponization of this new technology raises the question of how it should be
governed and regulated. LAWS mark a paradigm shift in warfare. They challenge longstanding
views on the morality of war and blur existing conceptions of responsibility in war. As
technology moves further from direct automation and towards systems that can adapt, learn, and
adjust, their actions become increasingly unpredictable. By definition, imbuing a system with
autonomous functions means humans cannot control how it will react. The real issue,
then, is that there is an unprecedented degree of autonomy in a weapons system, and no legal,
moral, ethical, or technological infrastructure to support, regulate, or govern it.
At present, debates on these challenges are taking place under the United Nations Convention
on Certain Conventional Weapons in Geneva. UN Secretary-General António Guterres
called for their prohibition in March of this year. The UN’s Group of Governmental Experts, a
subsidiary body of the Convention on Certain Conventional Weapons, began meeting in 2016 to
bring together state signatories, international organizations, nongovernmental organizations, and
academic institutions in discussions on LAWS. Their most recent meeting was in November of
2019. Though the GGE has been discussing LAWS since 2016, little has been achieved
beyond defining LAWS and outlining ‘best practices’ for their use. In the GGE’s August 2018
meeting, 26 states advocated for a ban on fully autonomous weapons, while 12 states, including
the United States, the United Kingdom, and Russia, opposed a treaty on LAWS.
A report by Human Rights Watch issued last year argues that machines are unable to distinguish
between combatants and civilians, especially in armed conflicts where the lines between friend
and foe are unclear. In these situations, the report argues, the opportunity for fratricide and
civilian death is high, and the pace of such an attack would be too fast for human intervention to
prevent it once it begins. From a legal perspective, the question of responsibility poses a major
challenge. How can a machine be held accountable for civilian deaths or fratricide? Is it the
programmer who will be prosecuted, even though the machine acts autonomously?
Experts have also expressed concerns over the unreliability of fully autonomous weapons and the
high risk of uncontrolled proliferation that would inevitably accompany development of LAWS.
In 2015, a large group of AI researchers and robotic engineers released an open letter calling for
a ban on lethal autonomous weapons. As of 2018, the letter had over 20,000 signatures,
including those of Elon Musk and Steve Wozniak.
Advocacy efforts are largely centered in nongovernmental organizations. The Campaign to Stop
Killer Robots, formed in October 2012, is a coalition of non-governmental organizations (NGOs)
working to ban fully autonomous weapons and thereby retain meaningful human control over the
use of force. Thirty countries and the European Parliament have signed on to its call to ban
fully autonomous weapons.
For more information see PAX’s report titled “Slippery Slope: The Arms Industry and
Increasingly Autonomous Weapons” published on Nov. 11, 2019.
To learn how you can become engaged in advocacy against LAWS, please visit