It’s time to stop the drift towards ‘Killer Robots’

Issue: June-July 2023
A Russian-made KYB-BLA suicide drone or ‘loitering munition’ shot down over Kyiv in March 2022. PHOTO: Ukrainian ministry of defence via Wikimedia Commons (CC BY 4.0)
Feature by Peter Burt

Back in 2018, Drone Wars UK published Off The Leash, a report which we considered to be a ground-breaking study into the development of armed autonomous drones – ‘killer robots’ able to operate without human control.

Our study showed that the British ministry of defence (MoD) was actively funding research into technology supporting the development of autonomous weapon systems. We mapped out the agencies, laboratories, and contractors undertaking this research and we identified some of the research projects they were working on.

Five years later, many of these research projects have now evolved into real-life military capabilities. The royal navy is now operating uncrewed minesweeping craft and is trialling uncrewed submarines; the air force is testing drones which can operate together as an autonomous swarm; and the army is piloting driverless trucks to deliver supplies to troops on the front line.

These systems depend on advanced computing and robotics technology to operate – and they are just a fraction of the wide range of programmes based around artificial intelligence (AI) technology which the MoD is pioneering.

As yet, these systems are only able to conduct straightforward tasks in relatively simple environments, but the MoD is keen to exploit what it considers to be the ‘battle-winning capability’ of AI.

Risks

AI technologies are not without risk, especially if they are used to control and operate weapon systems. Experience from recent conflicts in Ukraine, Nagorno-Karabakh and the Middle East shows we are beginning to see a new generation of weapon systems being deployed which show a trend towards decreasing levels of human control.

Automated air defence systems have long been in common use. While they are not completely autonomous, with a human operator supposedly in command, they do have complex computer-controlled and increasingly autonomous features.

There have been a number of high-profile failures of such systems, including the shooting down of Iran Air Flight 655 (1988), Malaysia Airlines Flight MH17 (2014) and Ukraine International Airlines Flight PS752 (2020). There were also two instances where friendly aircraft were attacked by Patriot air defence systems during the 2003 Iraq war.

There are three main causes for these failures: the speed at which the systems operate; the complexity of the tasks they perform; and the demands their use places on human operators.

As more and more tasks have been delegated to computer systems, the human operators of air defence systems have changed from active controllers to passive supervisors. This has made human control over specific use of force decisions increasingly meaningless, resulting in dangerous mistakes with tragic consequences.

Autonomy in weapon systems, enabled by modern computational methods, is thus a cause for concern as it threatens human rights, raises issues over compliance with the laws of armed conflict, and is based on complex and often opaque technologies which may not be clearly understood by the operator.

Killer robots in Ukraine?

AI and autonomous technology are currently playing a significant role in the Ukraine war. US company Palantir, which specialises in the development of AI technologies for surveillance and military purposes, boasts that its software is ‘responsible for most of the targeting in Ukraine’, supporting the Ukrainian military in identifying tanks, artillery, and other targets for attack.

There has also been speculation that ‘loitering munitions’ operating in autonomous modes, such as the Russian-made KYB-BLA system manufactured by ZALA Aero, may have been used on the battlefield in Ukraine. [A loitering munition is a cross between a cruise missile and a drone. Also known as a ‘suicide drone’, it is a weapon that can hover for a long time high above a target area, then attack whatever has been identified as a target either by a human being or by an automated recognition system – ed]

This does not mean that ‘killer robots’ have been used in combat in Ukraine. Public domain knowledge about the autonomous capabilities of weapons such as the KYB-BLA is vague and based on manufacturer’s claims which are almost certainly exaggerated. The weapon can be operated in autonomous mode or under human control, and there is as yet no evidence to indicate whether those used in Ukraine have operated in an autonomous mode.

There have also been claims that loitering munitions operating autonomously may have been used in combat in Libya. These are based on a report to the UN security council by the United Nations expert panel on Libya which stated that a Turkish-manufactured Kargu-2 drone had ‘hunted down and remotely engaged’ Libyan national army forces during the conflict in 2020.

Again, this appears to be based on the manufacturer’s claims about the capability of its systems rather than clear evidence that the munition is able to operate autonomously.

These two examples show that caution is necessary in relation to media reports claiming that lethal autonomous systems have been used in conflict. At the same time, it is also clear that this new generation of weapon systems shows a trend towards decreasing levels of human control.

AI, combined with the use of sophisticated sensors, is allowing increasing levels of autonomy to be assigned to complex weapon systems.

At present a human operator must approve an attack using these weapons – but this requirement for human approval could be removed with technical upgrades to the system.

What can we conclude from the use of loitering munitions and AI-based systems in the Ukraine war and other recent conflicts?

Importantly, their use does not mean that autonomous weapon systems outside human control are now being routinely used in warfare – nor that the use of such systems in conflict is inevitable.

Neither does it mean that the Russians are taking the lead in developing AI-based weapons or that the West needs to develop equivalent weapons of its own to ‘keep up with’ the Russians, Chinese, or any other military rivals.

However, these developments do mean that modern weapon systems are becoming increasingly autonomous in their capabilities, and this poses risks.

Halt the slide

Weapons with some degree of autonomy are being deployed by both parties to the Ukraine war. This creates pressure towards normalising the development of autonomous military systems and AI-based weapons and it brings the development of weapon systems that operate outside human control a step closer.

There is only a limited amount of time available to call a halt to the slide towards the development of ‘killer robots’. Action is needed now.

What form might this action take? The global Campaign to Stop Killer Robots argues that a new international treaty is needed to control autonomous weapon systems. This treaty would ban autonomous systems which target humans or which cannot be effectively controlled by humans, with further controls on other types of automated weapon systems.

A key feature of this treaty would be the principle that a weapon must always be under meaningful human control. There must be a human exercising oversight over the system who understands how the weapon functions and is aware of the consequences of its use in a given situation. That human overseer must also know that they will be held responsible for the consequences of the weapon’s use.

Currently, a majority of nations support negotiation of a legally binding instrument on autonomous weapon systems. This includes countries which the US and UK consider ‘adversaries’, such as China and Iran. There are 10 nations which do not support a ban on killer robots, including the US and Britain. The others are US allies (Australia, Estonia, India, Israel, Japan, Poland and South Korea) apart from one – Russia.