Neither Artificial Nor Intelligent

Blog by Peter Burt
US navy personnel take part in Exercise Unmanned Warrior 2016 in Scotland. They are testing and demonstrating the latest in autonomous naval technologies, in co-operation with British and other forces. Photo: John Williams/US navy

The deadly wars in Ukraine and Gaza are giving us an insight into how battles may be fought in future.  Both wars are acting as trial zones for new military technologies, particularly automated technologies driven by artificial intelligence (AI) to undertake intelligence analysis and targeting. In each case, AI has allowed soldiers to sift rapidly through huge volumes of data - with deadly results.

Artificial intelligence, automated decision making, and autonomous technologies have already become common in everyday life and may offer immense opportunities to dramatically improve society.  Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function.  However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.  The most dangerous applications of AI – and the ones that governments are least interested in controlling – are military applications based around the use of armed force.

There is a lot of hype about AI and its potential uses, and even the phrase 'artificial intelligence' is a form of hype.  Kate Crawford, author of Atlas of AI, argues that AI is neither artificial – it requires vast amounts of resources, fuel, and human input – nor intelligent in any way that matches what humans understand by intelligence.  AI systems fall far short of human intelligence, but have wider problem-solving capabilities than conventional software.  The broader term 'computational methods' is often a better way of describing such systems, which rely on advances in processing power and machine learning techniques to handle huge amounts of data rapidly.

In current AI applications, machines perform a specific task for a specific purpose.  Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can.  Although this is a goal of some AI research programmes, it remains a distant prospect.  Jaron Lanier and E. Glen Weyl have pointed out that "'AI' is best understood as a political and social ideology rather than as a basket of algorithms.  The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity".  It should therefore be no surprise that those driving ahead with the implementation of AI are institutions such as tech sector corporate giants, militaristic governments, and the Chinese Communist Party.

Because of the speed at which AI systems can interpret data and execute commands, they are seen by the world’s military powers as a way to revolutionise warfare and gain an advantage over enemies.  Military applications of AI have begun to enter operational use, and new systems with worrying characteristics are rapidly being rolled out.  Some of the military applications of AI are shown in Box 1.

Box 1: Military applications of artificial intelligence

    Intelligence, surveillance, and reconnaissance.
    Cyber operations.
    Electronic warfare.
    Command and control and decision support, including targeting.
    Drone swarms.
    Autonomous weapon systems (able to operate with limited or no human control).
    Information warfare.

The UK government, like other big military spenders, has made no secret of the fact that it attaches immense importance to the military applications of AI and other emerging technologies and intends to race ahead with their development.  Significant decisions on the future use of military AI are being made, and equipment development programmes and policies are rapidly moving forward, in the absence of an ethical compass and guardrails to mitigate the considerable risks posed by the use of AI systems on the battlefield.

AI systems currently under development by major military states undoubtedly pose threats to lives, human rights, and well-being.  The risks posed by military AI systems can be grouped into three categories.  The first set of risks is ethical and legal, covering questions such as whether robotic systems would be able to comply with the laws of war, and who would be held accountable if things went wrong.  The second category of risks relates to the practicalities of military operations using AI systems and the inherent technical vulnerabilities of such systems.  AI systems are only as good as their training data, and if a data set is unrepresentative of real life, this can have a large impact on the results.  Further problems can arise when humans misunderstand systems, or when systems are so complex that their outputs are unexplainable or unpredictable.  And like all computer networks, military AI systems are vulnerable to attacks from enemies who may attempt to jam, hack, or spoof them.  Finally, AI poses strategic long-term risks to peace and security because it can lower the threshold at which political leaders will resort to using armed force in conflict, and may result in arms racing and proliferation.  The speed at which AI-enabled military force can be executed may lead to rapid uncontrolled escalation with severe consequences.

The use of AI systems in Israel's invasion of Gaza shows the consequences of using such technology without adequate supervision or safeguards.  Israel has reportedly been using 'The Gospel', an AI decision support system which identifies buildings to target, and a system called 'Lavender' which is claimed to identify individuals to target for assassination.  Employed with minimal human oversight, opaque data sources, and a permissive approach to casualties, such systems have undoubtedly contributed to the high civilian death toll resulting from Israeli military action, exploding the myth that high-technology weapons are 'smart' systems which can lead to fewer civilian deaths.

Drones, loitering munitions, and robotic systems have grabbed the headlines in many news stories about the Ukraine war.  Many of these systems can operate with a high degree of autonomy.  For example, they can be programmed to search a defined area and highlight possible targets, such as tanks, to the operator.  In these circumstances they can operate independently of human control.  There has been speculation among military analysts and in the media that loitering munitions operating in autonomous modes may have been used as lethal autonomous weapons – 'killer robots' – on the battlefield in Ukraine.  Such claims should be taken with a pinch of salt, as they are usually based on manufacturers' inflated descriptions of a weapon's capabilities rather than on direct evidence.  However, what cannot be disputed is that we are beginning to see a new generation of weapon systems deployed with a trend towards decreasing levels of human control.  At present a human operator is able to approve an attack using these weapons, but the requirement for human approval can easily be removed with minor technical upgrades to the system.

This poses risks, and means that action is urgently needed to introduce arms control measures on autonomous weapons systems, including a ban on systems which use target profiles that represent people or cannot be meaningfully controlled by humans.  The UN secretary-general has just published a report calling for the conclusion, by 2026, of an international legally binding instrument to prohibit and regulate autonomous weapons systems.  Many states support this aim.  Others, notably the US and Russia, are opposed to such a treaty, and the UK is dragging its feet.  The Campaign to Stop Killer Robots, a coalition of civil society organisations, is pushing states to negotiate such a treaty and has published a model for what it should include, based around a requirement that weapon systems should always be under meaningful human control.  Their website has lots of ideas on actions individuals can take to support the campaign.