AI: Implications for Peace and Security

Blog by Tim Street
Photo: Student/Young Pugwash Conference

‘Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity.’

As highlighted in the Bletchley Declaration – the outcome of the AI Safety Summit, hosted by the UK government in November 2023 and held at the home of the Second World War codebreakers – AI, peace and security are inextricably linked. But in what ways? For example, how can this technology help us create a more peaceful world? What are the military and security implications of AI? How does AI interact with nuclear weapons? And how should this technology be regulated?

These were the questions asked of participants at the Student/Young Pugwash ‘Artificial Intelligence, Peace and Security Conference’ on 27 January, at King’s College London. The conference’s keynote panel boasted four expert speakers who provided an overview of the political, legal, ethical and technical issues AI raises for society.

Speakers included: Professor Elena Simperl (King’s College London), Rachel Coldicutt (Careful Industries), Dr Matt Mahmoudi (Amnesty International) and Dr Peter Burt (Drone Wars UK). This panel was followed by a range of presentations from students and young professionals from the UK and several other countries.

For some of the participants, the focus was on how AI could actively promote peace. Sarah Weiler examined how AI has supported many of the UN’s peacebuilding efforts, from satellite imagery in humanitarian situations to natural language processing in diplomatic negotiations. However, the majority of presentations focused on the threat this technology poses to peace.

For Dekai Liu, the domestic surveillance threat is a primary concern. Drawing on the thought of French philosopher Michel Foucault, Liu argued that Large Language Models (LLMs) have heightened the risk of Orwell’s ‘Big Brother’ becoming a reality, as illustrated by the use of instant messaging systems to bolster a state’s surveillance capabilities. Jan Quosdorf and Vincent Tadday’s presentation took a more international approach. Using US military contractor Anduril as a case study, they argued that the adoption of AI systems in conflict has far outpaced Europe’s ability to regulate the technology, and that this void must be filled immediately.

The interplay between AI and nuclear weapons was at the core of many of the submissions. While AI can improve remote sensing for arms control and treaty verification, Jingjie He made the case that it also threatens to undermine these very same systems. ‘Counter-AI tactics’ come in various forms, He argued, from poisoning the training data to inferring the architecture of LLMs in an effort to steal the models.

Economics PhD researcher Joel Christoph examined the possibilities for AI to reduce nuclear risk, from improving verification mechanisms to enabling better diplomacy through predictive analytics. Also recommending the uptake of AI in this field – so long as it is accompanied by transparency measures – Syeda Saba Batool’s ‘AI for Peaceful Use of Nuclear Energy’ explored how AI’s ability to analyse vast swathes of diverse data can support the work of nuclear inspectors.

Beyond the nuclear realm, much of the work revolved around regulation of AI writ large. For some, successful regulation requires a look to the past.

Veerle Moyson argued that lessons for regulating Autonomous Weapon Systems (AWS) can be found in the nuclear regime, particularly if principles such as humanitarianism, equality of states, and long-term sustainability are drawn on. Arian Ng suggested establishing a committee of powerful nation-states with shared ethical standards and transparent communication. Multilateralism was a key theme in the presentations, with Mahmoud Javadi emphasising its importance. Javadi concluded with three bold recommendations for global military AI governance: ‘legal empathy’, an ‘ambitious-cum-humble mindset’ and ‘differentiation’.

Writing from a more legalistic standpoint, PhD student Marco Sanchi homed in on the ‘crisis of causality’ that would arise if an AI system committed a war crime. Sanchi’s answer: allocating liability to those responsible. For example, Sanchi proposed the creation of an International Tribunal for Autonomous Weapons and the employment of ‘Administrative / Military Liability’.

Offering a more cautionary view, Océane Van Geluwe warned against overstating the capabilities of these systems. Despite the risks posed by AI, it is crucial to separate myth from reality, Van Geluwe claimed, and to focus on establishing agile regulatory frameworks that can keep up with the pace of change.

We hope the event will encourage further academic work on the connections between AI, peace and security, with attention to both the opportunities and risks that this technology presents.

Topics: Technology, AI