The Consequences of Implementing AI in a Military and Defence Security Environment

Lead Research Organisation: Royal Holloway University of London
Department Name: Information Security

Abstract

This research aims to answer the following research questions on AI as a weapon: how is AI developed, deployed, implemented and monitored in the military (and/or national security) environment? And what controls are required to manage the responsible and/or de-escalatory use of 'weaponised AI' capabilities?

To address these questions, the researcher is pursuing a grounded-theory approach, developing theory from strategy and policy documents as well as data gathered in the field through practitioner interviews, fieldwork and ethnographic observation. These methods involve attending a range of military and technical events on AI and defence, as well as a two-month placement at the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn. By drawing insights on how AI technology is being implemented in practice in this environment, the researcher hopes to accurately map out security threats and analyse the relative success of mitigations for these threats (for example, through regional market regulation, military innovation policy, and the development of national and international norms). This research lies within EPSRC's remit alongside the goals of the Information Security PhD programme, as it uses interdisciplinary methods to explain key technical security concerns in the national security environment. Placing emerging technology in the context of economic, social and political developments, this research will further the lively debate around the deployment of AI-enabled technology in controversial environments, and how unforeseen negative consequences may be avoided.

Planned Impact

The most significant impact of the renewal of Royal Holloway's CDT in Cyber Security will be the production of at least 30 further PhD-level graduates. In view of the strong industry involvement in both the taught and research elements of the programme, CDT graduates are "industry-ready": through industry placements, they have exposure to real-world cyber security problems and working environments; because of the breadth of our taught programme, they gain exposure to cyber security in all its forms; through involvement of our industrial partners at all stages of the programme, the students are regularly exposed to the language and culture of industry. At the same time, they will continue to benefit from generic skills training, equipping them with a broad set of skills that will be of use in their subsequent workplaces (whether in academia, industry or government). They will also engage in PhD-level research projects that will lead to them developing deep topic-specific knowledge as well as general analytical skills.

One of the longer-term impacts of CDT research, expressed directly through research outputs, is to provide mechanisms that help to enhance confidence and trust in the on-line society for ordinary citizens, leading in turn to quality of life enhancement. CDT research has the potential to directly impact the security of deployed systems, for example helping to make the Internet a more secure place to do business. Moreover, the work on the socio-technical dimensions of security and privacy also gives us the means to influence government policy to the betterment of society at large. Through the training component of the CDT, and subsequent engagement with industry, our PhD students are exposed to the widest set of cyber security issues and forced to think beyond the technical boundaries of their research. In this way, our CDT is training a generation of cyber security researchers who are equipped - philosophically as well as technically - to cope with whatever cyber security threats the future may bring. The programme equips students with skills that will enable them to understand, represent and solve complex engineering questions, skills that will have an impact in UK industry and academia long beyond the lifetime of the CDT.

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/P009301/1                                      01/10/2016   31/12/2026
1942837             Studentship    EP/P009301/1   01/10/2017   18/05/2022   Amy Ertan
 
Description What were the most significant achievements from the award?
In progress:
- Uncovering the fragmented nature of military AI innovation, and the security gaps that arise from this (slow procurement, ill-designed technology).

To what extent were the award objectives met? If you can, briefly explain why any key objectives were not met.
- In progress: currently mapping the threat landscape and security threats arising from military AI. On track to develop an appropriate framework/tool that may help mitigate these threats.
- Successfully bringing communities together at the national level (DCMS, DSTL) and the regional level (fieldwork with the NATO Cooperative Cyber Defence Centre of Excellence).
How might the findings be taken forward and by whom?
- I have been in conversation with multiple organisations and research think-tanks about taking these findings forward as part of initiatives looking at 21st-century conflict and security, military procurement, and responsible AI framework development.
Exploitation Route - Drawing together communities to create a unified discussion on 'weaponised AI' in the defence environment, and the security implications of this innovation.

- Best practice frameworks may be utilised by defence actors including the military, policy-makers and private defence organisations, to responsibly design and deploy ML technology in a way that does not amplify security vulnerabilities.
Sectors Aerospace, Defence and Marine; Government, Democracy and Justice; Security and Diplomacy

 
Description
- I have consulted on a think-tank report on 'AI and National Security', led by the Royal United Services Institute and commissioned by GCHQ (due for release in late Spring 2020).
- I have spoken at industry events at several large defence companies, as well as in military (non-partisan, non-lobby) forums such as AFCEA Europe.
- I am contributing to a NATO CCDCOE panel on AI and cyber-warfare in April 2020, which will have a diplomatic, industry and academic audience.
- I am in the process of creating an Offensive Cyber working group with colleagues, aiming to bring together policy-makers and decision-makers (across the media and government) to discuss issues of emerging technology and security.
First Year Of Impact 2019
Sector Aerospace, Defence and Marine; Government, Democracy and Justice
Impact Types Policy & public services