What kind of human control is necessary for AI applications within national security and defence? Digital ethics and security studies

Lead Research Organisation: University of Oxford
Department Name: Oxford Internet Institute

Abstract

Artificial intelligence-enabled systems vary in size, hardware and level of autonomy. The level of autonomy is typically classified according to the expected 'meaningful human control' (MHC), a metric reflecting the extent to which humans are required to intervene in a system's interactions with the real world. While the environment and scenarios of use may be relatively predictable in the civil domain, several issues arise from the development and deployment of AI-enabled systems in the security and defence domain. AI-enabled systems suffer from a range of technical limitations, including lack of transparency, limited explicability and susceptibility to adversarial attacks, which create ethical and legal issues, especially under International Humanitarian Law. An initial review of the literature on human control of AI-enabled systems in the security and defence domain, which is heavily focused on Lethal Autonomous Weapons Systems (LAWS), makes clear that the notion of MHC dominates the conversation and that existing approaches favour a one-size-fits-all solution to ensuring it. To avoid both "overly restrictive" and "overly permissive" approaches to MHC, this research proposal seeks to establish the groundwork for a modular control framework that would:

1. Base itself on the "unifying grounds" provided by ethical and legal principles. For example, the tenets of International Humanitarian Law, such as military necessity, proportionality, and the principles of distinction and precaution, serve as a good starting point.

2. Formulate a set of rules to facilitate the operationalisation of such principles across different tasks and environments. This would include rules to assess the effect of various contexts on the level of autonomy and MHC required.

3. Present a war-gaming model for the evaluation, verification and validation of such rules.
This would also allow new rules to emerge through the discovery of potential incompatibilities across certain contexts. Building on the work of the International Panel on Regulation of Autonomous Weapons, this research would analyse MHC across three levels of intervention: control by design, control in use, and training. While each level of intervention would be addressed separately, they would all support one another. Breaking down MHC in this way would allow me to identify context-agnostic control components and establish rules for modifying control components according to the context of deployment. The framework would then establish a set of "if-then" rules that would change the level of autonomy given to AI-enabled systems and adapt the MHC components accordingly.
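To make the "if-then" idea concrete, the sketch below shows one possible shape such a modular rule set could take. All names, contexts, autonomy levels and thresholds here are hypothetical illustrations invented for this sketch; they are not drawn from the proposal or from any real framework.

```python
from dataclasses import dataclass

# Hypothetical deployment context. The fields are illustrative only:
# a real framework would derive them from IHL-grounded assessments.
@dataclass
class Context:
    environment: str          # e.g. "cluttered_urban", "open_sea"
    civilian_presence: bool   # whether protected persons may be present
    comms_degraded: bool      # whether a human operator can be reached

# Each rule is an (if-condition, then-consequence) pair: the condition
# inspects the deployment context; the consequence sets the autonomy level
# and which MHC components (control by design / control in use / training)
# must be active. Rules are checked in order; the last is a catch-all.
RULES = [
    (lambda c: c.civilian_presence,
     {"autonomy": "human_in_the_loop",
      "mhc": ["control_by_design", "control_in_use", "training"]}),
    (lambda c: c.comms_degraded and not c.civilian_presence,
     {"autonomy": "supervised",
      "mhc": ["control_by_design", "training"]}),
    (lambda c: True,
     {"autonomy": "supervised",
      "mhc": ["control_by_design", "control_in_use"]}),
]

def required_control(context: Context) -> dict:
    """Return the consequence of the first rule whose condition matches."""
    for condition, consequence in RULES:
        if condition(context):
            return consequence
    raise ValueError("no rule matched")  # unreachable: last rule is a catch-all

urban = Context(environment="cluttered_urban",
                civilian_presence=True, comms_degraded=False)
print(required_control(urban)["autonomy"])
```

In this toy model, the presence of civilians tightens human control regardless of other factors, illustrating how context-agnostic components (the catch-all baseline) and context-sensitive modifications can coexist in one rule set.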

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
ES/P000649/1                                      01/10/2017   30/09/2027
2593677             Studentship    ES/P000649/1   01/10/2021   30/09/2024   Andreas Tsamados