ATM: Automated Threat Modelling for Enterprise AI-enabled Assets

Lead Participant: UNIVERSITY OF SHEFFIELD

Abstract

In today's AI era, most companies use AI assets incorporating machine learning and deep learning models. In this context, AI assists enterprises in their decision-making processes. The estimated cost of building and implementing an AI application is **$50k on average**, and AI is reported to contribute up to **$15.7 trillion** to the global economy **by 2030**. This AI revolution has increased dependency on AI infrastructure, especially in sectors such as banking and finance, retail, aviation, autonomous systems, insurance, and robotics. Since AI-enabled technologies are now present in almost every sector, an AI system going rogue could have a catastrophic impact if it is not protected.

However, AI-enabled asset manufacturers mainly focus on designing, training and deploying AI-based solutions without considering security requirements. As a result, attacks on AI systems have become increasingly common in recent years. These attacks can manipulate the decision-making of AI-enabled assets in ways that are imperceptible to humans. The most common attacks on AI/ML are _**Data Poisoning, Model Hijacking, Adversarial, and Transfer Learning Attacks**_.
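
As a concrete illustration of one of these attack classes, the sketch below shows a minimal adversarial (evasion) perturbation using the Fast Gradient Sign Method against a toy PyTorch classifier. The model, data, and epsilon value are illustrative placeholders and are not part of the proposed ATM system.

```python
# Minimal sketch of an adversarial (evasion) attack via FGSM.
# The model, data, and epsilon are placeholders for illustration only.
import torch
import torch.nn as nn

# Toy classifier standing in for an enterprise AI asset.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# A single (input, label) pair; in practice this would be real asset data.
x = torch.randn(1, 20, requires_grad=True)
y = torch.tensor([1])

# Forward and backward pass to obtain the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge the input in the direction that maximally increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# With an untrained toy model the prediction flip is not guaranteed,
# but the mechanics are the same as against a deployed model.
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```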

To address the negative impacts such threats can have on client assets, clients must know whether their devices are vulnerable. In this context, enterprises perform threat modelling to understand the vulnerability of their infrastructure. Manual vulnerability assessment is the most common approach; however, it is neither a practical nor an accurate method for analysing the vulnerabilities of AI-enabled assets, as it cannot extract and understand the underlying AI algorithms and data. Hence, autonomous AI-assisted threat modelling can not only facilitate the design of a comprehensive and accurate threat model but also assist in making an appropriate response. However, to the best of our knowledge, there is no autonomous AI-enabled threat-modelling solution that analyses threats against **AI-enabled assets in great depth**. Moreover, existing vulnerability assessment approaches do not ensure the confidentiality of clients' data.

In this regard, we propose to develop an AI-assisted **Automated Threat Modelling (ATM)** System that will help detect the threats to an AI-enabled asset by generating a threat model, providing countermeasures, and prioritising them to mitigate the discovered threats.
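
To make the intended output concrete, the sketch below shows one possible shape such a generated threat model could take: discovered threats scored by likelihood and impact, with countermeasures prioritised by risk. The data structure, scoring rule, and example threats are illustrative assumptions, not the actual ATM design.

```python
# Hypothetical sketch of automated threat-model output with prioritised
# countermeasures. Field names, scoring rule, and threats are assumptions,
# not the ATM project's actual design.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (catastrophic)
    countermeasures: list[str] = field(default_factory=list)

    @property
    def risk(self) -> int:
        # Simple likelihood x impact risk matrix.
        return self.likelihood * self.impact

threat_model = [
    Threat("Data poisoning of training pipeline", 3, 5,
           ["Validate and version training data", "Outlier filtering"]),
    Threat("Adversarial evasion at inference time", 4, 4,
           ["Adversarial training", "Input sanitisation"]),
    Threat("Model hijacking via exposed endpoint", 2, 4,
           ["Authentication on model API", "Rate limiting"]),
]

# Prioritise countermeasures by descending risk of the threat they mitigate.
for t in sorted(threat_model, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.name}: {', '.join(t.countermeasures)}")
```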

| Lead Participant | Project Cost | Grant Offer |
| --- | --- | --- |
| UNIVERSITY OF SHEFFIELD | £59,735 | £59,735 |
