Next generation of Privacy and Security for AI Systems

Lead Participant: MIND FOUNDRY LIMITED

Abstract

The application of Artificial Intelligence (AI) models across all parts of the economy is expanding exponentially. The technology and infrastructure to gather, process and draw inferences from vast quantities of data are enabling governments, companies and individuals to augment their existing decision-making processes with machine learning models, opening the door to increasingly automated pipelines and solutions.

However, this exponential growth has also highlighted the need for regulation, with many governments and regulatory bodies scrambling to assess the new risks that AI models introduce to the security and privacy of individual, corporate and governmental data.

At Mind Foundry, we provide solutions where humans and AI work together to solve the world's most important problems. We work on high-stakes applications of AI, where the correct and ethical use of data, algorithms and models has a direct impact on human outcomes, and where decisions have the potential for population-level impact.

In doing so, we are uniquely invested in ensuring that we have a robust, transparent and ethical framework in which to build, test and deploy AI models, confident that security and privacy are deeply embedded at every stage of the AI lifecycle rather than treated as an afterthought.

This project seeks to build a new standard for AI security: self-monitoring AI. Such a system must be capable of proactively taking or suggesting actions relevant to data security and privacy at each stage of the AI model lifecycle, from data gathering to production deployment. It must surface weaknesses in a given pipeline, protect deployed models from vulnerabilities and attacks, and correctly identify any behaviours that may compromise the privacy or security of the model in the future.

This functionality does not exist today. Existing solutions in this space take a narrow view of individual parts of the AI lifecycle, largely focussed on data security and/or model privacy, but this is not sufficient to safeguard the application of AI and ensure compliance. Nor do they account for the future degradation of a model as it is queried. We believe that the only sustainable solution is a set of technologies that not only assure the compliance of data and model pipelines at the point of deployment, but also enable a model to self-monitor, self-correct and report risks as they are identified.
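To make this concrete, the sketch below shows one possible shape for such self-monitoring at the inference stage: a wrapper that watches its own query stream for bursts suggestive of model-extraction attempts and for drift in the input distribution, recording alerts as risks are identified. This is a minimal illustrative sketch in Python; the class name, thresholds and heuristics are our own assumptions, not the project's actual design.

```python
import time
from collections import deque
from statistics import fmean, pstdev

class SelfMonitoringModel:
    """Wraps an arbitrary prediction function and monitors its own query
    stream, recording alerts for patterns that may threaten model privacy
    or security. All names and thresholds here are illustrative."""

    def __init__(self, predict_fn, rate_limit=100, window=1000, drift_z=3.0):
        self.predict_fn = predict_fn        # the underlying model
        self.rate_limit = rate_limit        # queries/minute before flagging
        self.drift_z = drift_z              # z-score threshold for drift alerts
        self.timestamps = deque(maxlen=10_000)
        self.recent = deque(maxlen=window)  # scalar summaries of recent inputs
        self.alerts = []                    # (timestamp, message) pairs

    def _check_rate(self, now):
        # A burst of queries can indicate a model-extraction attempt.
        last_minute = sum(1 for t in self.timestamps if now - t < 60)
        if last_minute > self.rate_limit:
            self.alerts.append((now, f"query rate {last_minute}/min exceeds limit"))

    def _check_drift(self, summary, now):
        # A crude drift check: compare the new input's summary statistic
        # against the distribution of recent ones.
        if len(self.recent) < 30:
            return  # not enough history for a stable baseline
        mu, sigma = fmean(self.recent), pstdev(self.recent)
        if sigma > 0:
            z = abs(summary - mu) / sigma
            if z > self.drift_z:
                self.alerts.append((now, f"possible input drift, z-score {z:.1f}"))

    def predict(self, x):
        now = time.time()
        self.timestamps.append(now)
        self._check_rate(now)
        summary = fmean(x)  # collapse the input to one scalar for cheap checks
        self._check_drift(summary, now)
        self.recent.append(summary)
        return self.predict_fn(x)
```

In use, the wrapper is transparent to callers while accumulating a risk log that could feed the kind of self-correction and reporting described above:

```python
model = SelfMonitoringModel(predict_fn=lambda x: sum(x) > 1.0)
print(model.predict([0.2, 0.5, 0.1]))  # False; behaves like the wrapped model
print(model.alerts)                    # empty until a risky pattern is flagged
```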

Lead Participant: MIND FOUNDRY LIMITED
Project Cost: £490,069
Grant Offer: £343,048

Participant: INNOVATE UK
