Continuous Metalearning for AI lifecycle governance
Lead Participant:
MIND FOUNDRY LIMITED
Abstract
Mind Foundry is applying for an Innovate UK SMART Grant to fund a critical area of AI innovation: continuous metalearning.
Continuous learning is an umbrella term for AI models that continue to learn and adapt after the point of deployment, not only about their task, but also about their own learning process, using data that they receive periodically (through a continual process) or continuously.
Mind Foundry asserts that in order to measure, manage, explain and govern a continuous learning AI system, you will need to apply a metalearning approach. This will likely require creating passports and containers for both data and models involved in a particular AI system, capturing necessary data and provenance history on the model's usage, performance and calibration over time.
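As an illustration of what such a model passport might capture, the sketch below is hypothetical: the names `ModelPassport`, `PassportEvent`, and `record` are not part of any Mind Foundry product, but show the kind of usage, performance and calibration history the abstract describes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class PassportEvent:
    """One provenance record in a model's lifecycle history."""
    timestamp: datetime
    kind: str  # e.g. "trained", "deployed", "scored", "recalibrated"
    details: dict[str, Any] = field(default_factory=dict)

@dataclass
class ModelPassport:
    """Illustrative container for a model's intended purpose and its
    accumulated usage/performance history over time."""
    model_id: str
    intended_purpose: str
    history: list[PassportEvent] = field(default_factory=list)

    def record(self, kind: str, **details: Any) -> None:
        """Append a timestamped provenance event to the passport."""
        self.history.append(
            PassportEvent(datetime.now(timezone.utc), kind, details)
        )

# Hypothetical usage: the passport accumulates an auditable trail.
passport = ModelPassport("credit-risk-v3", "retail credit scoring")
passport.record("deployed", environment="production")
passport.record("scored", accuracy=0.91, calibration_error=0.04)
```

A real passport would additionally need tamper-evident storage and links to the data containers involved, which this minimal sketch omits.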
An evolving system will require proactive safeguards and specifications in place to ensure it complies with upcoming regulatory requirements, such as the new EU proposal for AI regulation. To that end, full disclosure and transparency about model and system evolution throughout its lifecycle, as well as accurate monitoring and management of changes in the model's behaviour over time, will become a prerequisite for AI system procurement in both the public and private sectors.
The initial goals of Mind Foundry's approach are to continuously detect key warning signals of an AI system, such as purpose drift (a deployed model, and therefore its data, being used in a way that was not intended), model overfitting (through overuse of a single training data set), data governance violations (the right to deletion), interpretability gaps (the right to an explanation), and security risks (model-inversion attacks). Ultimately this approach can be extended so that AI systems action appropriate mitigations themselves, raising alerts for and collaborating directly with human users.
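One simple way to detect a signal like purpose or input drift is to compare a reference window of a model input against a live window. The function below is an illustrative sketch, not Mind Foundry's method: it uses a standardised mean-shift as a crude drift score, with a hypothetical alert threshold.

```python
import statistics

def drift_score(reference: list[float], live: list[float]) -> float:
    """Standardised mean-shift between a reference window and a live window.

    A simple proxy for input drift: a large score suggests the deployed
    model is seeing data unlike what it was trained on, one symptom of
    the purpose drift described above.
    """
    pooled_sd = statistics.pstdev(reference + live)
    if pooled_sd == 0:
        return 0.0
    return abs(statistics.fmean(live) - statistics.fmean(reference)) / pooled_sd

# Hypothetical monitoring data for one model input feature.
reference = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0]
live_ok = [1.0, 0.98, 1.02, 1.01, 0.97, 1.03]
live_shifted = [2.0, 2.1, 1.9, 2.05, 1.95, 2.02]

THRESHOLD = 1.0  # illustrative alert threshold, tuned per deployment
```

A production system would track many such signals per feature and per model, feed the scores into the passport history, and escalate to a human when a threshold is crossed.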
Lead Participant | Project Cost | Grant Offer
---|---|---
MIND FOUNDRY LIMITED | £499,692 | £349,784

Participant |
---|
INNOVATE UK |

People | ORCID iD
---|---
Harriet Bensted (Project Manager) |