Discovery of explainable Digital Twins in Adversarial Environments

Lead Research Organisation: Imperial College London
Department Name: Design Engineering (Dyson School)

Abstract

Digital Twins are machine-learning models that span the whole life cycle of products, services, and interventions, and are used to explain, predict, and optimise the behaviour of cyber-physical IoT systems. Many stakeholders are involved in the creation, production, and operation of cyber-physical systems and their Digital Twins. Particularly complex Systems of Systems, such as factories and ports, are themselves composed of other Digital Twins, further extending the group of stakeholders involved. In such multi-party, multi-twin environments, it is essential to establish and guarantee trust, fairness, understanding, and cooperation between all parties. This requires a holistic framework that combines novel ongoing research efforts in the fields of distributed trust, ML model fairness, and explainability of causality. The research questions to address are:

(i) understand and ensure trust in the interaction of multiple stakeholders with Digital Twins;
(ii) understand and ensure trust in the interaction between distributed twins;
(iii) identify and learn the causality of physical processes within twins from operational data, and explain and predict its effect on their interaction;
(iv) guarantee fairness of the machine-learning models in correctly modelling and representing rare events in the systems;
(v) develop platforms where new notions of ownership for cyber-physical assets can be developed;
(vi) develop mechanisms to support disruptive and ethical business models.

To address these research questions, it will be essential to understand the context and interaction of Digital Twins in potentially adversarial environments characterised by human decision makers. IBM Research is working on a neuro-symbolic AI approach that combines semantic knowledge-graph reasoning and graph neural networks to automatically learn Digital Twins of cyber-physical systems. It is based on a semantic knowledge graph that describes the available system data and component structure. This is extended with physical reasoning, which infers into the knowledge graph the causal relationships that represent the underlying physical laws of a CPS. Based on the resulting causal graph, the approach can learn the real behaviour of the CPS using graph convolutional neural networks that are naturally constrained by the encoded physics. While this approach can represent and learn the causal behaviour of systems in Digital Twins and their interactions, it does not address the research questions (i)-(iv) outlined above, as neither trust, fairness, nor the discovery of causality are covered. Imperial College is developing AI-based assets that can be used to establish trust in peer-to-peer settings, and has been developing applications that explain those tools. Application domains include general IoT systems, Industry 4.0 applications, automotive, and applications in the sharing and circular economy. Expanding this work in the context of Digital Twins would open many research directions in alignment with the UK's digital transformation strategy.
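The pipeline sketched above (a causal graph derived from knowledge-graph reasoning, then a graph convolutional network constrained by that graph) can be illustrated in miniature. The component names, adjacency matrix, and `causal_graph_conv` helper below are illustrative assumptions, not IBM's actual implementation: they show only the general idea of masking message passing by a causal adjacency so that learned behaviour cannot flow against the encoded physics.

```python
import numpy as np

# Hypothetical mini-CPS: three components whose causal structure
# (pump drives flow in the pipe, the pipe fills the tank) would, in the
# described approach, be inferred by physical reasoning over a semantic
# knowledge graph. Here the causal graph is simply written out by hand.
components = ["pump", "pipe", "tank"]
A = np.array([
    [0., 1., 0.],   # pump -> pipe
    [0., 0., 1.],   # pipe -> tank
    [0., 0., 0.],
])

def causal_graph_conv(X, A, W):
    """One graph-convolution step masked by the causal adjacency:
    each node aggregates features only from its causal parents (and
    itself), so information cannot propagate against the encoded physics."""
    A_hat = A + np.eye(A.shape[0])            # self-loops keep each node's own state
    norm = A_hat.sum(axis=0, keepdims=True)   # 1 + number of causal parents per node
    msg = (A_hat / norm).T @ X                # average over causal parents
    return np.maximum(msg @ W, 0.0)           # ReLU non-linearity

rng = np.random.default_rng(0)
X = rng.random((3, 4))          # simulated sensor features, one row per component
W = rng.random((4, 2))          # weights (a random stand-in for trained parameters)
H = causal_graph_conv(X, A, W)  # shape (3, 2)
```

Because the pump has no causal parents in this toy graph, its output row depends only on its own features; perturbing the tank's sensor readings leaves the pump's representation unchanged, which is exactly the constraint the encoded physics imposes.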

Studentship Projects

Project Reference: EP/X524773/1 (Start: 01/10/2022, End: 30/09/2027)
Studentship 2792580, related to EP/X524773/1 (Start: 16/01/2023, End: 16/07/2026)