T-CHAIN - Trusted Computationally Heterogeneous AI Networks

Lead Participant: NQUIRINGMINDS LIMITED

Abstract

**The Challenge**

What is practically needed to ensure that an AI system is trustworthy?

At a minimum, the following are needed:

* the algorithmic design must be trusted
* the algorithm implementation must be trusted
* the training data set must be trusted
* the validation data set must be trusted
* the query data must be trusted
* the physical machines on which the system is trained must be trusted
* the physical machines on which the system is run must be trusted

Additionally, trust is:

* subjective: one organisation may trust the system, another may not; trust is relative to the security and fairness requirements of the user
* transitive: a break in trust in one part of the system impacts all dependent systems
* complex: trust itself is multifaceted, and depending on context is an emergent property of security, usability, fairness, bias etc.
* dynamic: trust in a system can change at any time, it is not static
* reflective: a trustworthy system is humble, possessing self-knowledge and a qualified confidence in its own outputs
* transparent: a trustworthy system can explain its results

**The solution**

T-CHAIN provides a method for asserting and verifying the security and trustworthiness of data and AI components in a fully distributed, interoperable and secure way. It covers the full lifecycle (data collection, processing, modelling, evaluation, AI deployment, monitoring and audit).

It builds on state-of-the-art W3C best practice (Verifiable Credentials, VC), which provides a technical method for making trust assertions using a cryptographically secure, community-based web-of-trust model. This allows logical statements or claims to be combined from many sources (be they external auditors, AI ethics teams, data scientists, third party component providers, external data sources, etc.) and assembled into system-wide representations of trust.
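As an illustration, combining claims from many sources might be sketched as follows. This is a minimal, hypothetical example: the envelope fields follow the W3C Verifiable Credentials data model, but the issuer identifiers, subject identifiers and claim names are invented, and a real credential would also carry a cryptographic proof section, omitted here.

```python
# Sketch of VC-style trust assertions about an AI component, and how
# claims from independent issuers can be assembled into a system-wide
# view of trust. Identifiers and claim names are illustrative only.

def make_trust_assertion(issuer, subject, claim):
    """Wrap a single trust claim in a VC-shaped envelope (proof omitted)."""
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "TrustAssertion"],
        "issuer": issuer,
        "credentialSubject": {"id": subject, **claim},
        # "proof": {...}  # digital signature would go here in a real VC
    }

# Two independent parties (an external auditor and a data science team)
# make claims about the same training data set ...
audit = make_trust_assertion(
    "did:example:auditor", "urn:example:training-set-1",
    {"biasAudited": True})
provenance = make_trust_assertion(
    "did:example:data-team", "urn:example:training-set-1",
    {"provenanceVerified": True})

# ... which can be merged, per subject, into one representation of trust.
def combine(assertions):
    view = {}
    for a in assertions:
        subject = a["credentialSubject"]["id"]
        claims = {k: v for k, v in a["credentialSubject"].items()
                  if k != "id"}
        view.setdefault(subject, {}).update(claims)
    return view

trust_view = combine([audit, provenance])
```

In this sketch each assertion stands alone and is independently verifiable; the combination step is what yields the system-wide representation of trust described above.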

T-CHAIN can be provided as a web service or as downloadable, licensable software. Any provider of data, AI processing or computational hosting can make its own assertions of trust, and/or reinforce assertions made by third parties.

| Lead Participant | Project Cost | Grant Offer |
| --- | --- | --- |
| NQUIRINGMINDS LIMITED | £49,722 | £49,722 |
