TAIBOM - Trusted AI Bill of Materials

Lead Participant: NQUIRINGMINDS LIMITED

Abstract

TAIBOM (Trusted AI Bill of Materials) addresses two fundamental challenges that impact the development and deployment of trustworthy AI systems.

* **Versioning**: How do we refer to an AI system in a stable way? How do we produce an AI inventory of dependent components? How can we use these references to make statements about a system's trustworthiness or its legal standing? Fundamentally, when we make a claim of trustworthiness, how can we be sure what we are talking about, and how can we be sure its behaviour has not changed? (One possible approach is sketched after this list.)
* **Attestations**: How do we make attestations of trustworthiness about an AI system? Whether these claims concern bias or security, right through to strong legal and contractual assertions, how do we make them in an interoperable way? How can we assemble the claims from the dependent parts (compositionality)? How do we reason about or validate these claims, factoring in context of use and subjectivity?

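To make the versioning problem concrete, the following is a minimal sketch of one possible approach: content-addressed identifiers derived from a system's artefacts. The functions and artefact layout are illustrative assumptions, not part of any TAIBOM specification.

```python
# A minimal sketch, assuming the AI system is decomposed into file
# artefacts (weights, training data, code). All names here are
# illustrative, not part of any TAIBOM specification.
import hashlib
import json
from pathlib import Path

def artefact_digest(path: Path) -> str:
    """Hash one artefact (e.g. a weights file) to get a stable reference."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

def system_identifier(artefacts: dict[str, Path]) -> str:
    """Derive one stable identifier for the whole system from the digests
    of its dependent components (compositionality)."""
    manifest = {name: artefact_digest(path) for name, path in artefacts.items()}
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(canonical).hexdigest()

# Any change to weights, data, or code changes the identifier, so a trust
# claim bound to this identifier cannot silently refer to changed behaviour.
```
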
An AI system is essentially a highly complex software system: it inherits all the complexity of software system management. But the problem is made worse by the fact that an AI system's behaviour is further determined by up to a trillion parameters (e.g. GPT-4).

The state of the art for "trustworthy software" development is the SBOM (Software Bill of Materials), recently enacted into US and EU law. SBOMs provide the tools to describe the complex dependencies of software components, but they were not designed for AI. TAIBOM builds on this work; CISA itself has stated that this is essential work to progress. It will take current industry best practice (CycloneDX/SPDX) and adapt and extend it to make it fit for purpose for describing the full complexity of an AI system, explicitly manifesting the dependencies on training data and the training process.
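
As an illustration, here is a hedged sketch of what such an extended BOM entry might look like, expressed as a Python dict for consistency with the other sketches in this abstract. CycloneDX 1.5 introduced the "machine-learning-model" and "data" component types used below; every name, version, and ref is a placeholder rather than real TAIBOM output.

```python
# A hedged illustration of how a CycloneDX-style BOM might manifest an AI
# system's dependency on its training data. All names, versions, and refs
# are placeholders, not real TAIBOM output.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "bom-ref": "model-1",
            "name": "example-classifier",
            "version": "1.0.0",
        },
        {
            "type": "data",
            "bom-ref": "data-1",
            "name": "example-training-set",
            "version": "2024-01",
        },
    ],
    # The model's behaviour depends on the data it was trained on, so the
    # dependency graph links the model component to the data component.
    "dependencies": [
        {"ref": "model-1", "dependsOn": ["data-1"]},
    ],
}
```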

TAIBOM creates a standardised ecosystem for describing the nuanced composition of AI systems (versioning) and for making contextualised but precise claims about the trustworthiness (attestations) of the system and its components.
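
To show what such an attestation could look like, here is a hypothetical sketch of a trustworthiness claim in the shape of a W3C Verifiable Credential. The claim vocabulary, issuer identifier, and subject binding are invented for illustration; only the outer envelope follows the W3C Verifiable Credentials data model.

```python
# A hypothetical attestation in W3C Verifiable Credential form. The claim
# fields ("claim", "result", "contextOfUse") and the issuer DID are
# invented for illustration.
attestation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AITrustAttestation"],
    "issuer": "did:example:auditor",
    "issuanceDate": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        # Bind the claim to the stable, content-addressed identifier of
        # the system (see the versioning sketch above), so the attestation
        # no longer applies if any component changes.
        "id": "urn:example:sha256:0f3a...",  # placeholder digest
        "claim": "bias-audit",
        "result": "passed",
        "contextOfUse": "credit-scoring",
    },
    # A deployed credential would also carry a cryptographic "proof"
    # block signed by the issuer; omitted here for brevity.
}
```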

Our approach is dual-strand:

* Develop the commercial tools for managing the AI system lifecycle.
* Develop and refine the interoperable, international standards that ensure this technology is created in a broad ecosystem.

This work builds on the existing state of the art in the standards space (SBOM and W3C Verifiable Credentials) and applies it to the complex AI system problem. The project is led by a consortium of security and AI experts.

Lead Participant

| Participant | Project Cost | Grant Offer |
| --- | --- | --- |
| NQUIRINGMINDS LIMITED | £1,057,520 | £740,263 |

Participant

| Participant | Project Cost | Grant Offer |
| --- | --- | --- |
| UNIVERSITY OF OXFORD | £67,297 | £67,297 |
| COPPER HORSE LIMITED | £222,173 | £155,521 |
| BSI STANDARDS LIMITED | | |
| TECHWORKSHUB LTD. | £159,080 | £159,080 |
| BAE SYSTEMS SURFACE SHIPS LIMITED | | |
