Copacetic Smartening of Small Data for HLC

Lead Research Organisation: Brunel University London
Department Name: Computer Science

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications

 
Description One of the major methodological contributions of this research project is the design and development of a new expert knowledge elicitation methodology to support activity in volatile, uncertain, complex and ambiguous environments. Underpinned by a theoretical review of the human cognition literature, the methodology combines domain document analysis, Kelly's Repertory Grid (RG), Critical Decision Method (CDM) style interviews and a simulated cognitive walkthrough. This allows us to elicit personal constructs in conjunction with domain and organisational requisites (CLIK graphs), providing a framework within which intelligent reflective agents can be developed. These agents can be used to support the training and development of less experienced users. In addition, we have developed an overall counterfactual reasoning framework to support a human-machine teaming reasoning task for both expert and inexperienced users.

The second important contribution is the development of the methodology for the mathematical expression of Critical Learning and Inference Kernel (CLIK) graphs, i.e. the generation of the joint and counterfactual post-intervention equations used to support causal reasoning and assess the causal effects of an intervention, which might be characterised by surprise. In addition, Shackle-like membership functions for modelling surprise have been developed theoretically, although these have not yet been tested. This development is complementary to, and draws on the outputs of, the abovementioned expert knowledge elicitation methodology required for developing the Human-Like Computing eXplainable AI (XAI) agent models.
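For illustration only, a standard Pearl-style rendering of such joint and post-intervention equations over a causal graph, together with one common possibility-theoretic reading of Shackle's potential surprise, is sketched below; this is a generic formulation and not necessarily the exact one adopted for CLIK graphs in the project.

% Joint distribution factorised over a causal graph with parent sets pa(X_i)
P(x_1,\dots,x_n) = \prod_{i=1}^{n} P\big(x_i \mid \mathrm{pa}(x_i)\big)

% Post-intervention (truncated) factorisation after the intervention do(X_j = x_j')
P\big(x_1,\dots,x_n \mid \mathrm{do}(X_j = x_j')\big) = \prod_{i \neq j} P\big(x_i \mid \mathrm{pa}(x_i)\big)\,\Big|_{X_j = x_j'}

% One possibility-theoretic reading of potential surprise for an event A,
% with \pi an assumed possibility distribution over outcomes (a generic form, not the project's)
s(A) = 1 - \Pi(A), \qquad \Pi(A) = \max_{x \in A} \pi(x)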

The third important contribution is the development of the Counterfactual Reasoning Tool, which allows the exploration and assessment of 'what-if' scenarios. Specifically, we have developed an overall counterfactual reasoning framework and theory to support human-machine teaming in reasoning activities for both expert and inexperienced users. This is part of developing a human-machine teaming capability to advance XAI-supported decision making in the area of Human-Like Computing.
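As a minimal sketch of how such 'what-if' questions can be answered computationally, the toy example below follows the standard abduction-action-prediction recipe over a hypothetical structural causal model; the variable names and equations are illustrative assumptions and do not reproduce the project's CLIK-graph models or the tool itself.

# Minimal "what-if" sketch over a toy structural causal model (SCM).
# Hypothetical variables and equations for illustration only.

from dataclasses import dataclass

@dataclass
class Noise:
    u_compromise: int  # exogenous threat factor behind compromise (0/1)
    u_downtime: int    # exogenous factor behind unrelated downtime (0/1)

def compromise(patched: int, u: Noise) -> int:
    # An unpatched host is compromised whenever the exogenous threat is active.
    return u.u_compromise if patched == 0 else 0

def downtime(comp: int, u: Noise) -> int:
    # Downtime occurs if the host is compromised or fails for unrelated reasons.
    return 1 if (comp == 1 or u.u_downtime == 1) else 0

def abduce(observed_patched: int, observed_comp: int, observed_down: int) -> Noise:
    # Abduction: recover exogenous noise consistent with the factual evidence.
    u_comp = observed_comp if observed_patched == 0 else 0  # unidentified if patched; assume benign
    u_down = 1 if (observed_down == 1 and observed_comp == 0) else 0
    return Noise(u_comp, u_down)

def counterfactual_downtime(observed, do_patched: int) -> int:
    # Action + prediction: intervene on the patch decision and replay the SCM.
    u = abduce(*observed)
    comp_cf = compromise(do_patched, u)
    return downtime(comp_cf, u)

# Factual world: host unpatched, compromised, and down.
observed = (0, 1, 1)
print("What if we had patched?",
      "downtime =", counterfactual_downtime(observed, do_patched=1))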
Exploitation Route These methodologies and theoretical developments might be used to elicit knowledge, generate CLIK graphs and build their concomitant counterfactual reasoning XAI models for other domains such as cyber security.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Financial Services, and Management Consultancy; Healthcare; Government, Democracy and Justice; Security and Diplomacy

 
Description This work in progress, carried out with Roke Manor, relates to the counterfactual reasoning tool underpinning the XAI agent we have developed for cyber security applications, addressing AI defence challenges and helping to overcome common barriers to implementing AI within defence.
First Year Of Impact 2022
Sector Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software)
Impact Types Economic

 
Title Causal Reasoning and Inference Tool for Explainable AI 
Description The Digital Economy & Cyber Security research group at Brunel University London has developed a Cyber Security Incident Response eXplainable AI (XAI) agent that employs causal reasoning and inference for its decision making. It is designed for dealing with and reacting to new circumstances the agent has not been specifically trained for, including dealing with surprises. It is built to enhance trust between human and AI in order to enable new human-machine teaming capabilities for cyber incident response. Underpinning the counterfactual reasoning of the XAI agent is the Counterfactual Reasoning Tool (CRT). The CRT provides incident responders with recommendations (the counterfactuals) concerned with optimising the actions they could take when called upon to address a cybersecurity incident. It assesses various courses of action and explains why certain actions are better than others, with explanations automatically provided in natural language for justification (see the illustrative sketch after this entry). The explanations are underpinned by the causal reasoning computations, which help the user understand the agent's recommendations and trust its output.
Type Of Material Improvements to research infrastructure 
Year Produced 2022 
Provided To Others? No  
Impact Notable impact is that Roke Manor has provided respondents for the validation experiments, with a view to collaborating to develop this for cyber security defence applications. This is work in progress.
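The sketch below illustrates, under assumed names and made-up risk figures, how a CRT-style loop might rank candidate courses of action by their estimated post-intervention risk and emit a templated natural-language justification; it is not the tool's actual implementation.

# Hypothetical sketch: rank courses of action by estimated post-intervention
# risk and generate a templated explanation. Names and numbers are illustrative.

# Assumed estimates of residual risk P(breach | do(action)) from some causal model.
estimated_risk = {
    "isolate host": 0.05,
    "patch and monitor": 0.15,
    "monitor only": 0.60,
}

def recommend(risks: dict[str, float]) -> str:
    ranked = sorted(risks.items(), key=lambda kv: kv[1])  # lower risk is better
    (best, best_risk), (runner_up, runner_risk) = ranked[0], ranked[1]
    # Templated natural-language justification grounded in the causal estimates.
    return (f"Recommended action: '{best}'. Had you chosen '{runner_up}' instead, "
            f"the estimated residual risk would rise from {best_risk:.0%} to "
            f"{runner_risk:.0%}, so '{best}' is preferred.")

print(recommend(estimated_risk))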
 
Description AI for Directed Assembly: Explanation and Counterfactuals (Presenter and Panel Member for Plenary Session) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Directed Assembly Network: Intelligent reactors for bio-inspired and bio-mimetic assembly and disassembly. Scale-up in bio-mimetic and bio-inspired assembly has been an unmet challenge for some time. The aim of this one-day workshop was to identify the requirements and challenges in scale-up, and to explore the potential of artificial intelligence and intelligent chemical reactors, bringing together the biological, chemical and computing communities with excellent synergistic potential. The day consisted of five talks focusing on bioreactor scale-up, artificial intelligence, biomimetics and the interplay between biology and self-assembly. Of particular note was how artificial intelligence (AI) could be used in the discovery of new material properties, by combining building blocks and materials in a data-driven manner and using counterfactual reasoning techniques to conduct virtual assembly and disassembly experiments that inform design and experimental possibilities, including informing the research and development agenda for AI-enabled Intelligent Chemical Reactors.
Year(s) Of Engagement Activity 2018
URL http://directedassembly.org/2018/04/18/workshop-intelligent-reactors-for-bio-inspired-and-bio-mimeti...
 
Description Causal Reasoning for Intelligent Ship Cybersecurity Situational Awareness Workshop (Strategic Partners' Meeting) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact The meeting was attended by stakeholders concerned with the development of causal reasoning and inference for explainable AI decision support in surprise situations for ships under cyber attack.
Year(s) Of Engagement Activity 2020
 
Description Joint project proposal development in advanced HLC 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact The purpose of this activity was to develop a joint project proposal for AI models that can adapt to changing environmental conditions and surprises.
Year(s) Of Engagement Activity 2020