Copacetic Smartening of Small Data for HLC

Lead Research Organisation: Cranfield University
Department Name: Cranfield Defence and Security

Abstract

The need for more human-like computing, which involves endowing machines with human-like perceptual reasoning and learning abilities, has become increasingly evident in recent years. The inexplicable 'black box', highly complex and context-dependent models produced by deep learning techniques and conventional probabilistic approaches are not always successful in environments such as Improvised Explosive Device Disposal (IEDD), where incorrect judgements can have severe consequences. Moving towards a more transparent, explainable and human-like approach will transform the human-machine relationship and provide a more efficient and effective environment for humans and machines to collaborate in, leading to improved prospects for UK growth and employment.

This feasibility study focuses on those high-risk situations where human cognition is superior to that of any machine: situations in which humans are called upon to make judgements where information is sparse, time is short, and their previous knowledge, experience and 'gut feel' often play a critical part in their decision making. Unlike machines, humans rely on small-scale data and small-scale models (e.g. schema or frames) to make their judgements, reflecting on the possibilities or likelihoods of surprise events to improve their sense-making in a given situation. A key challenge is to identify those few critical learning and inference kernels (CLIKs) at the heart of the schema humans use to make their judgements in a satisficing manner that feels right, i.e. so that things appear to be in copacetic or perfect order. Using the IEDD context as its setting, this research moves away from conventional Bayesian and probability-based approaches towards a novel approach, inspired by the cognitive sciences, to develop human-like inference techniques and learning schema. The schema will then be encoded into explainable artificial intelligence (XAI) agents so that they can work alongside humans to enhance performance during high cognitive load tasks and to support the learning and training of future experts.

Planned Impact

The impact from this project will benefit the interdisciplinary research community developing novel ways to enhance human-machine collaboration. One key impact is expected to be growth in the scale and diversity of research into human-like computing approaches and algorithms that are not restricted to more conventional probabilistic approaches. This will lead to a greater understanding of human-like computing and of the ways in which explainable artificial intelligence (XAI) can support humans in high-risk, uncertain, critical decision-making environments, and of how the lessons drawn from the development of Critical Learning and Inference Kernels (CLIKs) could lead to improved business processes, risk assessment and economic growth from new human-like computing tools or services. The project will also impact government departments and organisations across industry that would gain value from implementing human-like computing for enhanced judgements in a wide variety of applications.

The development of the CLIKs and XAI will benefit people working increasingly with cyber-forms of business, collaborating with computers and AI, as awareness of these challenges grows among leaders of UK organisations with influence across business and government (primarily in the UK, but also internationally). The longer-term impact, achieved by exploitation of the agent prototypes and approaches developed during the project, will enable the co-creation of new XAI services and the creation of new, smarter approaches to government and new businesses. These are anticipated to impact the public as users of the smarter systems and services created by the project.

The case study draws on experienced senior leaders in UK business, industry and government, who will in turn benefit from the reflective-practice nature of the elicitation interviews. This appreciative, explainable and reflective form of judgement support will, in the short term, help people to make more robust judgements, providing a more stable basis for policy-making in the financial sector and government. This will benefit the UK public, who rely on safer systems (e.g. banking) and on the more contractual nature of government projects post-Brexit. Improved, explainable judgements in policy-making and more reflective decision-making will provide a more stable and sound foundation for leadership, leading to improved prospects for UK growth and employment.

Publications

 
Description One of the major methodological contributions of this research project is the design and development of a new expert knowledge elicitation methodology to support activity in volatile, uncertain, complex and ambiguous environments. Supported and underpinned by a theoretical review of the human cognition literature, the methodology we have developed combines domain document analysis, Kelly's Repertory Grid (RG), Critical Decision Method (CDM) style interviews and a simulated cognitive walkthrough. This allows us to elicit personal constructs in conjunction with domain and organisational requisites (Critical Learning and Inference Kernel graphs that encode the small data), providing a framework through which intelligent reflective agents can be developed. These agents can be used to support the training and development of less experienced users by eliciting and encoding the practical reflective capability of domain experts. In addition, we have developed an overall counterfactual reasoning framework to support human-machine teaming in reasoning activity for both expert and inexperienced users.

The second important contribution is the development of a methodology for the mathematical expression of Critical Learning and Inference Kernel (CLIK) graphs, i.e. the generation of the joint and counterfactual post-intervention equations used to support causal reasoning. In addition, Shackle-like membership functions based on surprise modelling have been developed theoretically, but these have not yet been tested. This development is complementary to, and draws on the outputs of, the new expert knowledge elicitation methodology described above, which is required for developing the Human-Like Computing explainable artificial intelligence (XAI) agent models.
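For orientation only, and not as the project's own elicited equations, post-intervention distributions of the kind referred to here follow the standard truncated-factorisation form used in graph-based causal reasoning: for an intervention $\mathrm{do}(X_j = x')$ on a causal graph over variables $X_1, \dots, X_n$,

$P(x_1, \dots, x_n \mid \mathrm{do}(X_j = x')) = \prod_{i \neq j} P(x_i \mid \mathrm{pa}_i)\big|_{X_j = x'}$,

with the back-door adjustment $P(y \mid \mathrm{do}(x)) = \sum_z P(y \mid x, z)\,P(z)$ as a simple worked case, where $z$ ranges over a variable set blocking the back-door paths. The CLIK-specific joint and counterfactual equations generated by the methodology are derived from the elicited graphs and are not reproduced here.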
Exploitation Route These methodologies and theoretical developments might be used to elicit knowledge and to generate CLIK graphs, and their concomitant counterfactual reasoning XAI models, for other domains such as cyber security or emergency first response.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Financial Services and Management Consultancy; Healthcare

 
Description The results of engagement with the EOD community through this project have been included in a wider review of the training provision for EOD personnel. Further funding (non-RCUK) has been secured to run a series of workshops to take the findings of this research back into the training environment.
First Year Of Impact 2021
Sector Aerospace, Defence and Marine
Impact Types Policy & public services

 
Title Expert Knowledge Elicitation 
Description This project created a novel methodology, combining a variant of Kelly's Repertory Grid and a structured Critical Decision Method interview, to elicit sense-making critical learning and inference kernels (CLIKs) from human experts. This method will be published in an upcoming journal paper. 
Type Of Material Model of mechanisms or symptoms - human 
Year Produced 2020 
Provided To Others? No  
Impact This method allowed expert CLIKs to be identified and taken on to support the training of novices. 
 
Title Requisite Rating Scale 
Description The Requisite Rating Scale (RRS) provides a way in which the key decision criteria in volatile, uncertain, complex and ambiguous environments can be graded according to whether they have reached the requisite levels for a decision to be made. This will be published in a forthcoming journal publication. 
Type Of Material Model of mechanisms or symptoms - human 
Year Produced 2020 
Provided To Others? No  
Impact The community we engaged with in the research would like to use the RRS in order to rate the level of challenge in their training scenarios. 
 
Title C-IEDD metacognitive requisite ratings 
Description Red/amber/green (RAG) analysis of metacognitive requisites at key decision points throughout a challenging counter-improvised explosive device disposal incident. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
URL https://cord.cranfield.ac.uk/articles/dataset/C-IEDD_metacognitive_requisite_ratings/16725454
 
Description DEODS/CDS 
Organisation Ministry of Defence (MOD)
Country United Kingdom 
Sector Public 
PI Contribution Interactions with the DEODS community through the data collection and results dissemination led to further discussions around the Cognitive Ability requirement in DEODS training provision. Three further meetings/workshops were held, along with a small-scale pilot study, to help inform the planning of a wider training review. A proposal to extend the small-scale study has been submitted, though progress has been put on hold due to COVID-19 restrictions.
Collaborator Contribution Workshop participation, access to participants and training materials/documentation within the DEODS community.
Impact Submission for further funding for a study into Cognitive Ability in the DEODS training provision.
Start Year 2019
 
Description Cognitive Capability Workshops with DEODS 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Policymakers/politicians
Results and Impact Three workshops with a small number of expert policy makers/professional practitioners were held to develop the practical application of assessing Cognitive Ability in the training provision activities of the DEODS community. The workshops aimed to gain support for a subsequent research proposal to conduct a small-scale study.
Year(s) Of Engagement Activity 2020,2021
 
Description Dstl Lunchtime Seminar - Explainable AI Agents 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Professional Practitioners
Results and Impact The research team presented the CLIK models developed for the project to a group of 30-40 attendees. Some of the challenges encountered in developing the models prompted questions about the use of XAI agents in high-risk military environments, and the team engaged with the group to discuss how these issues apply more generally across Defence.
Year(s) Of Engagement Activity 2019
 
Description Future UK IED Threat Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Twenty participants from an expert practitioner/policy-maker group attended a workshop discussing future UK IED threats. The resulting discussion helped to shape the CLIK models developed for the project, identified additional research participants, and generated further interest in and support for our research.
Year(s) Of Engagement Activity 2019