Copacetic Smartening of Small Data for HLC

Lead Research Organisation: Cranfield University
Department Name: Cranfield Defence and Security


The need for more human-like computing, which involves endowing machines with human-like perceptual reasoning and learning abilities, has become increasingly evident in recent years. The highly complex, context-dependent and inexplicable 'black box' models of deep learning techniques and conventional probabilistic approaches are not always successful in environments such as Improvised Explosive Device Disposal (IEDD), where incorrect judgements can have severe consequences. Moving towards a more transparent, explainable and human-like approach will transform the human-machine relationship and provide a more efficient and effective environment for humans and machines to collaborate in, leading to improved prospects for UK growth and employment.

This feasibility study focuses on high-risk situations where human cognition is superior to any machine: situations in which humans are called on to make judgements where information is sparse, time is short, and previous knowledge, experience and 'gut feel' often play a critical part in their decision making. Unlike machines, humans rely on small-scale data and small-scale models (e.g. schemas or frames) to make their judgements, reflecting on the possibilities or likelihoods of surprise events to improve their sense-making in a given situation. A key challenge is to identify the few critical learning and inference kernels (CLIKs) at the heart of the schemas humans use to make judgements in a satisficing manner that feels right, i.e. things appear to be in copacetic, or perfect, order. Using the IEDD context as its setting, this research moves away from conventional Bayesian and probability-based approaches towards a novel approach inspired by the cognitive sciences, developing human-like inference techniques and learning schemas. The schemas will then be encoded into explainable artificial intelligence (XAI) agents so they can work alongside humans to enhance performance during high cognitive load tasks and to support the learning and training of future experts.

Planned Impact

This project will benefit the interdisciplinary research community developing novel ways to enhance human-machine collaboration. One key impact is expected to be growth in the scale and diversity of research into human-like computing approaches and algorithms that are not restricted to conventional probabilistic methods. This will lead to a greater understanding of human-like computing, of the ways in which explainable artificial intelligence (XAI) can support humans in high-risk, uncertain, critical decision-making environments, and of how lessons drawn from the development of critical learning and inference kernels (CLIKs) could improve business processes and risk assessment and generate economic growth from new human-like computing tools or services. The project will also benefit government departments and organisations across industry that would gain value from implementing human-like computing for enhanced judgements in a wide variety of applications. The development of CLIKs and XAI will benefit people who increasingly work with cyber-forms of business in collaboration with computers and AI, by raising awareness of these challenges among leaders of organisations with influence across business and government (primarily in the UK, but also internationally). The longer-term impact, achieved by exploiting the agent prototypes and approaches developed during the project, will enable the co-creation of new XAI services and of new, smarter approaches to government and business. These are anticipated to reach the public as users of the smarter systems and services the project creates. The case study draws on experienced senior leaders in UK business, industry and government, who will in turn benefit from the reflective-practice nature of the elicitation interviews.
This appreciative, explainable and reflective form of judgement support will, in the short term, help people to make more robust judgements, providing a more stable basis for policy-making in the financial sector and government. This will benefit the UK public, who rely on safer systems (e.g. banking) and on the more contractual nature of government projects post-Brexit. Improved, explainable judgements in policy-making and more reflective decision-making will provide a more stable and sound foundation for leadership, leading to improved prospects for UK growth and employment.


Description One of the major methodological contributions of this research project is the design and development of a new expert knowledge elicitation methodology to support activity in volatile, uncertain, complex and ambiguous environments. Supported and underpinned by a theoretical review of the human cognition literature, the methodology we have developed combines domain document analysis, Kelly's Repertory Grid (RG), Critical Decision Method (CDM) style interviews and a simulated cognitive walkthrough. This allows us to elicit personal constructs in conjunction with domain and organisational requisites (Critical Learning and Inference Kernel graphs that encode the small data), providing a framework through which intelligent reflective agents can be developed. These agents can be used to support the training and development of less experienced users by eliciting and encoding the practical reflective capability of domain experts. In addition, we have developed an overall counterfactual reasoning framework to support human-machine teaming in reasoning activities for both expert and inexperienced users.
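As a rough illustration of the Repertory Grid component of this methodology, the sketch below builds a tiny grid of hypothetical elements (past incidents) rated against bipolar constructs, and scores how similarly two constructs discriminate those elements. The element names, constructs and ratings are invented for illustration only; they are not project data, and the matching score is a simple percentage-agreement measure, not necessarily the one used in the study.

```python
# Minimal Kelly-style Repertory Grid sketch with invented, illustrative data.
elements = ["Incident A", "Incident B", "Incident C"]
constructs = [("time pressure high", "time pressure low"),
              ("information sparse", "information rich")]

# ratings[c][e]: rating of element e on construct c (1 = left pole, 5 = right pole)
ratings = [[1, 4, 2],
           [2, 5, 1]]

def construct_match(c1, c2):
    """Percentage agreement between two constructs: how similarly they
    discriminate the elements (smaller rating differences = higher match)."""
    diffs = sum(abs(a - b) for a, b in zip(ratings[c1], ratings[c2]))
    max_diff = 4 * len(elements)  # each rating can differ by at most 5 - 1 = 4
    return 1 - diffs / max_diff

print(round(construct_match(0, 1), 3))  # prints 0.75
```

Highly matched constructs can prompt follow-up questioning in the CDM-style interview, since they may reflect a single underlying discrimination the expert is making.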

The second important contribution is the development of a methodology for the mathematical expression of Critical Learning and Inference Kernel (CLIK) graphs, i.e. the generation of the joint and counterfactual post-intervention equations used to support causal reasoning. In addition, Shackle-like membership functions based on surprise modelling have been developed theoretically, although these have not yet been tested. This development is complementary to, and draws on the outputs of, the new expert knowledge elicitation methodology described above, which is required for developing the human-like computing explainable AI (XAI) agent models.
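To illustrate what a joint and a post-intervention equation look like on a CLIK-style graph, the sketch below uses a hypothetical three-variable chain (Threat -> Signal -> Action) and Pearl-style truncated factorisation for the do-operator. The variables, probabilities and graph structure are invented assumptions for exposition; they are not the project's actual kernels or equations.

```python
# Toy causal chain T -> S -> A with invented conditional probability tables.
from itertools import product

p_t = {0: 0.7, 1: 0.3}                  # P(T)
p_s_given_t = {0: {0: 0.9, 1: 0.1},     # P(S | T)
               1: {0: 0.2, 1: 0.8}}
p_a_given_s = {0: {0: 0.8, 1: 0.2},     # P(A | S)
               1: {0: 0.1, 1: 0.9}}

def joint(t, s, a):
    """Observational joint: P(t, s, a) = P(t) P(s|t) P(a|s)."""
    return p_t[t] * p_s_given_t[t][s] * p_a_given_s[s][a]

def post_intervention(t, a, s_do):
    """Truncated factorisation for do(S = s_do):
    P(t, a | do(S = s_do)) = P(t) P(a | s_do) -- the term P(s|t) is removed."""
    return p_t[t] * p_a_given_s[s_do][a]

# Marginal P(A = 1) before and after forcing the signal on.
p_a1_obs = sum(joint(t, s, 1) for t, s in product((0, 1), (0, 1)))
p_a1_do = sum(post_intervention(t, 1, s_do=1) for t in (0, 1))
print(round(p_a1_obs, 3), round(p_a1_do, 3))  # prints 0.417 0.9
```

The gap between the observational and post-intervention marginals is what makes the reasoning causal rather than merely associational, and it is this style of equation that a counterfactual XAI agent can surface when explaining a recommendation.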
Exploitation Route These methodologies and theoretical developments might be used to elicit knowledge and to generate CLIK graphs and their concomitant counterfactual reasoning XAI models for other domains, such as cyber security or emergency first response.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Financial Services and Management Consultancy; Healthcare

Title Expert Knowledge Elicitation 
Description This project created a novel methodology, combining a variant of Kelly's Repertory Grid with a structured Critical Decision Method interview, to elicit sense-making critical learning and inference kernels (CLIKs) from human experts. This method will be published in an upcoming journal paper. 
Type Of Material Model of mechanisms or symptoms - human 
Year Produced 2020 
Provided To Others? No  
Impact This method allowed expert CLIKs to be identified and taken on to support the training of novices. 
Title Requisite Rating Scale 
Description The Requisite Rating Scale (RRS) provides a way in which the key decision criteria in volatile, uncertain, complex and ambiguous environments can be graded in terms of whether they reach the requisite levels for making a decision. This will be published in a forthcoming journal publication. 
Type Of Material Model of mechanisms or symptoms - human 
Year Produced 2020 
Provided To Others? No  
Impact The community we engaged with in the research would like to use the RRS in order to rate the level of challenge in their training scenarios.