Copacetic Smartening of Small Data for HLC

Lead Research Organisation: Brunel University
Department Name: Computer Science

Abstract

The need for more human-like computing, which involves endowing machines with human-like perceptual reasoning and learning abilities, has become increasingly evident in recent years. The inexplicable 'black box' models of deep learning techniques and conventional probability approaches, which are highly complex and context dependent, are not always successful in environments like Improvised Explosive Device Disposal (IEDD), where incorrect judgements can have severe consequences. Moving towards a more transparent, explainable and human-like approach will transform the human-machine relationship and provide a more efficient and effective environment for humans and machines to collaborate in, leading to improved prospects for UK growth and employment.

This feasibility study focuses on those high-risk situations where human cognition is superior to any machine: when humans are called on to make judgements where information is sparse, time is short, and their previous knowledge, experience and 'gut feel' often play a critical part in their decision making. Unlike machines, humans rely on small-scale data and small-scale models (e.g. schema or frames) to make their judgements, reflecting on the possibilities or likelihoods of surprise events to improve their sense making in a given situation. A key challenge is to identify those few critical learning and inference kernels (CLIKs) at the heart of the schema humans use to make their judgements in a satisficing manner that feels right, i.e. things appear to be in copacetic or perfect order. Using the IEDD context as its setting, this research moves away from conventional Bayesian and probability-based approaches, instead developing human-like inference techniques and learning schema through a novel approach inspired by the cognitive sciences. The schema will then be encoded into explainable artificial intelligence (XAI) agents so they can work alongside humans to enhance performance during high cognitive load tasks and to support the learning and training of future experts.

Publications

 
Description One of the major methodological contributions of this research project is the design and development of a new expert knowledge elicitation methodology to support activity in volatile, uncertain, complex and ambiguous environments. Supported and underpinned by a theoretical review of the human cognition literature, the methodology we have developed combines domain document analysis, Kelly's Repertory Grid (RG), Critical Decision Method (CDM)-style interviews and a simulated cognitive walkthrough. This allows us to elicit personal constructs in conjunction with domain and organisational requisites (CLIK graphs), providing a framework within which intelligent reflective agents can be developed. These agents can be used to support the training and development of less experienced users. In addition, we have developed an overall counterfactual reasoning framework to support a human-machine teaming reasoning task for both expert and inexperienced users.

The second important contribution is the development of a methodology for the mathematical expression of Critical Learning and Inference Kernel (CLIK) graphs, i.e. the generation of the joint and counterfactual post-intervention equations used to support causal reasoning and assess the causal effects of an intervention which might be characterised by surprise. In addition, Shackle-like membership functions for modelling surprise have been developed theoretically, but these have not yet been tested. This development is complementary to, and draws on the outputs of, the above-mentioned expert knowledge elicitation methodology required for developing the Human-Like Computing explainable AI (XAI) agent models.
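The distinction between observational and post-intervention queries that these equations capture can be illustrated with a minimal sketch. The variable names, graph structure and probabilities below are hypothetical, chosen only to show how an interventional query P(S | do(A=a)), computed by truncated factorisation, differs from the observational query P(S | A=a) on the same graph; they are not drawn from the project's actual CLIK graphs.

```python
# Hypothetical three-variable causal graph: C -> A, C -> S, A -> S
# (e.g. context C, action A, surprise outcome S), all binary.
# Intervening with do(A=a) severs A from its parent C, so C keeps its
# prior; ordinary conditioning on A=a instead reweights C via Bayes.

P_C = {0: 0.7, 1: 0.3}                        # P(C)
P_A_given_C = {0: {0: 0.9, 1: 0.1},           # P(A | C)
               1: {0: 0.2, 1: 0.8}}
P_S_given_CA = {(0, 0): {0: 0.95, 1: 0.05},   # P(S | C, A)
                (0, 1): {0: 0.60, 1: 0.40},
                (1, 0): {0: 0.50, 1: 0.50},
                (1, 1): {0: 0.10, 1: 0.90}}

def p_s_given_a(a):
    """Observational P(S=1 | A=a): conditioning reweights C."""
    num = sum(P_C[c] * P_A_given_C[c][a] * P_S_given_CA[(c, a)][1]
              for c in (0, 1))
    den = sum(P_C[c] * P_A_given_C[c][a] for c in (0, 1))
    return num / den

def p_s_do_a(a):
    """Interventional P(S=1 | do(A=a)): truncated factorisation,
    C retains its prior distribution."""
    return sum(P_C[c] * P_S_given_CA[(c, a)][1] for c in (0, 1))

print(p_s_given_a(1))  # observational estimate
print(p_s_do_a(1))     # causal effect of forcing A=1
```

Because C influences both A and S, the two quantities differ: seeing A=1 is evidence that C=1, whereas forcing A=1 leaves C at its prior, which is precisely the gap that post-intervention equations on a CLIK-style graph are meant to expose.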
Exploitation Route These methodologies and theoretical developments might be used to elicit knowledge and to generate CLIK graphs and their concomitant counterfactual-reasoning XAI models for other domains, such as cyber security.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Financial Services, and Management Consultancy; Healthcare; Government, Democracy and Justice; Security and Diplomacy

 
Description AI for Directed Assembly: Explanation and Counterfactuals (Presenter and Panel Member for Plenary Session) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Directed Assembly Network: Intelligent reactors for bio-inspired and bio-mimetic assembly and disassembly. Scale-up in bio-mimetic and bio-inspired assembly has been an unmet challenge for some time. The aim of this one-day workshop was to identify the requirements and challenges in scale-up, and to explore the potential of artificial intelligence and intelligent chemical reactors, bringing together the biological, chemical and computing communities with excellent synergistic potential. The day consisted of five talks focusing on bioreactor scale-up, artificial intelligence, biomimetics and the interplay between biology and self-assembly. Of particular note was how artificial intelligence (AI) could be used in the discovery of new material properties by combining building blocks and materials in a data-driven manner, using counterfactual reasoning techniques to conduct virtual assembly and disassembly experiments that inform design and experimental possibilities, including the research and development agenda for AI-enabled Intelligent Chemical Reactors.
Year(s) Of Engagement Activity 2018
URL http://directedassembly.org/2018/04/18/workshop-intelligent-reactors-for-bio-inspired-and-bio-mimeti...