ENGAGE : Interactive Machine Learning Accelerating Progress in Science, An Emerging Theme of ICT Research

Lead Research Organisation: University of Warwick
Department Name: Statistics


Our vision is to establish and lead a new theme in ICT research based on Interactive Machine Learning (IML). Our expansion of IML will give scientists and non-ICT specialists unprecedented access to cutting-edge Machine Learning algorithms through a human-computer interface that lets them interact directly with large-scale data and computing resources in an intuitive visual environment. In addition, the outcome of this particular project will have a direct transformative impact on the sciences by enabling non-programming individuals (scientists) to create systems that semi-automatically detect objects and events in vast quantities of (a) audio and (b) visual data. By working together across two parallel, highly interconnected streams of ICT research, we will develop the foundations of statistical methodology, algorithms and systems for IML. As an exemplar, this project partners with world-leading scientists grappling with the challenge of analysing the enormous quantities of heterogeneous data being generated in Biodiversity Science.

Planned Impact

This research project will ultimately have a broad impact across a range of disciplines and contribute to the strategic development of the EPSRC portfolio in ICT. Because the research reaches across a number of disciplines within ICT and beyond to other sciences, there is a range of beneficiaries, as detailed below.

Machine Learning
The ML academic and industrial community will benefit from the investigation of the proposed methods and systems for IML, which have far wider impact than the application areas targeted by this research. While Stream A of the proposal will develop, analyse and apply new ML methodologies specifically within the IML framework, we expect this to lead to substantial cross-fertilisation of ideas within the ML research community concerning user interaction and the formal quantification of the information and uncertainty inherent in the synthesis of user and system as a whole.

Computer Vision
Only a few groups in the Computer Vision community have put effort into building interactive systems. Dissemination of our findings and prototypes from this project will help focus the CV community on challenges beyond the typical objective of "just" making algorithms run in real time: the labeller(s) providing the training data must be modelled just like any other variable. Most importantly, to maintain our field's track record of transferring technology beyond academia, we must plan ahead for models and algorithms that perform online learning as specialised users become more sophisticated and demanding.

Communications and Engagement

The project will directly impact ICT and scientific communities via workshops, publications, public software releases, and the training of highly qualified personnel. Further details about academic value are explained in the main proposal, so the focus here is on impact beyond the ICT academic sphere.

As a major new area of ICT is established, it is important to build a community that fosters the interface between the various disciplines this research in IML will affect, shares early results, stimulates enquiry into and adoption of the research results, and encourages a wider community to engage in this area of research. To engage researchers and users of IML systems, we will organise two qualitatively different styles of workshop. The first set of workshops will be held at the top venues for ML (NIPS), CV (ICCV) and Human-Computer Interaction (CHI). These workshops will perform the crucial task of bridging the separate research communities to create the strong inter-community bonds necessary for long-term research in IML.

The second style of workshop will be hands-on, introducing our IML tools to non-programming scientists who can apply them in their own work. These workshops will be modelled on similar, highly successful endeavours supporting open-source software by the Blender Foundation, a non-profit organisation that has created a number of computer-animated short films.

Related Projects

Project Reference | Relationship | Related To   | Start      | End        | Award Value
EP/K015664/1      | -            | -            | 01/02/2013 | 06/01/2014 | £674,580
EP/K015664/2      | Transfer     | EP/K015664/1 | 06/01/2014 | 06/03/2017 | £518,914
Description: EPSRC Impact Acceleration Award
Amount: £30,000 (GBP)
Funding ID: M.2.35
Organisation: University College London
Department: Innovation and Enterprise
Sector: Academic/University
Country: United Kingdom
Start: 11/2016
End: 03/2017
Title: Code and data for Structured Prediction of Unobserved Voxels From a Single Depth Image
Description: Code and data underpinning our CVPR 2016 paper "Structured Prediction of Unobserved Voxels From a Single Depth Image", led by Dr Michael Firman.
Type Of Technology: Software
Year Produced: 2016
Impact: This system is being improved further in our group, and has led other researchers to contact us to collaborate on and extend the system.
URL: http://visual.cs.ucl.ac.uk/pubs/depthPrediction/
Title: Code for Automated Retinopathy of Prematurity Case Detection with Convolutional Neural Networks
Description: Code underpinning our publication at the MICCAI Deep Learning Workshop 2016, "Automated Retinopathy of Prematurity Case Detection with Convolutional Neural Networks", led by Daniel Worrall, bringing together joint research spanning Computer Science and Ophthalmology.
Type Of Technology: Software
Year Produced: 2016
Impact: The software resulted in a publication and a public talk at a MICCAI workshop.
URL: http://visual.cs.ucl.ac.uk/pubs/ROPDetection/
Title: Code for Harmonic Networks
Description: Code implementing our Harmonic Networks, which give a neural network the ability to see rotated images as if they were not rotated, or to measure the amount of rotation present.
Type Of Technology: Software
Year Produced: 2017
Open Source License?: Yes
Impact: Resulted in a publication at the highly regarded conference IEEE CVPR (the top-ranked publication venue in Computer Science, according to Google Scholar): "Harmonic Networks: Deep Translation and Rotation Equivariance", led by Daniel Worrall.
URL: https://github.com/deworrall92/harmonicConvolutions
Title: Code for Help, It Looks Confusing: GUI Task Automation Through Demonstration and Follow-up Questions
Description: Code implementing our IUI 2017 paper "Help, It Looks Confusing: GUI Task Automation Through Demonstration and Follow-up Questions", led by Thanapong Intharah.
Type Of Technology: Software
Year Produced: 2017
Open Source License?: Yes
Impact: Resulted in a full paper published at IUI 2017, which received an Honorable Mention award and was a featured demo at the conference.
URL: http://visual.cs.ucl.ac.uk/pubs/HILC/
Title: Code for Responsive Action-based Video Synthesis
Description: This software implements the prototype described in our CHI 2017 paper. It allows casual users to ingest a video and turn various elements in the video into loopable clips that can be triggered by keyboard or through other interfaces.
Type Of Technology: Software
Year Produced: 2017
Impact: The software led to our CHI 2017 paper, "Responsive Action-based Video Synthesis", led by Corneliu Ilisescu.
URL: http://visual.cs.ucl.ac.uk/pubs/actionVideo/
Title: Code for Unsupervised Monocular Depth Estimation with Left-Right Consistency
Description: This software and the associated scripts allow reproduction of our new system, which converts a colour image into a depth image. The software also makes it easy to re-train the statistical model using other binocular stereo pairs of images. All of this software underpins our CVPR 2017 paper.
Type Of Technology: Software
Year Produced: 2017
Impact: The algorithms behind this software are now the subject of a patent application, and we are seeking commercialisation opportunities.
URL: http://visual.cs.ucl.ac.uk/pubs/monoDepth/