Seebibyte: Visual Search for the Era of Big Data

Lead Research Organisation: University of Oxford
Department Name: Engineering Science

Abstract

The Programme is organised into two themes.

Research theme one will develop new computer vision algorithms to enable efficient search and description of vast image and video datasets - for example, the entire video archive of the BBC. Our vision is that anything visual should be searchable, in the manner of a Google search of the web: a user specifies a query and results are returned immediately, irrespective of the size of the data. Such enabling capabilities will have widespread application both for general image/video search - consider how Google's web search has opened up new areas - and for designing customized solutions for searching.
A second aspect of theme one is to automatically extract detailed descriptions of the visual content. The aim here is to achieve human-like performance and beyond, for example in recognizing configurations of parts and spatial layout, counting and delineating objects, or recognizing human actions and interactions in videos, significantly surpassing the current limitations of computer vision systems and enabling new and far-reaching applications. The new algorithms will learn automatically, building on recent breakthroughs in large-scale discriminative and deep machine learning. They will be capable of weakly supervised learning, for example from images and videos downloaded from the internet, requiring very little human supervision.

The second theme addresses transfer and translation. This also has two aspects. The first is to apply the new computer vision methodologies to 'non-natural' sensors and devices, such as ultrasound imaging and X-ray, which have different characteristics (noise, dimension, invariances) from the standard RGB channels of data captured by 'natural' cameras (iPhones, TV cameras). The second aspect is to seek impact in a variety of other disciplines and industries that today greatly under-utilise the power of the latest computer vision ideas. We will target these disciplines to enable them to leapfrog from today's practice, dominated by manual review and highly interactive frame-by-frame analysis, to a new era in which automated, efficient sorting, detection and mensuration of very large datasets becomes the norm. In short, our goal is to ensure that the newly developed methods are used by academic researchers in other areas, and turned into products for societal and economic benefit. To this end, open source software, datasets, and demonstrators will be disseminated on the project website.

The ubiquity of digital imaging means that every UK citizen may potentially benefit from the Programme's research in different ways. One example is an enhanced iPlayer that can search for where particular characters appear in a programme, or intelligently fast-forward to the next 'hugging' sequence. A second is wider deployment of lower-cost imaging solutions in healthcare delivery. A third, also motivated by healthcare, is the employment of new machine learning methods for validating targets for drug discovery based on microscopy images.

Planned Impact

The proposed programme encompasses new methodology and applied research in computer vision that will impact not only the imaging field but also other, non-imaging disciplines, and it will encourage end-user uptake of imaging technologies and commercial interest in embedding imaging technologies in products. These groups are the main beneficiaries of the Programme's research.

We have carefully chosen members of our Programme Advisory Board (PAB) and User Group to represent a comprehensive and diverse range of academic and industry interests and expect them to challenge us to ensure that the impact of the Programme is realised. We will ensure that both the PAB and the User Group are constantly refreshed with appropriate representatives.

The Programme will have Economic and Societal impact by
1. Developing new and improved computer vision technologies for commercialisation by a wide range of companies;
2. Enhancing the Big Data capabilities and knowledge base of UK industries;
3. Enhancing quality of life by improving, for instance, healthcare capabilities, surveillance and environmental monitoring of roads, and by providing new means of enjoying digital media in the home. Other engineering advances will aim to make a large impact "behind the scenes", for instance by underpinning better understanding of biological effects at the individual cell level and the characterisation of advanced materials;
4. Training the next generation of computer vision researchers, who will be equipped to support the imaging needs of science, technology and wider society for the future.

Impact on Knowledge includes
1. Realisation of new approaches to essential computer vision technology, and the dissemination of research findings through publications and conference presentations and the distribution of open source software and image databases.
2. Sharing knowledge with collaborators via Transfer and Application Projects (TAPs) and other activities, leading to the adoption of advanced computer vision methods across many disciplines of science, engineering and medicine that currently do not use them.
3. Communication of advances to a public audience through website articles and other co-ordinated public understanding activities.

Publications

 
Description Our two-stream approach of basic research and dissemination seems to be working. On the first, we are publishing our research at the principal conferences and winning prizes. On the second, we engage other communities through our Show and Tell events and Transfer and Application Projects (TAPs). We also make available the software and publications that have emerged from our research.
First Year Of Impact 2016
Sector Healthcare, Culture, Heritage, Museums and Collections, Retail, Transport
Impact Types Cultural, Economic

 
Description AWS Machine Learning Research Awards Program
Amount $225,000 (USD)
Organisation Amazon.com 
Sector Private
Country United States
Start 02/2018 
End 01/2020
 
Description Big Data Science in Medicine and Healthcare
Amount £55,000 (GBP)
Organisation University of Oxford 
Department Oxford Martin School
Sector Academic/University
Country United Kingdom
Start 04/2017 
End 03/2020
 
Description CALOPUS - Computer Assisted LOw-cost Point-of-care UltraSound
Amount £1,013,662 (GBP)
Funding ID EP/R013853/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Academic/University
Country United Kingdom
Start 02/2018 
End 01/2021
 
Description ERC Advanced Grant
Amount € 2,500,000 (EUR)
Organisation European Research Council (ERC) 
Sector Public
Country European Union (EU)
Start 11/2016 
End 10/2021
 
Description ERC Starting Grant
Amount € 1,500,000 (EUR)
Organisation European Research Council (ERC) 
Sector Public
Country European Union (EU)
Start 08/2015 
End 09/2020
 
Description GCRF: Growing Research Capability Call
Amount £8,000,000 (GBP)
Funding ID MR/P027938/1 
Organisation Medical Research Council (MRC) 
Sector Academic/University
Country United Kingdom
Start 10/2017 
End 09/2021
 
Description IARPA BAA-16-13
Amount $1,196,818 (USD)
Organisation Intelligence Advanced Research Projects Activity 
Sector Public
Country Unknown
Start 09/2017 
End 09/2021
 
Description Research Collaboration relating to DNN-based Face Recognition for Surveillance
Amount £200,000 (GBP)
Organisation Toshiba 
Sector Private
Country Japan
Start 10/2017 
End 09/2019
 
Description Visual Recognition
Amount £308,823 (GBP)
Organisation Continental AG 
Sector Private
Country Germany
Start 11/2016 
End 04/2019
 
Title 3D Shape Attributes and the CMU-Oxford Sculpture Dataset 
Description The CMU-Oxford Sculpture dataset contains 143K images depicting 2197 works of art by 242 artists. Each image comes with labels for each of the 12 3D Shape Attributes defined in our CVPR paper. We additionally provide sample MATLAB code that illustrates reading the data and evaluating a method.
Type Of Material Database/Collection of data 
Year Produced 2017 
Provided To Others? Yes  
Impact "We have shown that 3D attributes can be inferred directly from images at quite high quality. These attributes open a number of possibilities of applications and extensions. One immediate application is to use this system to complement metric reconstruction: shape attributes can serve as a top-down cue for driving reconstruction that works even on unknown objects. Another area of investigation is explic¬itly formulating our problem in terms of relative attributes: many of our attributes (e.g., planarity) are better modeled in relative terms. Finally, we plan to investigate which cues (e.g., texture, edges) are being used to infer these attributes." 
URL http://www.robots.ox.ac.uk/~vgg/data/sculptures/
 
Title BBC-Oxford Lip Reading Dataset 
Description The dataset consists of up to 1000 utterances of 500 different words, spoken by hundreds of different speakers. All videos are 29 frames (1.16 seconds) in length, and the word occurs in the middle of the video. 
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact Publications have resulted from this research and an award has been won: [1] J. S. Chung, A. Zisserman, Lip Reading in the Wild, Asian Conference on Computer Vision, 2016 (Best Student Paper Award). [2] J. S. Chung, A. Zisserman, Out of Time: Automated Lip Sync in the Wild, Workshop on Multi-view Lip-reading, ACCV, 2016.
URL http://www.robots.ox.ac.uk/~vgg/data/lip_reading/
 
Title Celebrity in Places Dataset 
Description The dataset contains over 38k images of celebrities in different types of scenes. There are 4611 celebrities and 16 places involved. The images were obtained using Google Image Search and verified by human annotation. 
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact A publication based on this dataset has resulted from this research: Y. Zhong, R. Arandjelovic, A. Zisserman, Faces in Places: Compound Query Retrieval, British Machine Vision Conference, 2016.
URL http://www.robots.ox.ac.uk/~vgg/data/celebrity_in_places/
 
Title Text Localisation Dataset 
Description This is a synthetically generated dataset, in which word instances are placed in natural scene images while taking into account the scene layout. The dataset consists of 800,000 images with approximately 8 million synthetic word instances. Each text instance is annotated with its text string and word-level and character-level bounding boxes.
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact A publication has resulted from this research: A. Gupta, A. Vedaldi, A. Zisserman Synthetic Data for Text Localisation in Natural Images IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 
URL http://www.robots.ox.ac.uk/~vgg/data/scenetext
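To make the idea concrete, here is a minimal Python sketch (using Pillow) of rendering a word onto a background image while recording its ground-truth bounding box. It is an illustration written for this report only: unlike the actual generation pipeline it ignores scene geometry and layout, and the file and function names are hypothetical.

    # Illustrative sketch only: render one word onto a background and record
    # its bounding box. The real pipeline accounts for scene layout; this does not.
    from PIL import Image, ImageDraw, ImageFont

    def place_word(background, word, position):
        """Draw `word` on a copy of `background`; return the image and the word-level box."""
        img = background.copy()
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()                    # stand-in for a real scene font
        box = draw.textbbox(position, word, font=font)     # (x0, y0, x1, y1); needs Pillow >= 8.0
        draw.text(position, word, fill=(255, 255, 0), font=font)
        return img, box

    if __name__ == "__main__":
        bg = Image.new("RGB", (320, 240), (60, 90, 60))    # placeholder for a natural scene image
        img, box = place_word(bg, "SEEBIBYTE", (40, 100))
        print("word-level bounding box:", box)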
 
Title VoxCeleb: a large-scale speaker identification dataset 
Description VoxCeleb contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. The dataset is gender balanced, with 55% of the speakers male. The speakers span a wide range of different ethnicities, accents, professions and ages. There are no overlapping identities between development and test sets. 
Type Of Material Database/Collection of data 
Year Produced 2017 
Provided To Others? Yes  
Impact "We provide a fully automated and scalable pipeline for audio data collection and use it to create a large-scale speaker identification dataset called VoxCeleb, with 1,251 speakers and over 100,000 utterances. In order to establish benchmark performance, we develop a novel CNN architecture with the ability to deal with variable length audio inputs, which out¬performs traditional state-of-the-art methods for both speaker identification and verification on this dataset." 
URL http://www.robots.ox.ac.uk/~vgg/data/voxceleb/
 
Description 2017TAP1 - Dante Editions 
Organisation University of Manchester
Country United Kingdom 
Sector Academic/University 
PI Contribution This project will use the Seebibyte image software to undertake a preliminary investigation of the design features of early printed editions of Dante's Divine Comedy, published between 1472 and 1491, held in and digitized by the John Rylands Library, University of Manchester. By focusing on a single iconic literary text in the first twenty years of its print publication, Manchester can investigate the evolution of the page design, from the first editions which contain the text of the poem only, to later ones of increasing visual and navigational sophistication, as elements such as titles, author biographies, commentaries, rubrics, summaries, page numbers, illustrations, and devotional material are introduced into the object. The use of computer vision techniques will allow Manchester to approach these books and the study of Dante in an entirely new way and will add greatly to our knowledge of early modern book technologies and information design.
Collaborator Contribution Manchester will supply data for analysis.
Impact This collaboration is cross-disciplinary work between the Visual Geometry Group and the humanities.
Start Year 2017
 
Description 2017TAP2 - Visual Design 
Organisation University of Leeds
Department School of Languages, Cultures and Societies
Country United Kingdom 
Sector Academic/University 
PI Contribution This project looks at how graphic resources are used in the wild - in specific text genres and locales (languages / cultures / regions). Rather than doing so on the basis of hand-picked examples, intended to illustrate a particular phenomenon, it allows us to ask whether a particular feature or combination of features is found in a particular document. More significantly, it allows us to ask whether the frequency of features varies across corpora of documents - i.e. whether a given feature is more or less common in a given genre or locale.
Collaborator Contribution Leeds will provide data for the project.
Impact This project is a cross disciplinary collaboration between computer vision and Arts and Humanities.
Start Year 2017
 
Description 2017TAP3 - DigiPal (Text) 
Organisation King's College London
Country United Kingdom 
Sector Academic/University 
PI Contribution The project has two main objectives: to develop a tool to automatically count lines on a medieval manuscript page and to test the potential for image segmentation of phrases (and possibly even letter-forms) on a corpus of medieval Scottish charters written in Latin.
Collaborator Contribution KCL will supply data to be analysed.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP4 - DigiPal (Tiling) 
Organisation King's College London
Country United Kingdom 
Sector Academic/University 
PI Contribution This project will develop a tool to analyse thousands of images of medieval manuscripts and sort them according to agreed criteria (e.g. 'does the image contain an illustration?'). The objective is to eliminate material that is not relevant to researchers and to automatically detect the regions of images which are of interest.
Collaborator Contribution KCL will provide images to be analysed.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP5 - 19C Books (Matcher) 
Organisation University of Sheffield
Country United Kingdom 
Sector Academic/University 
PI Contribution Rather than matching a number of illustrations against one specific illustration, it is hoped that machine learning can find clusters of matches without the software needing to be given the visual attributes of a single illustration, instead attributing different visual attributes to different clusters of illustrations. This will allow researchers to get to know more about their data and has the potential to reveal unexpected clusters of matches that can initiate further research. Researchers with substantial datasets may not always have particular illustrations in mind that they wish to find matches for. Using machine learning in this way will allow researchers to ask more general questions about their data and will provide further lines of enquiry.
Collaborator Contribution Sheffield will provide the dataset for the project.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP6 - 19C Books (Classifier) 
Organisation University of Sheffield
Country United Kingdom 
Sector Academic/University 
PI Contribution The main objective of this project is to use machine learning to identify the main print processes that were used to produce illustrations in the eighteenth and nineteenth centuries. Rather than focussing upon the iconographic details of the illustration, the aim is to understand the style of the illustration and whether machine learning techniques are a viable way to classify style and method as opposed to visual content.
Collaborator Contribution Sheffield will provide the dataset.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP7 - Cylinder Seals 
Organisation University of Oxford
Department Faculty of Oriental Studies
Country United Kingdom 
Sector Academic/University 
PI Contribution This project will seek to answer the question: why has it proven almost impossible to find any matches between physical seals preserved in collections and seal impressions left on tablets or other clay objects? A number of hypotheses readily present themselves. Were seals continuously re-carved, so that the number of possible matches is almost nil? Were those seals used to seal documents and objects deposited differently from those worn as amulets and jewellery? Or have more matches not been found simply because the data has been published in a way that does not facilitate answering this question? None of these questions can be answered without fundamentally changing the way seals and seal impressions are ordered, published, and studied. And none of them can be answered through studies of single seals or small collections; they can only be addressed through a large-scale project relying on innovative, data-driven, and, for the most part, computational analysis.
Collaborator Contribution The Faculty of Oriental Studies will provide the dataset.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP8 - Fleuron (Matcher) 
Organisation University of Cambridge
Country United Kingdom 
Sector Academic/University 
PI Contribution 'Fleuron' was created by automatically extracting images of printers' ornaments and small illustrations from Eighteenth-Century Collections Online (ECCO), a database of 36 million images of pages from eighteenth-century books. Approximately 1.6 million images were extracted, consisting chiefly of printers' ornaments, arrangements of ornamental type, small illustrations, and diagrams. Some extraneous material such as library stamps and chunks of text was extracted, but most of this was filtered out at an early stage. The extracted images have all of the metadata associated with the original images supplied by ECCO, i.e. the author and date of the book, the place of publication, the printer(s) and/or publisher(s), and the genre and language of the book. Image matching will also help us to remove any remaining extraneous material in the database (i.e. images falsely identified as non-textual material).
Collaborator Contribution Cambridge will provide the dataset.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description 2017TAP9 - Fleuron (Classifier) 
Organisation University of Cambridge
Country United Kingdom 
Sector Academic/University 
PI Contribution 'Fleuron' was created by automatically extracting images of printers' ornaments and small illustrations from Eighteenth-Century Collections Online (ECCO), a database of 36 million images of pages from eighteenth-century books. Approximately 1.6 million images were extracted, consisting chiefly of printers' ornaments, arrangements of ornamental type, small illustrations, and diagrams. Some extraneous material such as library stamps and chunks of text was extracted, but most of this was filtered out at an early stage. Currently, the keyword searches available to users of 'Fleuron' do not allow the subject matter of the images to be discovered. The keyword searches are useful for the study of ornaments owned by particular printers or used in works by particular authors, but they do not significantly advance the study of the ornaments for their own sake (other than by speeding up the process of browsing). Classification would allow users to find particular types of images within the database, and to investigate the history of certain images and themes.
Collaborator Contribution Cambridge will provide the dataset.
Impact This project is a cross-disciplinary collaboration between computer vision and humanities.
Start Year 2017
 
Description Graphene Defect Detection 
Organisation University of Oxford
Country United Kingdom 
Sector Academic/University 
PI Contribution We provide the software algorithm.
Collaborator Contribution The partner provides the dataset and interpretation of the computer analysis.
Impact A project paper is in progress.
Start Year 2016
 
Description Metal Crystal Counting 
Organisation University of Oxford
Country United Kingdom 
Sector Academic/University 
PI Contribution We provide the software algorithm.
Collaborator Contribution The partner provides the dataset and interpretation of the data.
Impact Software has been given to the collaborator. A project paper is in progress.
Start Year 2016
 
Description Micrograph Defect Detection 
Organisation University of Oxford
Country United Kingdom 
Sector Academic/University 
PI Contribution We provide the software algorithm.
Collaborator Contribution The partner provides the dataset and interpretation of the computer analysis.
Impact Software has been given to the collaborator.
Start Year 2016
 
Description Penguin Counting 
Organisation University of Oxford
Country United Kingdom 
Sector Academic/University 
PI Contribution We provide the software algorithm.
Collaborator Contribution The collaborator provides the dataset and specialised analysis methods.
Impact Paper: Counting in the Wild, by Carlos Arteta, Victor Lempitsky and Andrew Zisserman. This collaboration is between the Information Engineering and Zoology disciplines.
Start Year 2016
 
Description Video Recognition from the Dashboard 
Organisation Continental AG
Country Germany 
Sector Private 
PI Contribution Working with research engineers to develop recognition in road scenes and human gestures.
Collaborator Contribution Supplying data.
Impact N/A
Start Year 2016
 
Title Convnet Human Action Recognition 
Description This is a model to recognize human actions in video. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact A publication has resulted from this research: Convolutional Two-Stream Network Fusion for Video Action Recognition. C. Feichtenhofer, A. Pinz, A. Zisserman, CVPR, 2016. 
URL http://www.robots.ox.ac.uk/~vgg/software/two_stream_action/
 
Title Convnet Keypoint Detection 
Description It is a model based on a convolutional neural network that automatically detects keypoints (e.g. head, elbow, ankle) in a photograph of a human body.
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact A paper has resulted from this research: V. Belagiannis, A. Zisserman Recurrent Human Pose Estimation arXiv:1605.02914 
URL http://www.robots.ox.ac.uk/~vgg/software/keypoint_detection/
 
Title Convnet text spotting 
Description This is a model based on a convolutional neural network to automatically detect English text in natural images.
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact A publication has resulted from this research: A. Gupta, A. Vedaldi, A. Zisserman Synthetic Data for Text Localisation in Natural Images IEEE Conference on Computer Vision and Pattern Recognition, 2016 
URL http://www.robots.ox.ac.uk/~vgg/software/textspot/
 
Title Lip Synchronisation 
Description This is an audio-to-video synchronisation network which can be used for audio-visual synchronisation tasks including: (1) removing temporal lags between the audio and visual streams in a video, and (2) determining who is speaking amongst multiple faces in a video.
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact A publication has resulted from this research: J. S. Chung, A. Zisserman Out of time: automated lip sync in the wild Workshop on Multi-view Lip-reading, ACCV, 2016 
URL http://www.robots.ox.ac.uk/~vgg/software/lipsync/
 
Title MatConvNet 
Description MatConvNet is a MATLAB toolbox implementing Convolutional Neural Networks (CNNs) for computer vision applications. It is simple, efficient, and can run and learn state-of-the-art CNNs. Many pre-trained CNNs for image classification, segmentation, face recognition, and text detection are available. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact The MatConvNet toolbox is widely employed in research conducted by the Visual Geometry Group at the University of Oxford, including text spotting, penguin counting and human action recognition. Andrea Vedaldi has taught this software at the following summer schools: Medical Imaging Summer School (MISS), Favignana (Sicily), 2016 ((Somewhat) Advanced Convolutional Neural Networks; Understanding CNNs using visualisation and transformation analysis); iV&L Net Training School 2016, Malta.
URL http://www.vlfeat.org/matconvnet/
 
Title VGG Image Annotator 
Description VGG Image Annotator is a standalone application, with which you can define regions in an image and create a textual description of those regions. Such image regions and descriptions are useful for supervised training of learning algorithms. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact The VIA tool has been employed to annotate a large volume of scanned images of 15th-century books in the Faculty of Medieval and Modern Languages at the University of Oxford for the 15th Century Booktrade project (http://15cbooktrade.ox.ac.uk/).
URL http://www.robots.ox.ac.uk/~vgg/software/via/
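As an illustration of how such region annotations can feed supervised training, the following Python sketch reads rectangular regions and their descriptions from a JSON export. The schema shown (per-image entries with 'regions', 'shape_attributes' and 'region_attributes') is a simplified assumption made for this example; consult the VIA documentation for the exact export format, and the file name is hypothetical.

    # Illustrative sketch: read rectangular regions and their descriptions from
    # an annotation export. The JSON schema below is assumed/simplified, not
    # necessarily VIA's exact format.
    import json

    def load_rect_regions(path):
        """Yield (filename, (x, y, w, h), attributes) for each rectangular region."""
        with open(path) as f:
            annotations = json.load(f)
        for entry in annotations.values():
            for region in entry.get("regions", []):
                shape = region["shape_attributes"]
                if shape.get("name") != "rect":
                    continue
                box = (shape["x"], shape["y"], shape["width"], shape["height"])
                yield entry["filename"], box, region.get("region_attributes", {})

    if __name__ == "__main__":
        for filename, box, attrs in load_rect_regions("via_export.json"):  # hypothetical file
            print(filename, box, attrs)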
 
Title VGG Image Classification (VIC) Engine 
Description VIC is a web application that serves as a web engine to perform image classification queries over a user-defined image dataset. It is based on the original application created by VGG to perform visual searches over a large dataset of images from BBC News.
Type Of Technology Webtool/Application 
Year Produced 2017 
Open Source License? Yes  
Impact This software performs the following functions: performs queries by entering text or an image; automatically downloads training images from Google; performs automatic training, classification and ranking of results; automatically caches query results; provides a user management interface; allows further query refinement; enables users to create curated queries using their own training images; and is capable of data ingestion, i.e. users can search their own dataset and define their own metadata.
URL http://www.robots.ox.ac.uk/~vgg/software/vic/
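The on-the-fly workflow described above (gather positive examples for a query, train a classifier, rank the dataset) can be illustrated with the following Python sketch using scikit-learn. It is a simplification written for this report, not VIC's implementation: feature extraction, image download and caching are omitted, and all arrays are assumed to be precomputed CNN descriptors.

    # Illustrative sketch of on-the-fly classification and ranking: train a
    # linear classifier from a few positive images and a fixed negative pool,
    # then rank a pre-featurised dataset by classifier score.
    import numpy as np
    from sklearn.svm import LinearSVC

    def rank_dataset(query_feats, negative_feats, dataset_feats):
        """Return dataset indices sorted from most to least relevant to the query."""
        X = np.vstack([query_feats, negative_feats])
        y = np.concatenate([np.ones(len(query_feats)), np.zeros(len(negative_feats))])
        clf = LinearSVC(C=1.0).fit(X, y)
        scores = clf.decision_function(dataset_feats)
        return np.argsort(-scores)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)            # toy descriptors for demonstration only
        query = rng.normal(size=(5, 128))
        negatives = rng.normal(size=(200, 128))
        dataset = rng.normal(size=(1000, 128))
        print(rank_dataset(query, negatives, dataset)[:10])   # top-10 ranked indices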
 
Title VGG Image Search Engine (VISE) 
Description VISE is a tool that can be used to search a large dataset for images that match any part of a given image. 
Type Of Technology Webtool/Application 
Year Produced 2017 
Open Source License? Yes  
Impact This standalone application can be used to make a large collection of images searchable by using image regions as a query. 
URL http://www.robots.ox.ac.uk/~vgg/software/vise/
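The kind of query VISE supports (finding images that match a given image region) can be illustrated with a generic local-feature matcher in Python using OpenCV. This sketch uses SIFT with a ratio test purely to illustrate region-based instance matching; it is not VISE's own indexing pipeline, which is engineered for very large collections, and the file names are hypothetical.

    # Illustrative sketch of region-based instance matching with local features;
    # not VISE's implementation. Requires opencv-python >= 4.4 for SIFT.
    import cv2

    def count_region_matches(query_region, candidate, ratio=0.75):
        """Count ratio-test SIFT matches between a query region and a candidate image."""
        sift = cv2.SIFT_create()
        _, desc_q = sift.detectAndCompute(query_region, None)
        _, desc_c = sift.detectAndCompute(candidate, None)
        if desc_q is None or desc_c is None:
            return 0
        matches = cv2.BFMatcher().knnMatch(desc_q, desc_c, k=2)
        return sum(1 for pair in matches
                   if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

    if __name__ == "__main__":
        query = cv2.imread("query_region.png", cv2.IMREAD_GRAYSCALE)       # hypothetical files
        candidates = ["page1.png", "page2.png"]
        scores = {p: count_region_matches(query, cv2.imread(p, cv2.IMREAD_GRAYSCALE))
                  for p in candidates}
        print(sorted(scores.items(), key=lambda kv: -kv[1]))               # best matches first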
 
Description AVinDH workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact DH is the largest Digital Humanities conference, and attracts a largely academic audience, at all levels. It's diverse, and gives a good sense of what people are up to in all fields of the humanities that involve computers.
Year(s) Of Engagement Activity 2017
URL https://avindhsig.wordpress.com/workshop-2017-montreal/
 
Description Bodleian Conservators 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact Bodleian conservators work on books, prints, photographs, papyri and other media. They often take digital pictures for the purposes of recording condition or analysis.
Year(s) Of Engagement Activity 2017
 
Description Oxford Digital Humanities Summer School 2017 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact OXDHSS is the second-largest digital humanities summer school in the world and the largest in Europe. Now based in Engineering Science (through OeRC), it attracts c. 250 students to Oxford to take one of several week-long courses, together with lectures and posters. We presented on the general 'Introduction to Digital Humanities' course, which is the biggest and broadest, and is intended to give an introduction to the field(s) for managers, librarians, IT staff or academics who are interested in knowing more or getting their institution involved.
Year(s) Of Engagement Activity 2017
URL http://digital.humanities.ox.ac.uk/dhoxss/2017/
 
Description UCL Digital Humanities Seminar 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Department of Information Studies research seminar
Year(s) Of Engagement Activity 2017
 
Description Blocks, Plates Stones Conference 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Possibly the first-ever conference on printing surfaces (blocks, plates and stones) dealing with historical research, conservation issues and artistic possibilities with collections.
Year(s) Of Engagement Activity 2017
URL https://www.ies.sas.ac.uk/events/conferences/previous-conferences/blocks-plates-stones-conference
 
Description Blocks, Plates Stones ECR training day 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Training day for ECRs in printing history
Year(s) Of Engagement Activity 2017
URL http://www.academia.edu/33139617/CALL_FOR_APPLICATIONS_ECR_Training_Day_Using_Historical_Matrices_an...
 
Description Bodleian Digital Scholarship Research Uncovered lecture 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Professional Practitioners
Results and Impact Bodleian Centre for Digital Scholarship hosts a lecture series, open to all.
Year(s) Of Engagement Activity 2017
 
Description British Library Digital Labs Symposium 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Showcase of the British Library's digital collections and projects, in the form of presentations and posters.
Year(s) Of Engagement Activity 2017
URL http://blogs.bl.uk/digital-scholarship/2017/09/bl-labs-symposium-2017-mon-30-oct-book-your-place-now...
 
Description Iberian books workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Small workshop for digital humanities project
Year(s) Of Engagement Activity 2017
 
Description Invited talk at CVPR 2017 workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave an invited talk as part of a workshop at CVPR 2017 (Hawaii) which aimed to give computer vision researchers an overview of problems and state-of-the-art research in medical image analysis. There was a little follow-up, but discussion was quite passive (we might have been competing with the weather on the last day of the meeting!).
Year(s) Of Engagement Activity 2017
 
Description Invited talk at International Ultrasonics Symposium 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Invited keynote talk in the inaugural session on machine learning in ultrasonics. The session was packed, reflecting interest not only in my group's work but in machine learning more generally.
Year(s) Of Engagement Activity 2017
 
Description Keynote speaker 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Over 230 academics and industry experts attended MEIbioeng 16 to meet, share, debate and learn from their peers.

The annual conference supported the discussion of newly developing biomedical engineering research areas alongside established work that contributes towards the common goal of improving human health and well-being via the development of new healthcare technologies.
Year(s) Of Engagement Activity 2016
URL http://www.ibme.ox.ac.uk/news-events/events/meibioeng-16
 
Description MISS Summer School 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited lecturer at international summer school.
Year(s) Of Engagement Activity 2016
 
Description McGill University Library 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact This was a private presentation to Special Collections librarians, library IT staff and a couple of academics interested in some collections of early printed material, and rare printers' woodblocks.
Year(s) Of Engagement Activity 2017
 
Description Microsoft Postgraduate Summer School 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited talk at Microsoft Summer school
Year(s) Of Engagement Activity 2016
 
Description Oxford Humanities Division Poster Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact This was a showcase of posters run by the Training Officer of the University's Humanities Division, aimed particularly at ECRs.
Year(s) Of Engagement Activity 2017
 
Description Queen Elizabeth Prize schools event at the Science Museum 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Schools
Results and Impact I was a panel member, along with the awardees of the 2017 Queen Elizabeth Prize (Eric Fossum, Michael Tompsett and Nobukazu Teranishi), discussing with a schools audience their inventions related to digital sensors/imaging and how the digital imaging world has changed. The event was held at the Science Museum. My invitation stemmed from involvement in the nominations panel for the QEP as well as my research interest in digital image analysis.
Year(s) Of Engagement Activity 2017
 
Description SEAHA 2017 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Postgraduate students
Results and Impact SEAHA is a doctoral training partnership in heritage science, the members of which are the University of Oxford, the University of Brighton and UCL. Their annual conference is their main plenary gathering, attended by 100+ members of the consortium (students, their supervisors, researchers and professional staff), with exhibits from companies and organisations. The background of attendees ranges from art conservation to materials science, or a mixture.
Year(s) Of Engagement Activity 2017
URL http://www.seaha-cdt.ac.uk/activities/events/seaha17/
 
Description Samsung Satellite Symposium, European Congress of Radiology 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The talk was part of a lunch symposium presenting the latest research in AI applied to radiology.
Year(s) Of Engagement Activity 2017
 
Description School talk 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Schools
Results and Impact Talk at Headington School as the keynote speaker for their Year of Science.
Year(s) Of Engagement Activity 2017
 
Description Show and Tell Event - Computer Vision Software - 14 June 2016 (Oxford) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact A main aim of the Seebibyte Project is to transfer the latest computer vision methods into other disciplines and industry. We want the software developed in this project to be taken up and used widely by people working in industry and other academic disciplines, and are organizing regular Show and Tell events to demonstrate new software developed by project researchers. A main outcome from these events will be new inter-disciplinary collaborations. As a first step, Transfer and Application Projects (TAPs) are developed with new collaborators.

This first Show and Tell event was restricted to participants from the University of Oxford only, in particular researchers from the Department of Engineering Science, the Department of Earth Sciences and the Department of Materials. Future events will also target external participants, including from industry. The June 14 event focused on four topics: 1) Counting; 2) Landmark Detection (Keypoint Detection); 3) Segmentation (Region Labelling); and 4) Text Spotting. Further information for each of the topics - including the event presentations and new software demos - is available on the event webpage (www.seebibyte.org/June14.html). The event received positive feedback from participants and has resulted in several new TAPs being completed. It is anticipated that some of these will lead to new collaborations.
Year(s) Of Engagement Activity 2016
URL http://www.seebibyte.org/June14.html
 
Description Speaker - International Women in Engineering Day 2017 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Schools
Results and Impact Secondary school girls from a number of local schools visited the department to see different areas of engineering and do some simple activities related to engineering. I gave a short talk at tea on some of the emerging areas of engineering ('wacky engineering') and talked a little about my own research and my field. Feedback from schools was positive for the whole event.
Year(s) Of Engagement Activity 2017
 
Description Teaching in Summer School ICVSS 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact This International Computer Vision Summer School aims to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision and Machine Learning. The participants benefited from direct interaction and discussions with world leaders in Computer Vision.
Year(s) Of Engagement Activity 2015
URL http://iplab.dmi.unict.it/icvss2015/
 
Description Teaching in Summer School MISS 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact The Medical Imaging Summer School is the largest summer school in its field. Around 200 students attended the school and received training in the science and technology of medical imaging. Students expressed interest in future research in the area.
Year(s) Of Engagement Activity 2016
URL http://iplab.dmi.unict.it/miss/index.html
 
Description Teaching in Summer School iV&L 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact The iV&L Training School aims at bringing together Vision and Language researchers and to provide the opportunity for cross-disciplinary teaching and learning. Over 80 students attended the summer school and received training in deep learning across two disciplines, Computer Vision and Natural Language Processing. Students expressed interest in future research in the area.
Year(s) Of Engagement Activity 2016
URL http://ivl-net.eu/ivl-net-training-school-2016/
 
Description The 2017 IEEE-EURASIP Summer School on Signal Processing (S3P-2017) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact The 2017 IEEE-EURASIP Summer School on Signal Processing (S3P-2017) is the 5th edition of a successful series, organized by the IEEE SPS Italy Chapter and the National Telecommunications and Information Technologies Group - GTTI, with the sponsorship of IEEE (S3P program) and EURASIP (Seasonal School Co-Sponsorship agreement). S3P-2017 represents a stimulating environment where top international scientists in signal processing and related disciplines share their ideas on fundamental and ground-breaking methodologies in the field. It provides PhD students and researchers with a unique networking opportunity and a possibility of interaction with leading scientists.

The theme of this 5th edition is "Signal Processing meets Deep Learning". Deep machine learning is changing the rules in the signal and multimedia processing field. On the other hand, signal processing methods and tools are fundamental for machine learning. Time for these worlds to meet.
Year(s) Of Engagement Activity 2017
URL http://www.grip.unina.it/s3p2017/