Vision for the Future

Lead Research Organisation: University of Bristol
Department Name: Electrical and Electronic Engineering

Abstract

Approximately half the cortical matter in the human brain is involved in processing visual information, more than for all other senses combined. This reflects the importance of vision for function and survival, but also explains its role in entertaining us, training us and informing our decision-making processes. However, we still understand relatively little about visual processes in naturalistic environments, and this is why it is such an important research area across such a broad range of applications.

Vision is important: YouTube video accounts for 25% of all internet traffic and, in the US, Netflix accounts for 33% of peak traffic; by 2016, Cisco predicts that video will account for 54% of all traffic (86% if P2P video distribution is included), with total IP traffic predicted to reach 1.3 zettabytes. Mobile network operators predict a 1000-fold increase in demand over the next 10 years, driven primarily by video traffic. At the other extreme, the mammalian eye is used by cheetahs to maintain stable locomotion over natural terrain at over 80 km/h, and by humans to thread a needle with sub-millimetre accuracy or to recognise subtle changes in facial expression. The mantis shrimp uses 12 colour channels (humans use only three) together with polarisation vision, and it possesses the fastest and most accurate strike in the animal kingdom.

Vision is thus central to the way animals interact with the world. A deeper understanding of the fundamental aspects of perception and visual processing in humans and animals, across the domains of immersion, movement and visual search, coupled with innovation in engineering solutions, is therefore essential in delivering future technology related to consumer, internet, robotic and environmental monitoring applications.

This project will conduct research across three interdisciplinary strands: Visual Immersion, Finding and Hiding Things, and Vision in Motion. These are key to understanding how humans interact with the visual world. By drawing on knowledge and closely coupled research across computer science, electronic engineering, psychology and biology, we will deliver radically new approaches to, and solutions in, the design of vision-based technology.

We recognise that it is critical to balance high-risk research with the coherence of the underlying programme. We will thus instigate a new sandpit approach to ideas generation in which researchers can develop their own mini-projects. This will be aligned with a risk management process using peer review to ensure that the full potential of the grant is realised. The management team will periodically, and when needed, seek independent advice through a BVI Advisory Panel.

Our PDRAs will benefit in ways beyond those offered by conventional grants. They will, for example, be mentored to:
i) engage in ideas generation workshops, defining and delivering their own mini-projects within the programme;
ii) develop these into full proposals (grants or fellowships) if appropriate;
iii) undertake secondments to international collaborator organisations, enabling them to gain experience of different research cultures;
iv) undertake secondments to international collaborator organisations, enabling them to gain experience of different research cultures — lead the organisation of key events such as the BVI Young Researchers' Colloquium;
v) be trained as STEM ambassadors to engage in outreach activities and public engagement; and
vi) explore exploitation of their intellectual property.
Finally, we will closely link BVI's doctoral training activities to this grant, providing greater research leverage and experience of research supervision for our staff.

Planned Impact

Vision is central to the way humans interact with the world. A deeper understanding of the fundamental aspects of perception and visual processing in humans and animals will lead to innovation in engineering solutions. Our programme will therefore be instrumental in delivering future technology related to consumer, internet, robotic and environmental monitoring applications.

Through a closely coupled research programme across engineering, computer science, psychology and biology, this grant will deliver in each of these areas. Firstly, this research will be relevant to research communities across disciplines: it will benefit psychologists in generating realistic real-world scenarios, datasets and results that help us understand the way humans interact with the visual world; it will benefit biologists in providing visual models for understanding the evolution and ecology of vision; and it will benefit engineers and computer scientists in providing radically new approaches to solving technology problems.

The research in Visual Immersion will be of great commercial significance to the ICT community in terms of future video acquisition formats, new compression methods, new quality assessment methods and immersive measurements. This will inform the future of immersive consumer products - 'beyond 3D'. In particular, the project will deliver an understanding of the complex interactions between video parameters in delivering a more immersive visual experience. This will be relevant not only to entertainment, but also to visual analytics, surveillance and healthcare. Our results are likely to inform future international activity in video format standardisation in film, broadcast and internet delivery, moving thinking from 'end-to-end solutions' to the 'creative continuum', where content creation, production, delivery, display, consumption and quality assessment are all intimately interrelated. Our work will also help us to understand how humans interact with complex environments, or are distracted by environmental changes, leading to better design of interfaces for task-based operations and hence improved situational awareness.

In terms of Finding and Hiding Things, impact will be created in areas such as visual camouflage patterns, offering a principled design framework which takes account of environmental factors and mission characteristics. It will also provide enhanced means of detecting difficult targets, through better understanding of the interactions between task and environment. It will provide benefits in application areas such as situational awareness and stealthy operation - highly relevant to surveillance applications. The work will also contribute in related areas such as the environmental visual impact of entities such as windfarms, buildings or pylons, making the research relevant to energy providers and civil engineers. Finally, visual interaction with complex scenes is a key enabler for the 'internet of things'.

In the case of Vision in Motion, the research will deliver impact in the design of truly autonomous machines, exploiting our understanding of the way in which animals and humans adapt to the environment. The beneficiaries in this case will be organisations in the commercial, domestic and surveillance robotics or UAV sectors. Furthermore, understanding the interactions between motion and camouflage has widespread relevance to environmental applications and to anomaly detection. Through a better understanding of the effects of motion, we can design improved visual acquisition methods, better consumer interfaces, displays and content formats. This will be of broad benefit across the ICT sector, with particular relevance to designers of visual interfaces and to content providers in the entertainment sector. Furthermore, the research will benefit those working in healthcare - for example, in rehabilitation or in the design of point-of-care systems incorporating exocentric vision systems.
 
Title Visualization 1.mp4 
Description Representation of the perception of Haidinger's brushes as observed under white linearly polarized light rotating clockwise. 
Type Of Art Film/Video/Animation 
Year Produced 2019 
URL https://opticapublishing.figshare.com/articles/media/Visualization_1_mp4/7358156
 
Description i) Spatio-temporal resampling combined with super-resolution upsampling as a basis for perceptual video compression; ii) the benefits of using polarization imagery for feature extraction; iii) how video content features can be used to predict rate-quality performance; iv) new methods for B-line extraction from lung ultrasound; v) the use of polarization vision in humans as an indicator of age-related macular degeneration; vi) the limits of temporal resolution in high-frame-rate video acquisition; vii) new insights into animal camouflage and ecology; viii) understanding how people view AI-created art.
Exploitation Route The ViSTRA codec has been submitted for consideration by MPEG; polarisation in AMD is the basis for the start-up Azul Optics.
Sectors Aerospace, Defence and Marine; Creative Economy; Digital/Communication/Information Technologies (including Software); Education; Healthcare; Leisure Activities, including Sports, Recreation and Tourism; Manufacturing, including Industrial Biotechnology; Retail; Security and Diplomacy; Transport

URL http://www.bristol.ac.uk/vision-institute
 
Description The work funded by the Platform Grant Vision for the Future has had impact in a number of different areas. i) Research by Dr Shelby Temple and Nick Roberts has been exploited in the spin-off Azul Optics. ii) Building on the work of the grant and its relevance to the Creative Industries, Bull led a consortium of 30 companies in the West of England creative sector, linked to major organisations across the globe, which won UKRI Strength in Places funding (MyWorld) (£30m, 2021-26). Of some 280 bids, only seven were awarded in the first round. Strength in Places is a devolved research funding mechanism that recognises the strength and potential of regional academic-industry partnerships and the catalytic effect of strategic funding in key sectors. Please see the MyWorld award return in Researchfish. iii) Funding for Zhang on training databases for deep video compression led to the BVI-DVC database, whose development and performance analysis commenced using Platform Grant resources; it has now been adopted by MPEG as a primary training database for the development of future deep video compression tools and standards. iv) Work in the grant has contributed to underpinning a unique long-term strategic relationship with Netflix in Los Gatos, now in its seventh year.
First Year Of Impact 2020
Sector Creative Economy, Digital/Communication/Information Technologies (including Software), Healthcare
Impact Types Societal, Economic

 
Description 5G Edge-XR
Amount £1,486,000 (GBP)
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 08/2020 
End 04/2022
 
Description BBSRC / RSE Innovation Fellowship
Amount £45,000 (GBP)
Organisation Biotechnology and Biological Sciences Research Council (BBSRC) 
Sector Public
Country United Kingdom
Start 03/2016 
End 04/2017
 
Description EPSRC IAA Immersive Measurements
Amount £20,000 (GBP)
Organisation University of Bristol 
Sector Academic/University
Country United Kingdom
Start 07/2017 
End 12/2017
 
Description EPSRC IAA ViSTRA
Amount £20,000 (GBP)
Organisation University of Bristol 
Sector Academic/University
Country United Kingdom
Start 12/2017 
End 10/2018
 
Description ISCF Bristol and Bath Creative Industries Cluster
Amount £5,700,000 (GBP)
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 09/2018 
End 03/2023
 
Description Imaging Magmatic Architecture using Strain Tomography (MAST)
Amount £1,433,000 (GBP)
Organisation European Research Council (ERC) 
Sector Public
Country Belgium
Start 08/2021 
End 08/2026
 
Description Impact Acceleration
Amount £150,000 (GBP)
Organisation University of Bristol 
Sector Academic/University
Country United Kingdom
Start 08/2015 
End 09/2016
 
Description Intelligent Video Compression for AoM
Amount $340,000 (USD)
Organisation Netflix, Inc. 
Sector Private
Country United States
Start 03/2022 
End 02/2024
 
Description Learning Optimal Deep Video Compression
Amount £90,000 (GBP)
Organisation Defence Science & Technology Laboratory (DSTL) 
Sector Public
Country United Kingdom
Start 03/2020 
End 01/2021
 
Description Leverhulme Early Career Fellowship - A. Katsenou
Amount £90,000 (GBP)
Organisation The Leverhulme Trust 
Sector Charity/Non Profit
Country United Kingdom
Start 03/2018 
End 02/2021
 
Description MyWorld
Amount £29,900,000 (GBP)
Organisation United Kingdom Research and Innovation 
Department Research England
Sector Public
Country United Kingdom
Start 03/2021 
End 03/2026
 
Description Netflix
Amount £50,000 (GBP)
Organisation Netflix, Inc. 
Sector Private
Country United States
Start 03/2018 
End 03/2019
 
Description Netflix Perceptual Video Coding
Amount £58,000 (GBP)
Organisation Netflix, Inc. 
Sector Private
Country United States
Start 01/2018 
End 03/2019
 
Description Perceptually optimised video compression
Amount £198,000 (GBP)
Organisation Netflix, Inc. 
Sector Private
Country United States
Start 03/2019 
End 03/2021
 
Description Vision-based object recognition under atmospheric distortions
Amount £340,000 (GBP)
Organisation Defence Science & Technology Laboratory (DSTL) 
Sector Public
Country United Kingdom
Start 03/2023 
End 03/2025
 
Description YouTube Faculty Research Award
Amount £40,000 (GBP)
Organisation YouTube 
Sector Private
Country United States
Start 06/2017 
End 09/2020
 
Title BVI-DVC 
Description Deep Learning Dataset plus extensive metadata for video compression 
Type Of Material Improvements to research infrastructure 
Year Produced 2021 
Provided To Others? Yes  
Impact Adopted by MPEG as a primary database for training deep video compression systems and the development of future standards. 
URL https://research-information.bris.ac.uk/en/datasets/bvi-dvc
 
Title HABnet 
Description A software tool for the detection and prediction of harmful algal bloom (HAB) events based on datacube analysis and machine learning. The code generates classification scores for HAB databases using two basic classification methods: 1. extract features from each frame with a ConvNet and pass the sequence to an RNN in a separate network; 2. extract features from each frame with a ConvNet and pass the sequence to an MLP/RF system (a minimal sketch of both pipelines is given below). Datacube extraction software tools have also been developed and are available at https://github.com/csprh/extractData 
Type Of Material Improvements to research infrastructure 
Year Produced 2019 
Provided To Others? Yes  
Impact Disseminated to other partners and used to validate the developed machine learning methods within the Arabian Gulf. 
URL http://github.com/csprh/modelHAB
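As an illustration of the two classification pipelines described above, the following minimal PyTorch sketch shows per-frame ConvNet feature extraction feeding either an RNN (method 1) or a flattened MLP (method 2; a random forest could stand in for the MLP head). All layer sizes, the toy feature extractor and the class count are assumptions for illustration, not the actual HABnet configuration.

    import torch
    import torch.nn as nn

    class FrameEncoder(nn.Module):
        # Toy ConvNet mapping one RGB frame to a feature vector
        # (a stand-in for HABnet's datacube feature extractor).
        def __init__(self, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):                       # x: (N, 3, H, W)
            return self.fc(self.conv(x).flatten(1))

    class ConvNetRNN(nn.Module):
        # Method 1: per-frame features fed as a sequence to an RNN.
        def __init__(self, feat_dim=128, n_classes=2):
            super().__init__()
            self.enc = FrameEncoder(feat_dim)
            self.rnn = nn.GRU(feat_dim, 64, batch_first=True)
            self.head = nn.Linear(64, n_classes)

        def forward(self, clips):                   # clips: (B, T, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.enc(clips.flatten(0, 1)).view(b, t, -1)
            _, h = self.rnn(feats)                  # h: (1, B, 64)
            return self.head(h[-1])                 # (B, n_classes) scores

    class ConvNetMLP(nn.Module):
        # Method 2: per-frame features concatenated and classified by an MLP;
        # expects clips with exactly n_frames frames.
        def __init__(self, feat_dim=128, n_frames=8, n_classes=2):
            super().__init__()
            self.enc = FrameEncoder(feat_dim)
            self.mlp = nn.Sequential(nn.Linear(feat_dim * n_frames, 64),
                                     nn.ReLU(), nn.Linear(64, n_classes))

        def forward(self, clips):
            b = clips.shape[0]
            feats = self.enc(clips.flatten(0, 1)).view(b, -1)
            return self.mlp(feats)

    scores = ConvNetRNN()(torch.randn(4, 8, 3, 64, 64))   # (4, 2) classification scores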
 
Title A Training Database for Deep Video Compression 
Description Deep learning methods are increasingly being applied in the optimisation of video compression algorithms and can achieve significantly enhanced coding gains, compared to conventional approaches. Such approaches often employ Convolutional Neural Networks (CNNs) which are trained on databases with relatively limited content coverage. BVI-DVC is a new extensive and representative video database for training CNN-based coding tools, which contains 800 sequences at various spatial resolutions from 270p to 2160p. Experimental results show that the database produces significant improvements in terms of coding gains over three existing (commonly used) image/video training databases. 
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
URL https://data.bris.ac.uk/data/dataset/3hj4t64fkbrgn2ghwp9en4vhtn/
 
Title BV High frame rate database 
Description Collection of high frame rate clips with associated metadata for testing and developing future immersive video formats 
Type Of Material Database/Collection of data 
Year Produced 2015 
Provided To Others? Yes  
Impact None at present 
URL http://data.bris.ac.uk/data/dataset/k8bfn0qsj9fs1rwnc2x75z6t7
 
Title BVI Texture database 
Description Collection of static and dynamic video textures for compression testing 
Type Of Material Database/Collection of data 
Year Produced 2015 
Provided To Others? Yes  
Impact Used by several groups around the world 
URL http://data.bris.ac.uk/datasets/1if54ya4xpph81fbo1gkpk5kk4/
 
Title BVI-DVC Part 1 
Description Deep learning methods are increasingly being applied in the optimisation of video compression algorithms and can achieve significantly enhanced coding gains, compared to conventional approaches. Such approaches often employ Convolutional Neural Networks (CNNs) which are trained on databases with relatively limited content coverage. BVI-DVC is a new extensive and representative video database for training CNN-based coding tools, which contains 772 sequences at various spatial resolutions from 270p to 2160p. Experimental results show that the database produces significant improvements in terms of coding gains over three existing (commonly used) image/video training databases. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact Database has been adopted by MPEG as primary training/validation database for deep video compression. 
URL https://data.bris.ac.uk/data/dataset/3h0hduxrq4awq2ffvhabjzbzi1/
 
Title BVI-Lowlight: Image 
Description Although image denoising algorithms have attracted significant research attention, surprisingly few have been proposed for, or evaluated on, noise from imagery acquired under real low-light conditions. Moreover, noise characteristics are often assumed to be spatially invariant, leading to edges and textures being distorted after denoising. Here, we introduce a novel topological loss function based on persistent homology. We compare its performance across popular denoising architectures and loss functions, training the networks on our new comprehensive dataset of natural images captured in low-light conditions, BVI-LOWLIGHT-IMAGE (an illustrative sketch of the topological comparison follows below). Analysis reveals that this approach outperforms existing methods, adapting well to complex structures and suppressing common artifacts. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Publication: https://www.sciencedirect.com/science/article/pii/S016516842300155X 
URL http://www.doi.org/10.1016/j.sigpro.2023.109081
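To make the topological idea concrete, the sketch below uses the GUDHI library to compare the 0-dimensional persistence diagrams of a denoised patch and its reference via the bottleneck distance; a large value indicates that denoising has altered the image's connectivity structure (e.g. smeared edges or merged texture components). This is an illustrative, non-differentiable stand-in: the actual BVI-LOWLIGHT loss is a differentiable persistent-homology term used during network training, and the cubical-complex construction and patch sizes here are assumptions.

    import numpy as np
    import gudhi

    def persistence_0d(img: np.ndarray) -> np.ndarray:
        # Treat the image as a cubical complex filtered by pixel intensity
        # and return its finite 0-dimensional (birth, death) pairs.
        cc = gudhi.CubicalComplex(top_dimensional_cells=img)
        cc.compute_persistence()
        d = cc.persistence_intervals_in_dimension(0)
        return d[np.isfinite(d).all(axis=1)]   # drop the essential (infinite) class

    def topological_discrepancy(denoised: np.ndarray, reference: np.ndarray) -> float:
        # Bottleneck distance between the two diagrams: how far apart the
        # images are in terms of connected-component structure.
        return gudhi.bottleneck_distance(persistence_0d(denoised),
                                         persistence_0d(reference))

    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    noisy = np.clip(clean + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
    print(topological_discrepancy(noisy, clean))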
 
Title BVI-SR Database 
Description BVI-SR contains 24 unique video sequences at a range of spatial resolutions up to UHD-1 (3840×2160). These sequences were used as the basis for a large-scale subjective experiment exploring the relationship between visual quality and spatial resolution when using three distinct spatial adaptation filters (including a CNN-based super-resolution method). The results demonstrate that while spatial resolution has a significant impact on mean opinion scores (MOS), no significant reduction in visual quality between UHD-1 and HD resolutions is reported for the super-resolution method. A selection of image quality metrics were benchmarked on the subjective evaluations, and analysis indicates that VIF offers the best performance. This dataset is published in support of the paper published by IEEE at https://doi.org/10.1109/ICIP.2018.8451225 and is available from the University of Bristol's repository at http://hdl.handle.net/1983/99d89816-e64c-4c75-bb8e-5c7e3015cae7 
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
URL https://data.bris.ac.uk/data/dataset/1gqlebyalf4ha25k228qxh5rqz/
 
Title BVI-SynTex 
Description BVI-SynTex was generated using a computer-generated imagery (CGI) environment. It contains 186 sequences clustered into three different texture types. A subset of the BVI-SynTex dataset was selected to perform a subjective evaluation of compression using the MPEG HEVC codec (HM 16.20). The publicly available BVI-SynTex dataset contains all source sequences and the objective and subjective analysis results, providing a valuable resource for the research community. Note: part of this dataset was published in 2018 under DOI 10.5523/bris.24imj6d9s27me2n211cf6jxkio and can be found at https://data.bris.ac.uk/data/dataset/24imj6d9s27me2n211cf6jxkio 
Type Of Material Database/Collection of data 
Year Produced 2019 
Provided To Others? Yes  
 
Title Data and data preparation code from Crowe et al 2021 
Description Raw data and data preparation files for two experiments from Motion Silencing in Dynamic Visual Search for an Orientation Change, Crowe, Howard, Gilchrist, Kent (2021). 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact None to date 
URL https://data.bris.ac.uk/data/dataset/1ayzsmttl78pg2wymtkevg2zld/
 
Title HomTex 
Description A database of homogeneous texture video clips 
Type Of Material Database/Collection of data 
Year Produced 2017 
Provided To Others? No  
Impact Led to feature-based content coding methods and the secondment of Mariana Afonso and Felix Mercer Moss to Netflix. Contributed to a new strategic relationship with Netflix. 
URL https://data.bris.ac.uk/data/dataset/1h2kpxmxdhccf1gbi2pmvga6qp
 
Title VIL: Synthetic Video Texture Dataset - SynTex 
Description This dataset contains 186 Full High Definition (FHD) video texture sequences at 60 fps in YUV 4:2:0 format. The dataset was created using Unreal Engine 4 (UE4) and contains versions of the same video textures with different parameters, such as granularity, velocity and camera position. 
Type Of Material Database/Collection of data 
Year Produced 2018 
Provided To Others? Yes  
 
Description BBC data collection 
Organisation British Broadcasting Corporation (BBC)
Department BBC Research & Development
Country United Kingdom 
Sector Public 
PI Contribution AI workflow development - first joint denoising, colourisation and enhancement framework
Collaborator Contribution Filming and collation of low-light video content at various light levels and parameter settings for a range of realistic scenarios.
Impact Dataset of low-light video content used in the joint enhancement and denoising of natural history content
Start Year 2020
 
Description Drone Simulation Virtual Production 
Organisation British Broadcasting Corporation (BBC)
Department BBC Research & Development
Country United Kingdom 
Sector Public 
PI Contribution Creation of drone simulator
Collaborator Contribution Use cases, evaluation of platform, filmmaking
Impact None to date
Start Year 2020
 
Description Google Faculty Research Award 
Organisation YouTube
Country United States 
Sector Private 
PI Contribution Development and enhancement of the ViSTRA codec - intelligent perceptual resampling and super-resolution.
Collaborator Contribution Financial contribution.
Impact ViSTRA patent application and MPEG submission
Start Year 2017
 
Description Immersive Assessments 
Organisation Aarhus University
Country Denmark 
Sector Academic/University 
PI Contribution Collaboration with Aarhus University on the development of immersive assessment methods.
Collaborator Contribution Ongoing collaboration
Impact None yet - ongoing
Start Year 2016
 
Description Learning Optimal Deep Video Compression 
Organisation Thales Group
Department Thales UK Limited
Country United Kingdom 
Sector Private 
PI Contribution Collaboration on research following on from the Platform Grant on deep video compression, linked to Thales airborne platforms, and on the DASA research grant Vision 2020.
Collaborator Contribution Collaboration on deep video compression research linked to Thales airborne platforms and the DASA research grant Vision 2020; provision of datasets and requirements.
Impact Award of DASA grant.
Start Year 2019
 
Description Netflix Phase 3 
Organisation Netflix, Inc.
Country United States 
Sector Private 
PI Contribution Recruitment underway; research into intelligent tools for the AoM AV2 video compression standard and into the practical implementation of our enhanced VMAF visual quality metric.
Collaborator Contribution Support of research into intelligent tools for AoM AV2 video compression standard.
Impact Recruitment and detailed planning underway
Start Year 2022
 
Description Netflix collaboration 
Organisation Netflix, Inc.
Country United States 
Sector Private 
PI Contribution Video codec research, perceptual metrics and dynamic optimisation; from 2022, research into intelligent tools for the AoM AV2 standard.
Collaborator Contribution Data set access, shared resources and expertise.
Impact Characterisation and enhancement of the perceptual VMAF metric (a measurement sketch follows below); performance comparisons of AV1 vs HEVC; a new feature-based dynamic optimisation method.
Start Year 2018
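As a pointer for reproducing the kind of VMAF measurement referenced above, here is a minimal Python sketch invoking FFmpeg's libvmaf filter; the file names are placeholders, and an FFmpeg build with libvmaf enabled is assumed.

    import subprocess

    # Compare a distorted encode against its pristine reference using FFmpeg's
    # libvmaf filter; the VMAF score appears in FFmpeg's log output.
    subprocess.run(
        ["ffmpeg",
         "-i", "distorted.mp4",    # first input: the encode under test
         "-i", "reference.mp4",    # second input: the reference clip
         "-lavfi", "libvmaf",      # compute VMAF between the two inputs
         "-f", "null", "-"],       # discard the video output
        check=True)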
 
Description Tencent: Compression of User Generated Video 
Organisation Tencent
Department Tencent America LLC
Country United States 
Sector Private 
PI Contribution Started in 2022; researcher recruited to start work in March 2022.
Collaborator Contribution Contribution of funding to support researcher over 3 years with in kind management and technical support from Tencent.
Impact Papers submitted - see Publications section.
Start Year 2022
 
Title Video Processing Method (ViSTRA) 
Description Optimisation of a video codec using in-loop perceptual metrics and super-resolution upscaling (see the sketch below). 
IP Reference P123219GB 
Protection Patent application published
Year Protection Granted
Licensed No
Impact Submitted to MPEG Beyond HEVC
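For context, a minimal sketch of the resample-then-restore principle behind ViSTRA: frames are spatially downsampled ahead of a standard codec when content analysis predicts that resolution can be traded for bitrate, and full resolution is recovered after decoding, where a trained super-resolution CNN would replace the bicubic upscaling used here. The decision signal, threshold and OpenCV-based resampling are illustrative assumptions, not the patented method.

    import cv2
    import numpy as np

    def pre_encode(frame: np.ndarray, predicted_gain: float, threshold: float = 0.5):
        # Downsample ahead of the codec if content analysis predicts a
        # rate-quality gain; the boolean flag is carried as side information.
        if predicted_gain > threshold:
            h, w = frame.shape[:2]
            return cv2.resize(frame, (w // 2, h // 2),
                              interpolation=cv2.INTER_AREA), True
        return frame, False

    def post_decode(frame: np.ndarray, was_downsampled: bool, full_size):
        # Restore full resolution after decoding; a super-resolution CNN
        # would replace this bicubic upscaling in practice.
        if was_downsampled:
            return cv2.resize(frame, full_size, interpolation=cv2.INTER_CUBIC)
        return frame

    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    low, flag = pre_encode(frame, predicted_gain=0.8)
    # ... the low-resolution frame passes through a standard codec (e.g. HEVC) ...
    restored = post_decode(low, flag, (1920, 1080))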
 
Company Name Azul Optics 
Description Azul Optics develops a device that health professionals can use to measure the density of macular pigments in the eye, which help protect it from sunlight, in order to assess a patient's risk of developing age-related macular degeneration (AMD). 
Year Established 2016 
Impact None yet - product under development
Website https://azuloptics.com/
 
Description AHRC Beyond Conference Keynote 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Keynote Lecture at AHRC Beyond Conference to launch the Creative Industries ISCF collaboration.
Year(s) Of Engagement Activity 2018
 
Description BBC Digital Cities 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact BBC Digital Cities Masterclass, 2020: "MyWorld, R&D and the Visual Future". Professor Bull was an expert panel member.
Year(s) Of Engagement Activity 2020
 
Description CogX panel 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact CogX Createch 2020, Expert Panel on Emerging Technologies in the Creative Industries. Professor Bull was an expert panel member with J. Silver (Director, Digital Catapult), Andrew Thompson (Chief Executive, AHRC), Emma Lloyd (Operations Director, Sky) and Rebecca Gregory-Clarke (Director, StoryFutures Academy).
Year(s) Of Engagement Activity 2020
URL https://createch.io/
 
Description Discussion Panel Beyond 2019 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Panel discussion - AHRC Beyond 2019 Conference
Year(s) Of Engagement Activity 2019
 
Description IEEE Picture Coding Symposium 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The Picture Coding Symposium (PCS) is an international forum devoted to advances in visual data coding. Established in 1969, it has the longest history of any conference in this area. The 35th event in the series, PCS 2021, was hosted in Bristol, UK. Professor Bull was the General Chair of this prestigious event. Papers were presented by MyWorld researchers Bull, Zhang and Duolikun Danier.
Year(s) Of Engagement Activity 2021
URL http://pcs2021.org
 
Description Keynote lecture at conference 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited keynote lecture, Chinese Ornithological Congress, Xi'an, China, 22-25 September 2015: "What camouflage tells us about avian perception and cognition"
Year(s) Of Engagement Activity 2017
 
Description Keynote: EPSRC VIHM Workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote lecture, EPSRC Vision in Humans and Machines Workshop, Bath, 2016
Year(s) Of Engagement Activity 2016
 
Description Keynote: IET ISP 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote lecture, IET ISP: Perceptual Video Coding
Year(s) Of Engagement Activity 2015
 
Description Picture Coding Symposium 2021 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The Picture Coding Symposium (PCS) is hosted and organised by Bristol Vision Institute at the University of Bristol. PCS is the pioneer conference in this area and, since 1969, has provided the most engaging forum for the visual coding community, attracting world-leading academic and industrial experts from across the globe.
Year(s) Of Engagement Activity 2021
URL http://pcs2021.org
 
Description Presentation at a military-themed workshop of the National Academies of Science, Engineering, and Medicine, in Washington DC. 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact Invited to speak at a workshop on Bioinspired Signature Management on 16 September 2019, run by the Board on Army Research and Development (BOARD) of the National Academies of Science, Engineering, and Medicine, in Washington DC. My presentation remit was blue skies research on animal camouflage, with a view to possible military applications. The outcomes of the meeting are classified.
Year(s) Of Engagement Activity 2019
URL https://sites.nationalacademies.org/DEPS/board/index.htm
 
Description Public engagement activity - Festival of Nature 2017 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact "Nature expert" at event at the 2017 Festival of Nature, a 2-day free public event organised by the the Bristol Natural History Consortium (http://www.bnhc.org.uk/festival-of-nature/). I took part in "Nature Roulette" talking about animal coloration.
Year(s) Of Engagement Activity 2017
URL http://www.bnhc.org.uk/nature-roulette-will-meet/
 
Description Talk at local school 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Schools
Results and Impact Talk to GCSE and lower 6th form students on animal defensive coloration, followed by presentation and discussion on careers in biology.
Year(s) Of Engagement Activity 2018
 
Description Talk at local school 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Schools
Results and Impact Talk to GCSE and lower 6th form students on animal camouflage, followed by presentation and discussion on careers in biology.
Year(s) Of Engagement Activity 2017
 
Description Talk on animal defensive coloration at the University of Groningen, The Netherlands 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited research talk to graduate students, undergraduates and postdocs at the School of Life Sciences, University of Groningen, The Netherlands
Year(s) Of Engagement Activity 2018