Vision for the Future

Lead Research Organisation: University of Bristol
Department Name: Electrical and Electronic Engineering


Approximately half the cortical matter in the human brain is involved in processing visual information, more than for all of the other senses combined. This reflects the importance of vision for function and survival, but also explains its role in entertaining us, training us and informing our decision-making processes. However, we still understand relatively little about visual processes in naturalistic environments, which is why this remains such an important research area across a broad range of applications.

Vision is important: YouTube video accounts for 25% of all internet traffic and, in the US, Netflix accounts for 33% of peak traffic; by 2016, Cisco predicts that video will account for 54% of all traffic (86% if P2P video distribution is included), with total IP traffic predicted to reach 1.3 zettabytes. Mobile network operators predict a 1000-fold increase in demand over the next 10 years, driven primarily by video traffic. At the other extreme, the mammalian eye enables cheetahs to maintain stable locomotion over natural terrain at over 80 km/h, and humans to thread a needle with sub-millimetre accuracy or to recognise subtle changes in facial expression. The mantis shrimp uses 12 colour channels (humans use only three) together with polarisation vision, and it possesses the fastest and most accurate strike in the animal kingdom.

Vision is thus central to the way animals interact with the world. A deeper understanding of the fundamental aspects of perception and visual processing in humans and animals, across the domains of immersion, movement and visual search, coupled with innovation in engineering solutions, is therefore essential in delivering future technology related to consumer, internet, robotic and environmental monitoring applications.

This project will conduct research across three interdisciplinary strands: Visual Immersion, Finding and Hiding Things, and Vision in Motion. These are key to understanding how humans interact with the visual world. By drawing on knowledge and closely coupled research across computer science, electronic engineering, psychology and biology, we will deliver radically new approaches to, and solutions in, the design of vision-based technology.

We recognise that it is critical to balance high-risk research with the coherence of the underlying programme. We will thus instigate a new sandpit approach to ideas generation, in which researchers can develop their own mini-projects. This will be aligned with a risk management process using peer review to ensure that the full potential of the grant is realised. The management team will periodically, and when needed, seek independent advice through a BVI Advisory Panel.

Our PDRAs will benefit in ways beyond those on conventional grants. They will, for example, be mentored to:
i) engage in ideas-generation workshops, defining and delivering their own mini-projects within the programme;
ii) develop these into full proposals (grants or fellowships) where appropriate;
iii) undertake secondments to international collaborator organisations, enabling them to gain experience of different research cultures;
iv) lead the organisation of key events such as the BVI Young Researchers' Colloquium;
v) be trained as STEM ambassadors to engage in outreach activities and public engagement; and
vi) explore exploitation of their intellectual property.
Finally, we will closely link BVI's doctoral training activities to this grant, providing greater research leverage and experience of research supervision for our staff.

Planned Impact

Vision is central to the way humans interact with the world. A deeper understanding of the fundamental aspects of perception and visual processing in humans and animals will lead to innovation in engineering solutions. Our programme will therefore be instrumental in delivering future technology related to consumer, internet, robotic and environmental monitoring applications.

Through a closely coupled research programme across engineering, computer science, psychology and biology, this grant will deliver in each of these areas. First, the research will be relevant to research communities across disciplines: it will benefit psychologists by generating realistic real-world scenarios, data sets and results that help us to understand the way humans interact with the visual world; it will benefit biologists by providing visual models for understanding the evolution and ecology of vision; and it will benefit engineers and computer scientists by providing radically new approaches to solving technology problems.

The research in Visual Immersion will be of great commercial significance to the ICT community in terms of future video acquisition formats, new compression methods, new quality assessment methods and measures of immersion. This will inform the future of immersive consumer products - 'beyond 3D'. In particular, the project will deliver an understanding of the complex interactions between video parameters in delivering a more immersive visual experience. This will be relevant not only to entertainment, but also to visual analytics, surveillance and healthcare. Our results are likely to inform future international activity in video format standardisation in film, broadcast and internet delivery, moving thinking from 'end-to-end solutions' to the 'creative continuum', in which content creation, production, delivery, display, consumption and quality assessment are all intimately interrelated. Our work will also help us to understand how humans interact with complex environments, or are distracted by environmental changes - leading to better design of interfaces for task-based operations and hence improved situational awareness.

In terms of Finding and Hiding Things, impact will be created in areas such as visual camouflage patterns, offering a principled design framework that takes account of environmental factors and mission characteristics. It will also provide enhanced means of detecting difficult targets, through a better understanding of the interactions between task and environment. It will provide benefits in application areas such as situational awareness and stealthy operation - highly relevant to surveillance applications. The work will also contribute in related areas such as the environmental visual impact of entities such as windfarms, buildings or pylons, making the research relevant to energy providers and civil engineers. Finally, visual interaction with complex scenes is a key enabler for the 'internet of things'.

In the case of Vision in Motion, the research will deliver impact in the design of truly autonomous machines, exploiting our understanding of the way in which animals and humans adapt to the environment. The beneficiaries in this case will be organisations in the commercial, domestic and surveillance robotics or UAV sectors. Furthermore, understanding the interactions between motion and camouflage has widespread relevance to environmental applications and to anomaly detection. Through a better understanding of the effects of motion we can design improved visual acquisition methods, better consumer interfaces, displays and content formats. This will be of broad benefit across the ICT sector, with particular relevance to designers of visual interfaces and to content providers in the entertainment sector. Furthermore, the research will benefit those working in healthcare - for example in rehabilitation or in the design of point-of-care systems incorporating exocentric vision systems.
Description This grant has been active for two years of its 5-year duration. It has already provided partial funding for 10 RAs: Fan Zhang, Felix Mercer Moss, Gaurav Malhotra, David Gibson, Shelby Temple, Pui Anantrasirichai, Paul Hill, Ilse Daly, Steve Hinde and Henry Knowles.

The early phase of the grant has already made significant contributions:

1. Insight into the presentation durations required for subjective video assessment - showing that durations can be reduced from the current 10 seconds to 3 seconds without statistically significant changes to mean opinion scores.
2. Modifications to the rate-quality optimisation process for future video standards, using a content-adaptive approach to the selection of Lagrangian multipliers and QP offsets.
3. The development of a new transform (the undecimated dual-tree complex wavelet transform) that is showing enhanced performance in image feature description for classification and detection applications.
4. New computational camera technology that supports flexible parameterisation during video acquisition.
5. New perceptual approaches to image denoising and image fusion. This includes analysis of the frequently used contrast sensitivity function in the context of spectral weighting. We have found that alternatives to the usual Gabor-based CSF may be more appropriate when processing images formed using other transforms (e.g. wavelets).
6. We have undertaken a study, in collaboration with BBC R&D, that has analysed all video content broadcast on the BBC over the past 7 years. Preliminary results show that we can define descriptors that characterise the content.
7. Temple has demonstrated the link between human sensitivity to polarised light and macular degeneration, leading to the spin-out company Azul Optics.
8. Salience and priority estimation for the human visual system during locomotion. We have created a robust priority map from probabilities of gaze fixations. Texture-based and CNN-based features are employed, with two training streams for two fixation types: i) homogeneous areas for safe foot placement, and ii) areas around edges for awareness. The results show significant promise and much higher correlation with eye-tracking data than the prior art.
9. New ways of mitigating the effects of atmospheric turbulence on long range imagery.
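The content-adaptive Lagrangian selection in finding 2 builds on the standard rate-distortion optimisation principle used in video encoders, where each block is coded with the mode minimising the cost J = D + lambda * R. The sketch below illustrates that principle only; the function name and candidate (distortion, rate) values are illustrative assumptions, not the project's actual encoder or data.

```python
# Minimal sketch of Lagrangian rate-distortion mode selection, the
# principle behind rate-quality optimisation in block-based encoders.
# Candidate (distortion, rate) pairs are hypothetical, not encoder output.

def select_mode(candidates, lam):
    """Pick the coding mode minimising J = D + lam * R.

    candidates: dict mapping mode name -> (distortion, rate_in_bits)
    lam: Lagrangian multiplier trading distortion against rate
    """
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])

# Hypothetical candidates for one block: intra is accurate but costly,
# skip is cheap but distorted, inter sits in between.
modes = {"intra": (10.0, 400), "inter": (25.0, 120), "skip": (90.0, 5)}

best_low_lambda = select_mode(modes, lam=0.01)   # small lambda favours quality
best_high_lambda = select_mode(modes, lam=1.0)   # large lambda favours low bitrate
```

A content-adaptive scheme of the kind described would vary lam (and the associated QP offset) per sequence or per frame according to content characteristics, rather than using a single fixed value.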
Exploitation Route 1. Modification of video presentation durations in future subjective testing methodologies
2. Contributions to future compression standards such as H.266
3. Enhanced more robust features (based on the Undecimated DT-CWT) for classification and detection applications.
4. Computational camera technology being developed in collaboration with DSTL.
5. New perceptual approaches to image denoising and image fusion that are currently under development
6. Our work on Content classification will be useful in defining reduced test data sets and for online media analytics.
7. Our work on predicting macular degeneration is now being taken to larger scale trials and could have major impact.
8. Salience and priority estimation for the human visual system during locomotion could have major impact on the design of control and navigation systems for autonomous vehicles, especially biped robots.
9. Mitigating the effects of atmospheric turbulence in long-range surveillance video.
Sectors Aerospace, Defence and Marine; Creative Economy; Digital/Communication/Information Technologies (including Software); Education; Healthcare; Leisure Activities, including Sports, Recreation and Tourism; Manufacturing, including Industrial Biotechnology; Retail; Security and Diplomacy; Transport

Description Exploited in spin-out Azul Optics
First Year Of Impact 2016
Sector Healthcare
Impact Types Societal, Economic

Title BV High frame rate database 
Description Collection of high frame rate clips with associated metadata for testing and developing future immersive video formats 
Type Of Material Database/Collection of data 
Year Produced 2015 
Provided To Others? Yes  
Impact None at present 
Title BVI Texture database 
Description Collection of static and dynamic video textures for compression testing 
Type Of Material Database/Collection of data 
Year Produced 2015 
Provided To Others? Yes  
Impact Used by several groups around the world 
Description BBC Immersive Technology Laboratory 
Organisation British Broadcasting Corporation (BBC)
Department BBC Research & Development
Country United Kingdom of Great Britain & Northern Ireland (UK) 
Sector Public 
PI Contribution High dynamic range coding optimisation for HEVC; perceptual video compression results; REDUX database analytics
Collaborator Contribution Provision of REDUX; support for PhD students; collaboration on perceptual quantisation; secondment of BBC employees
Impact New method of perceptual quantisation for HDR HEVC; analysis of BBC archive in terms of feature classification
Start Year 2012
Description Immersive Assessments 
Organisation Aarhus University
Country Denmark, Kingdom of 
Sector Academic/University 
PI Contribution Collaboration with Aarhus University on the development of immersive assessment methods.
Collaborator Contribution Ongoing collaboration
Impact None yet - ongoing
Start Year 2016
Company Name Azul Optics 
Description Exploiting polarisation vision in humans to detect age-related macular degeneration. Established by BVI Platform Grant researcher Shelby Temple, based on work partially completed under the grant. 
Year Established 2016 
Impact None yet - product under development
Description Keynote: EPSRC VIHM Workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote lecture, EPSRC Vision in Humans and Machines Workshop, Bath, 2016
Year(s) Of Engagement Activity 2016
Description Keynote: IET ISP 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote lecture, IET ISP: Perceptual Video Coding
Year(s) Of Engagement Activity 2015