Natural dynamic scenes and human vision

Lead Research Organisation: University of Cambridge
Department Name: Physiology Development and Neuroscience

Abstract

There is a widespread, and reasonable, assumption that our visual system has developed to see and interpret our natural surroundings. This has been investigated in the past by studying the relationship between the structure of natural scenes and the properties of the biological systems looking at those scenes. However, much past work has rested on two assumptions known to be false: first, that nothing moves in the scenes, and second, that observers do not move their eyes. The reason for this over-simplification has been the technical difficulty of adding these important variables. We have assembled a team of researchers in two universities (Bristol and Cambridge) who, together, have the necessary expertise to take on this task. We will collect a large number of images and video clips of outdoor scenes containing natural movement, such as leaves rustling in the wind or objects in motion. We will study how the information in these scenes is encoded by the visual brain, both with theoretical models and with experiments in which human observers view the video clips and decide whether successive clips are the same as, or different from, each other.

What makes the modelling challenging is the second issue to be explored here, namely that we move our gaze to a particular place because only one part of our retina, the fovea, has high spatial resolution. The eye movements that we make provide us with sequential information about a scene. We want our model to capture (a) how this information is taken up while the eye is looking at one place (when it is said to be fixating), and (b) how it is combined with information from previous and future fixation locations. In other words, how does vision integrate information across eye movements?

The novelty of this work is manifold. First, we will calibrate video and still cameras to obtain accurate images of natural scenes from which we can work out how human cones at each location would respond when looking at the scene. There has not yet been a study of the time-varying properties of natural scenes, and we will provide a resource both for this study and for other interested researchers. Furthermore, we will study the interplay between fixation and information uptake and storage in human vision, for natural scenes. Finally, we will develop a computational model capable of predicting to what extent human observers will notice differences between two scenes when they move their eyes and when the scenes contain movement. Such a model is useful for many applications, such as measuring whether people will notice errors in the quality of graphics images, and for estimating the degree to which people will notice the presence of camouflaged objects in a scene.
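As an illustration of the cone-response calculation mentioned above, the following Python sketch converts a calibrated, sRGB-encoded photograph into approximate L, M and S cone excitations via CIE XYZ and the Hunt-Pointer-Estevez matrix. The matrices and the assumption of an sRGB-calibrated camera are standard textbook values, offered only as a minimal sketch rather than the project's own calibration pipeline.

    import numpy as np

    # Linear sRGB -> CIE XYZ (D65 white point), standard IEC 61966-2-1 primaries.
    RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])

    # XYZ -> LMS cone excitations (Hunt-Pointer-Estevez, D65-normalised).
    XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                           [-0.2263, 1.1653,  0.0457],
                           [ 0.0000, 0.0000,  0.9182]])

    def srgb_to_linear(rgb):
        """Undo the sRGB transfer function (values assumed in 0..1)."""
        rgb = np.asarray(rgb, dtype=float)
        return np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

    def cone_responses(srgb_image):
        """Approximate L, M, S cone excitations for each pixel of an sRGB image."""
        linear = srgb_to_linear(srgb_image)   # shape (H, W, 3)
        xyz = linear @ RGB_TO_XYZ.T
        return xyz @ XYZ_TO_LMS.T

    # Example: cone responses for a single mid-grey pixel.
    print(cone_responses(np.array([[[0.5, 0.5, 0.5]]])))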

 
Description The project has sought to develop rigorous experimental protocols for studying human perception of features in photographs of natural scenes. This is a big step beyond the usual laboratory studies of human vision, which use simplified visual stimuli that do not relate directly to everyday visual experience. From our experiments, we have attempted to build a computer model of how individual nerve cells in the human brain respond to features in natural scenes, so that we can predict which features or changes will be visible and which will be invisible. In this project, we have specifically compared direct, foveal vision with peripheral vision, which conveys less detail; we have examined how people evaluate differences between movie clips of similar events; and we have extended the study of "visual search" to understand how people search for natural targets.
Exploitation Route This has clear applications in defence, for evaluating whether particular camouflage schemes might be more or less effective than others. From working (indirectly) with Network Rail on the conspicuity of railside signage, we expect that the software package could be extended to study the visibility of signage in many sectors, or the visibility of safety-critical situations such as railway crossings. Our work has been published in peer-reviewed scientific journals, but we have also developed software packages for use by commercial partners who may wish to estimate how well human observers might see items of importance. The software examines colour photographs of scenes and then attempts to model how the visual coding in the brain would perceive the presence or absence of key features.
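The software package itself is not reproduced here, but the Python sketch below illustrates the general idea of a conspicuity metric of this kind: the difference between two greyscale images is band-pass filtered and then down-weighted with distance from a fixation point, mimicking the loss of detail in peripheral vision. The filter scales, the eccentricity fall-off and the pooling rule are illustrative assumptions, not the model developed in the project.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def foveated_visibility(img_a, img_b, fixation, half_res_deg=2.5, px_per_deg=40.0):
        """Crude visibility score for the difference between two greyscale images.

        Local difference energy is attenuated with eccentricity from `fixation`
        (row, col), mimicking the fall-off of resolution away from the fovea.
        All parameter values are illustrative assumptions.
        """
        # Band-pass the difference image (difference of Gaussians) to approximate
        # the contrast signal carried by early visual neurons.
        diff = img_a.astype(float) - img_b.astype(float)
        band = gaussian_filter(diff, 1.0) - gaussian_filter(diff, 4.0)

        # Eccentricity (in degrees of visual angle) of every pixel from fixation.
        rows, cols = np.indices(diff.shape)
        ecc = np.hypot(rows - fixation[0], cols - fixation[1]) / px_per_deg

        # Sensitivity falls with eccentricity, halving every `half_res_deg` degrees.
        sensitivity = 1.0 / (1.0 + ecc / half_res_deg)

        # Pool the weighted contrast energy into a single visibility score.
        return float(np.sqrt(np.mean((band * sensitivity) ** 2)))

    # Example: a faint square target scores higher when fixated than in the periphery.
    scene = np.zeros((200, 200))
    target = scene.copy()
    target[90:110, 140:160] += 0.2
    print(foveated_visibility(scene, target, fixation=(100, 150)))  # fixating the target
    print(foveated_visibility(scene, target, fixation=(100, 20)))   # target in periphery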
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Environment; Transport

 
Description Advisory work to Dstl and Network Rail on the visual conspicuity of targets and signage. Our vision research on complex natural scenes led to the development of a computer model of human perception of natural scenes. That software can be used to estimate the conspicuity of, e.g., critical rail signs, and we can advise on their placement and organisation.
Beneficiaries MoD/Dstl; Network Rail
Contribution Method Advisory work and development of software algorithms to guide the placement of signage or to study the conspicuity of targets
First Year Of Impact 2008
Sector Aerospace, Defence and Marine; Environment
Impact Types Societal

 
Description Commercial collaboration
Organisation Environmental Resources Management
Country Global 
Sector Private 
PI Contribution Advisory work, via ERM, on signal and sign placement for Network Rail
Collaborator Contribution Evaluation of the software package
Impact No formal outputs
Start Year 2007