The Spatial Integration and Segmentation of Luminance Contrast in Human Spatial Vision

Lead Research Organisation: Aston University
Department Name: Sch of Life and Health Sciences

Abstract

When we open our eyes, we see without effort. Our visual experience begins with the mechanics of focussing the image on the back of the eye; but to make sense of that image (to perceive) our brains must identify the various parts of the image and understand their relations. Just like a silicon-based computer, the brain performs millions of computations, so quickly and effectively that we never sense them. But what are the computations needed to recognise, say, your mother; to segment an object from its background; or even to appreciate that one part of an image belongs with another? The starting point for this analysis is the distribution of light levels across the retinal image, which we can think of as a set of pixels. Interesting parts of the image (e.g. object boundaries) occur at regions of change, where neighbouring pixels have very different values. These regions are identified by neurons in primary visual cortex (V1), which compute differences between adjacent pixel values to build a neural image of local contrasts: the 'contrast image'. These contrast-defined local image features are then combined across retinal space at later stages of the visual hierarchy to represent elongated contours (e.g. the branches of a tree) and textured surfaces (e.g. a ploughed field) in what is sometimes known as a 'feature map'.

One major goal in vision science is to construct accurate computer models of the visual system so that computers can be made to process images in the same way as human brains. But there has been a major obstacle. Experiments confirm that feature integration (summing) is involved in constructing the 'feature map', but they also imply that contrast is not summed beyond the neighbourhood of each local contrast processor in V1. How can local feature representations be summed without also summing the underlying contrast codes?

We achieved a breakthrough on this problem by designing novel images containing patches of contrast distributed over retinal space (Meese & Summers, 2007). These allowed us to measure the contrast integration process while controlling for the confounding effects of neural noise and retinal inhomogeneity that have plagued previous studies. By analysing the relation between visual performance (an observer's probability of detecting the target stimulus) and stimulus contrast, we showed that contrast is summed over substantial regions of the retina after all, but that under normal viewing conditions its effects go unnoticed because of a counterbalancing blanket suppression from a system of contrast gain control. In other words, contrast summation is organised very differently from the way first proposed. These results have dispelled orthodoxy and prompt a thorough re-evaluation of our understanding of contrast and feature integration in human vision.

In the proposed project we will use our new type of stimulus and modelling framework to investigate the computational rules that control the point-by-point integration of information in the 'contrast image'. In particular, our working hypothesis proposes that the visual system performs this integration in a way that maximises the signal-to-noise ratio. But what directs and limits the signal integration? And how does this relate to the grouping rules of Gestalt psychology, and to other results on contour integration and contrast perception?
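To make the idea of a 'contrast image' concrete, the sketch below computes a crude local-contrast map from a pixel array: each pixel's luminance is compared with the mean of its four neighbours and normalised by that mean (a Weber-like contrast). This is an illustrative toy in Python, not the model used in the project; the function name and the neighbourhood scheme are our own assumptions.

```python
import numpy as np

def contrast_image(luminance):
    """Toy 'contrast image': each pixel's deviation from the mean of its
    four nearest neighbours, normalised by that mean (Weber-like).
    Illustrative only; not the project's actual V1 model."""
    L = np.asarray(luminance, dtype=float)
    padded = np.pad(L, 1, mode="edge")           # replicate border pixels
    # Mean of the four nearest neighbours of every pixel.
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return (L - neighbours) / (neighbours + 1e-9)  # avoid divide-by-zero

# Example: values far from zero mark luminance edges.
img = np.zeros((8, 8)); img[:, 4:] = 1.0         # a vertical step edge
c = contrast_image(img)                          # large |c| on either side of the edge
```

Real V1 receptive fields are better approximated by oriented, band-pass filters (e.g. Gabor functions), but this pixel-difference version captures the point made above: local contrast is a difference between neighbouring light levels.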
Through careful stimulus manipulations, our 19+ experiments will address these issues, mainly using normal healthy observers, but we will also study the disrupted amblyopic visual system as a way of further probing the system's organisation. Overall, this work will illuminate the links between pixel-based contrast responses and later region-based symbolic feature analyses. Only with these links in place can we begin to appreciate how the brain transforms the retinal image into the subjective experience of seeing.
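The 'summation counterbalanced by suppression' account described above lends itself to a compact illustration. In the toy model below, excitation pooled over space is divided by a suppressive gain pool that also grows with stimulus area. This is a generic divisive gain-control sketch in the spirit of that account, not the project's fitted model; the exponents and constant are illustrative choices.

```python
import numpy as np

def pooled_response(c, p=2.4, q=2.0, z=1.0):
    """Spatially pooled contrast response with divisive gain control.
    c : array of local contrast values across the stimulus region.
    p, q, z : illustrative excitatory/suppressive exponents and constant."""
    c = np.abs(np.asarray(c, dtype=float))
    excitation = np.sum(c ** p)        # contrast summed over retinal space
    suppression = z + np.sum(c ** q)   # blanket suppression from the gain pool
    return excitation / suppression

# Near detection threshold (low contrast) the constant z dominates the
# suppressive pool, so doubling the stimulus area nearly doubles the
# response: strong summation. At high contrast the suppressive pool
# grows with area too, and the benefit of summation largely cancels.
for c0 in (0.05, 0.5):
    small = np.full(16, c0)            # 16 contrast patches
    large = np.full(32, c0)            # 32 patches at the same contrast
    print(c0, pooled_response(large) / pooled_response(small))
```

In models of this family, area summation is therefore visible at threshold but hidden under normal (high-contrast) viewing, which is one way to reconcile the two sets of findings summarised in the abstract.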
 
Description The image on the back of our eye is broken down into tiny fragments by the initial stages of visual processing in the primary visual cortex. This decomposition is an important part of the initial analysis of the image, but objects, textures, surfaces and so on extend over much larger areas than each of the fragments, so how is it all sewn back together? This project focussed on one particular part of this problem: how image contrast is analysed by the brain across those millions of fragments. The results of psychophysical investigation and computational analysis have shed new light on this question, showing how a neural hierarchy underpins our perception of image contrast.
Exploitation Route Our results will be of interest to anyone working on the perception and detection of image contrast.
Sectors Education, Other

 
Description Our results have influenced work in several academic research laboratories across the globe.
First Year Of Impact 2009
Sector Education, Other