Heard and Scene

Lead Research Organisation: University College London
Department Name: Ear Institute

Technical Summary

Sensory substitution is already used to reduce the impact of blindness: both touch and sound have been used as replacement senses by transforming visual images. We propose to develop a system that would enable a blind person to 'see' a live scene by means of sound stimulation. We will explore a multi-resolution system in which software maps video camera input from visual fields to sound fields, allowing a blind person to interrogate their surroundings. We also propose to explore augmented cognition by translating visual invariants (i.e. fixed objects) into auditory invariants. Some research teams have reported that a unified representation of visual patterns can be acquired through hearing. Ultimately, it is suggested that training with the proposed system could lead to a useful form of substituted vision with truly visual sensations.
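As an illustration of the kind of visual-to-auditory mapping described above, the sketch below converts a greyscale image into a sound signal by scanning its columns left to right, assigning higher image rows to higher tone frequencies and letting pixel brightness set each tone's amplitude. The mapping itself, the frequency range and the timing constants are assumptions chosen for this example only; they are not the parameters proposed in the project.

# A minimal sketch of one possible visual-to-auditory mapping (illustration only).
# Each image column becomes a short time slice, each row is assigned a sine tone
# whose frequency rises with height in the image, and pixel brightness weights
# that tone's amplitude. All constants below are assumed for the example.
import numpy as np

SAMPLE_RATE = 22050           # audio samples per second (assumed)
SLICE_SECONDS = 0.05          # duration of the sound for one image column (assumed)
F_MIN, F_MAX = 200.0, 4000.0  # frequency range spanned by image rows (assumed)


def image_to_sound(image: np.ndarray) -> np.ndarray:
    """Map a 2-D greyscale image (rows x cols, values in [0, 1]) to a mono
    audio signal by scanning columns left to right."""
    rows, cols = image.shape
    # Higher rows in the image map to higher frequencies.
    freqs = np.linspace(F_MIN, F_MAX, rows)[::-1]
    n = int(SAMPLE_RATE * SLICE_SECONDS)
    t = np.arange(n) / SAMPLE_RATE
    # Precompute one sine tone per image row: shape (rows, n).
    tones = np.sin(2.0 * np.pi * freqs[:, None] * t[None, :])
    slices = []
    for c in range(cols):
        # Brightness of each pixel in the column weights its row's tone.
        mix = image[:, c][:, None] * tones
        slices.append(mix.sum(axis=0))
    audio = np.concatenate(slices)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio


if __name__ == "__main__":
    # A toy "scene": a bright diagonal line on a dark background.
    img = np.zeros((64, 64))
    np.fill_diagonal(img, 1.0)
    signal = image_to_sound(img)
    print(signal.shape, signal.dtype)

The diagonal line in the toy scene would be heard as a tone sweeping steadily in frequency over time; a real system of the kind proposed would refine this mapping, and the project's hypothesis is that the brain can learn such parameters rather than having them fixed in advance.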
We will investigate the potential of the human brain, with training, to substitute auditory input for visual input. It has already been shown that this is possible; however, in that work the sensory substitution relied on high-level processing on a PC and occurred through high-level cerebral processing using auditory parameters predefined by the investigators. We will investigate the potential of the human brain to use low-level, subconscious processing to produce its own sensory substitution parameters.
