Future Colour Imaging

Lead Research Organisation: University of East Anglia
Department Name: Computing Sciences

Abstract

Colour imaging is part of everyday life. Whether we watch TV, browse content on our tablets or phones, or use apps and software in our work, the content we see on our screens is the result of decades of colour and imaging research.

In the future, the challenge is to understand more about the content of images. As an example, in autonomous driving we wish to build a platform that sees the road independently of the atmospheric conditions: we don't want to crash when we are driving in fog. It is well known that an image that records the near-infrared signal is much sharper (compared to RGB) in foggy conditions. What is near infrared? The visible spectrum has a natural rainbow order: Violet, Indigo, Blue, Green, Yellow, Orange and Red. Infrared is the 'next colour' after red, the one we can't quite see. Image fusion can be used to map the RGB+NIR signal to a fused RGB counterpart that we can see. Through image fusion the same detail will be present in foggy or non-foggy conditions. Advantageously, image fusion is a tool that will allow non-visible information to be incorporated and deployed in existing RGB-based AI scene interpretation systems with minimal retraining.
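
To make the idea concrete, here is a minimal sketch (our illustration only, not the project's method) of one simple way to fuse an NIR channel into an RGB image: mix the NIR signal, which keeps its detail in fog, into the image's luminance while preserving colour. The function name, the Rec. 709 luminance weights and the blending rule are all assumptions for illustration.

```python
import numpy as np

def fuse_rgb_nir(rgb, nir, alpha=0.5):
    """Blend NIR detail into the luminance of an RGB image.

    rgb : float array, shape (H, W, 3), values in [0, 1]
    nir : float array, shape (H, W),    values in [0, 1]
    """
    # Luminance of the RGB image (Rec. 709 weights).
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # New luminance: a weighted mix of RGB luminance and NIR.
    fused_lum = (1 - alpha) * lum + alpha * nir
    # Rescale each pixel so its luminance matches fused_lum while
    # its chromaticity (the colour itself) is preserved.
    scale = fused_lum / np.maximum(lum, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```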

Our project begins with the Spectral Edge image fusion method, the current leading technique. This method - like most image fusion algorithms - works by combining edges from the four channels (RGB+NIR) to make a fused RGB-only 3-channel edge map. The edges are then transformed (the technical term is reintegrated) back to form a colour image. Unfortunately, and necessarily, the reintegrated images often have defects such as bright halos around edges or smearing. We argue that these defects are a direct consequence of how 'edges' are defined. In our research we will - based on a surprising mathematical insight - develop a new definition of edge, quite a bold thing to do after 50 years of image processing research! By construction, reintegrating the new edges will produce far fewer halo and smearing artefacts.
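
The pipeline can be sketched in a few lines. The toy implementation below is an assumption-laden stand-in, not the Spectral Edge algorithm: it fuses each RGB channel with NIR in the gradient ('edge') domain, keeping whichever gradient is stronger, then reintegrates by iteratively solving a Poisson equation. Halos and smearing typically arise at exactly this reintegration step.

```python
import numpy as np

def grad(c):
    """Forward-difference gradients (zero beyond the far border)."""
    gx = np.zeros_like(c)
    gy = np.zeros_like(c)
    gx[:, :-1] = c[:, 1:] - c[:, :-1]
    gy[:-1, :] = c[1:, :] - c[:-1, :]
    return gx, gy

def fuse_channel(c, nir, iters=500):
    """Fuse one channel with NIR in the gradient domain, then
    reintegrate by solving the Poisson equation lap(u) = div(g)."""
    gx_c, gy_c = grad(c)
    gx_n, gy_n = grad(nir)
    # Edge-combination rule (a simple assumption): at each pixel
    # keep whichever gradient, channel or NIR, is stronger.
    pick = gx_n**2 + gy_n**2 > gx_c**2 + gy_c**2
    gx = np.where(pick, gx_n, gx_c)
    gy = np.where(pick, gy_n, gy_c)
    # Divergence of the fused gradient field (backward differences).
    div = gx.copy()
    div[:, 1:] -= gx[:, :-1]
    div += gy
    div[1:, :] -= gy[:-1, :]
    # Jacobi iterations for lap(u) = div with Neumann boundaries,
    # warm-started from the original channel.
    u = c.astype(float)
    for _ in range(iters):
        p = np.pad(u, 1, mode='edge')
        u = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - div) / 4.0
    return np.clip(u, 0.0, 1.0)

def fuse_rgb_nir_gradients(rgb, nir):
    """Fuse each of R, G, B with NIR; inputs are floats in [0, 1]."""
    return np.dstack([fuse_channel(rgb[..., k], nir) for k in range(3)])
```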

We will then use our improved edge representation and improved image fusion algorithm to make better looking images. These might be the fused images themselves: wouldn't it be great to have smart binoculars that allow us to see more detail when it is rainy, or in a landscape that is blurred by distance? However, we also believe the future of photography, in general, is content-based and that image fusion will help us determine the content of an image. As an example, when we take a picture at sunset, the shadows in the scene are very blue, but outside the shadows the light is very warm (orangish). The best image reproductions for these scenes involve manually and differentially processing shadow and non-shadow regions. Here, we seek to find the illumination content in an image automatically. Then, in a second step, we will develop a new content-based framework for manipulating images so that, for this sunset example, we don't need to edit the photos ourselves.
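
As a concrete illustration of such differential processing, the sketch below (our toy example: the shadow mask is supplied as an input, whereas the project would estimate it automatically from the illumination content) white-balances shadow and non-shadow regions separately, warming the bluish shadows without disturbing the sunlit areas.

```python
import numpy as np

def region_white_balance(img, shadow_mask, strength=1.0):
    """Apply a separate grey-world white balance to shadow and
    non-shadow regions.

    img         : float array, shape (H, W, 3), values in [0, 1]
    shadow_mask : boolean array, shape (H, W), True in shadow
    strength    : 0 leaves the image unchanged, 1 is full correction
    """
    out = img.astype(float)
    for mask in (shadow_mask, ~shadow_mask):
        region = out[mask]                 # (N, 3) pixels in this region
        # Grey-world gains: map each channel's mean to the region mean.
        gains = region.mean() / np.maximum(region.mean(axis=0), 1e-6)
        out[mask] = region * (1 + strength * (gains - 1))
    return np.clip(out, 0.0, 1.0)
```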

In complementary work, we are also interested in helping people see better. Indeed, there is a lot of research that demonstrates that coloured filters can help mitigate visual stress. Coloured filters are used in Dyslexia (sometimes leading to dramatic improvements in reading speed) and there is now blue absorbing glass which will reduces the blue light coming from a tablet display (since blue light at night tends to keep you awake). Much of the prior art in this area is 'direct'. We find a filter to directly impact on how we see (simply, if we put a yellow filter in front of the eye then everything looks more yellow). Our idea is to deign filters that are related to the tasks we need to solve. For the problem of matching colours we will design filters so that if you suffer from colour-blindness you will be able to colour match as if you had normal colour vision. We will also develop indirect solutions for the 'blue light' problem and visual stress.
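
To give a flavour of the underlying computation (a hedged sketch: the wavelength sampling, function names and comments are our assumptions, not the project's design procedure), colour matching reduces to integrating the light reaching the eye against the cone sensitivities. A filter multiplies into that integral, which is what lets a well-chosen transmittance reshape a colour-deficient observer's matches.

```python
import numpy as np

# Example wavelength sampling: 400-700 nm in 10 nm steps (31 samples).
wavelengths = np.arange(400, 701, 10)

def cone_responses(light, reflectance, cones, filter_t=None):
    """(L, M, S) cone responses to a surface seen under a light,
    optionally viewed through a coloured filter.

    light       : spectral power of the illuminant, shape (31,)
    reflectance : surface reflectance in [0, 1],    shape (31,)
    cones       : cone sensitivities as columns,    shape (31, 3)
    filter_t    : filter transmittance in [0, 1],   shape (31,) or None
    """
    t = np.ones_like(light) if filter_t is None else filter_t
    stimulus = light * reflectance * t   # spectrum reaching the retina
    return stimulus @ cones              # discrete integration

# Task-based filter design then amounts to searching for a filter_t
# that brings a deficient observer's responses (a deficient cone set)
# into better agreement with a normal observer's matches across many
# lights and surfaces.
```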

Planned Impact

The Future Colour Imaging proposal comprises four projects in the areas of sensing, seeing, understanding and experiencing light. In the seeing and sensing projects we propose the novel idea that, to deliver the pictures we want to see, or equally to understand our own visual perception, we need to build non-standard cameras, i.e. cameras that are not built in analogy to our own visual system but rather designed from different first principles.

The possible applications of the technology are almost too numerous to mention, but range from getting better pictures on your smartphone to making better judgments about the health or otherwise of crops in the field, two topics directly addressed in this proposal. A key aspect of understanding light is being able to determine when light changes in images, e.g. at shadow edges, and this will also help in preferred photo processing. More generally, understanding light means estimating the light field everywhere in a scene. Progress here will find wide application in automotive (in driverless vehicles, where we want systems to continue to work in all lighting conditions) and augmented reality (where objects composited into images need to look as if they are lit in the same way as the rest of the scene). Finally, there is now a large corpus of research reporting that how we see, perform in tasks or feel is directly impacted by the spectral nature of the light entering our eye. In the experiencing-light part of our research we will consider an indirect approach - indirect in the sense that the signal in our visual system with respect to which we make visual judgments is processed beyond the retina (an example is colour matching, which we explore). With our partners we will also consider how lighting might improve sleep or reduce visual stress.

Spectral Edge Ltd, a spin-out from UEA, develops 'system on a chip' image fusion systems for application domains ranging from photography to surveillance. The new fusion algorithms we develop will be prototyped and evaluated within Spectral Edge. Through working with Spectral Edge, the potential impact is enhanced in two ways. First, in terms of making preferred images, the company will help to tune the developed algorithms via its large-scale image evaluation process. Second, Spectral Edge will be able to engineer the developed mathematical algorithms into hardware designs.

Our work on determining the spatially varying illumination map from an image will be developed with Apple Inc. As a leading manufacturer of smartphone camera technology, they are ideally placed to evaluate and integrate our research. Again, the potential impact is enhanced by Apple's involvement: they too have integration engineers, and they lead the field in image evaluation.

Over and above generating preferred images, our algorithms will make it easier to infer properties about the content of an image. We will partner with the Earlham Institute and help them to develop their CropQuant technology (an automated in-the-field system for crop monitoring). Their technology, currently based on visible-spectrum imaging, will be enhanced by the addition of non-visible image channels (e.g. near infrared).

It is possible that our work on lighting may be relevant to emerging standards (e.g. ISO/TC 274). This TC runs in collaboration with the CIE (the international standards body on colour). Any work we carry out on standardisation will be led by Professor Luo (our project partner and a Vice President of the CIE).

This project - in close collaboration with the Society for Imaging Science and Technology - launches the 'London Imaging Meeting', a new yearly conference in the field. Finally, Prof Finlayson is a fellow of the Royal Photographic Society and the Institution of Engineering and Technology (as well as the IS&T). He will seek opportunities to report his work within these societies and at the National Science and Media Museum (Bradford).
