Future Colour Imaging

Lead Research Organisation: University of East Anglia
Department Name: Computing Sciences


Colour imaging is part of everyday life. Whether we watch TV, browse content on our tablets or phones, or use apps and software in our work, the content we see on our screens is the result of decades of colour and imaging research.

In the future, the challenge is to understand more about the content of images. As an example, in autonomous driving we wish to build a platform that sees the road independently of the atmospheric conditions: we don't want to crash when we are driving in fog. It is well known that an image that records the near-infrared (NIR) signal is much sharper than its RGB counterpart in foggy conditions. What is near infrared? The visible spectrum has a natural rainbow order: Violet, Indigo, Blue, Green, Yellow, Orange and Red. Infrared is the 'next colour' after red, one that we can't quite see. Image fusion can be used to map the RGB+NIR signal to a fused RGB counterpart that we can see. Through image fusion the same detail will be present in foggy or non-foggy conditions. Advantageously, image fusion is a tool that will allow non-visible information to be incorporated and deployed in existing RGB-based AI scene-interpretation systems with minimal retraining.

Our project begins with the Spectral Edge image fusion method, the current leading technique. This method - like most image fusion algorithms - works by combining edges from the four channels (RGB+NIR) to make a fused, RGB-only, 3-channel edge map. The edges are then transformed (the technical term is reintegrated) back to form a colour image. Unfortunately, and necessarily, the reintegrated images often have defects such as bright halos around edges or smearing. We argue that these defects are a direct consequence of how 'edges' are defined. In our research we will - based on a surprising mathematical insight - develop a new definition of edge, quite a bold thing to do after 50 years of image processing research! By construction, images reintegrated from the new edges will have far fewer halo and smearing artefacts.
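To make the edge-combination idea concrete, here is a minimal 1-D sketch of gradient-domain fusion. It is a toy illustration of the general approach, not the actual Spectral Edge algorithm: per-channel derivatives of an RGB scanline adopt the NIR edge wherever the NIR derivative is stronger, and the result is 'reintegrated' by cumulative summation.

```python
import numpy as np

def fuse_scanline(rgb, nir):
    """Toy gradient-domain fusion of one image scanline.

    rgb : (n, 3) array of colour values; nir : (n,) near-infrared values.
    Wherever the NIR derivative is stronger than a channel's own
    derivative, the channel adopts the NIR edge magnitude (keeping the
    channel's sign where it has one). The fused derivatives are then
    reintegrated by cumulative summation.
    """
    d_rgb = np.diff(rgb, axis=0)          # (n-1, 3) per-channel edges
    d_nir = np.diff(nir)                  # (n-1,)  NIR edges
    fused_d = d_rgb.copy()
    for c in range(3):
        stronger = np.abs(d_nir) > np.abs(d_rgb[:, c])
        sign = np.where(d_rgb[:, c] != 0, np.sign(d_rgb[:, c]), np.sign(d_nir))
        fused_d[stronger, c] = sign[stronger] * np.abs(d_nir)[stronger]
    # 'reintegration': rebuild the scanline from its fused derivatives
    return np.vstack([rgb[:1], rgb[:1] + np.cumsum(fused_d, axis=0)])
```

In a foggy scene the RGB derivatives are near zero while NIR retains the edge, so the fused scanline recovers detail invisible in the RGB input. In 2-D the fused edge map is no longer exactly integrable, so reintegration needs an approximate solve (e.g. Poisson), which is where the halo and smearing artefacts discussed above arise.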

We will then use our improved edge representation and improved image fusion algorithm to make better-looking images. These might be the fused images themselves: wouldn't it be great to have smart binoculars that let us see more detail when it is rainy, or in a landscape blurred by distance? However, we also believe the future of photography, in general, is content-based, and that image fusion will help us determine the content in an image. As an example, when we take a picture at sunset, the shadows in the scene are very blue, but outside the shadows the light is very warm (orangish). The best image reproductions for these scenes involve manually and differentially processing shadow and non-shadow regions. Here, we seek to find the illumination content in an image automatically. Then, in a second step, we will develop a new content-based framework for manipulating images so that, for this sunset example, we don't need to edit the photos ourselves.

In complementary work, we are also interested in helping people see better. Indeed, there is a lot of research demonstrating that coloured filters can help mitigate visual stress. Coloured filters are used to help people with dyslexia (sometimes leading to dramatic improvements in reading speed), and there is now blue-absorbing glass which reduces the blue light coming from a tablet display (since blue light at night tends to keep you awake). Much of the prior art in this area is 'direct': we find a filter to directly impact how we see (simply, if we put a yellow filter in front of the eye then everything looks more yellow). Our idea is to design filters that are related to the tasks we need to solve. For the problem of matching colours we will design filters so that, if you suffer from colour-blindness, you will be able to colour match as if you had normal colour vision. We will also develop indirect solutions for the 'blue light' problem and visual stress.

Planned Impact

The Future Colour Imaging proposal comprises four projects, in the areas of sensing, seeing, understanding and experiencing light. In the seeing and sensing projects we propose the novel idea that, to deliver the pictures we want to see, or equally to understand our own visual perception, we need to build non-standard cameras, i.e. cameras that are not built in analogy to our own visual system but rather designed from different first principles.

The possible applications of the technology are almost too numerous to mention, but range from getting better pictures on your smartphone to making better judgments about the health or otherwise of crops in the field, two topics directly addressed in this proposal. A key aspect of understanding light is being able to determine when light changes in images, e.g. at shadow edges, and this will also help in preferred photo processing. More generally, understanding light means estimating the light field everywhere in a scene. Progress here will find wide application in automotive (in driverless vehicles, where we want systems to continue to work in all lighting conditions) and augmented reality (where objects composited into images need to look as if they are lit in the same way as the rest of the scene). Finally, there is now a large corpus of research reporting that how we see, perform in tasks or feel is directly impacted by the spectral nature of the light entering our eye. In the experiencing-light part of our research we will consider an indirect approach - indirect in the sense that the signals on which our visual judgments are based are processed beyond the retina (an example is colour matching, which we explore). With our partners we will also consider how lighting might improve sleep or reduce visual stress.

Spectral Edge Ltd, a spin-out from the UEA, develops 'system on a chip' image fusion systems for application domains ranging from photography to surveillance. The new fusion algorithms we develop will be prototyped and evaluated within Spectral Edge. Through working with Spectral Edge, the potential impact is enhanced in two ways. First, in terms of making preferred images, the company will help to tune - via its large-scale image evaluation process - the developed algorithms. Second, Spectral Edge will be able to engineer the developed mathematical algorithms into hardware designs.

Our work on determining the spatially varying illumination map from an image will be developed with Apple Inc. As a leading manufacturer of smartphone camera technology they are ideally placed to evaluate and integrate our research. Again, the potential impact is enhanced by Apple's involvement. They too have integration engineers, and they lead the field in image evaluation.

Over and above generating preferred images, our algorithms will make it easier to infer properties about the content of an image. We will partner with the Earlham Institute and will help them to develop their CropQuant technology (an automated in-the-field system for crop monitoring). Their technology, currently based on visible-spectrum imaging, will be enhanced by the addition of non-visible image channels (e.g. near infrared).

It is possible that our work on lighting may be relevant to emerging standards (e.g. ISO/TC 274). This TC runs in collaboration with the CIE (the international standards body on colour). Any work we carry out on standardisation will be led by Professor Luo (our project partner and a Vice President of the CIE).

This project - in close collaboration with the Society for Imaging Science and Technology - launches the 'London Imaging Meeting', a new yearly conference in the field. Finally, Prof Finlayson is a Fellow of the Royal Photographic Society and the Institution of Engineering and Technology (as well as the IS&T). He will seek opportunities to report his work within these societies and the National Science and Media Museum (Bradford).


Description The current focus of "Future Colour Imaging" is spectral recovery and filter design. Everyone is probably aware that colour images are made up of millions of pixels and that the colour of each pixel is encoded as an R, G and B triplet. However, there are many applications where it would be useful to have more than 3 measurements. Ideally, it would be good if we could recover the spectrum of light (from which the RGB image is formed). Measured spectra are useful in applications ranging from surveillance to forensics to disease detection in plants. In the last 2 years, we have advanced the state of the art in estimating spectra from RGB data (including using the ubiquitous tools of AI). One of our most important results is the development of a generic technique that, when integrated with existing algorithms, always improves their efficacy. Our research has also shown - by providing various extensions to the theory - that classical regression methods work as well as ML methods but are faster to train and execute.
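As an illustration of the classical-regression baseline referred to above, the following sketch learns a closed-form, ridge-regularised linear map from RGB triplets to 31-band spectra. The Gaussian camera sensitivities and random training spectra are made-up stand-ins, not our published method or real camera data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 31-band spectra sampled 400-700nm in 10nm steps.
n_train, n_bands = 200, 31
spectra = rng.random((n_train, n_bands))

# Illustrative Gaussian camera sensitivities (an assumption, not a real
# camera's curves); each RGB is the spectrum weighted by the sensitivities.
wl = np.linspace(400, 700, n_bands)
sens = np.stack([np.exp(-0.5 * ((wl - mu) / 30.0) ** 2)
                 for mu in (450, 550, 600)], axis=1)   # (31, 3)
rgbs = spectra @ sens                                  # (200, 3)

# Ridge-regularised least squares: M (3 x 31) minimising
# ||rgbs @ M - spectra||^2 + lam * ||M||^2, solved in closed form.
lam = 1e-3
M = np.linalg.solve(rgbs.T @ rgbs + lam * np.eye(3), rgbs.T @ spectra)

recovered = rgbs @ M   # linear spectral estimate for every training RGB
```

Three numbers cannot fully determine 31, so recovery is necessarily approximate; the practical point is that this fit trains in milliseconds, which is one reason such regressions remain competitive with ML methods on speed.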

In our second research strand we have been looking at designing coloured filters that have precise spectral properties. In one aspect of our work, we change the spectral sensitivity of a camera by putting a colour filter in front of it. Think of putting sunglasses on a camera. We know from our own visual experience that filters of different colours (e.g. brown versus black sunglasses) yield a different visual perception. The same is true for cameras. In terms of measuring and communicating colours there are 'ideal' sensors that a camera should have but - because of the difficulty of manufacturing - the sensors found in cameras are far from ideal. Our research shows that we can design a special colour filter such that, when we place the filter in front of the camera, the camera becomes much closer to the ideal colour measurement device. However, manufacturing our sensors has proven to be difficult. While we are now actively looking at manufacturability, we have developed a system for modulating the illuminant generated by a spectrally tunable light source to simulate the effect of a filter (with good results). In complementary research we have also been developing filters that are designed to always map a typical real illuminant spectrum to another real spectrum, so that we can view a scene under two lights (but we only need one light and our filter). Our so-called Locus filter theory has a rich mathematical underpinning and builds on and extends the physics of black-body radiation.
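The filter-design problem can be sketched as a bilinear least-squares fit: find a per-wavelength transmittance f and a 3x3 correction matrix M so that the filtered camera best matches a target set of sensitivities. All curves below are illustrative Gaussian stand-ins (not real camera or XYZ data), and alternating least squares is just one common heuristic for such bilinear problems, not necessarily the method in our publications.

```python
import numpy as np

wl = np.linspace(400, 700, 31)

def gauss(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

# Illustrative stand-ins: camera sensitivities Q and target 'XYZ-like'
# matching functions X, both (31 wavelengths x 3 channels).
Q = np.stack([gauss(460, 25), gauss(545, 30), gauss(605, 28)], axis=1)
X = np.stack([gauss(455, 30), gauss(555, 35), gauss(600, 35)], axis=1)

# Minimise || diag(f) @ Q @ M - X ||_F over filter f and matrix M by
# alternating least squares; each sub-problem is an exact linear solve.
f = np.ones(len(wl))
for _ in range(200):
    FQ = Q * f[:, None]                          # filtered camera, diag(f) @ Q
    M, *_ = np.linalg.lstsq(FQ, X, rcond=None)   # best 3x3 M for this f
    QM = Q @ M
    # best f for this M: each wavelength decouples into a scalar fit
    f = np.sum(QM * X, axis=1) / (np.sum(QM * QM, axis=1) + 1e-12)
    f = np.clip(f, 0.0, 1.0)                     # physical transmittance 0..1

err = np.linalg.norm(Q * f[:, None] @ M - X)
```

Because both sub-steps are exact minimisers over their own variable (the clip just restricts f to the physical interval, on which the scalar quadratic is still minimised), the objective never increases from one iteration to the next.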
Exploitation Route So far, we have developed leading algorithms for mapping the RGBs measured in images to corresponding radiance spectra. We are confident this spectral recovery research will be used in practical (possibly commercial) camera systems. As an example, spectral measurements are commonly made in the Agritech domain. Indeed, various studies have shown that desirable properties of plants manifest themselves clearly in image spectra (e.g. the shape of a spectrum - for a given plant - can provide a key indication of whether the plant is healthy or suffering from stress, such as the early stages of disease). We will investigate the extent to which, in the farmer's field, existing camera technology can be used to measure spectral information. While we can, to some extent, map RGBs to spectra, cameras with more than 3 sensors are coming. Our work on spectral recovery is, by design, extensible to this multi-sensor camera case (and we and others will be pursuing this line of research).

Our research has also shown that existing cameras can be made more 'colorimetric' (i.e. made to sample light like the human visual system) by placing a coloured filter - with a precise spectral design - in front of the camera. However, these filters have proven to be difficult to manufacture. One way around this problem is to take a 'proxy' approach. Since placing a filter in front of a camera is physically similar to modulating a light source spectrally, we will examine the latter approach (in concert with Thouslite and our partners at Zhejiang University). Initial investigations have returned positive results.

Returning to the question of manufacturability, we are currently considering how filters can be designed using thin-film optics and have applied for additional research funds to follow this line of research. We are proposing this research (with partners at the University of Bradford) with an emphasis on telepresence for healthcare applications. Indeed, it is well known in the field of dermatology both that accurate colour measurement is highly desirable and that existing systems work poorly. A colorimetric camera (delivered by our camera+filter approach) will have a strong, positive impact in the healthcare domain.
Sectors Digital/Communication/Information Technologies (including Software)

URL https://www.mdpi.com/1424-8220/20/21/6399, https://ieeexplore.ieee.org/document/9266578, https://www.mdpi.com/1424-8220/21/16/5586, https://arxiv.org/abs/2201.11700, https://www.mdpi.com/2313-433X/8/12/325, https://onlinelibrary.wiley.com/doi/10.1002/col.22843
Description A key partner when the fellowship was written was Spectral Edge Ltd. Spectral Edge was spun out of Finlayson's lab in 2011 and was acquired by Apple Inc a few months after the fellowship began. There is a suite of patents (see IP section), most of which have been granted in the first two years of the fellowship. Here is a quote from Apple about the Spectral Edge technology (to which this fellowship contributed): "Through Apple's relationship with Professor Finlayson and University of East Anglia we were able to monitor the steady progress of Spectral Edge as they built their core technologies and talented team of engineers - several of which were a product of UEA's Colour & Imaging Lab. The development of fundamental Spectral Edge algorithms into practical engineering deliverables gave us the confidence that these could quickly make an impact on Apple products and improve experiences for our customers. This team has also helped us attract new talent to grow our expertise at a new Apple facility in Cambridge"
First Year Of Impact 2019
Sector Creative Economy,Digital/Communication/Information Technologies (including Software)
Impact Types Economic

Description Apple (H022236) 
Organisation Apple
Country United States 
Sector Private 
PI Contribution Apple are interested in making better pictures from photos. A key part of this process is a better understanding of colour constancy. We have developed new colour constancy algorithms. Prof Finlayson has worked on site at Apple in Cupertino.
Collaborator Contribution Apple have assisted in the development of computational algorithms and have benchmarked the developed research.
Impact Exemplar publications: Drew M, Reza H, Finlayson GD, "The Zeta-Image, Illuminant Estimation and Specularity Manipulation," Computer Vision and Image Understanding, 1-13, 2014. Finlayson, GD, "Corrected-Moment Illuminant Estimation," IEEE International Conference on Computer Vision, 1904-1911, 2013.
Start Year 2010
Description Color Matching 
Organisation Zhejiang University
Department Department of Optical Engineering
Country China 
Sector Academic/University 
PI Contribution We are working with Prof Ronnier Luo of the Optical Engineering department at Zhejiang on how filter modulation affects colour matching.
Collaborator Contribution The project is at the planning stage. The intent is that UEA will provide the algorithms, which will be tested in situ in Zhejiang (when we carry out experiments with human observers).
Impact The project is beginning. At this stage the outcome is an agreed workplan for the coming year.
Start Year 2021
Description EU Real Vision ITN 
Organisation Technical University of Denmark
Country Denmark 
Sector Academic/University 
PI Contribution RealVision: The aim of realistic digital imaging is the creation of high quality imagery, which faithfully represents the physical environment. The ultimate goal is to create images, which are perceptually indistinguishable from a real scene. The RealVision network brings together leading universities and centres focused on industrial development and companies in multimedia, optics, visual communication, visual computing, computer graphics, and human vision research across Europe, with the aim of training a new generation of scientists, technologists, and entrepreneurs that will move Europe into a leading role in innovative hyper-realistic imaging technologies. I collaborate in part through Spectral Edge Ltd
Collaborator Contribution As described above, the RealVision network brings together leading universities, centres focused on industrial development and companies in multimedia, optics, visual communication, visual computing, computer graphics, and human vision research across Europe, with the aim of training a new generation of scientists, technologists, and entrepreneurs.
Impact This is an EU ITN. The outcome will be trained PhDs and their impact (through internships) with the academic institutes and companies involved.
Start Year 2019
Description Spectral Edge Ltd 
Organisation Spectral Edge Ltd
Country United Kingdom 
Sector Private 
PI Contribution Radiometric calibration is a technique we developed to reverse-engineer camera processing pipelines.
Collaborator Contribution Spectral Edge Ltd tested our pipeline in the context of the commercial cameras they work with and are exploring routes to commercialisation
Impact US Patent Application US20170272619A1 https://patents.google.com/patent/US7986830
Start Year 2015
Description An image processing method, system and device are disclosed. In the method, sensor responses for an input image are estimated, the sensor responses including sensor responses for cone sensors including a first cone sensor set and a second, different, cone sensor set. In dependence on the sensor responses, a transform mapping for application to a source image is determined to generate a modified image. 
IP Reference WO2015004437 
Protection Patent application published
Year Protection Granted 2015
Licensed Yes
Impact This patent is the foundation of 'eyeteq' a suite of platforms being developed to help colour blind people see better. This technology is close to commercial license and has been validated in an extensive third party study (by i2media)
Description A method and system for producing a scalar image from a derivative field and a vector image is disclosed. A function class c is selected, where all members of the class c are functions which map each vector of the vector image to a unique scalar value. A function f is selected from the class c which maps the vector image to a scalar image, the derivative of which is closest to the derivative field. The scalar image is generated from the vector image by using f to calculate each scalar value in the scalar image from a corresponding vector in the vector image. 
IP Reference WO2011021012 
Protection Patent application published
Year Protection Granted 2011
Licensed Yes
Impact This patent is a key foundation of the business of Spectral Edge, a UEA spin-out. It underpins the company's work on image fusion and image enhancement. Spectral Edge was acquired by an industry major at the end of 2019
Title Image enhancement system and method 
Description An image enhancement method and system comprising: receiving an input and target image pair, each of the input and target images including data representing pixel intensities then processing the data to determine a plurality of basis functions, each basis function being determined in dependence on the content of the input image and determining a combination of the basis functions to modify the intensity of pixels of the input image to approximate the target image then applying the plurality of basis functions to the input image to produce an approximation of the target image. The basis functions may be determined using derivatives of the data and each function may be determined on the basis of colour, intensity or shapes or elements identified or designated in the input image and the basis functions preferably decompose the input image into corresponding image layers. The basis functions may be determined according to binary or non-binary decomposition or a continuous distribution in which each function is blurred, and the output cross bilaterally filtered using the input image as a guide. The determination of a combination may involve solving optimisation of a per channel polynomial transform of the input image to approximate the target image wherein the polynomial corresponds to the basis functions. 
IP Reference GB2579911 
Protection Patent application published
Year Protection Granted 2020
Licensed Commercial In Confidence
Impact This patent is part of a commercial product (an embedded device)
Description A method and system for determining parameters of an image processing pipeline of a digital camera is disclosed. The image processing pipeline transforms captured image data on a scene into rendered image data. Rendered image data produced by the image processing pipeline of the camera is obtained from the captured image data on the scene. At least a subset of the captured image data on the scene is determined and a ranking order for pixels of the rendered image data is obtained. A set of constraints from the captured image data and the ranked rendered image data is determined, each constraint of the set being determined in dependence on selected pair combinations of pixel values when taken in said ranking order of the rendered image data and corresponding pair combinations of the captured image data. Parameters of the image processing pipeline are determined that satisfy the sets of constraints. 
IP Reference WO2016083796 
Protection Patent application published
Year Protection Granted 2016
Licensed Commercial In Confidence
Impact This patent is licensed and has been commercialised.
Description A method and system for producing accented image data for an accented image is disclosed. The method includes decomposing each of a first and a second image into a gradient representation which comprises spectral and edge components. The first image comprises more spectral dimensions than the second image. The edge component from the first image is combined with the spectral component from the second image to form a combined gradient representation. Accented image data for the accented image is then generated from data including the combined gradient representation. 
IP Reference US2011052029 
Protection Patent granted
Year Protection Granted 2011
Licensed Yes
Impact Spectral Edge Ltd, a spin-out from the University of East Anglia, is taking this technology to market. The company is focused on developing technology to fuse RGB + near-infrared (or thermal) images for the photographic and surveillance industries. Spectral Edge was acquired by an industry major at the end of 2019
Title Regularized Derivative Operators for Image Processing System and Method 
Description Devices, methods, and non-transitory program storage devices are disclosed herein to provide improved image processing, the techniques comprising: obtaining an input image and target image data, and then calculating derivatives for the target image data using a regularized derivative kernel operator. In some embodiments, the regularized operator may comprise the following operator: [-1 (1+e)], wherein e may be a controllable system parameter and preferably is independent of the particular type of image processing being applied to the image. In some embodiments, the techniques may find look-up table (LUT) mappings or analytical functions to approximate the derivative structure of the target image data. Finally, the techniques disclosed herein may generate an output image from the input image based on attempting to closely approximate the calculated derivatives for the target image data. In preferred embodiments, by controlling the mapping, e.g., using regularization techniques, halos and other image artifacts may be ameliorated. 
IP Reference US2020394776 
Protection Patent application published
Year Protection Granted 2020
Licensed Commercial In Confidence
Impact This patent is used in a commercial embedded system
Description A method, system and reference target for estimating spectral data on a selected one of three spectral information types is disclosed. Spectral information types comprise illumination of a scene, spectral sensitivity of an imager imaging the scene and reflectance of a surface in the scene. The method comprises obtaining a ranking order for plural sensor responses produced by the imager, each sensor responses being produced from a reference target in the scene, obtaining, from an alternate source, data on the other two spectral information types, determining a set of constraints, the set including, for each sequential pair combination of sensor responses when taken in said ranking order, a constraint determined in dependence on the ranking and on the other two spectral information types for the respective sensor responses and, in dependence on the ranking order and on the set of constraints, determining said spectral data that optimally satisfies said constraints. 
IP Reference US2014307104 
Protection Patent granted
Year Protection Granted 2014
Licensed Commercial In Confidence
Impact This patent has been licensed on a commercial basis
Description A system and method for generating a colour filter for modifying the spectral response of a vision system are disclosed. The method includes receiving an RGB spectral response of the vision system for a colour target under predetermined illumination and executing, by a processor of a computer system, computer program instructions configured to apply the RGB spectral response to a bilinear optimisation problem that simultaneously determines: i) a colour correction matrix to transform the RGB spectral response to XYZ colour space; and, ii) parameters of the colour filter. The method further executes computer program instructions configured solving the bilinear optimisation problem; and then provides the parameters or causes a colour filter to be formed using the parameters. 
IP Reference US20220003988 
Protection Patent application published
Year Protection Granted 2019
Licensed No
Impact We worked with Image Engineering to see if they could manufacture an early filter design (they couldn't). We have applied for funding to deliver manufacturable filters
Description London Imaging Meeting 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact With the Society for Imaging Science and Technology (USA) and the Institute of Physics (UK) I launched the London Imaging Meeting, which took place at the end of September 2020. LIM is a topics-based conference/workshop in the broad area of imaging science. Last year, in concert with the fellowship, the topic was "Future Colour Imaging". Over two days about 35 papers were presented, and more than half of these reported the work of PhD students. A key aspect of the LIM meeting is to provide a forum for PhD students to meet established experts in the field. The meeting publishes proceedings of refereed conference papers. At this year's LIM there were about 150 attendees.

The topic for LIM 2021 is "Imaging for Deep Learning"; the meeting will take place at the IoP in September.
Year(s) Of Engagement Activity 2020
URL https://www.imaging.org/site/IST/Conferences/London_Imaging_Meeting/LIM_2020/IST/Conferences/LIM/LIM...