A spatio-chromatic colour appearance model for retargeting high dynamic range image appearance across viewing conditions

Lead Research Organisation: University of Cambridge
Department Name: Computer Science and Technology

Abstract

This project will investigate human perception in the context of novel high dynamic range display technologies. Specifically, it will devise and validate a new model of spatial colour vision that will support detailed analysis and prediction of how content on new displays will be perceived. Such a model can then be used to automatically process images so that their appearance is preserved when they are presented in a significantly different manner: at different brightness levels (display dimming), at different contrast (tone-mapping, ambient light compensation), or under different viewing conditions (dark cinema vs. bright living room). The model will also be able to account for individual differences in observer sensitivity and for age-related changes in the visual system.

To build such a model, a large dataset of colour appearance data will be collected from both existing sources and new measurements. A new method will be devised so that new measurements are taken only to fill in the "gaps" in the dataset, where information is sparsest. The new appearance model will be created by simultaneously training and testing a large collection of candidate models, which will be checked for overfitting using information criteria; a sketch of this model-selection step is given after the abstract. The candidate models will further help to identify the gaps in the existing dataset and will thus direct the collection of new data.

The model will be tested in novel applications, such as adjustment of image appearance depending on the user's visual performance (age-adaptive rendering) and adjustment for display brightness, contrast and ambient illumination (display-adaptive rendering).
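The paragraph above describes selecting among many candidate appearance models while guarding against overfitting with information criteria. The following is a minimal, illustrative Python sketch of that kind of selection using the Akaike Information Criterion (AIC); the candidate model forms, parameter values and synthetic data are placeholders, not the models or measurements used in the project.

# Illustrative model selection with AIC. Candidate forms and data are
# placeholders, not the project's actual models or measurements.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: contrast sensitivity vs. spatial frequency (cpd)
freq = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
sens = np.array([60, 110, 160, 150, 90, 30, 5], dtype=float)

def log_parabola(f, s_max, f_peak, bw):
    # Simple candidate: log-parabola in spatial frequency
    return s_max * np.exp(-(np.log2(f / f_peak) ** 2) / (2 * bw ** 2))

def gen_log_parabola(f, s_max, f_peak, bw, p):
    # More complex candidate: adjustable falloff exponent (p = 2 recovers the
    # simple log-parabola)
    return s_max * np.exp(-np.abs(np.log2(f / f_peak)) ** p / (2 * bw ** 2))

def aic(y, y_hat, n_params):
    # Gaussian-error AIC computed on log-sensitivity residuals
    rss = np.sum((np.log(y) - np.log(y_hat)) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_params

for name, model, p0 in [("log-parabola", log_parabola, [150, 3, 1]),
                        ("generalised log-parabola", gen_log_parabola, [150, 3, 1, 2])]:
    params, _ = curve_fit(model, freq, sens, p0=p0, maxfev=10000)
    print(name, "AIC =", round(aic(sens, model(freq, *params), len(params)), 2))

The candidate with the lowest AIC is preferred; the penalty term 2 * n_params discourages models that fit the data well only because they have more free parameters.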

Planned Impact

The most significant impact of the proposed work will be in Graphics & Visualization, an area of high importance to the creative industries sector and to other sectors such as construction and manufacturing; the project achieves this by combining expertise from computer science and vision science. The need to strengthen this link to maximize impact has been recognized by the Technology Strategy Board's Creative Industries strategy. The work will impact the Knowledge Economy by developing a toolset for the next generation of vision scientists (efficient data acquisition), and it will address the societal challenges of an ageing population by designing computer interfaces that compensate for age-related changes in visual acuity and colour appearance, thereby improving quality of life.
 
Description Our data demonstrate a previously unknown characteristic of the visual system at high luminance. Using these data, we developed a new, fundamental model of the human visual system (the spatio-chromatic contrast sensitivity function) that can predict the visibility of black-and-white and colour patterns under very dim (0.02 cd/m^2) and very bright (10,000 cd/m^2) viewing conditions. We are currently investigating the application of the model to several problems, including high-dynamic-range video compression and modelling of colour appearance. A toy illustration of querying such a sensitivity model is given after this record.
Exploitation Route Our work has implications for improving future HDR image and video coding standards. It can also help to design better colour spaces and metrics for high-dynamic-range content.

One such metric was presented in a recently published SIGGRAPH paper: http://dx.doi.org/10.1145/3450626.3459831
Sectors Creative Economy, Electronics

URL https://www.cl.cam.ac.uk/research/rainbow/projects/hdr-csf/
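As a toy illustration of how a luminance-dependent contrast sensitivity function of the kind described above can be queried, the Python sketch below predicts whether a sinusoidal pattern is visible at two adaptation luminances. The functional form and all constants are illustrative placeholders, not the published model (which is described at the project URL above).

# Toy illustration of querying a luminance-dependent contrast sensitivity
# function (CSF) to predict whether a sinusoidal pattern is visible.
# The functional form and constants are placeholders, not the published
# spatio-chromatic model.
import numpy as np

def toy_csf(freq_cpd, luminance_cdm2):
    # Illustrative CSF: a log-parabola in spatial frequency whose peak
    # sensitivity rises and saturates with adaptation luminance
    peak_sens = 300.0 * luminance_cdm2 / (luminance_cdm2 + 10.0)
    peak_freq = 3.0   # cycles per degree (placeholder)
    bandwidth = 1.2   # octaves (placeholder)
    return peak_sens * np.exp(-(np.log2(freq_cpd / peak_freq) ** 2)
                              / (2 * bandwidth ** 2))

def is_visible(contrast, freq_cpd, luminance_cdm2):
    # A pattern is predicted visible when its contrast exceeds the detection
    # threshold, i.e. the reciprocal of the sensitivity
    return contrast > 1.0 / toy_csf(freq_cpd, luminance_cdm2)

# A 4 cpd grating at 0.5% contrast, viewed at 0.02 and 10,000 cd/m^2
for L in (0.02, 10000.0):
    print(f"{L:>8} cd/m^2 -> visible: {is_visible(0.005, 4.0, L)}")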
 
Description Our new CSF and the related research on models of banding have led to the development of two important quality metrics, now widely used in industry (video compression, method evaluation, display engineering, AR/VR modelling):
* PU21: a novel perceptually uniform encoding for adapting existing quality metrics to HDR content (a sketch of this encoding workflow follows this entry) - http://dx.doi.org/10.1109/PCS50896.2021.9477471 - https://github.com/gfxdisp/pu21
* FovVideoVDP: a visible difference predictor for wide field-of-view video - http://dx.doi.org/10.1145/3450626.3459831 - https://www.cl.cam.ac.uk/research/rainbow/projects/fovvideovdp/
The work has also attracted the interest of industrial partners, who are now funding follow-up research.
First Year Of Impact 2021
Sector Creative Economy, Digital/Communication/Information Technologies (including Software), Electronics
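To make the PU21 idea concrete, the sketch below shows the general workflow of a perceptually uniform encoding: absolute HDR luminance is mapped through an approximately perceptually uniform transfer function, and the encoded values are then fed to an ordinary SDR metric (here PSNR). The transfer function used here is a simple logarithmic placeholder, not the fitted PU21 curve; the actual encoding and its coefficients are available at https://github.com/gfxdisp/pu21.

# Sketch of the perceptually uniform (PU) encoding workflow behind PU21:
# encode absolute luminance with an approximately uniform transfer function,
# then apply an existing SDR metric to the encoded values. The transfer
# function below is a log-based placeholder, NOT the fitted PU21 curve.
import numpy as np

def placeholder_pu_encode(lum_cdm2, lum_min=0.005, lum_max=10000.0):
    # Map luminance (cd/m^2) to roughly uniform code values in [0, 255]
    lum = np.clip(lum_cdm2, lum_min, lum_max)
    v = (np.log10(lum) - np.log10(lum_min)) / (np.log10(lum_max) - np.log10(lum_min))
    return 255.0 * v

def pu_psnr(ref_lum, test_lum):
    # PSNR computed on PU-encoded values instead of raw luminance
    mse = np.mean((placeholder_pu_encode(ref_lum) - placeholder_pu_encode(test_lum)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Synthetic example: an HDR luminance map and a copy with 2% multiplicative noise
rng = np.random.default_rng(0)
ref = rng.uniform(0.01, 4000.0, size=(256, 256))
dist = ref * rng.normal(1.0, 0.02, size=ref.shape)
print("PU-PSNR (placeholder encoding):", round(pu_psnr(ref, dist), 2), "dB")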
 
Title Cross-content quality scaling of TID2013 image quality dataset 
Description This dataset improves the accuracy and consistency of the quality scores for the TID2013 image quality dataset: http://www.ponomarenko.info/tid2013.htm (Version 1.0). The details of the improved quality scores can be found in the paper: Aliaksei Mikhailiuk, María Pérez Ortiz and Rafal K. Mantiuk, "Psychometric scaling of TID2013 dataset", Proc. of the 10th International Conference on Quality of Multimedia Experience (QoMEX 2018), http://www.cl.cam.ac.uk/~rkm38/pdfs/mikhailiuk2018tid_psych_scaling.pdf. More details about the included data files are available in the README.txt file.
Type Of Material Database/Collection of data 
Year Produced 2018 
Provided To Others? Yes  
 
Title Measurements of spatio-chromatic contrast sensitivity up to 7000 cd/m^2 
Description  
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
URL https://www.repository.cam.ac.uk/handle/1810/304228
 
Title UPIQ: Unified Photometric Image Quality dataset (04.2021) 
Description Unified Photometric Image Quality dataset (UPIQ). The UPIQ dataset is intended for training and evaluating full-reference HDR image quality metrics. It contains 84 reference images and 4159 distorted images from four datasets: TID2013 [1] (SDR), LIVE [2] (SDR), Narwaria et al. [3] (HDR) and Korshunov et al. [4] (HDR). Quality scores were obtained by re-aligning the existing datasets to a common unified quality scale; this was achieved by collecting additional cross-dataset quality comparisons and re-scaling the existing data with a psychometric scaling method. Images in the dataset are represented in absolute photometric and colorimetric units, corresponding to the light emitted from a display. This is an updated version of the dataset with a fixed pix_per_deg column; see README.md.
[1] Ponomarenko, N., Jin, L., Ieremeiev, O., Lukin, V., Egiazarian, K., Astola, J., Benoit: Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication 30, 57-77 (2015)
[2] Sheikh, H., Sabir, M., Bovik, A.: A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms. IEEE Transactions on Image Processing 15(11), 3440-3451 (2006). https://doi.org/10.1109/TIP.2006.881959, http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1709988
[3] Narwaria, M., P. Da Silva, M., Le Callet, P., Pepion, R.: Tone mapping-based high-dynamic-range image compression: study of optimization criterion and perceptual quality. Optical Engineering 52(10) (2013). https://doi.org/10.1117/1.OE.52.10.102008
[4] Korshunov, P., Hanhart, P., Richter, T., Artusi, A., Mantiuk, R., Ebrahimi, T.: Subjective quality assessment database of HDR images compressed with JPEG XT. In: 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), pp. 1-6 (May 2015). https://doi.org/10.1109/QoMEX.2015.7148119
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
URL https://www.repository.cam.ac.uk/handle/1810/321331
 
Title pwcmp 
Description This is a set of MATLAB functions for scaling the results of pairwise comparison experiments under Thurstone Case V assumptions. The main features:
* the scaling works with imbalanced and incomplete data, in which not all pairs are compared and some pairs are compared more often than others;
* additional priors reduce bias due to the non-linear nature of the problem;
* outlier rejection screens observers who perform differently from the rest;
* confidence intervals can be computed by bootstrapping.
A minimal sketch of the underlying scaling is given after this entry.
Type Of Technology Software 
Year Produced 2018 
Open Source License? Yes  
Impact Used in a large number of research projects, 50+ citations. 
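As a rough indication of what pwcmp computes, the Python sketch below performs maximum-likelihood Thurstone Case V scaling of a small pairwise comparison matrix. It omits pwcmp's priors, outlier screening and bootstrapped confidence intervals, and the comparison counts are synthetic; it is a minimal sketch of the underlying idea, not a re-implementation of the toolbox.

# Minimal sketch of maximum-likelihood Thurstone Case V scaling, the core
# idea behind pwcmp. The real toolbox adds priors, outlier screening and
# bootstrapped confidence intervals; the comparison counts here are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# C[i, j] = number of times condition i was chosen over condition j
C = np.array([[ 0, 12,  3],
              [ 8,  0, 15],
              [17,  5,  0]], dtype=float)

def neg_log_likelihood(q, C):
    d = q[:, None] - q[None, :]          # score differences q_i - q_j
    p = norm.cdf(d)                      # P(i chosen over j) under Case V
    mask = ~np.eye(len(q), dtype=bool)   # ignore the diagonal
    return -np.sum(C[mask] * np.log(p[mask]))

res = minimize(neg_log_likelihood, x0=np.zeros(C.shape[0]), args=(C,))
scores = res.x - res.x[0]                # fix condition 0 at zero on the scale
print("Relative quality scores:", np.round(scores, 2))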
 
Description Colloquium at the Stanford Center for Image Systems Engineering 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact An invited Colloquium at the Stanford Center for Image Systems Engineering

January 18, 2019 10:00 am to 11:00 am

Location: Packard 101

Talk Title: How many pixels are too many?

Talk Abstract: We are beginning to lack the processing power and bandwidth needed to drive 8K and high-resolution head-mounted displays. However, as the human eye and visual system have their own limitations, the relevant question is what spatial and temporal resolution is the ultimate limit for any display technology. In this talk, I will review visual models of spatio-temporal and chromatic contrast sensitivity that can explain such limitations. Then I will show how they can be used to reduce rendering cost in VR applications, find more efficient encodings of high dynamic range images, and compress images in a visually lossless manner.
Year(s) Of Engagement Activity 2019
URL http://scien.stanford.edu/index.php/category/scien-colloquia/?ec3_before=today&order=asc
 
Description Invited talk at Color and Photometry in Computer Vision Workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact An invited talk at the 6th Color and Photometry in Computer Vision Workshop, part of the ICCV conference, October 2017.
Year(s) Of Engagement Activity 2017
URL https://cpcv.data61.csiro.au/
 
Description Talk at the Twenty-fifth Color and Imaging Conference (CIC25) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Talk on "Color and contrast appearance across the luminance range" at the workshop "Visual perception and emerging technologies in cinema", part of the Twenty-fifth Color and Imaging Conference (CIC25), September 2017.
Year(s) Of Engagement Activity 2017
URL https://www.imaging.org/Site/PDFS/Conferences/CIC/2017/CIC25_PrelimProgram.pdf