A Generalized and Flexible Deep Learning Framework for the Reconstruction of High Dynamic Range Images and Video

Lead Research Organisation: Queen Mary University of London
Department Name: Sch of Electronic Eng & Computer Science

Abstract

There are a number of limitations to current methods of high dynamic range (HDR) reconstruction for images and videos, and we aim to address the following in this thesis:
1. Current state-of-the-art methods have rigid input requirements and cannot adaptively make use of an arbitrary number of input images to reconstruct an HDR image.
2. Motion alignment remains a challenging problem, especially in the HDR case where the input frames have varying exposures and can face problems such as heavy noise and large saturated regions.
3. HDR video estimation methods are computationally intensive and unsuited to on-the-fly use cases such as smartphone photography.

To address the issues outlined above, we propose to develop a generalized and flexible deep learning framework for HDR image and video reconstruction.

Firstly, our method will be capable of using an arbitrary number of input images by building on principles from set neural networks [15, 1], allowing us to combine the benefits of single-image and multi-image approaches. This would make it possible to adaptively decide how many frames, and which exposure values, to use depending on the scene being captured: a single frame may be the best choice when there is extreme motion or the image is already well exposed, while more frames may be required for scenes with a very large dynamic range and less extreme motion.
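As a rough illustration of this idea, the sketch below shows a permutation-invariant, DeepSets-style fusion block that accepts a variable number of input frames. It assumes PyTorch, and all names (e.g. SetFusionHDR) and layer choices are illustrative assumptions rather than the proposed architecture.

```python
# Minimal sketch (assuming PyTorch; names and layers are hypothetical):
# a DeepSets-style fusion block that accepts an arbitrary number of frames.
import torch
import torch.nn as nn


class SetFusionHDR(nn.Module):
    """Encode each input frame independently, then pool with a
    permutation-invariant operation so the network is agnostic to
    how many frames (and in what order) are supplied."""

    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        # Shared per-frame encoder (the "phi" network in DeepSets terms).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder applied to the pooled representation (the "rho" network).
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, in_channels, 3, padding=1),
        )

    def forward(self, frames: list[torch.Tensor]) -> torch.Tensor:
        # frames: list of (B, C, H, W) tensors, one per exposure; the list
        # length can vary between calls and between scenes.
        feats = torch.stack([self.encoder(f) for f in frames], dim=0)
        # Max-pool across the set dimension: invariant to both the order
        # and the number of input frames.
        pooled = feats.max(dim=0).values
        return self.decoder(pooled)


if __name__ == "__main__":
    model = SetFusionHDR()
    # Works with one frame or a bracket of several, unchanged.
    single = [torch.rand(1, 3, 64, 64)]
    bracket = [torch.rand(1, 3, 64, 64) for _ in range(3)]
    print(model(single).shape, model(bracket).shape)
```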

Secondly, we will investigate how to perform accurate motion alignment between frames specifically for the HDR case, taking into account the issues of large noisy or saturated regions. Current approaches [6, 13, 11] typically apply a generic motion alignment model without considering the specific challenges associated with HDR reconstruction, and hence have limited success. Explicitly modelling these sources of error will allow us to effectively reduce ghosting artefacts and improve reconstruction detail.
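As an illustration of the kind of error modelling we have in mind, the sketch below down-weights saturated and heavily under-exposed pixels before merging aligned exposures, so that unreliable regions contribute less to the result. It assumes PyTorch, and the thresholds and weighting scheme are illustrative assumptions, not the proposed method.

```python
# Minimal sketch (assuming PyTorch; thresholds and weighting are
# illustrative assumptions): soft per-pixel reliability weighting for
# merging motion-aligned, exposure-normalised frames.
import torch


def reliability_mask(frame: torch.Tensor,
                     low: float = 0.05,
                     high: float = 0.95) -> torch.Tensor:
    """Per-pixel weight in [0, 1]: near zero where the frame is clipped
    (saturated highlights) or buried in noise (deep shadows)."""
    # Use the max over colour channels so a pixel clipped in any channel
    # is treated as unreliable.
    lum = frame.max(dim=1, keepdim=True).values
    sat_weight = torch.clamp((high - lum) / high, 0.0, 1.0)
    noise_weight = torch.clamp(lum / low, 0.0, 1.0)
    return sat_weight * noise_weight


def weighted_merge(aligned_frames: list[torch.Tensor]) -> torch.Tensor:
    """Blend aligned frames using the reliability masks as soft weights,
    reducing the influence of saturated or noisy regions on the merge."""
    weights = [reliability_mask(f) for f in aligned_frames]
    total = torch.stack(weights, dim=0).sum(dim=0) + 1e-6
    merged = sum(w * f for w, f in zip(weights, aligned_frames)) / total
    return merged
```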
Finally, we will explore the possibility of extending our method to video reconstruction in a highly efficient manner, enabling on-the-fly capture of high-resolution HDR video content on smartphones.

To be updated in February 2024 (at Year 3 Progression)

Publications


Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/V519935/1 | | | 30/09/2020 | 29/04/2028 |
2496737 | Studentship | EP/V519935/1 | 08/02/2021 | 07/02/2025 | Sibi Catley-Chandar