Understanding, Predicting and Controlling AI Hallucination in Diffusion Models for Image Inverse Problems
Lead Research Organisation:
Imperial College London
Department Name: Electrical and Electronic Engineering
Abstract
The problem of AI hallucination has been observed in a range of generative deep learning models. For this research, we consider hallucination to mean model outputs that are realistic or plausible but factually incorrect or semantically inconsistent. For example, a language model may generate untrue facts, or an image restoration model could produce an image which is semantically different from the ground-truth image.
This research focusses on hallucination in image restoration/inverse problems. The objective of image restoration is to recover a high-quality image from an input image corrupted by noise, blur, or other degradation. In particular, we consider diffusion model-based methods. Diffusion models are a class of generative deep learning models which iteratively add noise to a signal and learn the reverse denoising process. Diffusion models have attained state-of-the-art performance in image generation tasks and have demonstrated the ability to learn expressive prior distributions over image domains. Using various conditioning methods, the generation process can be guided by the degraded input image.
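For concreteness, the sketch below illustrates one common form such conditioning can take: a DDPM-style reverse sampling loop with a simple data-consistency term for a toy inpainting problem. This is a minimal sketch rather than the project's actual setup; the noise schedule, placeholder denoiser, degradation operator and guidance scale are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the project's method):
# DDPM-style reverse sampling with a simple data-consistency guidance term
# for a toy inpainting inverse problem.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)
x_true = rng.uniform(size=(64, 64))       # stand-in for a ground-truth image
mask = (rng.uniform(size=x_true.shape) > 0.5).astype(float)

def A(x):
    """Toy degradation operator: random 50% pixel mask (inpainting)."""
    return x * mask

def denoiser(x_t, t):
    """Placeholder for a pre-trained diffusion model predicting the noise in x_t."""
    return np.zeros_like(x_t)

y = A(x_true)                             # degraded observation
x = rng.standard_normal(x_true.shape)     # start the reverse process from noise
guidance_scale = 1.0                      # strength of the data-consistency step

for t in reversed(range(T)):
    eps_hat = denoiser(x, t)
    # Estimate of the clean image implied by the current iterate (Tweedie-style).
    x0_hat = (x - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
    # Unconditional DDPM reverse-step mean.
    mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    # Simplified guidance: for this linear masking operator, nudging towards
    # agreement with y amounts to adding the masked residual (a crude proxy for
    # the gradient term used in posterior-sampling style methods).
    mean = mean + guidance_scale * mask * (y - A(x0_hat))
    noise = rng.standard_normal(x.shape) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise
```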
While these models produce highly realistic images, hallucinated images occur frequently when the input is significantly degraded. This phenomenon is not observed with classical (non-deep-learning) algorithms, where a poor restoration may contain unnatural artifacts or residual distortions but maintains semantic consistency. While hallucination is generally undesirable for image restoration, it may be advantageous or even necessary for creative applications.
Current research into AI hallucination is limited, particularly for the image domain. Our research aims are twofold. Firstly, we aim to investigate the source of hallucination in diffusion models. We hypothesise that the generation process may be overly influenced by a training image similar to the input, leading to semantic elements of that training image being duplicated in the output. Research into the privacy of diffusion models has shown that these models memorise and can reproduce some training images when given appropriate inputs. Another contributing factor could be that current conditioning methods may not effectively establish the semantic content of the input during the iterative generation process.
Secondly, with an understanding of the causes of hallucination, we aim to design systems that can detect when hallucination may be occurring, allowing potentially unreliable results to be identified. The user could be provided with an estimated probability that the result is hallucinated, or a "hallucination map" indicating regions of the image likely to contain hallucinated content. Methods of using this system during the generation process to either reduce or enhance hallucination effects will also be explored.
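As a purely illustrative example of what such a detector's output might look like (not the proposed system), one simple heuristic is to draw several independent restorations of the same degraded input and use per-pixel variability as a rough "hallucination map": regions where the learned prior dominates the measurement tend to differ more between samples.

```python
# Illustrative heuristic only, not the proposed detection system: per-pixel
# standard deviation across repeated conditional samples as a rough
# "hallucination map", plus a crude scalar score for the whole image.
import numpy as np

def hallucination_map(restorations):
    """restorations: array-like of shape (num_samples, H, W) produced by
    repeatedly sampling the restoration model on the same degraded input."""
    stack = np.asarray(restorations, dtype=float)
    return stack.std(axis=0)              # high values = less stable regions

# Usage with dummy data standing in for repeated diffusion-model restorations.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=0.1, size=(8, 64, 64))
h_map = hallucination_map(samples)        # per-pixel map, shape (64, 64)
image_score = float(h_map.mean())         # crude image-level unreliability proxy
```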
Initially we plan to conduct experiments with pre-trained diffusion models, focussing on the conditioning method and iterative sampling process. A variety of image domains and datasets covering faces and natural images will be considered.
Our investigation of the source of AI hallucination in diffusion models could provide deeper insight into the information diffusion models learn and how image semantics and details are generated during inference. It is hoped that better understanding and control of hallucination will enable the use of generative deep learning-based methods with an indication of confidence in the results. This could be of particular benefit in medical image processing and other scientific imaging applications, where accurate and reliable solutions are vital.
People | ORCID iD
---|---
Pier Luigi Dragotti (Primary Supervisor) |
Studentship Projects
Project Reference | Relationship | Related To | Start | End | Student Name
---|---|---|---|---|---
EP/W524323/1 | | | 30/09/2022 | 29/09/2028 |
2906295 | Studentship | EP/W524323/1 | 06/01/2024 | 04/07/2028 |