Learned Exascale Computational Imaging (LEXCI)
Lead Research Organisation:
University College London
Department Name: Mullard Space Science Laboratory
Abstract
The emerging era of exascale computing, ushered in by the forthcoming generation of supercomputers, will provide both opportunities and challenges. The raw compute power of such high performance computing (HPC) hardware has the potential to revolutionize many areas of science and industry. However, novel algorithms and software must be developed to ensure the potential of these new HPC architectures is realized.
Computational imaging, where the goal is to recover images of interest from raw data acquired by some observational instrument, is one of the most widely encountered classes of problems in science and industry, with myriad applications across astronomy, medicine, planetary and climate science, computer graphics and virtual reality, geophysics, molecular biology, and beyond.
The rise of exascale computing, coupled with recent advances in instrumentation, is leading to novel and often huge datasets that, in principle, could be imaged for the first time in an interpretable manner at high fidelity. However, to unlock interpretable, high-fidelity imaging of such big data, novel methodological approaches, algorithms and software implementations are required -- we will develop precisely these components as part of the Learned EXascale Computational Imaging (LEXCI) project.
Firstly, whereas traditional computational imaging algorithms are based on relatively simple hand-crafted prior models of images, in LEXCI we will learn appropriate image priors and physical instrument simulation models from data, leading to much more accurate representations. Our hybrid techniques will be guided by model-based approaches to ensure effectiveness, efficiency, generalizability and uncertainty quantification. Secondly, we will develop novel algorithmic structures that support highly parallelized and distributed implementations, for deployment across a wide range of modern HPC architectures. Thirdly, we will implement these algorithms in professional research software. The structure of our algorithms will allow not only computations but also memory and storage requirements to be distributed across multi-node architectures. We will develop a tiered parallelization approach targeting both large-scale distributed-memory parallelization, for distributing work across processors and co-processors, and lightweight data parallelism through vectorization or lightweight threads, for distributing work on individual processors and co-processors. This tiered approach will ensure the software can be used across the full range of modern HPC systems. Combined, these developments will provide a future computing paradigm to help usher in the era of exascale computational imaging.
The resulting computational imaging framework will have widespread application and will be applied to a number of diverse problems as part of the project, including radio interferometric imaging, magnetic resonance imaging, seismic imaging, computer graphics, and beyond. The resulting software will be deployed on the latest HPC resources to evaluate its performance and to feed back to the community the computing lessons learned and techniques developed, so as to support the general advance of exascale computing.
Organisations
Publications
Betcke M
(2023)
Mathematics of biomedical imaging today-a perspective
in Progress in Biomedical Engineering
Cai X
(2022)
Proximal nested sampling for high-dimensional Bayesian model selection
in Statistics and Computing
Mancini A
(2022)
Bayesian model comparison for simulation-based inference
Marignier A
(2022)
Sparse Bayesian mass-mapping using trans-dimensional MCMC
Marignier A
(2023)
Sparse Bayesian mass-mapping using trans-dimensional MCMC
in The Open Journal of Astrophysics
Marignier A
(2023)
Posterior sampling for inverse imaging problems on the sphere in seismology and cosmology
in RAS Techniques and Instruments
Mars M
(2023)
Learned interferometric imaging for the SPIDER instrument
in RAS Techniques and Instruments
Description | Techniques to combine AI and uncertainty quantification for exascale imaging have been developed. |
Exploitation Route | While particularly useful for the SKA, these techniques are highly general and can be applied to many other imaging modalities. |
Sectors | Digital/Communication/Information Technologies (including Software), Education |
Description | SKA South Africa collaboration |
Organisation | Rhodes University |
Country | South Africa |
Sector | Academic/University |
PI Contribution | Hosted a postdoctoral researcher from South Africa for 2 weeks to work collaboratively with a PhD student in our UCL group. |
Collaborator Contribution | A postdoctoral researcher was sent from South Africa to visit UCL for 2 weeks to collaborate with a PhD student. |
Impact | Collaborative research project on applying learned post-processing AI techniques for the SKA to real data from MeerKAT. The South African postdoctoral researcher demonstrated and taught the UCL group about software tools for processing real data and for realistically simulating telescope observations. The UCL group demonstrated and taught the South African researcher about compressive sensing and AI imaging techniques and the software developed at UCL for application to real telescope data. |
Start Year | 2024 |
Description | SPIDER collaboration |
Organisation | University of California, Davis |
Country | United States |
Sector | Academic/University |
PI Contribution | A PhD student visited UC Davis for 1 month to advise on imaging methods for SPIDER and assist with instrumentation work. |
Collaborator Contribution | Prof. Ben Yoo and his group hosted the PhD student for 1 month and made their laboratory available for collaborative work. |
Impact | A SPIDER gen-3 chip prototype was constructed in the laboratory. There was extensive discussion of potential imaging techniques for SPIDER and of further collaboration. |
Start Year | 2023 |
Title | Optimus-Primal: A lightweight primal-dual solver |
Description | optimusprimal is a lightweight proximal-splitting forward-backward primal-dual solver for convex optimization problems. The current version supports finding the minimum of f(x) + h(A x) + p(B x) + g(x), where f, h and p are lower semi-continuous functions with known proximal operators, g is differentiable, and A and B are linear operators. |
Type Of Technology | Software |
Year Produced | 2022 |
Open Source License? | Yes |
Impact | Used for developing and prototyping scalable imaging and uncertainty quantification techniques. |
URL | https://github.com/astro-informatics/Optimus-Primal |
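As a rough, self-contained illustration of the class of algorithm optimusprimal implements (and not the package's own API), the sketch below runs a Condat-Vu style forward-backward primal-dual iteration on a toy denoising problem of the form f(x) + h(A x) + g(x), i.e. the special case with the p(B x) term dropped. The toy problem, operators and step sizes are illustrative assumptions.

import numpy as np

# Toy problem: minimise g(x) + f(x) + h(A x) with
#   g(x) = 0.5 * ||x - b||^2      (differentiable, gradient x - b)
#   f(x) = indicator of x >= 0    (prox = projection onto the non-negative orthant)
#   h(z) = lam * ||z||_1          (prox = soft thresholding)
#   A    = forward finite-difference operator
# This is a generic Condat-Vu forward-backward primal-dual sketch,
# not the optimusprimal API.

rng = np.random.default_rng(0)
n = 100
b = np.clip(np.cumsum(rng.normal(size=n)), 0, None)  # noisy non-negative toy signal
lam = 0.5

def A(x):             # forward differences
    return np.diff(x)

def At(z):            # adjoint of forward differences
    out = np.zeros(n)
    out[:-1] -= z
    out[1:] += z
    return out

def prox_f(x):        # projection onto x >= 0
    return np.maximum(x, 0.0)

def prox_h(z, step):  # soft thresholding, prox of step * lam * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

# Step sizes: require tau * (Lg / 2 + sigma * ||A||^2) <= 1, with Lg = 1 and ||A||^2 <= 4.
sigma = 0.5
tau = 1.0 / (0.5 + sigma * 4.0)

x = np.zeros(n)
y = np.zeros(n - 1)
for _ in range(500):
    x_new = prox_f(x - tau * ((x - b) + At(y)))
    # Dual update uses the prox of the conjugate h* via Moreau's identity.
    u = y + sigma * A(2 * x_new - x)
    y = u - sigma * prox_h(u / sigma, 1.0 / sigma)
    x = x_new

print("objective:", 0.5 * np.sum((x - b) ** 2) + lam * np.sum(np.abs(A(x))))

Including the p(B x) term would simply introduce a second dual variable, updated in the same way as y but with the operator B and the prox of p.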
Title | ProxNest: Proximal nested sampling for high-dimensional Bayesian model selection |
Description | ProxNest is an open-source, well-tested and documented Python implementation of the proximal nested sampling framework (Cai et al. 2022) to compute the Bayesian model evidence or marginal likelihood in high-dimensional log-convex settings. Furthermore, non-smooth sparsity-promoting priors are also supported. This is achieved by exploiting tools from proximal calculus and Moreau-Yosida regularisation (Moreau 1962) to efficiently sample from the prior subject to the hard likelihood constraint. The resulting Markov chain iterations include a gradient step, approximating (with arbitrary precision) an overdamped Langevin SDE that can scale to very high-dimensional applications. |
Type Of Technology | Software |
Year Produced | 2022 |
Open Source License? | Yes |
Impact | Used to produce the results of the paper on Proximal Nested Sampling for High-Dimensional Bayesian Model Selection. In the process of being extended to develop new methods. |
URL | https://github.com/astro-informatics/proxnest |
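The Markov chain update described above can be illustrated with a minimal sketch (not the ProxNest API): a Moreau-Yosida regularised, unadjusted Langevin step that samples from a Gaussian prior subject to a hard likelihood-type constraint, with the non-differentiable constraint handled through its proximal operator (here a simple projection). The toy prior, constraint and step-size values are assumptions for illustration.

import numpy as np

# Toy Moreau-Yosida proximal Langevin sketch (not the ProxNest API):
# sample from a standard Gaussian prior subject to the hard constraint
# ||y - x|| <= r, i.e. a likelihood iso-contour, using the gradient of the
# Moreau-Yosida envelope of the constraint indicator.

rng = np.random.default_rng(1)
dim = 50
y = rng.normal(size=dim)     # toy "data" defining the constraint centre
r = 2.0                      # constraint radius (assumed)

def grad_neg_log_prior(x):   # standard Gaussian prior, -log pi(x) = 0.5 ||x||^2
    return x

def proj_constraint(x):      # projection onto the ball {x : ||y - x|| <= r}
    d = x - y
    norm = np.linalg.norm(d)
    return x if norm <= r else y + r * d / norm

delta = 1e-2   # Langevin step size (assumed)
lam = 1e-1     # Moreau-Yosida smoothing parameter (assumed)

x = y.copy()   # start inside the constraint set
samples = []
for k in range(5000):
    my_grad = (x - proj_constraint(x)) / lam   # gradient of the MY envelope of the constraint
    x = (x
         - delta * grad_neg_log_prior(x)
         - delta * my_grad
         + np.sqrt(2 * delta) * rng.normal(size=dim))
    if k > 1000:
        samples.append(x.copy())

samples = np.array(samples)
print("fraction of samples inside constraint:",
      np.mean(np.linalg.norm(samples - y, axis=1) <= r))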
Title | QuantifAI |
Description | quantifai is a PyTorch-based open-source radio interferometric imaging reconstruction package with scalable Bayesian uncertainty quantification relying on data-driven (learned) priors. |
Type Of Technology | Software |
Year Produced | 2023 |
Open Source License? | Yes |
Impact | Used to produce the results of the paper on Scalable Bayesian Uncertainty Quantification with Data-Driven Priors for Radio Interferometric Imaging. |
URL | https://github.com/astro-informatics/QuantifAI |
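A conceptual PyTorch sketch of the kind of computation involved, and not the quantifai API itself, is given below: MAP reconstruction under a data-driven prior, with the learned regulariser stood in for by an untrained convolutional network and the measurement operator by a toy mask. All names and parameter values are illustrative assumptions.

import torch

# Conceptual sketch (not the quantifai API): MAP reconstruction
#   x_map = argmin_x  ||y - Phi x||^2 / (2 sigma^2) + lam * R_theta(x)
# where R_theta is a data-driven (learned) regulariser. Here Phi is a toy
# masking operator and R_theta an untrained CNN standing in for a trained prior.

torch.manual_seed(0)
n = 64
sigma, lam = 0.1, 1.0

mask = (torch.rand(1, 1, n, n) > 0.5).float()       # toy measurement operator Phi
x_true = torch.zeros(1, 1, n, n)
x_true[..., 20:44, 20:44] = 1.0                      # toy ground-truth image
y = mask * x_true + sigma * torch.randn(1, 1, n, n)  # simulated measurements

# Stand-in for a learned regulariser R_theta (in practice, a trained network).
regulariser = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)
for p in regulariser.parameters():
    p.requires_grad_(False)

x = torch.zeros(1, 1, n, n, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    data_fidelity = ((y - mask * x) ** 2).sum() / (2 * sigma ** 2)
    prior = lam * regulariser(x).abs().sum()
    (data_fidelity + prior).backward()
    opt.step()

print("MAP objective:", (data_fidelity + prior).item())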
Title | S2FFT: Differentiable and accelerated spherical transforms |
Description | S2FFT is a Python package for computing Fourier transforms on the sphere and rotation group using JAX or PyTorch. It leverages autodiff to provide differentiable transforms, which are also deployable on hardware accelerators (e.g. GPUs and TPUs). More specifically, S2FFT provides support for spin spherical harmonic and Wigner transforms (for both real and complex signals), with support for adjoint transformations where needed, and comes with different optimisations (precompute or not) that one may select depending on available resources and desired angular resolution. |
Type Of Technology | Software |
Year Produced | 2023 |
Open Source License? | Yes |
Impact | Provides differentiable spherical harmonic transforms and GPU acceleration. |
URL | https://github.com/astro-informatics/s2fft |
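A minimal sketch of how such a differentiable transform might be used inside a JAX-differentiated loss is shown below; the s2fft.forward call signature and the assumed MW sampling shape are taken from the package documentation and may need adjusting to the installed version.

import jax
import jax.numpy as jnp
import s2fft

# Sketch of differentiating through a spherical harmonic transform with JAX.
# Assumptions: s2fft.forward(f, L=..., method="jax") exists as documented,
# MW sampling with signal shape (L, 2L-1) and coefficients shape (L, 2L-1).

L = 32                                        # harmonic band-limit
key = jax.random.PRNGKey(0)
f = jax.random.normal(key, (L, 2 * L - 1))    # toy signal on the sphere (assumed shape)

def loss(f):
    flm = s2fft.forward(f, L=L, method="jax")  # signal -> harmonic coefficients
    # Penalise high-frequency power: a simple differentiable spectral loss,
    # assuming the 2D (L, 2L-1) coefficient layout.
    ell = jnp.arange(L)[:, None]
    return jnp.sum(ell ** 2 * jnp.abs(flm) ** 2)

grad_f = jax.grad(loss)(f)                    # gradients flow through the transform
print(grad_f.shape)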
Title | S2WAV: Differentiable and accelerated wavelet transform on the sphere |
Description | S2WAV is a JAX package for computing wavelet transforms on the sphere and rotation group. It leverages autodiff to provide differentiable transforms, which are also deployable on modern hardware accelerators (e.g. GPUs and TPUs), and can be mapped across multiple accelerators. More specifically, S2WAV provides support for scale-discretised wavelet transforms on the sphere and rotation group (for both real and complex signals), with support for adjoints where needed, and comes with a variety of different optimisations (e.g. precompute or not, multi-resolution algorithms) that one may select depending on available resources and desired angular resolution. S2WAV is a sister package of S2FFT, both of which are part of the SAX project, which aims to provide comprehensive support for differentiable transforms on the sphere and rotation group. |
Type Of Technology | Software |
Year Produced | 2023 |
Open Source License? | Yes |
Impact | Provides differentiable wavelet transforms, used to construct scattering transforms for generative modelling. |
URL | https://github.com/astro-informatics/s2wav |
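A minimal sketch of a differentiable analysis/synthesis round trip is shown below; the s2wav.analysis and s2wav.synthesis call signatures and the sampling shape are assumptions based on the package documentation and may need adjusting to the installed version.

import jax
import jax.numpy as jnp
import s2wav

# Sketch of differentiating through a wavelet analysis/synthesis round trip on
# the sphere. Assumptions: s2wav.analysis(f, L, N) returns (wavelet, scaling)
# coefficients and s2wav.synthesis(wavelets, scaling, L, N) reconstructs the
# signal, with MW sampling of shape (L, 2L-1).

L, N = 32, 1                                  # band-limit and azimuthal band-limit
key = jax.random.PRNGKey(0)
f = jax.random.normal(key, (L, 2 * L - 1))    # toy signal on the sphere (assumed shape)

def roundtrip_error(f):
    wavelets, scaling = s2wav.analysis(f, L, N)       # forward wavelet transform
    f_rec = s2wav.synthesis(wavelets, scaling, L, N)  # inverse wavelet transform
    return jnp.sum(jnp.abs(f - f_rec) ** 2)

# Autodiff through both transforms; the error should be ~0 for an exact pair.
value, grad = jax.value_and_grad(roundtrip_error)(f)
print(value, grad.shape)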