Mesoscale structural biology using deep learning

Lead Research Organisation: King's College London
Department Name: Randall Div of Cell and Molecular Biophy

Abstract

There are many structures in the cell which are thought to be the same (or almost the same) every time they form; examples include the nuclear pore complex and the centriole. Structures on lengthscales from around 30 nm to a micron can be imaged by a form of fluorescence microscopy called localisation microscopy, in which the position of each individual fluorophore is found to high precision. The localisation microscopy methods which are simplest to analyse and least likely to produce artefacts create images in which the 3D structure is projected down onto a 2D image, which makes it difficult to deduce what the 3D structure is. A number of other microscopy techniques, particularly cryo-electron microscopy, have faced similar challenges. In general, this is approached by sorting the images into a number of classes, which are then averaged to improve the signal-to-noise ratio, and a model is then optimised to fit all of the information.
However, there is a property of localisation microscopy which means that we can take a different approach, which has the potential to fit the data much better. In localisation microscopy the position of each individual fluorophore is found, and the image of the sample is then reconstructed by displaying a Gaussian at the location of each fluorophore. This means that the system used to display the data can be easily created as a differentiable renderer (i.e. a system of display where the first derivative at each point can be calculated).
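As a minimal sketch of this idea, a sum of Gaussians placed at the fluorophore positions can be rendered in a way that is differentiable with respect to those positions, so gradients can flow back to a point model. All names and parameters here are illustrative, not the project's actual code, and PyTorch is assumed:

```python
import torch

def render_gaussians(points_2d, size=32, sigma=2.0):
    """Render N 2D points as a sum of isotropic Gaussians on a size x size grid.

    points_2d: (N, 2) tensor of (x, y) coordinates in pixel units. The output
    is differentiable with respect to points_2d, so gradients from an image
    loss can flow back to the point positions during optimisation.
    """
    ax = torch.arange(size, dtype=torch.float32)
    ys, xs = torch.meshgrid(ax, ax, indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)               # (size, size, 2)
    diff = grid[None] - points_2d[:, None, None, :]    # (N, size, size, 2)
    sq_dist = (diff ** 2).sum(-1)                      # (N, size, size)
    return torch.exp(-sq_dist / (2 * sigma ** 2)).sum(0)

pts = torch.tensor([[10.0, 12.0], [20.0, 18.0]], requires_grad=True)
img = render_gaussians(pts)
img.sum().backward()        # gradients reach the fluorophore positions
```

The key property is simply that every operation in the renderer is smooth, so standard automatic differentiation applies.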
We will use this property to create a deep learning based optimisation system which will generate an optimised 3D model of points to describe a dataset containing many 2D images of the structure. The model will start as a random distribution of points. At each stage of the optimisation the model will be compared to all the 2D images, and for each of them the angle which produces the best fit to the data will be found. The model will then be updated and the process repeated, gradually optimising the model to fit the data. The final result will be a 3D model which incorporates all the information from the different 2D images. This is an unusual application of deep learning: rather than training a network for others to use directly, the training of the network will itself lead to the creation of the final model.
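The overall loop can be sketched as follows. For simplicity the pose search here is a brute-force scan over rotations about a single axis, whereas the actual method will predict poses with an encoder network; the toy structure, angles and optimiser settings are all illustrative:

```python
import torch

torch.manual_seed(0)

def render(points_2d, size=24, sigma=2.0):
    # sum-of-Gaussians rendering of 2D points on a size x size grid
    ax = torch.arange(size, dtype=torch.float32)
    ys, xs = torch.meshgrid(ax, ax, indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)
    d2 = ((grid[None] - points_2d[:, None, None, :]) ** 2).sum(-1)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(0)

def project(points_3d, theta):
    # rotate about the y axis, then drop z to give a 2D projection
    c, s = torch.cos(theta), torch.sin(theta)
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    xz = points_3d[:, [0, 2]] @ rot.T
    return torch.stack([xz[:, 0], points_3d[:, 1]], dim=-1) + 12.0

# a ground-truth structure and two projections at (unknown) angles stand in
# for the experimental dataset
truth = torch.tensor([[-5., 0., 0.], [5., 0., 0.], [0., 4., 3.]])
images = [render(project(truth, torch.tensor(a))) for a in (0.3, 1.2)]

model = (torch.randn(3, 3) * 3).requires_grad_(True)  # random starting cloud
opt = torch.optim.Adam([model], lr=0.1)
angles = torch.linspace(0.0, 3.14, 16)
history = []

for step in range(150):
    opt.zero_grad()
    # for each image, keep only the best-fitting rotation of the current model
    loss = sum(torch.stack([((render(project(model, a)) - img) ** 2).sum()
                            for a in angles]).min()
               for img in images)
    loss.backward()
    opt.step()
    history.append(float(loss))
```

Because only the best-fitting pose contributes to each image's loss, the point model is free to settle into a 3D arrangement consistent with all the views at once.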
Since we are fitting to each individual image, it will not be necessary to perform averaging of the images to improve the signal to noise. For relatively large structures such as the ones we are considering, this is an advantage because the structures are likely to flex or deform to some extent. Averaging would therefore wash out structure. In contrast, we can build deformation into our model and therefore will get an accurate structure back even if there are slight variations between different instances of the structure.
We will test the performance of our method on simulations and experimental data. Simulations will allow us to assess the impact that experimental effects will have on our results. In particular, there is an uncertainty associated with the localisation of each fluorophore, and a certain proportion of the proteins are either not labelled or not detected.
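These two experimental effects can be simulated very simply: random removal of points models incomplete labelling or detection, and Gaussian noise of a given width models localisation uncertainty. The function name, precision and labelling efficiency below are illustrative values, not results from the project:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_localisations(truth_xyz, precision_nm=15.0, labelling_eff=0.6):
    """Apply labelling dropout, then localisation noise, to ground-truth points."""
    keep = rng.random(len(truth_xyz)) < labelling_eff   # unlabelled/undetected
    observed = truth_xyz[keep]
    return observed + rng.normal(0.0, precision_nm, observed.shape)

truth = rng.uniform(-100, 100, size=(500, 3))   # hypothetical 500-point structure
locs = simulate_localisations(truth)
```

Sweeping `precision_nm` and `labelling_eff` systematically would then map out where the reconstruction remains reliable.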
The method will then be tested on experimental datasets of different centriole proteins, each with several thousand images of individual centrioles. Since this is not enough to train a deep learning network, we will carry out data augmentation, in which each image is shifted slightly and rotated in the xy plane to create new images. This artificially enlarges the dataset and helps the network to learn small shifts and rotations. The results of fitting to experimental data will be compared to images of the same structures imaged using another super-resolution microscopy technique, in which the sample is embedded in a gel which is then expanded. This will allow us to be confident that our method is able to reproduce real structure from experimental data.
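Since the images are built from localisation coordinates, the augmentation can be applied directly to the point sets before rendering. A minimal sketch (names and the shift range are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(points_xy, max_shift=5.0):
    """Return a randomly rotated (in the xy plane) and shifted copy of a point set."""
    theta = rng.uniform(0.0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    shift = rng.uniform(-max_shift, max_shift, size=2)
    return points_xy @ rot.T + shift

original = rng.normal(size=(100, 2))                  # one set of localisations
augmented = [augment(original) for _ in range(10)]    # ten new views from it
```

Rotation and translation are rigid, so pairwise distances within each augmented copy are preserved and no spurious structure is introduced.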

Technical Summary

We will create a deep learning based method to fit a 3D model consisting of a number of points to a large number of 2D images of the structure under consideration. For each image in the dataset, the rotation of the 3D model that gives the best fit to the data will be found. The model will then be optimised to minimise the total error. This will allow the optimisation of the sample model without assuming particular symmetry constraints.
The architecture of the deep learning network that extracts the pose information will comprise an encoding section that predicts a rotation, a differentiable renderer, and a loss function that takes the input and output images as its arguments. To allow the system to converge on the correct structure, input and output images will be heavily blurred initially (i.e. a large Gaussian will be used to render them from the point data), and the blur will be decreased as the model is optimised. Since real biological structures a few hundred nanometres in size often exhibit some structural variation, we will allow a limited affine transformation for each image to model small amounts of flex and distortion.
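The coarse-to-fine blurring amounts to a schedule for the rendering Gaussian's width: large at the start, so early optimisation matches gross shape, and small at the end, so later steps refine detail. The exponential form and the end-point values below are illustrative choices, not the project's fixed parameters:

```python
def blur_sigma(step, total_steps, sigma_start=8.0, sigma_end=1.0):
    """Exponentially decay the rendering Gaussian width over the optimisation."""
    frac = step / max(total_steps - 1, 1)
    return sigma_start * (sigma_end / sigma_start) ** frac

schedule = [blur_sigma(s, 100) for s in range(100)]   # 8.0 down to 1.0
```

The same sigma would be passed to the renderer for both the data images and the model's output at each step, so the comparison is always between images blurred to the same degree.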
We will test the method on simulations of different structures to better understand what type of data this system will perform well on. In particular, the impact of localisation precision and labelling rate will be tested by varying both systematically. This will inform our treatment of experimental data, since the data can be filtered for higher localisation precision at the cost of a lower labelling rate. Performance on experimental data will be tested on localisation microscopy data of the centriole from the Manley lab at EPFL, with the method cross-checked against images of the same proteins imaged with expansion microscopy combined with structured illumination microscopy.

Planned Impact

The initial impact is expected to be seen in an improved ability to reconstruct 3D structures from sets of 2D localisation microscopy images. We will initially target proteins of the centriole for reconstruction, with other possible targets being the nuclear pore complex and clathrin coated pits. These are all systems of high biomedical importance and in the longer term the greatest impact of this project is likely to be enabling and accelerating new biomedical research.

The basic approach which we are developing could also be of much wider interest. We have already made initial contact with Professor Helen Saibil, a prominent member of the EM community, who thought that the method could be of considerable interest to those developing EM software. More generally, the method could also have applications for other types of fluorescence microscopy, since many techniques have a much worse resolution in z than in xy, meaning that in effect each image is a projection. For continuous structures (i.e. anything except small points), our approach could form the basis of a method to reproduce the 3D structure of the sample at better than the resolution limit.

We have links to a number of microscopy companies, including Nikon. When the algorithm is developed we will approach microscope companies both with a view to entering into a dialogue about how their systems might be used to acquire data suitable for our method, and also potentially with regard to interest in a commercial version of the algorithm. Microscope companies stand to benefit from our work because it would extend the experiments that could be carried out on their systems and give users greater confidence in their results. In turn, their users will benefit because we will be able to advise on changes to the hardware and software of the system which would optimise performance.

The post-doctoral researcher employed on the project will receive training in Python programming with PyTorch, neural network architectures and testing procedures, image analysis, and super-resolution microscopy. With regard to the deep learning component of the project, the approach taken here is highly unusual in the context of microscopy, where most uses of deep learning take fairly standard approaches to classification or to the creation of images using generative adversarial networks. In contrast, here we are using computer vision/deep learning approaches at the cutting edge of the engineering field, using the operation of the neural network itself as a tool with which to perform the model optimisation. Both advanced microscopy and deep learning are rapidly growing areas, with many jobs being created and a shortage of people with in-depth training. We anticipate that the training and experience that the postdoc acquires over the course of the project will be highly beneficial in enabling their future career choices.

Publications

 
Description We have created a system to synthesise multiple 2D images of a single structure into a 3D model. Initial results are promising, with deep learning able to massively accelerate the fitting and to allow an unconstrained fit. However, the heterogeneity of biological data at this lengthscale (several hundred nanometres) poses serious challenges to obtaining the best possible optimisation. We have built in ways to model this biological heterogeneity and have applied the method to simulated and experimental datasets.
Exploitation Route This result could be relevant to other data synthesis applications such as cryo-EM.
Sectors Pharmaceuticals and Medical Biotechnology

 
Title 3D structure from 2D super-resolution data 
Description We are able to reconstruct 3D biological structures from multiple 2D views, with both accelerated computation and better modelling of biological heterogeneity compared to other methods. 
Type Of Material Technology assay or reagent 
Year Produced 2023 
Provided To Others? No  
Impact We anticipate that this method will be applicable for a range of biological structures, and will furthermore find applications in resolution assessment.