High speed, ultra-low photon flux imaging

Lead Research Organisation: University of Strathclyde
Department Name: Physics

Abstract

The development of imaging systems has historically been focussed on the design of optics and camera technologies. Major advances have been made in electronic detectors, including compact silicon pixel arrays, and small form-factor lenses, making visible light cameras cheap, high resolution and mass producible. These devices also underpin advances in bio-medical and remote imaging where greater pixel density and therefore resolution has been the major goal.

Future systems require performance that goes well beyond simple image capture. Accurate timing information provides access to 3D imaging techniques, while smart image processing and spatially tunable optics can produce sub-diffraction limited images. Operation of cameras in the few photon limit can make use of correlation effects, beating classical limits and offering security. To enable these kinds of imaging applications optical sources must be considered in parallel with the detector technologies.

At the Institute of Photonics we have been developing micro-LED display technologies with remarkable performance parameters. Each pixel is only a few microns in diameter and can be switched at rates of hundreds of MHz, with pulse widths of nanosecond duration. Arrays of these pixels are bonded directly onto CMOS drive electronics, providing unprecedented control over their spatio-temporal output. The spatially structured light field gives access to an entirely new form of illumination that has already shown application in visible light communications and indoor navigation.

This project will develop the potential of spatio-temporal illumination sources further, targeting their operation at ultra-low light levels in the single-photon range. By using single-photon avalanche diode (SPAD) arrays, we will be able to correlate generated and detected photons in both space and time. This ability, combined with sparse image processing techniques, will allow the capture of images with extremely few photons. The applications of these imaging systems include low-flux biological systems, underwater data communications and navigation, quantum imaging and robotic control.

The PhD student will have access to state-of-the-art LED and SPAD arrays with which to create next generation imaging systems. They will develop spatio-temporal modulation and decoding schemes for low flux imaging and navigation, implementing these using custom electronics. The systems will be demonstrated in macro and micro-scale imaging applications.

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/N509760/1                                      01/10/2016   30/09/2021
1960290             Studentship    EP/N509760/1   01/10/2017   31/07/2021   Emma Le Francois
Description The aim of this ongoing award is to investigate a new imaging system that would enhance current video surveillance and monitoring systems in public areas and industry by bringing a third dimension to existing 2D imaging techniques. This would be useful in several domains, such as defence and security, robot navigation, autonomous vehicle systems, facial recognition and surveillance.
The idea is to make smart use of illumination sources that already exist in building infrastructure, such as light-emitting diodes (LEDs), to achieve a 3D reconstruction of a scene. A key finding associated with this award so far is the successful reconstruction of the topography of an object, with an error in the millimetre range, using commercially available LEDs and a smartphone. Four LEDs at different positions, mounted on a gantry, illuminate an object placed at the centre of the scene, so the illumination comes from above the object. Each LED is modulated in such a way that no synchronisation between the LEDs and the smartphone is required, and the modulation frequency is above the human flicker-fusion threshold, so the flicker is imperceptible. The smartphone is placed in front of the object within the scene and records frames at high speed (960 frames per second). The stack of frames is then processed to yield four separate images, each linked to one LED, revealing the illumination direction of each source. Knowing the positions of the LEDs relative to the object, these four distinct images enable us to apply a 3D imaging technique called photometric stereo. This technique gives us per-pixel surface information that we process in an algorithm to reconstruct the topography of the object.
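As a rough illustration (not the project's actual code), the per-pixel reconstruction step described above can be sketched in NumPy, assuming a Lambertian surface and distant point sources. The function name, array shapes and light directions here are all hypothetical:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from an image stack
    captured under known, distant light directions (Lambertian model).

    images:     (k, h, w) stack, one demultiplexed image per LED
    light_dirs: (k, 3) unit vectors from the surface towards each LED
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                 # (k, h*w) intensities
    # Lambertian model: I = L @ (albedo * normal); solve in least squares.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)        # per-pixel reflectance
    normals = G / np.maximum(albedo, 1e-9)    # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With four LEDs the system is overdetermined (four equations, three unknowns per pixel), which is why the least-squares solve is used rather than a direct inversion; the recovered normal field would then be integrated to obtain the topography.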
Another important property of this technique is that it is robust to unwanted, unmodulated background light. The reconstruction therefore works in a fully lit room, under general lighting or sunlight.
This 3D imaging technique has been demonstrated for both static and moving objects.

Since the last submission, we have investigated a hybrid method that makes our current 3D imaging technique (photometric stereo) robust to discontinuous scenes. Previous results showed high-resolution 3D reconstruction of objects but were not robust to discontinuities. By combining photometric stereo with another 3D imaging method, time-of-flight, we can achieve high resolution even within discontinuous scenes. The time-of-flight technique determines a depth map of the scene, using pulsed blue illumination and a single-photon avalanche detector to measure the distance between the detector and the objects. The depth map can then be used to select objects within the scene by distance. Once an object is selected, we run exactly the same photometric stereo method as before to reconstruct its 3D shape. In other words, time-of-flight acts as a masking tool that automatically selects the objects to be reconstructed at high resolution by photometric stereo. This hybrid technique shows an error of 3 mm, comparable to our previous result. The advantage now is that we can easily select the object to reconstruct, which reduces the computational time.

The award is still active and more work is ongoing to improve the system, as it currently has some limitations, such as the range of object materials that can be reconstructed.
Another objective of the award that has not yet been met is the real-time aspect of the technique. At the moment, all the reconstruction is done offline and takes three to four minutes to retrieve the topography of the object. A key objective of this award is to make the process faster, so that the reconstruction is displayed on a screen only a few seconds after recording the object.
Exploitation Route The work done in this award is carried out in collaboration with Aralia Robotics, Bristol, UK.
The outcomes of this funding will first be carried forward by a research team at the University of Strathclyde to complete the collaboration with Aralia. If a prototype can then be built and tested in different kinds of infrastructure, the outcomes would be taken up through non-academic routes.
Sectors: Digital/Communication/Information Technologies (including Software), Electronics