Noise Analysis and Mitigation for Scalable Quantum Computation


Quantum computers promise an unprecedented increase in computational power, enabling improved performance across a range of applications, from medicine, biology and the search for new materials to better machine learning and more accurate predictions in finance. During the last decade we have entered the "quantum technology era", in which this theoretical prospect is becoming a reality. Quantum computers are improving rapidly in both size and "quality", and we have now crossed the point beyond which these devices can no longer be simulated by classical machines.

However, the main obstacle to using quantum computers for practical applications is that they are very sensitive to imperfections and to undesired effects of their environment. There are two approaches to this issue. The first is to structure the computation so that unwanted errors are "corrected" automatically. While this idea is very appealing, it comes at a cost: for each "true" unit of quantum information used, one needs to manipulate many more physical units. The direct consequence is that useful applications would require quantum computers much bigger than those we can hope to have in the near future.

The second approach, which we take here, is considered the most promising for near-term applications. Instead of correcting the errors, one can try to mitigate them and reduce their effect on the computation. This can be done by breaking a large computation into smaller parts (some run on classical computers), cancelling some undesired effects by classically processing the results, and ensuring that the quantum part of the computation is performed in a way that accumulates the smallest possible errors. To do this, it is crucial to understand in depth the inner workings of the quantum hardware being used. A major obstacle to this understanding is the same phenomenon that makes quantum computing powerful in the first place, namely its "holistic" nature: the whole is more than the sum of its parts.
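One widely used instance of "cancelling undesired effects while classically processing the results" is zero-noise extrapolation. The sketch below is purely illustrative and is not the project's method: it assumes a hypothetical `noisy_expectation` stand-in with a linear noise model, runs it at deliberately amplified noise levels, and extrapolates the results back to the zero-noise limit on a classical computer.

```python
import numpy as np

def noisy_expectation(noise_scale, e_ideal=1.0, slope=-0.3):
    # Hypothetical stand-in for measuring an observable on quantum
    # hardware with the noise deliberately amplified by `noise_scale`.
    # Here we assume a simple linear noise model for illustration.
    return e_ideal + slope * noise_scale

# Measure at several amplified noise levels (scale 1.0 = native noise).
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Classical post-processing: fit a line and read off the value at
# noise scale 0 (Richardson-style extrapolation to zero noise).
slope, intercept = np.polyfit(scales, values, 1)
mitigated = intercept
print(mitigated)
```

The key design point is that the quantum device is only ever asked to run noisier versions of the same circuit, something it can do natively; the error cancellation itself happens entirely in the classical fit.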

In this feasibility study we will characterise and abstractly model, in a scalable way, the imperfections of one of the most promising quantum hardware approaches (superconducting qubits). Using this new understanding, we will develop software that is aware of the detailed imperfections of the hardware it runs on, and in return provides the best way to mitigate the undesired errors for a given application.

Lead Participant: UNIVERSITY OF EDINBURGH
Project Cost: £149,938
Grant Offer: £149,938

