Guide me through the errors on a quantum computer!
Quantum computers have made a long journey from being science fiction to reality. Recently, IBM announced its largest quantum computer yet, with 433 quantum bits, or qubits. A 433-qubit device with noiseless, ideal qubits would be sufficient to show an advantage over even the best possible supercomputer in some fields of interest. However, qubits are extremely fragile, and the slightest interaction with the environment changes their information content. This fragility, called noise or error, largely restricts the capacity of current quantum computers. While error correction is the ultimate goal, it requires far more qubits, and hardware with a far lower noise profile, than we have today. While we wait for engineers to solve these serious issues, intelligent techniques to suppress and mitigate noise in the system have been developed to increase the usability of current quantum devices.
A general class of algorithms developed for near-term noisy quantum devices is the hybrid quantum-classical algorithms. In these algorithms, the overall workload is divided between a quantum processing unit (QPU) and a classical processing unit (CPU) (Fig. 1). The aim is to obtain a good expectation value for the problem at hand. A parameterised circuit, designed from domain knowledge, is initialised with random or some known values of the parameters. The quantum circuit is executed multiple times to obtain an expectation value, which is then fed to the classical optimiser. The classical optimiser suggests a new set of parameters that is expected to produce a better expectation value in the next iteration. Refer to this blog for more detail.
Such algorithms usually require only low-depth circuits, since a large chunk of the computation is transferred to the CPU. They are, therefore, a natural choice for current quantum devices.
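As a concrete illustration, here is a minimal sketch of such a loop, assuming a toy single-qubit "circuit" simulated with NumPy and SciPy's classical optimiser; the expectation-value function stands in for real executions on a QPU:

import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the QPU: a single qubit rotated by angle theta and
# measured in the Z basis. A real workflow would execute a parameterised
# circuit on hardware and estimate <Z> from measurement shots.
def expectation_value(theta):
    # |psi> = RY(theta)|0>  =>  <Z> = cos(theta)
    return np.cos(theta[0])

# The classical optimiser (CPU) repeatedly queries the quantum routine (QPU)
# and proposes new parameters until the expectation value converges.
result = minimize(expectation_value, x0=[0.1], method="COBYLA")
print("Optimal parameter:", result.x, "Minimum expectation:", result.fun)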
Error suppression for hybrid quantum-classical algorithms
Error suppression collectively refers to techniques that restrict the effect of device noise, such as intelligent selection of qubits, elimination of noisy gates from the circuit, and dynamical decoupling, leading to a lower noise profile in the system. The Quantum Approximate Optimisation Algorithm (QAOA) is a class of hybrid quantum-classical algorithms used to find good approximate solutions to combinatorial optimisation problems. For quantum systems, any unitary matrix is a valid quantum gate, which makes the design of quantum hardware difficult. However, it has been shown that if a quantum architecture can implement a small set of single-qubit and two-qubit gates, collectively called the basis gate set, then it can implement any unitary operation to arbitrary precision. Two-qubit gates are of particular importance since they can create entanglement, but they are also among the noisiest components of current quantum hardware. The number of two-qubit gates in a QAOA circuit is proportional to the number of edges in the graph representing the combinatorial optimisation problem. Therefore, we propose three techniques to eliminate two-qubit gates from the circuit while retaining functional equivalence.
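To see why the two-qubit gate count tracks the edge count, here is a minimal sketch of a single QAOA cost layer for Max-Cut, assuming Qiskit; each edge contributes a CNOT-RZ-CNOT block, i.e. two CNOTs per edge:

from qiskit import QuantumCircuit

def qaoa_cost_layer(n_qubits, edges, gamma):
    # One Max-Cut cost layer: every edge (i, j) contributes two CNOT gates.
    qc = QuantumCircuit(n_qubits)
    for i, j in edges:
        qc.cx(i, j)
        qc.rz(2 * gamma, j)
        qc.cx(i, j)
    return qc

# Example: a 4-vertex cycle graph has 4 edges, hence 8 CNOTs in this layer.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(qaoa_cost_layer(4, edges, gamma=0.7))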
In a controlled two-qubit gate, the first qubit is the control and the second is the target. A Controlled-NOT (CNOT) gate acts as a NOT gate on the target qubit whenever the control qubit is 1. Using this, we find that some CNOT gates in a QAOA circuit do not transfer any information and can be eliminated. The challenge is to arrange the CNOT gates in the circuit, without breaking functional equivalence, so that the maximum number of them can be eliminated. The first CNOT gate corresponding to an edge can be shown to be redundant, provided the qubits it acts on have not yet been touched by any other edge in the layer. This maps naturally to the edge colouring problem: the CNOT gates corresponding to edges of the same colour act on disjoint qubits and can be executed together. We place the edges of the colour class with the highest cardinality first to eliminate the maximum number of CNOT gates. For a graph with n vertices and m edges, this method can eliminate at most n/2 gates. An example of CNOT elimination via the edge colouring method is shown in Fig. 2.
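A simple way to pick such a set of pairwise disjoint edges is to compute a matching of the graph; the sketch below uses networkx for illustration, with a maximum matching standing in for the largest colour class:

import networkx as nx

# Build the Max-Cut instance graph.
G = nx.cycle_graph(6)  # 6 vertices, 6 edges

# Edges of one colour class are pairwise disjoint, i.e. they form a matching.
# A maximum matching therefore bounds what the edge colouring approach can
# offer: at most n/2 edges, hence at most n/2 eliminated CNOTs.
matching = nx.max_weight_matching(G, maxcardinality=True)
print("Disjoint edges whose first CNOT can be eliminated:", matching)
print("Eliminated CNOTs:", len(matching), "out of at most", G.number_of_nodes() // 2)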
A better result is obtained using a simple Depth First Search (DFS) approach, which can eliminate n-1 CNOT gates (shown to be optimal) at the expense of a somewhat increased circuit depth [1]. We finally propose a heuristic algorithm that retains the n-1 CNOT elimination while restricting the increase in circuit depth [2]. The heuristic approach achieves a ~10x reduction in infidelity compared to the standard general QAOA circuit for the Max-Cut problem.
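Very roughly, the DFS idea is to process the edges of a DFS spanning tree first, so that each of the n-1 tree edges introduces a freshly touched qubit and its first CNOT becomes removable; the precise construction and the optimality proof are in [1]. A toy sketch of the edge ordering, assuming networkx:

import networkx as nx

G = nx.cycle_graph(6)

# DFS spanning tree edges: there are exactly n-1 of them.
tree_edges = list(nx.dfs_edges(G, source=0))
non_tree_edges = [e for e in G.edges()
                  if e not in tree_edges and tuple(reversed(e)) not in tree_edges]

# Process tree edges first; one CNOT per tree edge becomes removable,
# giving the optimal n-1 eliminations at the cost of some extra depth.
edge_order = tree_edges + non_tree_edges
print("n-1 =", G.number_of_nodes() - 1, "tree edges:", tree_edges)
print("Suggested processing order:", edge_order)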
Error mitigation for hybrid quantum-classical algorithms
Apart from CNOT elimination to suppress the effect of noise, it is also possible to mitigate noise in a quantum circuit. Error mitigation typically refers to a method that extracts a value close to the noiseless, ideal expectation value from multiple noisy expectation values. A common method is Zero Noise Extrapolation (ZNE). In this method, multiple circuits are generated at different noise factors without losing functional equivalence with the original circuit. For example, since any quantum gate U is unitary, adding the inverse of U after it, followed by U again, keeps the functionality unchanged but roughly triples the noise associated with that gate (see Fig. 3). Once the expectation value is obtained at different noise factors, the ideal expectation value can be estimated by fitting the expectation values at the different noise factors and extrapolating to the zero-noise limit.
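A minimal sketch of the extrapolation step, assuming the expectation value has already been measured at noise factors 1, 3 and 5 (for example via the U, U-inverse, U folding described above), and fitting a polynomial with NumPy:

import numpy as np

# Expectation values measured at different noise amplification factors.
# (Illustrative numbers; in practice these come from the folded circuits.)
noise_factors = np.array([1, 3, 5])
expectations = np.array([0.82, 0.61, 0.45])

# Fit a low-order polynomial and evaluate it at the zero-noise limit.
coeffs = np.polyfit(noise_factors, expectations, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print("Zero-noise extrapolated expectation value:", zero_noise_estimate)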
Previously, in hybrid quantum-classical algorithms, ZNE was applied only on the final circuit with the optimal parameters. However, noise in each iteration of such algorithms produces an incorrect expectation value, thereby guiding the classical optimiser in the wrong direction. It is an obvious, and observed, fact that the classical optimiser fails to return the best parameters when the expectation values in each iteration are incorrect. We instead propose applying ZNE in every iteration of the algorithm. The resulting expectation value in each iteration is then closer to the ideal one, leading to better performance of the classical optimiser. Our results show that the optimal eigenvalues or energy values estimated with this method are closer to the ideal ones.
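Schematically, the change is to wrap ZNE around every objective-function evaluation rather than only around the final circuit. In the sketch below, run_circuit is a hypothetical placeholder for executing the parameterised circuit at a given noise factor; here it is faked with a simple analytic model so the example runs end to end:

import numpy as np
from scipy.optimize import minimize

def run_circuit(params, noise_factor):
    # Hypothetical placeholder for a real QPU execution: fake a noisy
    # expectation value that degrades linearly with the noise factor.
    ideal = np.cos(params[0])
    return ideal * (1 - 0.05 * noise_factor)

def mitigated_expectation(params, noise_factors=(1, 3, 5)):
    # ZNE applied inside each optimisation step, not just at the end.
    values = [run_circuit(params, nf) for nf in noise_factors]
    coeffs = np.polyfit(noise_factors, values, deg=1)
    return np.polyval(coeffs, 0.0)

# The classical optimiser now sees expectation values that are closer
# to the noiseless ones in every iteration.
result = minimize(mitigated_expectation, x0=[0.1], method="COBYLA")
print("Optimal parameter:", result.x, "Mitigated expectation:", result.fun)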
Error correction for heavy hexagonal codes
While we invest in the success of error suppression and mitigation, we should not forget the eventual goal that needs to be achieved: error correction. IBM Quantum devices follow a heavy hexagonal structure, in which the data qubits are placed at the junctions of the connecting lines, and the blue, red and white regions denote the flag and ancillary qubits required to fetch error information from the data qubits without disturbing their information content. Fig. 4 shows a heavy hexagonal quantum error correcting code (QECC) using 9 "physical" data qubits to create a more secure "logical" qubit.
A general requirement of any QECC is to decode which data qubits are erroneous from the measurement outcomes of the flag and ancillary qubits, called the syndrome. The performance of a decoder is characterised by two metrics, the pseudo-threshold and the threshold, which, broadly speaking, relate the probability of error on the secure logical qubit to the probability of error on the physical qubits. Naturally, the higher the value of these two metrics, the better the decoder.
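For instance, the pseudo-threshold can be read off as the physical error probability at which the logical error rate stops being better than the physical one; a toy sketch with made-up numbers:

import numpy as np

# Illustrative (made-up) logical error rates p_L measured at several
# physical error rates p for a fixed decoder.
p_phys = np.array([1e-3, 2e-3, 5e-3, 1e-2, 2e-2])
p_log  = np.array([2e-4, 9e-4, 6e-3, 2.4e-2, 9e-2])

# Pseudo-threshold: the crossing point where p_L(p) = p. Below it, encoding
# actually helps; above it, the logical qubit is worse than a bare qubit.
diff = p_log - p_phys
idx = np.where(np.diff(np.sign(diff)) != 0)[0][0]
x0, x1, y0, y1 = p_phys[idx], p_phys[idx + 1], diff[idx], diff[idx + 1]
pseudo_threshold = x0 - y0 * (x1 - x0) / (y1 - y0)  # linear interpolation
print("Approximate pseudo-threshold:", pseudo_threshold)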
We propose a machine-learning (ML) based approach to decoding the heavy hexagonal QECC. Here, error correction is mapped to a classification problem: given a syndrome, the task of the classifier is to predict the error class. We also show that the structure of the heavy hexagonal QECC can be exploited to find equivalence relations among errors, achieving a quadratic reduction in the effective number of error classes and making classification easier. We propose two algorithms to find such equivalence relations: first, a generalised method that works for any gauge QECC (loosely speaking, any QECC for which flag qubits are defined along with ancillary qubits), and second, a specialised algorithm that exploits the structure of the heavy hexagonal QECC to find the equivalence relations exponentially faster than the first. Our proposed decoder achieves ~3x and ~8x improvements in the threshold and pseudo-threshold of the heavy hexagonal QECC respectively [3].
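As a rough sketch of the classification setup (not the actual decoder from [3]), the snippet below trains a standard scikit-learn classifier to map synthetic syndrome bit-strings to error-class labels; the data shapes and sizes here are arbitrary placeholders:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=7)

# Synthetic stand-in data: each row is a syndrome (flag + ancilla measurement
# outcomes), each label is an error equivalence class. Real training data
# would come from simulating the heavy hexagonal code under a noise model.
n_samples, syndrome_bits, n_classes = 5000, 16, 12
X = rng.integers(0, 2, size=(n_samples, syndrome_bits))
y = rng.integers(0, n_classes, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The decoder is a classifier: syndrome in, most likely error class out.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
clf.fit(X_train, y_train)
print("Test accuracy on synthetic data:", clf.score(X_test, y_test))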
Acknowledgements and references
The error suppression and error correction research is in collaboration with Debasmita Bhoumik and Prof. Susmita Sur-Kolay, Indian Statistical Institute. The error suppression research was accepted as a talk at the APS March Meeting, and the error correction research was accepted as a poster at Quantum Information Processing (QIP) 2023. The error mitigation research was in collaboration with Aakif Akhtar, IIT Guwahati, as part of an IBM Research summer internship.
[1] Majumdar, R., Madan, D., Bhoumik, D., Vinayagamurthy, D., Raghunathan, S. and Sur-Kolay, S. Optimizing Ansatz Design in QAOA for Max-cut. arXiv:2106.02812
[2] Majumdar, R., Bhoumik, D., Madan, D., Vinayagamurthy, D., Raghunathan, S. and Sur-Kolay, S. Depth Optimized Ansatz Circuit in QAOA for Max-Cut. arXiv:2110.04637
[3] Bhoumik, D., Majumdar, R., Madan, D., Vinayagamurthy, D., Raghunathan, S. and Sur-Kolay, S. Efficient Machine Learning Based Decoder for Heavy Hexagonal QECC. arXiv:2110.09730