Quantinuum researchers put forward a more efficient quantum algorithm to study quantum systems at finite temperature — with applications in machine learning, optimization, and material simulation.
Quantum computers are expected to outperform classical machines at studying quantum matter. However, when a quantum system is coupled to an environment at finite temperature, things get complicated: it all comes down to handling Gibbs states and computing their properties efficiently. Scientific preprint: https://arxiv.org/abs/2206.05302
Gibbs states describe quantum systems in thermal equilibrium with the environment.
Estimating the properties of a Gibbs state accurately and efficiently is not only important for quantum science but also for many applied computational problems: in industrial optimization, it is a pivotal step to achieving a quantum speedup in semi-definite programs; for generating high-quality synthetic data, it enables the scalable training of quantum Boltzmann machines; and for crafting high-performance materials, it provides new tools to study elusive but critical quantum effects at finite temperature — see for example the case of high-temperature superconductivity.
Preparing Gibbs states with current quantum computers is a daunting task. This is due to the dual nature of such states, part classical and part quantum: on the one hand, quantum computers naturally encode the exponentially growing space of pure quantum states, which classical computers cannot do; on the other hand, Gibbs states are a classical mixture of pure quantum states due to the effects of the temperature and the environment — so they are not pure states themselves.
Preparing mixed states requires additional qubits, called ancillae, which, roughly speaking, account for the effect of the environment on the quantum system. This qubit overhead can be substantial: known methods such as purification can double the number of qubits compared to what the same quantum system would need in a pure state, that is, at zero temperature.
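The qubit doubling of purification can be seen from the standard thermofield-double construction (a textbook form, not taken from the paper's text): a Gibbs state on $n$ qubits is recovered by tracing out $n$ ancilla qubits from the pure state

$$
|\mathrm{TFD}_\beta\rangle \;=\; \frac{1}{\sqrt{Z}} \sum_i e^{-\beta E_i/2}\, |i\rangle_{\mathrm{sys}} \otimes |i\rangle_{\mathrm{anc}},
\qquad
\mathrm{Tr}_{\mathrm{anc}}\, |\mathrm{TFD}_\beta\rangle\langle \mathrm{TFD}_\beta| \;=\; \frac{e^{-\beta H}}{Z},
$$

where $E_i$ and $|i\rangle$ are the eigenvalues and eigenstates of the Hamiltonian $H$, and $Z = \mathrm{Tr}\, e^{-\beta H}$.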
While the additional qubit requirements may not matter that much for future generations of quantum computers, qubits are a precious resource today: current quantum computers could help to perform useful calculations with Gibbs states, but is there a better way of utilizing the scarce resources that we have available right now?
In recent work, Quantinuum’s quantum machine learning researchers introduced an efficient representation of the Gibbs state that we call pure thermal shadows: it allows a quantum computer to obtain many Gibbs expectation values with fewer measurements.
Pure thermal shadows avoid the explicit preparation of the Gibbs state, which significantly reduces hardware requirements. For comparison, the purification method requires twice as many qubits.
This new algorithm combines quantum signal processing with classical shadow tomography and random state preparation, which further reduces hardware requirements, such as circuit depth and the number of shots, compared to existing proposals.
The algorithm can be summarized as follows:
The first step is the preparation of a random pure quantum state. We show that a 2-design with polynomial depth is sufficient for our purposes. This improves on previous proposals that required exponential depth.
The second step is the preparation of the thermal pure quantum (TPQ) state: it is obtained by evolving the random pure state in imaginary time under the system Hamiltonian. Quantum signal processing yields a suitable quantum circuit implementation.
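In the standard TPQ construction, imaginary time evolution amounts to applying the non-unitary operator $e^{-\beta H/2}$ to the random state $|r\rangle$ and renormalizing:

$$
|\psi_\beta\rangle \;=\; \frac{e^{-\beta H/2}\,|r\rangle}{\bigl\| e^{-\beta H/2}\,|r\rangle \bigr\|},
$$

where $\beta$ is the inverse temperature. On hardware, quantum signal processing implements a polynomial approximation of $e^{-\beta H/2}$ as a quantum circuit.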
In the last step, we construct classical shadows of the TPQ state from outcomes of randomized measurements — we call them pure thermal shadows (PTS). This can be implemented with a shallow Clifford circuit, V, followed by a measurement in the computational basis and some classically efficient post-processing steps. Crucially, the PTS become equal to the shadows of the true Gibbs state as the system size increases.
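The three steps can be emulated end to end with dense linear algebra on a few qubits. The sketch below is illustrative only: it stands in a Haar-random vector for the 2-design circuit, exact matrix exponentiation for quantum signal processing, and randomized single-qubit Pauli measurements for the Clifford layer; the model, couplings, and shot count are our choices, not the paper's.

```python
import numpy as np

# Minimal state-vector sketch of the three steps (illustrative only).
rng = np.random.default_rng(0)
n, beta = 3, 1.0                      # 3-qubit chain, inverse temperature

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)

def kron(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def two_site(op, i, j):
    ops = [I2] * n; ops[i] = op; ops[j] = op
    return kron(ops)

# Heisenberg XXZ chain (open boundary), anisotropy 0.5 as an example choice
H = sum(two_site(X, i, i + 1) + two_site(Y, i, i + 1)
        + 0.5 * two_site(Z, i, i + 1) for i in range(n - 1))

# Step 1: random pure state (stand-in for a polynomial-depth 2-design circuit)
r = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
r /= np.linalg.norm(r)

# Step 2: TPQ state via imaginary time evolution exp(-beta*H/2), here exact
evals, evecs = np.linalg.eigh(H)
psi = evecs @ (np.exp(-beta * evals / 2) * (evecs.conj().T @ r))
psi /= np.linalg.norm(psi)

# Step 3: pure thermal shadows from randomized single-qubit Pauli measurements.
# For Pauli-basis shadows the inverted snapshot on each qubit is (I + 3*s*P)/2,
# so the estimator of <Z0 Z1> is 9*s0*s1 whenever both qubits happened to be
# measured in the Z basis, and 0 otherwise.
HAD = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ROT = {"Z": I2, "X": HAD, "Y": HAD @ np.diag([1, -1j])}   # rotate to Z basis

est, shots = [], 20000
for _ in range(shots):
    bases = rng.choice(["X", "Y", "Z"], size=n)
    U = kron([ROT[b] for b in bases])
    probs = np.abs(U @ psi) ** 2
    bits = np.unravel_index(rng.choice(2**n, p=probs / probs.sum()), (2,) * n)
    s = 1 - 2 * np.array(bits)                            # +1 / -1 outcomes
    est.append(9.0 * s[0] * s[1] if bases[0] == bases[1] == "Z" else 0.0)

shadow_zz = float(np.mean(est))
tpq_zz = float(np.real(psi.conj() @ two_site(Z, 0, 1) @ psi))
rho = evecs @ np.diag(np.exp(-beta * evals)) @ evecs.conj().T
gibbs_zz = float(np.real(np.trace(rho @ two_site(Z, 0, 1))) / np.trace(rho).real)
print(shadow_zz, tpq_zz, gibbs_zz)
```

With enough snapshots the shadow estimate converges to the TPQ expectation value; the remaining TPQ-versus-Gibbs gap shrinks as the system size grows, which is the content of the theorem discussed below.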
The success of this algorithm rests on a mathematical proof that the expectation values of Gibbs states and thermal pure quantum states agree up to an error that falls off exponentially with system size. This is established in the following theorem, which shows that only on the order of log(M) PTS are needed to predict M linear properties of the Gibbs state:
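Schematically (our paraphrase, not the paper's exact statement), the guarantee combines two ingredients. First, the TPQ state reproduces Gibbs expectation values with a failure probability that vanishes exponentially in the number of qubits $n$:

$$
\bigl|\langle \psi_\beta | O | \psi_\beta \rangle - \mathrm{Tr}[\rho_\beta\, O]\bigr| \le \epsilon
\quad \text{with probability } 1 - e^{-\Omega(n)} .
$$

Second, by the standard classical-shadows bound, on the order of

$$
N \;=\; O\!\left(\frac{\log M \,\cdot\, \max_i \|O_i\|_{\mathrm{shadow}}^2}{\epsilon^2}\right)
$$

snapshots suffice to estimate all $M$ observables $O_1,\dots,O_M$ to precision $\epsilon$.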
This is the main theoretical contribution of the paper.
Resorting to quantum signal processing to prepare the thermal pure state gives us a way forward towards implementing the algorithm on future gate-based quantum hardware, which will be less noisy and capable of executing deeper circuits.
For the time being, we can verify that our algorithm works by simulating all the circuits for a couple of relevant use cases.
Using a state-vector simulator, we first validate our framework on the well-known Heisenberg XXZ model. In the figure above, we verify that the PTS yield expectation values in excellent agreement with those predicted by the shadows of the Gibbs state.
Then we consider an exciting use case, the training of a Quantum Boltzmann Machine (QBM): this task is intractable for classical computers and important for industry-relevant applications of quantum machine learning in generative modelling. It also serves as further proof of the applicability of our algorithm to very generic systems described by arbitrary fully-connected quantum Hamiltonians.
Here (figure above), we show that a fully-connected QBM can be efficiently trained to model a target XXZ Gibbs state: thanks to the PTS, the training is more sample-efficient than hybrid variational approaches, which do not scale with system size. In fact, thanks to Theorem 1, we expect that larger system sizes will help convergence to better models.
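A toy version of this training loop can be run exactly on two qubits. The sketch below fits a model Gibbs state rho(theta) = exp(-H(theta))/Z to a target XXZ Gibbs state by descending the quantum relative entropy; for this loss the gradient is exactly the difference between target and model expectation values of the Hamiltonian terms, which are precisely the quantities PTS would estimate on hardware. The term set, learning rate, and loss are illustrative choices, not the paper's setup.

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)

def gibbs(H):
    """Gibbs state exp(-H)/Z (inverse temperature absorbed into H)."""
    w, v = np.linalg.eigh(H)
    p = np.exp(-(w - w.min()))            # shift for numerical stability
    rho = (v * p) @ v.conj().T
    return rho / np.trace(rho).real

def rel_entropy(a, b):
    """Quantum relative entropy S(a||b) for full-rank density matrices."""
    wa, va = np.linalg.eigh(a); wb, vb = np.linalg.eigh(b)
    log_a = (va * np.log(wa.clip(1e-12))) @ va.conj().T
    log_b = (vb * np.log(wb.clip(1e-12))) @ vb.conj().T
    return np.trace(a @ (log_a - log_b)).real

terms = [np.kron(X, X), np.kron(Y, Y), np.kron(Z, Z),
         np.kron(Z, I2), np.kron(I2, Z)]          # QBM Hamiltonian terms

# Target: Gibbs state of an XXZ-type Hamiltonian
eta = gibbs(np.kron(X, X) + np.kron(Y, Y) + 0.5 * np.kron(Z, Z))
target_exp = np.array([np.trace(eta @ P).real for P in terms])

theta = np.zeros(len(terms))              # start from the maximally mixed model
history = []
for step in range(500):
    rho = gibbs(sum(t * P for t, P in zip(theta, terms)))
    model_exp = np.array([np.trace(rho @ P).real for P in terms])
    history.append(rel_entropy(eta, rho))
    # Exact gradient of S(eta||rho(theta)) w.r.t. theta_a is
    # <P_a>_target - <P_a>_model, so plain gradient descent applies.
    theta -= 0.2 * (target_exp - model_exp)
print(history[0], history[-1])            # relative entropy should decrease
```

On hardware, the two expectation-value vectors in the gradient would be estimated from shadows rather than computed exactly, which is where the sample efficiency of PTS enters.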
In addition, here (figure above), we show results where a QBM is trained to model a classical salamander retina data set: the learned quantum model generates samples that closely match the empirical data distribution.