Quantum Approximate Optimization Algorithm
In this post, I am going to introduce a famous NISQ algorithm for solving combinatorial optimization (CO) problems, the Quantum Approximate Optimization Algorithm (QAOA), proposed by Farhi, Goldstone, and Gutmann in 2014. QAOA follows the same structure as VQE, a quantum-classical feedback loop, but with a specific ansatz form derived from the problem Hamiltonian. The algorithm is closely related to the quantum adiabatic algorithm (QAA), and in my opinion it can be seen as an approximate simulation of QAA on a gate-based quantum computer, in which we optimize the time schedule to get the best performance at a given level of approximation.
QAOA ansatz
The ansatz in VQE is a fixed circuit structure with parameters we can optimize, whereas the QAOA ansatz is determined by the problem Hamiltonian (HC) and a mixer Hamiltonian (HB), which does not commute with HC. Most CO problems can be formulated as an Ising Hamiltonian, which contains only Z and ZZ operators, so the original choice of the mixer Hamiltonian in the paper is the sum of Pauli-X operators over all qubits.
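For concreteness, a generic n-qubit instance looks like the following (the coefficients hi and Jij are placeholders that depend on the specific CO problem):

HC = Σi hi Zi + Σi<j Jij Zi Zj,  HB = Σi Xi,

where Zi and Xi denote the Pauli operators acting on qubit i.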
Forming the unitary operators with respect to these two Hamiltonians results in

U(HB, β) = exp(-iβHB),  U(HC, 𝛾) = exp(-i𝛾HC),

with β and 𝛾 being trainable parameters. The ansatz is then formed by alternating these two operators for a total of p repetitions, starting from the uniform superposition |+⟩^⊗n (the ground state of -HB):

|ψ(β, 𝛾)⟩ = U(HB, βp) U(HC, 𝛾p) ⋯ U(HB, β1) U(HC, 𝛾1) |+⟩^⊗n.
We can implement the ZZ evolution exp(-i𝛾 ZZ) on a gate-based quantum computer with the circuit below, two CNOTs sandwiching a single-qubit Z rotation on the target qubit. Note that if there is a coefficient c in front of the ZZ operator, it should be absorbed into the rotation angle, i.e. c𝛾, but only 𝛾 is trainable.
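As a sanity check, here is a minimal numpy sketch (a standalone illustration, not the author's original figure) verifying that two CNOTs sandwiching an RZ(2𝛾) rotation on the target qubit reproduce exp(-i𝛾 Z⊗Z) exactly:

```python
import numpy as np

# Single-qubit building blocks
I = np.eye(2)
Z = np.diag([1.0, -1.0])

def rz(theta):
    """Z rotation: diag(exp(-i*theta/2), exp(+i*theta/2))."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# CNOT with qubit 0 as control and qubit 1 as target
# (qubit 0 is the left factor in the Kronecker product)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

gamma = 0.37  # arbitrary test angle

# Exact ZZ evolution exp(-i*gamma*Z(x)Z); Z(x)Z is diagonal,
# so its matrix exponential is just the exponential of the diagonal.
ZZ = np.kron(Z, Z)
exact = np.diag(np.exp(-1j * gamma * np.diag(ZZ)))

# Gate decomposition: CNOT, RZ(2*gamma) on the target, CNOT
decomposed = CNOT @ np.kron(I, rz(2 * gamma)) @ CNOT

print(np.allclose(exact, decomposed))  # True
```

With a coefficient c in front of the ZZ term, the same circuit works with the rotation angle 2c𝛾.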
The illustration of QAOA is similar to that of VQE.
Quantum Adiabatic Algorithm
To understand the relation between QAOA and QAA, we first have to explain the basic idea of QAA. Intuitively, the quantum adiabatic theorem states that:
A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian’s spectrum.
Thus, if we start in the ground state of a simple Hamiltonian and slowly tune it into the problem Hamiltonian, we will end up in a state close to the ground state of the problem Hamiltonian, which encodes the optimal solution of our optimization problem. This is also the main idea of quantum annealing, but here we are trying to do it on a gate-based universal quantum computer.
In quantum mechanics, the dynamics of a quantum state is governed by the Schrödinger equation, and the state at time T is

|ψ(T)⟩ = U(T, 0)|ψ(0)⟩,

where U(T,0) = exp(-iHT) is the time evolution operator for a time-independent Hamiltonian. However, the Hamiltonian during the adiabatic process is time dependent,

H(t) = (1 - s(t))HB + s(t)HC,  with s(0) = 0 and s(T) = 1,

thus the time evolution operator has to be approximated by slicing the total time T into p small steps Δt = T/p, during which the Hamiltonian is roughly constant:

U(T, 0) ≈ ∏j exp(-iH(jΔt)Δt),  j = 1, …, p (later time steps act to the left).
However, there is another problem. Since HC and HB don't commute,

exp(-i[(1 - s(j))HB + s(j)HC]Δt) ≠ exp(-i(1 - s(j))HBΔt) exp(-is(j)HCΔt),

where s(j) is shorthand for s(jΔt). This is where we have to introduce the Trotter-Suzuki decomposition for [A, B] ≠ 0:

exp(A + B) ≈ (exp(A/n) exp(B/n))^n,  which becomes exact in the limit n → ∞.

Therefore,

U(T, 0) ≈ ∏j ( exp(-i(1 - s(j))HBΔt/n) exp(-is(j)HCΔt/n) )^n.

Let's take n = 1 as an example and write βj = (1-s(j))Δt and 𝛾j = s(j)Δt; we have

U(T, 0) ≈ ∏j exp(-iβjHB) exp(-i𝛾jHC) = ∏j U(HB, βj) U(HC, 𝛾j),

which is identical to the QAOA ansatz!
Note that if we pick a higher Trotter order n and again collect everything other than -iHB or -iHC into the β's and 𝛾's, we end up with nothing other than a deeper ansatz, i.e. p becomes larger. However, since p is a tunable hyperparameter in QAOA, we don't actually have to care whether the extra approximation budget goes into smaller time steps or into the Trotter decomposition; we simply find the optimal schedule for a given p.
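To make the structure concrete, below is a minimal end-to-end numpy sketch. The 3-qubit ring cost Hamiltonian, the choice p = 2, and the hard-coded β, 𝛾 values are purely illustrative assumptions; in a real run a classical optimizer would tune β and 𝛾 to minimize the measured cost.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron_at(op, site, n):
    """Place a single-qubit operator `op` on qubit `site` of an n-qubit register."""
    mats = [op if q == site else I for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 3                          # toy example: 3 qubits on a ring
edges = [(0, 1), (1, 2), (2, 0)]

# Cost Hamiltonian HC = sum of ZZ terms, mixer HB = sum of X terms
HC = sum(kron_at(Z, i, n) @ kron_at(Z, j, n) for i, j in edges)
HB = sum(kron_at(X, q, n) for q in range(n))

def qaoa_state(betas, gammas):
    """Apply p alternating layers exp(-i*beta*HB) exp(-i*gamma*HC) to |+...+>."""
    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # uniform superposition
    for beta, gamma in zip(betas, gammas):
        psi = expm(-1j * gamma * HC) @ psi   # cost layer
        psi = expm(-1j * beta * HB) @ psi    # mixer layer
    return psi

# Hard-coded parameters just to show the structure; a classical optimizer
# would normally tune these to minimize the expectation value below.
betas, gammas = [0.8, 0.4], [0.3, 0.6]       # p = 2 layers
psi = qaoa_state(betas, gammas)
cost = np.real(psi.conj() @ HC @ psi)
print("<HC> =", cost)
```

On real hardware the dense matrix exponentials would of course be replaced by native gates, e.g. the ZZ decomposition shown earlier for the cost layer and single-qubit RX rotations for the mixer layer.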
Performance
Although researchers have tried to analyze QAOA's performance by testing it on many kinds of problems and by visualizing the optimization landscape, there is unfortunately still no rigorous theoretical guarantee on its performance for p > 1. I have provided some readings on this topic below.
Extensions
Several variations of the QAOA ansatz have also been proposed; some of them choose a different mixer Hamiltonian, while others iteratively modify the ansatz.
Thanks for reading, please correct me if there is any mistake.