# Simulating Molecules using VQE

In this tutorial, we introduce the Variational Quantum Eigensolver (VQE), motivate its use, explain the necessary theory, and demonstrate its implementation in finding the ground state energy of molecules.

## Introduction

In many applications it is important to find the minimum eigenvalue of a matrix. For example, in chemistry, the minimum eigenvalue of a Hermitian matrix characterizing the molecule is the ground state energy of that system. In the future, the quantum phase estimation algorithm may be used to find the minimum eigenvalue. However, its implementation on useful problems requires circuit depths exceeding the limits of hardware available in the NISQ era. Thus, in 2014, Peruzzo et al. proposed VQE to estimate the ground state energy of a molecule using much shallower circuits [1].

Formally stated, given a Hermitian matrix $H$ with an unknown minimum eigenvalue $\lambda_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $\lambda_{\theta}$ bounding $\lambda_{min}$:

\begin{align*} \lambda_{min} \le \lambda_{\theta} \equiv \langle \psi(\theta) |H|\psi(\theta) \rangle \end{align*}

where $|\psi(\theta)\rangle$ is the trial state associated with $\lambda_{\theta}$ (it need not be an eigenstate of $H$). By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ of $|\psi_{min}\rangle$. The estimate is iteratively refined by a classical controller that varies the parameter $\theta$ to minimize the expectation value $\langle \psi(\theta) |H|\psi(\theta) \rangle$.

## The Variational Method of Quantum Mechanics

### Mathematical Background

VQE is an application of the variational method of quantum mechanics. To better understand the variational method, some preliminary mathematical background is provided. An eigenvector, $|\psi_i\rangle$, of a matrix $A$ is invariant under transformation by $A$ up to a scalar multiplicative constant (the eigenvalue $\lambda_i$). That is,

\begin{align*} A |\psi_i\rangle = \lambda_i |\psi_i\rangle \end{align*}

Furthermore, a matrix $H$ is Hermitian when it is equal to its own conjugate transpose.

\begin{align*} H = H^{\dagger} \end{align*}

The spectral theorem states that the eigenvalues of a Hermitian matrix must be real. Thus, any eigenvalue of $H$ has the property that $\lambda_i = \lambda_i^*$. As any measurable quantity must be real, Hermitian matrices are suitable for describing the Hamiltonians of quantum systems. Moreover, $H$ may be expressed as

\begin{align*} H = \sum_{i = 1}^{N} \lambda_i |\psi_i\rangle \langle \psi_i | \end{align*}

where each $\lambda_i$ is the eigenvalue corresponding to the eigenvector $|\psi_i\rangle$. Furthermore, the expectation value of the observable $H$ on an arbitrary quantum state $|\psi\rangle$ is given by

\begin{align} \langle H \rangle_{\psi} &\equiv \langle \psi | H | \psi \rangle \end{align}

Substituting $H$ with its representation as a weighted sum of its eigenvectors,

\begin{align} \langle H \rangle_{\psi} = \langle \psi | H | \psi \rangle &= \langle \psi | \left(\sum_{i = 1}^{N} \lambda_i |\psi_i\rangle \langle \psi_i |\right) |\psi\rangle\\ &= \sum_{i = 1}^{N} \lambda_i \langle \psi | \psi_i\rangle \langle \psi_i | \psi\rangle \\ &= \sum_{i = 1}^{N} \lambda_i | \langle \psi_i | \psi\rangle |^2 \end{align}

The last equation demonstrates that the expectation value of an observable on any state can be expressed as a linear combination using the eigenvalues associated with $H$ as the weights. Moreover, each of the weights in the linear combination is greater than or equal to 0, as $| \langle \psi_i | \psi\rangle |^2 \ge 0$ and so it is clear that

\begin{align} \lambda_{min} \le \langle H \rangle_{\psi} = \langle \psi | H | \psi \rangle = \sum_{i = 1}^{N} \lambda_i | \langle \psi_i | \psi\rangle |^2 \end{align}

The above inequality is known as the variational method (in some texts it is also known as the variational principle) [2]. It implies that the expectation value of $H$ on any wave function is always at least the minimum eigenvalue of $H$. Moreover, the expectation value of the eigenstate $|\psi_{min}\rangle$ is given by $\langle \psi_{min}|H|\psi_{min}\rangle = \langle \psi_{min}|\lambda_{min}|\psi_{min}\rangle = \lambda_{min}$, so, as expected, $\langle H \rangle_{\psi_{min}}=\lambda_{min}$.
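
This inequality is easy to check numerically. The following sketch (a NumPy illustration added here for intuition, not part of the VQE workflow) draws random normalized states and verifies that every expectation value stays above the smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 4x4 Hermitian matrix H = A + A^dagger
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T

# eigvalsh returns the (real) eigenvalues of a Hermitian matrix in ascending order
lam_min = np.linalg.eigvalsh(H)[0]

# Sample random normalized states; each expectation value is bounded below by lam_min
for _ in range(1000):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    expval = np.real(psi.conj() @ H @ psi)
    assert expval >= lam_min - 1e-12
```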

### Bounding the Ground State

When the Hamiltonian of a system is described by the Hermitian matrix $H$ the ground state energy of that system, $E_{gs}$, is the smallest eigenvalue associated with $H$. By arbitrarily selecting a wave function $|\psi \rangle$ (called an ansatz) as an initial guess approximating $|\psi_{min}\rangle$, calculating its expectation value, $\langle H \rangle_{\psi}$, and iteratively updating the wave function, arbitrarily tight bounds on the ground state energy of a Hamiltonian may be obtained.
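
This procedure can be illustrated classically on a tiny system. The sketch below (illustrative only; the $2\times 2$ Hamiltonian and the one-parameter real ansatz are made up for this example) minimizes $\langle \psi(\theta)|H|\psi(\theta)\rangle$ with SciPy and recovers the smallest eigenvalue:

```python
import numpy as np
from scipy.optimize import minimize

# A small, hypothetical Hermitian "Hamiltonian"
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def expectation(theta):
    # One-parameter real ansatz |psi(theta)> = (cos(theta/2), sin(theta/2))
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    return psi @ H @ psi

result = minimize(expectation, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]

# The optimized expectation value converges to the smallest eigenvalue here,
# because this ansatz can represent the (real) ground state exactly.
```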

## The Variational Quantum Eigensolver

### Variational Forms

A systematic approach to varying the ansatz is required to implement the variational method on a quantum computer. VQE does so through the use of a parameterized circuit with a fixed form. Such a circuit is often called a variational form, and its action may be represented by the linear transformation $U(\theta)$. A variational form is applied to a starting state $|\psi\rangle$ (such as the vacuum state $|0\rangle$, or the Hartree Fock state) and generates an output state $U(\theta)|\psi\rangle\equiv |\psi(\theta)\rangle$. Iterative optimization over $|\psi(\theta)\rangle$ aims to yield an expectation value $\langle \psi(\theta)|H|\psi(\theta)\rangle \approx E_{gs} \equiv \lambda_{min}$. Ideally, $|\psi(\theta)\rangle$ will be close to $|\psi_{min}\rangle$ (where 'closeness' is characterized by either state fidelity, or Manhattan distance) although in practice, useful bounds on $E_{gs}$ can be obtained even if this is not the case.

Moreover, a fixed variational form with a polynomial number of parameters can only generate transformations to a restricted, polynomially parameterized family of states within an exponentially large Hilbert space. Consequently, various variational forms exist. Some, such as Ry and RyRz, are heuristically designed, without consideration of the target domain. Others, such as UCCSD, utilize domain-specific knowledge to generate close approximations based on the problem's structure. The structure of common variational forms is discussed in greater depth later in this document.

### Simple Variational Forms

When constructing a variational form we must balance two opposing goals. Ideally, our $n$ qubit variational form would be able to generate any possible state $|\psi\rangle$ where $|\psi\rangle \in \mathbb{C}^N$ and $N=2^n$. However, we would like the variational form to use as few parameters as possible. Here, we aim to give intuition for the construction of variational forms satisfying our first goal, while disregarding the second goal for the sake of simplicity.

Consider the case where $n=1$. The U3 gate takes three parameters, $\theta, \phi$ and $\lambda$, and represents the following transformation:

\begin{align} U3(\theta, \phi, \lambda) = \begin{pmatrix}\cos(\frac{\theta}{2}) & -e^{i\lambda}\sin(\frac{\theta}{2}) \\ e^{i\phi}\sin(\frac{\theta}{2}) & e^{i\lambda + i\phi}\cos(\frac{\theta}{2}) \end{pmatrix} \end{align}

Up to a global phase, any possible single qubit transformation may be implemented by appropriately setting these parameters. Consequently, for the single qubit case, a variational form capable of generating any possible state consists of a single U3 gate. Moreover, this universal 'variational form' has only 3 parameters and thus can be efficiently optimized. It is worth emphasizing that the ability to generate an arbitrary state ensures that during the optimization process, the variational form does not limit the set of attainable states over which the expectation value of $H$ can be taken. Ideally, this ensures that the minimum expectation value is limited only by the capabilities of the classical optimizer.
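
For reference, the U3 matrix above can be written out directly in NumPy (a small check added for this tutorial) and verified to be unitary for any parameter values:

```python
import numpy as np

def u3_matrix(theta, phi, lam):
    """The U3(theta, phi, lambda) matrix exactly as written above."""
    return np.array([
        [np.cos(theta / 2),                    -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),  np.exp(1j * (lam + phi)) * np.cos(theta / 2)],
    ])

U = u3_matrix(0.7, 1.2, -0.4)
assert np.allclose(U.conj().T @ U, np.eye(2))  # U3 is unitary
```

For example, `u3_matrix(np.pi, 0, np.pi)` reproduces the Pauli X gate.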

A less trivial universal variational form may be derived for the 2 qubit case, where two body interactions, and thus entanglement, must be considered to achieve universality. Based on the work presented by Shende et al. [3], a universal parameterized 2 qubit circuit can be built from three CX gates interleaved with parameterized single-qubit rotations, for fifteen parameters in total. Allow the transformation performed by such a circuit to be represented by $U(\theta)$. When optimized variationally, the expectation value of $H$ is minimized when $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle \approx |\psi_{min}\rangle$. By construction, $U(\theta)$ may produce a transformation to any possible state, and so this variational form may obtain an arbitrarily tight bound on two qubit ground state energies, limited only by the capabilities of the classical optimizer.
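
To make the construction concrete, here is one possible 15-parameter layout in that spirit, written as plain NumPy matrices. The gate placement is an assumption made for this sketch (it is not necessarily the exact circuit from Shende et al.), but any such circuit must at least compose to a unitary:

```python
import numpy as np

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

def u3(t, p, l):
    # General single-qubit rotation (up to global phase)
    return rz(p) @ ry(t) @ rz(l)

CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def two_qubit_form(theta):
    """A 15-parameter two-qubit template: u3 pair, CX, Rz/Ry layer, CX, Ry, CX, u3 pair."""
    t = theta
    U = np.kron(u3(*t[0:3]), u3(*t[3:6]))          # initial single-qubit rotations
    U = CX @ U
    U = np.kron(rz(t[6]), ry(t[7])) @ U            # middle rotation layer
    U = CX @ U
    U = np.kron(np.eye(2), ry(t[8])) @ U
    U = CX @ U
    U = np.kron(u3(*t[9:12]), u3(*t[12:15])) @ U   # final single-qubit rotations
    return U

U = two_qubit_form(np.random.default_rng(0).normal(size=15))
assert np.allclose(U.conj().T @ U, np.eye(4))  # the circuit is unitary
```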

### Parameter Optimization

Once an efficiently parameterized variational form has been selected, in accordance with the variational method, its parameters must be optimized to minimize the expectation value of the target Hamiltonian. Parameter optimization presents various challenges. For example, quantum hardware exhibits various types of noise, so an objective function evaluation (energy calculation) may not reflect the true objective function. Additionally, some optimizers perform a number of objective function evaluations that depends on the cardinality of the parameter set. An appropriate optimizer should be selected by considering the requirements of the application.

A popular optimization strategy is gradient descent, where each parameter is updated in the direction yielding the largest local change in energy. Consequently, the number of evaluations performed depends on the number of optimization parameters present. This allows the algorithm to quickly find a local optimum in the search space. However, this optimization strategy often gets stuck at poor local optima, and is relatively expensive in terms of the number of circuit evaluations performed. While an intuitive optimization strategy, it is not recommended for use in VQE.

An appropriate optimizer for a noisy objective function is the Simultaneous Perturbation Stochastic Approximation optimizer (SPSA). SPSA approximates the gradient of the objective function with only two measurements. It does so by concurrently perturbing all of the parameters in a random fashion, in contrast to gradient descent, where each parameter is perturbed independently. When utilizing VQE in either a noisy simulator or on real hardware, SPSA is recommended as the classical optimizer.
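
The core of SPSA can be sketched in a few lines. This is a schematic NumPy illustration (not Qiskit's implementation; a production SPSA also decays the step sizes $a$ and $c$ over the iterations), demonstrated on a made-up noisy quadratic standing in for an energy evaluation:

```python
import numpy as np

def spsa_step(objective, params, a=0.1, c=0.1, rng=None):
    """One SPSA iteration: approximate the gradient from just two
    evaluations by perturbing ALL parameters at once with a random +/-1 vector."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    diff = objective(params + c * delta) - objective(params - c * delta)
    grad_est = diff / (2 * c) * delta  # delta_i = +/-1, so 1/delta_i == delta_i
    return params - a * grad_est

# Demo on a noisy quadratic objective (hypothetical stand-in for an energy evaluation)
rng = np.random.default_rng(1)
def noisy_objective(x):
    return np.sum((x - 1.0) ** 2) + rng.normal(scale=0.01)

x = np.zeros(3)
for _ in range(500):
    x = spsa_step(noisy_objective, x, rng=rng)
# x is now close to the true minimizer (1, 1, 1) despite the noise
```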

When noise is not present in the cost function evaluation (such as when using VQE with a statevector simulator), a wide variety of classical optimizers may be useful. Two such optimizers supported by Qiskit Aqua are the Sequential Least Squares Programming optimizer (SLSQP) and the Constrained Optimization by Linear Approximation optimizer (COBYLA). It is worth noting that COBYLA only performs one objective function evaluation per optimization iteration (and thus the number of evaluations is independent of the parameter set's cardinality). Therefore, if the objective function is noise-free and minimizing the number of performed evaluations is desirable, it is recommended to try COBYLA.
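
The difference in evaluation counts is easy to observe with SciPy's versions of these optimizers (note these are SciPy implementations, not Qiskit Aqua's wrappers, and the quadratic objective is a made-up stand-in for an energy evaluation):

```python
import numpy as np
from scipy.optimize import minimize

calls = {"COBYLA": 0, "SLSQP": 0}

def make_objective(name):
    def objective(x):
        calls[name] += 1  # count every objective function evaluation
        return np.sum((x - 0.5) ** 2)
    return objective

for method in ("COBYLA", "SLSQP"):
    minimize(make_objective(method), x0=np.zeros(4), method=method)

# SLSQP approximates gradients by finite differences, so its evaluation count
# scales with the number of parameters; COBYLA's per-iteration count does not.
print(calls)
```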

### Example with a Single Qubit Variational Form

We will now use the simple single qubit variational form to solve a problem similar to ground state energy estimation. Specifically, we are given a random probability vector $\vec{x}$ and wish to determine a possible parameterization for our single qubit variational form such that it outputs a probability distribution that is close to $\vec{x}$ (where closeness is defined in terms of the Manhattan distance between the two probability vectors).

We first create the random probability vector in python:

import numpy as np
np.random.seed(999999)
target_distr = np.random.rand(2)
# We now convert the random vector into a valid probability vector
target_distr /= sum(target_distr)


We subsequently create a function that takes the parameters of our single U3 variational form as arguments and returns the corresponding quantum circuit:

from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
def get_var_form(params):
    qr = QuantumRegister(1, name="q")
    cr = ClassicalRegister(1, name='c')
    qc = QuantumCircuit(qr, cr)
    qc.u3(params[0], params[1], params[2], qr[0])
    qc.measure(qr, cr)
    return qc


Now we specify the objective function which takes as input a list of the variational form's parameters, and returns the cost associated with those parameters:

from qiskit import Aer, execute
backend = Aer.get_backend("qasm_simulator")
NUM_SHOTS = 10000

def get_probability_distribution(counts):
    output_distr = [v / NUM_SHOTS for v in counts.values()]
    if len(output_distr) == 1:
        output_distr.append(0)
    return output_distr

def objective_function(params):
    # Obtain a quantum circuit instance from the parameters
    qc = get_var_form(params)
    # Execute the quantum circuit to obtain the probability distribution associated with the current parameters
    result = execute(qc, backend, shots=NUM_SHOTS).result()
    # Obtain the counts for each measured state, and convert those counts into a probability vector
    output_distr = get_probability_distribution(result.get_counts(qc))
    # Calculate the cost as the distance between the output distribution and the target distribution
    cost = sum([np.abs(output_distr[i] - target_distr[i]) for i in range(2)])
    return cost


Finally, we create an instance of the COBYLA optimizer, and run the algorithm. Note that the output varies from run to run. Moreover, while close, the obtained distribution might not be exactly the same as the target distribution; however, increasing the number of shots taken will increase the accuracy of the output.

from qiskit.aqua.components.optimizers import COBYLA

# Initialize the COBYLA optimizer
optimizer = COBYLA(maxiter=500, tol=0.0001)

# Create the initial parameters (noting that our single qubit variational form has 3 parameters)
params = np.random.rand(3)
ret = optimizer.optimize(num_vars=3, objective_function=objective_function, initial_point=params)
# ret is a tuple of (optimal parameters, optimal function value, number of function evaluations)

# Obtain the output distribution using the final parameters
qc = get_var_form(ret[0])
counts = execute(qc, backend, shots=NUM_SHOTS).result().get_counts(qc)
output_distr = get_probability_distribution(counts)

print("Target Distribution:", target_distr)
print("Obtained Distribution:", output_distr)
print("Output Error (Manhattan Distance):", ret[1])
print("Parameters Found:", ret[0])

Target Distribution: [0.51357006 0.48642994]
Obtained Distribution: [0.5182, 0.4818]
Output Error (Manhattan Distance): 0.0001401187388391789
Parameters Found: [1.59966854 0.66273002 0.28432001]


### Structure of Common Variational Forms

As already discussed, it is not possible for a polynomially parameterized variational form to generate a transformation to any state. Variational forms can be grouped into two categories, depending on how they deal with this limitation. The first category of variational forms uses domain or application specific knowledge to limit the set of possible output states. The second approach uses a heuristic circuit without prior domain or application specific knowledge.

The first category of variational forms exploits characteristics of the problem domain to restrict the set of transformations that may be required. For example, when calculating the ground state energy of a molecule, the number of particles in the system is known a priori. Therefore, if a starting state with the correct number of particles is used, by limiting the variational form to only producing particle preserving transformations, the number of parameters required to span the new transformation subspace can be greatly reduced. Indeed, by utilizing similar information from Coupled-Cluster theory, the variational form UCCSD can obtain very accurate results for molecular ground state energy estimation when starting from the Hartree Fock state. Another example illustrating the exploitation of domain-specific knowledge follows from considering the set of circuits realizable on real quantum hardware. Extant quantum computers, such as those based on superconducting qubits, have limited qubit connectivity. That is, it is not possible to implement 2-qubit gates on arbitrary qubit pairs (without inserting swap gates). Thus, variational forms have been constructed for specific quantum computer architectures where the circuits are specifically tuned to maximally exploit the natively available connectivity and gates of a given quantum device. Such a variational form was used in 2017 to successfully implement VQE for the estimation of the ground state energies of molecules as large as BeH$_2$ on an IBM quantum computer [4].

In the second approach, gates are layered such that good approximations on a wide range of states may be obtained. Qiskit Aqua supports three such variational forms: RyRz, Ry and SwapRz (we will only discuss the first two). All of these variational forms accept multiple user-specified configurations. Three essential configurations are the number of qubits in the system, the depth setting, and the entanglement setting. A single layer of a variational form specifies a certain pattern of single qubit rotations and CX gates. The depth setting specifies how many times the variational form should repeat this pattern. By increasing the depth setting, at the cost of increasing the number of parameters that must be optimized, the set of states the variational form can generate increases. Finally, the entanglement setting selects the configuration, and implicitly the number, of CX gates. For example, when the entanglement setting is linear, CX gates are applied to adjacent qubit pairs in order (and thus $n-1$ CX gates are added per layer). When the entanglement setting is full, a CX gate is applied to each qubit pair in each layer. The circuits for RyRz corresponding to entanglement="full" and entanglement="linear" can be seen by executing the following code snippet:

from qiskit.aqua.components.variational_forms import RYRZ

entanglements = ["linear", "full"]
for entanglement in entanglements:
    form = RYRZ(num_qubits=4, depth=1, entanglement=entanglement)
    if entanglement == "linear":
        print("=============Linear Entanglement:=============")
    else:
        print("=============Full Entanglement:=============")
    # We initialize all parameters to 0 for this demonstration
    print(form.construct_circuit([0] * form.num_parameters).draw(line_length=100))
    print()

=============Linear Entanglement:=============
(text drawing of the 4-qubit RyRz circuit: U3 and U1 rotations on every qubit, CX gates between adjacent qubit pairs, and a final rotation layer)

=============Full Entanglement:=============
(text drawing of the 4-qubit RyRz circuit: U3 and U1 rotations on every qubit, CX gates between every qubit pair, and a final rotation layer)

Assume the depth setting is set to $d$. Then, RyRz has $n\times (d+1)\times 2$ parameters, Ry with linear entanglement has $2n\times(d + \frac{1}{2})$ parameters, and Ry with full entanglement has $d\times n\times \frac{(n + 1)}{2} + n$ parameters.
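
These formulas can be collected into small helper functions (added here for convenience; they simply restate the counts above):

```python
def ryrz_num_params(n, d):
    # Two rotations (Ry and Rz) per qubit, in d layers plus a final rotation layer
    return n * (d + 1) * 2

def ry_linear_num_params(n, d):
    # 2n * (d + 1/2) = n * (2d + 1)
    return n * (2 * d + 1)

def ry_full_num_params(n, d):
    return d * n * (n + 1) // 2 + n

# e.g. a 4-qubit RyRz form of depth 1 has 16 parameters
assert ryrz_num_params(4, 1) == 16
```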

## VQE Implementation in Qiskit

This section illustrates an implementation of VQE using the programmatic approach. Qiskit Aqua also enables a declarative implementation; however, it reveals less information about the underlying algorithm. This code, specifically the preparation of qubit operators, is based on the code found in the Qiskit Tutorials repository (as of July 2019, available at https://github.com/Qiskit/qiskit-tutorials).

The following libraries must first be imported.

from qiskit.aqua.algorithms import VQE, ExactEigensolver
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit.chemistry.aqua_extensions.components.variational_forms import UCCSD
from qiskit.aqua.components.variational_forms import RYRZ
from qiskit.chemistry.aqua_extensions.components.initial_states import HartreeFock
from qiskit.aqua.components.optimizers import COBYLA, SPSA, SLSQP
from qiskit import IBMQ, BasicAer, Aer
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
from qiskit.chemistry import FermionicOperator
from qiskit.providers.aer import noise
from qiskit.aqua import QuantumInstance
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter


### Running VQE on a Statevector Simulator

We demonstrate the calculation of the ground state energy for LiH at various interatomic distances. A driver for the molecule must be created at each such distance. Note that in this experiment, to reduce the number of qubits used, we freeze the core and remove two unoccupied orbitals. First, we define a function that takes an interatomic distance and returns the appropriate qubit operator, $H$, as well as some other information about the operator.

def get_qubit_op(dist):
    driver = PySCFDriver(atom="Li .0 .0 .0; H .0 .0 " + str(dist), unit=UnitsType.ANGSTROM,
                         charge=0, spin=0, basis='sto3g')
    molecule = driver.run()
    freeze_list = [0]
    remove_list = [-3, -2]
    repulsion_energy = molecule.nuclear_repulsion_energy
    num_particles = molecule.num_alpha + molecule.num_beta
    num_spin_orbitals = molecule.num_orbitals * 2
    remove_list = [x % molecule.num_orbitals for x in remove_list]
    freeze_list = [x % molecule.num_orbitals for x in freeze_list]
    remove_list = [x - len(freeze_list) for x in remove_list]
    remove_list += [x + molecule.num_orbitals - len(freeze_list) for x in remove_list]
    freeze_list += [x + molecule.num_orbitals for x in freeze_list]
    ferOp = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)
    ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)
    num_spin_orbitals -= len(freeze_list)
    num_particles -= len(freeze_list)
    ferOp = ferOp.fermion_mode_elimination(remove_list)
    num_spin_orbitals -= len(remove_list)
    qubitOp = ferOp.mapping(map_type='parity', threshold=0.00000001)
    qubitOp = qubitOp.two_qubit_reduced_operator(num_particles)
    shift = energy_shift + repulsion_energy
    return qubitOp, num_particles, num_spin_orbitals, shift


First, the exact ground state energy is calculated using the qubit operator and a classical exact eigensolver. Subsequently, the initial state $|\psi\rangle$ is created, which the VQE instance uses to produce the final ansatz $|\psi(\theta)\rangle$ that minimizes the expectation value. The exact result and the VQE result at each interatomic distance are stored. Observe that the quantity vqe.run(backend)['energy'] + shift is equivalent to $\min_{\theta}\left(\langle \psi(\theta)|H|\psi(\theta)\rangle\right)$, where the minimum found is not necessarily the global minimum.

When initializing the VQE instance with VQE(qubitOp, var_form, optimizer, 'matrix'), the expectation value of $H$ on $|\psi(\theta)\rangle$ is directly calculated through matrix multiplication. However, when using an actual quantum device, or a shot-based simulator such as the qasm_simulator with VQE(qubitOp, var_form, optimizer, 'paulis'), the calculation of the expectation value is more complicated. A Hamiltonian may be represented as a sum of Pauli strings, with each Pauli term acting on a qubit as specified by the mapping being used. Each Pauli string has a corresponding circuit appended to the circuit corresponding to $|\psi(\theta)\rangle$. Subsequently, each of these circuits is executed, and all of the results are used to determine the expectation value of $H$ on $|\psi(\theta)\rangle$. In the following example, we initialize the VQE instance in matrix mode, and so the expectation value is directly calculated through matrix multiplication.
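
To illustrate the final step, the expectation value of a single Pauli string that has been rotated into the computational basis, e.g. $Z \otimes Z$, can be estimated from measurement counts by weighting each bitstring with the parity of its measured bits. This is a schematic sketch of the idea with hypothetical counts (not Qiskit's internal code):

```python
def pauli_z_expectation(counts, shots):
    """Estimate <Z...Z> from computational-basis counts: a bitstring
    contributes +1 if it contains an even number of 1s, and -1 otherwise."""
    total = 0.0
    for bitstring, count in counts.items():
        parity = (-1) ** bitstring.count("1")
        total += parity * count
    return total / shots

# Hypothetical counts from 1000 shots of a two-qubit circuit
counts = {"00": 490, "11": 460, "01": 30, "10": 20}
print(pauli_z_expectation(counts, 1000))  # -> 0.9
```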

Note that the following code snippet may take a few minutes to run to completion.

backend = BasicAer.get_backend("statevector_simulator")
distances = np.arange(0.5, 4.0, 0.1)
exact_energies = []
vqe_energies = []
optimizer = SLSQP(maxiter=5)
for dist in distances:
    qubitOp, num_particles, num_spin_orbitals, shift = get_qubit_op(dist)
    result = ExactEigensolver(qubitOp).run()
    exact_energies.append(result['energy'] + shift)
    initial_state = HartreeFock(
        qubitOp.num_qubits,
        num_spin_orbitals,
        num_particles,
        'parity'
    )
    var_form = UCCSD(
        qubitOp.num_qubits,
        depth=1,
        num_orbitals=num_spin_orbitals,
        num_particles=num_particles,
        initial_state=initial_state,
        qubit_mapping='parity'
    )
    vqe = VQE(qubitOp, var_form, optimizer, 'matrix')
    results = vqe.run(backend)['energy'] + shift
    vqe_energies.append(results)
    print("Interatomic Distance:", np.round(dist, 2), "VQE Result:", results, "Exact Energy:", exact_energies[-1])

print("All energies have been calculated")

Interatomic Distance: 0.5 VQE Result: -7.039710219020506 Exact Energy: -7.039732521635202
Interatomic Distance: 0.6 VQE Result: -7.313344302334236 Exact Energy: -7.313345828761008
Interatomic Distance: 0.7 VQE Result: -7.500921095743192 Exact Energy: -7.500922090905936
Interatomic Distance: 0.8 VQE Result: -7.630976914468914 Exact Energy: -7.630978249333209
Interatomic Distance: 0.9 VQE Result: -7.7208107952020795 Exact Energy: -7.720812412134773
Interatomic Distance: 1.0 VQE Result: -7.782240655298441 Exact Energy: -7.782242402637011
Interatomic Distance: 1.1 VQE Result: -7.823597493320795 Exact Energy: -7.82359927636281
Interatomic Distance: 1.2 VQE Result: -7.850696622934822 Exact Energy: -7.850698377596024
Interatomic Distance: 1.3 VQE Result: -7.867561602181376 Exact Energy: -7.867563290110052
Interatomic Distance: 1.4 VQE Result: -7.876999876757721 Exact Energy: -7.877001491818373
Interatomic Distance: 1.5 VQE Result: -7.8810141736656405 Exact Energy: -7.881015715646992
Interatomic Distance: 1.6 VQE Result: -7.881070662952161 Exact Energy: -7.88107204403092
Interatomic Distance: 1.7 VQE Result: -7.878267162143656 Exact Energy: -7.878268167584993
Interatomic Distance: 1.8 VQE Result: -7.873440112155302 Exact Energy: -7.873440293132828
Interatomic Distance: 1.9 VQE Result: -7.86723366674701 Exact Energy: -7.8672339648160285
Interatomic Distance: 2.0 VQE Result: -7.860152327529411 Exact Energy: -7.86015320737878
Interatomic Distance: 2.1 VQE Result: -7.852595105536979 Exact Energy: -7.852595827876738
Interatomic Distance: 2.2 VQE Result: -7.844878726366329 Exact Energy: -7.844879093009722
Interatomic Distance: 2.3 VQE Result: -7.837257439448259 Exact Energy: -7.8372579676155025
Interatomic Distance: 2.4 VQE Result: -7.829935045088515 Exact Energy: -7.829937002623394
Interatomic Distance: 2.5 VQE Result: -7.823070191557451 Exact Energy: -7.82307664213409
Interatomic Distance: 2.6 VQE Result: -7.816782591999657 Exact Energy: -7.816795150472929
Interatomic Distance: 2.7 VQE Result: -7.8111534373726 Exact Energy: -7.811168284803366
Interatomic Distance: 2.8 VQE Result: -7.806218299266321 Exact Energy: -7.806229560089845
Interatomic Distance: 2.9 VQE Result: -7.801962397475152 Exact Energy: -7.8019736023325486
Interatomic Distance: 3.0 VQE Result: -7.798352412318197 Exact Energy: -7.7983634309151295
Interatomic Distance: 3.1 VQE Result: -7.795326815750017 Exact Energy: -7.795340451637537
Interatomic Distance: 3.2 VQE Result: -7.792800698225245 Exact Energy: -7.792834806738612
Interatomic Distance: 3.3 VQE Result: -7.790603799019874 Exact Energy: -7.790774009971014
Interatomic Distance: 3.4 VQE Result: -7.788715354695274 Exact Energy: -7.789088897991478
Interatomic Distance: 3.5 VQE Result: -7.787215781080283 Exact Energy: -7.787716973466144
Interatomic Distance: 3.6 VQE Result: -7.786080393658009 Exact Energy: -7.786603763673838
Interatomic Distance: 3.7 VQE Result: -7.785203497342158 Exact Energy: -7.785702912499886
Interatomic Distance: 3.8 VQE Result: -7.7844795319924325 Exact Energy: -7.784975591698873
Interatomic Distance: 3.9 VQE Result: -7.783853361693722 Exact Energy: -7.7843896116723315
All energies have been calculated

plt.plot(distances, exact_energies, label="Exact Energy")
plt.plot(distances, vqe_energies, label="VQE Energy")
plt.xlabel('Atomic distance (Angstrom)')
plt.ylabel('Energy')
plt.legend()
plt.show()

Note that the VQE results are very close to the exact results, and so the exact energy curve is hidden by the VQE curve.

### Running VQE on a Noisy Simulator

Here, we calculate the ground state energy for H$_2$ using a noisy simulator and error mitigation.

First, we prepare the qubit operator representing the molecule's Hamiltonian:

driver = PySCFDriver(atom='H .0 .0 -0.3625; H .0 .0 0.3625', unit=UnitsType.ANGSTROM, charge=0, spin=0, basis='sto3g')
molecule = driver.run()
num_particles = molecule.num_alpha + molecule.num_beta
qubitOp = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals).mapping(map_type='parity')
qubitOp = qubitOp.two_qubit_reduced_operator(num_particles)


Now, we load a device coupling map and noise model from the IBMQ provider and create a quantum instance, enabling error mitigation:

IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = Aer.get_backend("qasm_simulator")
device = provider.get_backend("ibmqx4")
coupling_map = device.configuration().coupling_map
noise_model = noise.device.basic_device_noise_model(device.properties())
quantum_instance = QuantumInstance(backend=backend, shots=1000,
                                   noise_model=noise_model,
                                   coupling_map=coupling_map,
                                   measurement_error_mitigation_cls=CompleteMeasFitter,
                                   cals_matrix_refresh_period=30)


Finally, we must configure the optimizer, the variational form, and the VQE instance. As the effects of noise increase with the number of two-qubit gates and with circuit depth, we use the heuristic variational form RYRZ rather than UCCSD: RYRZ has a much shallower circuit than UCCSD and uses substantially fewer two-qubit gates.

The following code may take a few minutes to run to completion.

exact_solution = ExactEigensolver(qubitOp).run()
print("Exact Result:", exact_solution['energy'])
optimizer = SPSA(max_trials=100)
var_form = RYRZ(qubitOp.num_qubits, depth=1, entanglement="linear")
vqe = VQE(qubitOp, var_form, optimizer=optimizer, operator_mode="grouped_paulis")
ret = vqe.run(quantum_instance)
print("VQE Result:", ret['energy'])

Exact Result: -1.86712097834127
VQE Result: -1.8220854070067132


Even with error mitigation enabled, the result does not fall within chemical accuracy (defined as being within 0.0016 Hartree of the exact result), but it is fairly close to the exact solution.

## Problems

1. You are given a Hamiltonian $H$ with the promise that its ground state is close to a maximally entangled $n$ qubit state. Explain which variational form (or forms) is likely to efficiently and accurately learn the ground state energy of $H$. You may also answer by creating your own variational form, and explaining why it is appropriate for use with this Hamiltonian.
2. Calculate the number of circuit evaluations performed per optimization iteration, when using the COBYLA optimizer, the qasm_simulator with 1000 shots, and a Hamiltonian with 60 Pauli strings.
3. Use VQE to estimate the ground state energy of BeH$_2$ with an interatomic distance of $1.3$Å. You may re-use the function get_qubit_op(dist) by replacing atom="Li .0 .0 .0; H .0 .0 " + str(dist) with atom="Be .0 .0 .0; H .0 .0 -" + str(dist) + "; H .0 .0 " + str(dist) and invoking the function with get_qubit_op(1.3). Note that removing the unoccupied orbitals does not preserve chemical precision for this molecule. However, to get the number of qubits required down to 6 (and thereby allowing efficient simulation on most laptops), the loss of precision is acceptable. While beyond the scope of this exercise, the interested reader may use qubit tapering operations to reduce the number of required qubits to 7, without losing any chemical precision.
## References

1. Peruzzo, Alberto, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature Communications 5 (2014): 4213.
2. Griffiths, David J., and Darrell F. Schroeter. Introduction to Quantum Mechanics. Cambridge University Press, 2018.
3. Shende, Vivek V., Igor L. Markov, and Stephen S. Bullock. "Minimal universal two-qubit CNOT-based circuits." arXiv preprint quant-ph/0308033 (2003).
4. Kandala, Abhinav, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242.