The Case for Quantum

1. The Complexity of Adding

The case for quantum computers, simply put, is that they can solve certain problems that no classical computer ever could. To understand why this is, we first need to consider how much computational effort is required to solve certain problems.

To begin, we can revisit the algorithm considered in the first section: adding two numbers.

   9213
+  1854
=  ????

Adding two $n$-digit numbers can be done with a set of simple operations, each of which consists of just adding two single-digit numbers. To analyze the complexity of the procedure, we can think about how many of these basic additions are required and how this number depends on $n$. We'll refer to this number as $c(n)$.

In the easiest case, where we don't need to carry a 1 at any point, only $n$ basic additions are required. In the worst case, we will need to perform $n$ carry operations, each of which will require an extra basic addition. From these considerations, we can conclude that $n \leq c(n) \leq 2n$.
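To make this counting concrete, here is a minimal Python sketch (the function name and structure are our own illustration, not part of the original description) that adds two $n$-digit numbers digit by digit and counts the basic single-digit additions, including the extra ones caused by carries.

```python
def add_with_count(a: str, b: str):
    """Add two equal-length digit strings, counting single-digit additions."""
    n = len(a)
    count = 0
    carry = 0
    digits = []
    for i in range(n - 1, -1, -1):       # work from the least significant digit
        s = int(a[i]) + int(b[i])        # one basic addition
        count += 1
        if carry:
            s += carry                   # an extra addition when a carry arrives
            count += 1
        carry, digit = divmod(s, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits)), count

result, c = add_with_count("9213", "1854")
print(result, c)   # 11067, using between n and 2n basic additions
```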

2. Big O Notation

We can summarize this result by saying that $c(n)$ grows linearly with $n$. More generally, we can say that a linear function of $n$ can be found which acts as an upper bound for $c(n)$ when $n$ is large. Since this is a long and wordy sentence, we won't actually want to say this very often. Instead, we can express it more compactly using 'big O notation'.

Definition: Big O notation. For functions $f(x)$ and $g(x)$ of a parameter $x$, the statement $f(x) = O(g(x))$ means that there exist finite numbers $M>0$ and $x_0$ such that $$ f(x) \leq M g(x) \quad \forall \, x > x_0. $$

Big O notation is useful as it allows us to compare how the resources or runtime required by an algorithm scale with input size, independent of the specific platform and algorithm implementation under consideration. Below are examples of common scaling behaviors of a runtime $N$ as a function of input size $n$; it is clear that for a sufficiently large problem size, the runtime of an $O(a^n)$ algorithm will exceed that of an $O(n^b)$ algorithm, where $a$ and $b$ are constants.

[Figure: Comparisons of different time complexities, where n is the number of input bits and N is the number of operations required. [5]]
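As a rough numerical companion to the figure, the sketch below (the chosen values of $n$ are arbitrary) tabulates a few of these growth rates, showing how quickly the exponential term outpaces the others.

```python
import math

# Compare how different complexity classes grow with input size n.
for n in [1, 2, 5, 10, 20, 50]:
    print(f"n={n:>3}  log2(n)={math.log2(n):6.1f}  n^2={n**2:>6}  2^n={2**n:>18}")
```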

With this notation, the property described above is expressed simply as $c(n) = O(n)$. This captures the linear behavior without needing to dwell on the specifics. Therefore, independent of whether $c(n) = n$, $c(n) = 2n$, or something else, we can simply say that $c(n) = O(n)$.

There is a hidden assumption in what we have considered so far. By talking about the number of digits, we have assumed the use of a specific number system. However, the number of digits will depend on which number system we are using, be it decimal, binary, or something else. For example, the number of bits $n_2$ required to express a number is related to the number of decimal digits $n_{10}$ required to express the same number by

$n_2 = \left\lceil \frac{\log 10}{ \log 2} \, n_{10} \right\rceil \approx 3.3 \, n_{10}.$

Since this too is a linear relationship, it does not change how we express the complexity using big O notation. We can equally say that $c(n_2) = O(n_2)$, $c(n_{10}) = O(n_{10})$, or even $c(n_{10}) = O(n_{2})$. It is for this reason that we can often simply speak of the number of digits, $n$, without needing to specify what number system is used.
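We can check this relationship directly in Python, where int.bit_length() gives the number of bits and len(str(...)) gives the number of decimal digits; the ratio approaches $\log 10 / \log 2 \approx 3.3$ as the numbers grow (the sample numbers below are arbitrary).

```python
import math

for number in [9213, 10**50, 10**200]:
    n10 = len(str(number))       # number of decimal digits
    n2 = number.bit_length()     # number of binary digits (bits)
    print(n10, n2, round(n2 / n10, 2))

print(round(math.log(10) / math.log(2), 3))   # ~3.322
```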

3. Complexity Theory

Complexity theory is the study of the computational effort required to run any algorithm. By considering the best possible algorithm to solve a given problem, we can also study the computational effort inherent in solving this problem. For addition we already know the optimal algorithm, and so know that it is a problem with $O(n)$ complexity.

Multiplication is not quite so simple. Algorithms you learned at school for multiplying two $n$-digit numbers will have required $O(n^2)$ basic operations, such as single-digit additions and multiplications. Though algorithms with lower asymptotic complexity have been found, it is widely regarded as impossible to perform multiplication with $O(n)$ complexity.
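To see where the $O(n^2)$ comes from, the following rough sketch (our own illustration of schoolbook multiplication, not an optimized algorithm) counts the single-digit multiplications: every digit of one number is multiplied by every digit of the other.

```python
def schoolbook_multiplication_count(a: str, b: str) -> int:
    """Count the single-digit multiplications in schoolbook multiplication."""
    count = 0
    for digit_a in a:
        for digit_b in b:
            _ = int(digit_a) * int(digit_b)   # one basic multiplication
            count += 1
    return count

for n in [2, 4, 8]:
    print(n, schoolbook_multiplication_count("9" * n, "9" * n))   # grows as n^2
```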

Even so, multiplication is far from being the most complex problem. An example of a problem with far greater complexity is factorization: taking an $n$-digit number and finding its prime factors. The best known algorithm in this case has a complexity that is worse than $O\left(e^{n^{1/3}}\right)$. The exponential here means that the complexity grows very quickly and makes factorization a very hard problem to solve.

To demonstrate this point using actual computation time, we can take a recent example.$^{1}$ Consider the following number, known as RSA-250, which is 250 decimal digits (829 bits) long.

rsa_250 = 2140324650240744961264423072839333563008614715144755017797754920881418023447140136643345519095804679610992851872470914587687396261921557363047454770520805119056493106687691590019759405693457452230589325976697471681738069364894699871578494975937497937

If you try using your computer to add or multiply numbers of this size, you'll find that it can solve such problems very quickly. Multiplying the number of processor cores your computer uses by the number of seconds the calculation takes gives the cost in core-seconds, and you are sure to find that far less than one core-second is required.

However, performing factorization on this number requires a supercomputer and around 2700 core-years, which eventually yields the following two factors.

p = 64135289477071580278790190170577389084825014742943447208116859632024532344630238623598752668347708737661925585694639798853367
q = 33372027594978156556226010605355114227940760344767554666784520987023841729210037080257448673296881877565718986258036932062711
p*q
2140324650240744961264423072839333563008614715144755017797754920881418023447140136643345519095804679610992851872470914587687396261921557363047454770520805119056493106687691590019759405693457452230589325976697471681738069364894699871578494975937497937
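Assuming the values rsa_250, p and q shown above are defined in a Python session, a quick sketch like the following confirms the factorization and shows just how cheap the multiplication is compared with the roughly 2700 core-years needed to find the factors.

```python
import time

start = time.perf_counter()
product = p * q                         # multiply the two 125-digit factors
elapsed = time.perf_counter() - start

print(product == rsa_250)               # True: the factors are correct
print(f"multiplication took {elapsed:.6f} seconds")
```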

For the factorization of larger numbers, we easily get to a point where a planet-sized supercomputer would need to run for the age of the universe. Clearly, solving such a problem is practically impossible.

So far we have considered only mathematical operations on $n$-digit numbers, with the complexity expressed as the number of simple single-digit operations required. However, complexity theory can be used to analyze any computational method for any kind of problem, be it searching databases, rendering graphics, simulating dynamics, or traversing a dungeon in Legend of Zelda. In each case, we are able to find a parameter or set of parameters that serve as our input size and express the complexity in terms of this input size using big O notation. For searching an unstructured database of $N$ entries, for example, the complexity is $O(N)$.
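As a simple illustration of that last point, a straightforward linear search (a sketch with made-up data) may have to inspect every one of the $N$ entries before finding its target, which is exactly the $O(N)$ behavior quoted above.

```python
def linear_search(entries, target):
    """Return the index of target and how many entries were inspected."""
    comparisons = 0
    for index, entry in enumerate(entries):
        comparisons += 1
        if entry == target:
            return index, comparisons
    return None, comparisons

database = list(range(1000))
print(linear_search(database, 999))   # worst case: all N = 1000 entries checked
```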

Formally, defining the complexity of an algorithm depends on the exact theoretical model of computation we are using. Each model has a set of basic operations, known as primitive operations, with which any algorithm can be expressed. For Boolean circuits, as we considered in the first section, the primitive operations are the logic gates. For Turing machines, a hypothetical form of computer proposed by Alan Turing, we imagine a device stepping through and manipulating information stored on a tape. The RAM model has a more complex set of primitive operations and acts as an idealized form of the computers we use every day. All of these are models of digital computation, based on discrete manipulations of discrete values. Different as they may seem from each other, it turns out that each of them can simulate the others with relatively little overhead. This means that in most cases the computational complexity does not significantly depend on which of these models is used. Rather than stating complexity specifically for the RAM model or for Turing machines, we can therefore simply speak of the complexity for digital computers.

4. Beyond Digital Computation

Though digital computers are dominant now, they are not the only form of computation. Analog computers were also widely studied and used in the past. Unlike the discrete values of digital computers, these are based on precise manipulations of continuously varying parameters. It has sometimes been claimed that such devices could quickly solve problems that are intractable for digital computers. However, such claims have never been realized. A major stumbling block for analog computers is the inability to build devices with arbitrarily high precision. In digital computers, the discretization means that errors must be relatively large in order to be noticeable, and methods for detecting and correcting such errors can then be implemented. In analog computers, however, errors can be arbitrarily small and impossible to detect, yet their effects can still build up and ruin a computation.

If one were to propose an ideal model of computation, it might seek to combine the robustness of a digital computer with the subtle manipulations of an analog computer. To achieve this we can look to quantum mechanics. We have already seen that qubits are a system with discrete outputs 0 and 1, and yet can exist in states that can only be described by continuous parameters. This is a particular instance of the well-known notion of 'wave-particle' duality that is typical of quantum systems. They cannot be fully described as either discrete or continuous, but rather a combination of the two. As Einstein said,$^{2}$

'It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena...but together they do.'

A quantum computer, whose primitive operations are gates applied to qubits, is therefore neither analog nor digital, but something unique. In later chapters we will explore the consequences of this unique nature. We will see that quantum computers can solve problems with a radically different complexity from that of digital computers. In fact, quantum computing is the only known technology that can be exponentially faster than classical computers for certain tasks, potentially reducing calculation times from years to minutes. We will also explore how quantum error correction can suppress the effects of imperfections.

5. When to Use a Quantum Computer

With qubits and quantum gates, we can design novel algorithms that are fundamentally different from digital and analog classical ones. In this way, we hope to find solutions to problems that are intractable for classical computers.

One way in which this can be done is when we have some function for which we want to determine a global property: for example, finding the value of some parameter $x$ for which a function $f(x)$ is a minimum, or finding the period of $f(x)$ if it is periodic. An algorithm on a digital computer might compute $f(x)$ for many different inputs in order to gather enough information about the global property. With a quantum computer, however, the fact that we can create superposition states means that the function can be applied to many possible inputs simultaneously. This does not mean that we can read out all possible outputs, since a measurement of such a state simply gives us a single result. Instead, we can seek to induce a quantum interference effect that reveals the global property we require.
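As a minimal illustration of the first step of such an approach (a sketch assuming the Qiskit library is installed; it prepares the superposition of inputs but does not implement any particular algorithm's interference step), applying a Hadamard gate to each of $n$ qubits produces an equal superposition of all $2^n$ basis states.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
qc = QuantumCircuit(n)
for qubit in range(n):
    qc.h(qubit)                   # Hadamard on every qubit

state = Statevector(qc)           # equal superposition of all 2**n inputs
print(state.probabilities())      # 2**n equal probabilities of 1/2**n each
```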

This general description illustrates the workings of many of the quantum algorithms that have already been discovered. One prominent example is Grover's algorithm, which reduces the complexity of searching through $N$ items from $O(N)$ to $O(N^{1/2})$. This quadratic speedup could be useful in many applications with tasks that can be expressed as an unstructured search, such as optimization problems and machine learning.

[Interactive figure: comparing the performance of algorithms across different platforms is difficult. What we can tell through big O notation is that, despite the difference in speed between our classical and quantum computers, for a large enough problem the quantum search algorithm will always outperform the classical search algorithm.]
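The following back-of-the-envelope sketch makes the same point numerically; the per-operation times are made-up placeholders, chosen only to show that the $O(N^{1/2})$ query count eventually wins even when each quantum operation is far slower than a classical one.

```python
import math

classical_time_per_step = 1e-9    # hypothetical: 1 nanosecond per classical query
quantum_time_per_step = 1e-6      # hypothetical: 1 microsecond per quantum query

for exponent in range(4, 16, 2):
    N = 10 ** exponent
    classical = N * classical_time_per_step
    quantum = math.sqrt(N) * quantum_time_per_step
    print(f"N=10^{exponent:<2}  classical {classical:9.3g} s   quantum {quantum:9.3g} s")
```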

An even more impressive speedup is obtained with Shor's algorithm, which analyzes the periodic functions at the heart of the factorization problem. This allows a quantum solution for factoring $n$-digit numbers with complexity $O(n^3)$. This is a superpolynomial speedup compared with the complexity for digital computers, which is worse than $O\left(e^{n^{1/3}}\right)$.
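To get a feel for this difference, the sketch below compares the growth of $n^3$ with $e^{n^{1/3}}$ as the number of digits $n$ increases. All constant factors are set to 1 purely for illustration; real implementations have very different prefactors, but the exponential term eventually dwarfs the cubic one regardless.

```python
import math

for n in [100, 10_000, 1_000_000]:
    polynomial = n ** 3                    # Shor-like scaling
    exponential = math.exp(n ** (1 / 3))   # classical scaling quoted above (constants ignored)
    print(f"n={n:<9}  n^3 = {polynomial:.2e}   e^(n^(1/3)) = {exponential:.2e}")
```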

Another approach towards quantum algorithms is to use quantum computers to solve quantum problems. As we will see in the next chapter, expressing a quantum state requires an amount of information that scales exponentially with the number of qubits. Just writing down the state of $n$ qubits therefore becomes an intractable task for digital computers as $n$ increases. However, for a quantum computer we just need $n$ qubits to do the same job. This natural capability to express and manipulate quantum states allows us to study and better understand quantum systems of interest, such as molecules and fundamental particles.
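A quick estimate (a sketch; the 16 bytes per amplitude assume a complex number stored as two 8-byte floats) shows how rapidly the classical description of $n$ qubits grows.

```python
# An n-qubit state is described by 2**n complex amplitudes.
for n in [10, 30, 50, 100]:
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits: 2^{n} = {amplitudes:.2e} amplitudes, about {gigabytes:.2e} GB")
```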

Applying and adapting quantum algorithms in different industries therefore has the promise of enabling disruptive use cases in business and science. These include breakthroughs in drug discovery, machine learning, materials discovery, option pricing, protein folding, and supply chains.$^{3}$ Particularly promising are those problems for which classical algorithms face inherent scaling limits and which do not require a large classical dataset to be loaded. For quantum advantage, a problem's answer needs to depend strongly on exponentially many entangled degrees of freedom, with enough structure that quantum mechanics can evolve toward a solution without having to step through every path. Note, however, that the precise relationship between the problems that are 'easy' for quantum computers (solvable in polynomial time) and other complexity-theoretic classes is still an open question.$^{4}$

This is just a taste of how quantum algorithms can perform computation in a unique way. More details on these approaches can be found in later chapters. But first we need to look beyond the single qubit and invest some time in understanding the full set of quantum gates we will need. This is the focus of the next chapter.

6. References

  1. https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2020-February/001166.html
  2. Albert Einstein, Leopold Infeld (1938). The Evolution of Physics: The Growth of Ideas from Early Concepts to Relativity and Quanta. Cambridge University Press.
  3. https://www.ibm.com/thought-leadership/institute-business-value/report/quantumstrategy
  4. https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf
  5. Image: Cmglee / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)