Quantum Advantage – Are We There Yet? (Part 1)

February 24, 2022
By Louis-Pierre Gravelle

Whenever I read a headline about quantum computing and the so-called advantage, I am reminded of long road trips with kids.  You’ve barely left the house, and already someone is asking “are we there yet?”  And the question keeps being asked, over and over again.

The notion of “quantum supremacy” was originally proposed by John Preskill in 2012 to describe the point at which quantum devices can perform tasks that no classical computer can match.  The expression conveyed the power of using quantum phenomena to fundamentally transform traditional approaches to problem solving, unlocking avenues of exploration that were inconceivable with traditional computers.  Over the past decade, the term itself has become controversial, mostly because of the word “supremacy” and its association with racial politics.  Most literature now refers to the concept as “quantum advantage”.

Quantum advantage as a concept has also proved problematic, and the definition of the advantage seems to be a moving target.  Google claimed in 2019 to have achieved it, a claim that was swiftly contested by IBM; more recently, researchers at a Chinese university have claimed it as well.

To fully appreciate this quantum advantage, which purports to accelerate drug discovery, render current cryptographic techniques obsolete, turbocharge AI and machine learning, and generally solve the world’s problems, this three-part series will explain how quantum computing differs from classical computing and survey the current state of development in quantum computing.

What is a computer? 

A computer is, at its most basic level, an extremely powerful calculating machine.  The vast majority of modern computers are organized around a central processing unit (a CPU).  The CPU is connected to input devices (such as a keyboard, a mouse, a microphone, or sensors), computer memory, and output devices (a printer, a screen, even switches).  The significant difference between a calculator and a computer is the memory: you can load into memory a set of instructions, a recipe if you like.  When you “start” the program, the instructions are executed in sequence until all of them have been completed.
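
To make the “recipe” idea concrete, here is a minimal sketch in Python of a stored-program machine: the instructions and the data both live in memory, and a loop executes the instructions one after another until the program ends.  The instruction names and memory layout are invented for illustration and do not correspond to any real CPU.

```python
# A toy stored-program machine: instructions live in memory and are
# executed in sequence, just like the "recipe" described above.
# The instruction set here is invented purely for illustration.

memory = {"A": 2, "B": 3, "result": None}

program = [
    ("LOAD", "A"),          # copy A into the accumulator
    ("ADD", "B"),           # add B to the accumulator
    ("STORE", "result"),    # write the accumulator back to memory
    ("PRINT", "result"),    # send the result to the output device
]

accumulator = 0
for opcode, operand in program:      # the fetch-execute loop
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "PRINT":
        print(memory[operand])       # prints 5
```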

The instructions that are fed into the CPU need to be converted into a “language” that the CPU understands.  For example, “add A plus B and output the result to the screen” is not something a CPU understands directly.  The instructions need to be translated into machine language, a string of 1s and 0s.  Each of these digits is a “bit”.  In classical computers, a bit is limited to a binary range: a one or a zero.  Classical computers use transistors to perform operations on these 1s and 0s, which are physically represented by the presence or absence of a voltage.
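
As a rough illustration (the exact encoding varies from one CPU to another), here is what the numbers in “add A plus B” look like once reduced to bits:

```python
# Every value a classical computer manipulates is ultimately a string
# of 1s and 0s. Python's format() exposes that binary representation.
A, B = 5, 3
print(format(A, "08b"))      # 00000101
print(format(B, "08b"))      # 00000011
print(format(A + B, "08b"))  # 00001000 -- the sum, also just bits
```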

The list of instructions is a computer program, which tells the CPU which calculations to perform, and in what order, using information stored in the memory.  The first computers were gigantic, filling entire rooms, and they weren’t, by today’s standards, particularly fast.

Looking back in time, not to the original vacuum-tube computers (think ENIAC[1]) but to the early 1970s, Intel developed a microprocessor (a chip) that could perform tens of thousands of instructions per second.

Today, processors are measured in millions of instructions per second (MIPS).  The Intel microprocessor of the early 1970s, the Intel 4004, could thus process 0.092 MIPS.  The 2014 Intel Core i7 can process 238,310 MIPS; in other words, today’s processors can perform roughly 2.6 million times as many operations per second as the first microprocessor[2].
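
The ratio behind that comparison is simple arithmetic on the MIPS figures quoted above:

```python
# Speed-up implied by the MIPS figures quoted above.
intel_4004 = 0.092       # MIPS, early 1970s
core_i7 = 238_310        # MIPS, 2014

print(core_i7 / intel_4004)  # about 2.6 million times faster
```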

The above focuses only on general-purpose microprocessors, found in, for example, laptops and desktop computers.  Parallel processing (i.e., breaking down the instructions stored in memory into parts that can be executed simultaneously) and the emergence of specialized microprocessors such as graphics processing units (GPUs) can affect the calculation of MIPS, which makes comparing microprocessors the subject of passionate debates online.
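
For a taste of the parallel idea, the toy sketch below splits one piece of work across several worker processes; it illustrates the principle only, and is not a model of how GPUs actually schedule work.

```python
# A toy illustration of parallel processing: the same total work,
# split into chunks that separate CPU cores can handle at once.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(4) as pool:                 # four workers in parallel
        parts = pool.map(partial_sum, chunks)
    print(sum(parts) == sum(range(1_000_000)))  # True
```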

What is clear is that today’s processors can perform complex calculations much faster than processors could 50 years ago.  Even in the event of a breakdown in Moore’s law[3], we can expect continued improvements in processor capabilities, meaning that a problem that is difficult or even impossible to compute in a reasonable amount of time today with classical computers may be within reach tomorrow.  Improvements in processing power mean that the theory underlying “artificial intelligence” and machine learning, impracticable to implement 20 years ago, is now being deployed to mainstream consumers in the form of image recognition in photographs and voice recognition.

What’s all this got to do with quantum? 

What is a quantum computer?

Quantum mechanics is the study of very small things, and by very small, I mean atoms and the particles that make up atoms.  Quantum mechanics concerns the behaviour and interaction of matter at the atomic and subatomic scale.

A quantum computer is a computing machine that relies on properties of quantum mechanics to perform calculations.  Using those properties enables a quantum computer to perform, in effect, many operations at once, resulting in a computing machine that is inherently more powerful than a classical computer for some problems.  However, the algorithms required to operate quantum computers are also highly complex.
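
Part 2 will explore those properties in depth, but as a preview, the standard textbook model of a single quantum bit can be simulated in a few lines of linear algebra.  The sketch below applies a Hadamard gate to a qubit in the |0⟩ state, producing an equal superposition of 0 and 1; the gate matrix is standard, and the rest is illustrative.

```python
# Toy simulation of a single qubit using the standard textbook model.
import numpy as np

ket0 = np.array([1.0, 0.0])               # the |0> state
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                      # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2    # Born rule: measurement probabilities
print(probabilities)                  # [0.5 0.5]
```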

Quantum advantage

The theory behind the concept of quantum advantage is the ability to prove that a quantum computer can perform operations (i.e., solve a problem) that a classical computer could not, because the classical computation would simply take too long.  Although it has been shown that, theoretically, quantum computers have an advantage over classical computers for a class of problems, demonstrating that advantage in the real world remains difficult.
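
“Would simply take too long” typically means the classical runtime grows exponentially with the problem size, while the quantum runtime grows only polynomially.  The numbers below are purely illustrative scaling curves, not measurements of any real machine:

```python
# Illustrative only: exponential growth (classical, worst case) versus
# polynomial growth (quantum) in the number of operations required.
for n in (10, 30, 50, 70):
    print(f"n={n:>2}  2**n = {2**n:>30,}  n**3 = {n**3:>9,}")
```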

Currently, the power of quantum computers has been demonstrated mainly on contrived experimental problems.  These results are, for lack of a better word, useless: they are proofs of concept, not calculations of a function with a usable result.  A commercially viable quantum computer able to solve problems with useful results remains some way off in terms of development.

The second part of this series will provide an in-depth exploration of the properties of quantum mechanics used to implement quantum computers. The third and final part of this series will examine the current state of development of quantum computing technologies.
