Spooky Quantum Action Passes Test

Recent experiments quash the hope that the unsettling phenomenon of quantum entanglement can be explained away

Not all revolutions start big. In the case of quantum mechanics, a quiet one began in 1964, when physicist John Bell published an equation. This equation, in the form of a mathematical inequality, proposed a test to address deep philosophical questions that troubled many of the early founders of quantum mechanics.

The issue was whether particles separated by vast distances could retain a connection so that measurements performed on one would affect the other. According to classical physics, this should be impossible. But under quantum theory, it happens all the time. Through his equation, Bell proposed a way to determine whether the universe could actually be that strange.

Over the past half-century his simple equation has profoundly changed the way we think about quantum theory. Today many of the quantum technologies that physicists are inventing owe their beginnings to Bell’s test. Yet it was not until 2015, more than 50 years after Bell proposed his inequality, that scientists were able to verify the predictions of Bell’s theorem in the most complete manner possible. These experiments conclude a quest that has spanned generations and mark the start of a new era in the development of quantum technologies.


Hidden Variables

To understand Bell’s equation, we must go back to the roots of quantum mechanics. This set of rules describes the behavior of light and matter at the smallest scales. Atoms, electrons, photons and other subatomic particles act differently from things we experience in our everyday lives. One of the major deviations is that these small particles exist in uncertain states. Take an electron’s spin, for example. If an electron whose spin is sideways passes through a magnetic field oriented up and down, half the time it will veer upward and half the time it will veer downward, but the outcome is truly random. Compare this to a coin flip. We might think that all coin flips are equally random, but if we knew precisely the mass of the coin, how much force was used to flip it and all the details about the air currents hitting it, we would be able to predict exactly how the coin would land. Electron spin is different, however. Even if we have perfect knowledge about all the properties of the electron and its spin before it passes through the magnetic field, quantum fuzziness prevents us from knowing which way it will go (we can, however, calculate the probability of its going up or down). When scientists actually measure a quantum system, though, all these possibilities cease to exist somehow, and a single outcome is decided—the electron ends up having a spin that is oriented either up or down.
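
This difference between classical ignorance and quantum fuzziness can be made concrete with a few lines of linear algebra. Here is a minimal Python sketch (our illustration, not part of any experiment described here) that encodes a sideways spin as an equal superposition of up and down and applies the standard Born rule to recover the 50-50 split:

```python
import numpy as np

# Spin states along z: "up" and "down"
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# A "sideways" spin (pointing along +x) is an equal superposition of up and down
sideways = (up + down) / np.sqrt(2)

# Born rule: each outcome's probability is the squared magnitude
# of the state's overlap with that outcome
p_up = abs(np.vdot(up, sideways)) ** 2
p_down = abs(np.vdot(down, sideways)) ** 2
print(p_up, p_down)  # 0.5 0.5 (up to rounding): equally likely, yet irreducibly random
```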

When physicists formulated quantum theory in the early 20th century, some of its founding members, such as Albert Einstein and Erwin Schrödinger, felt uncomfortable with the fuzziness of quantum states. Perhaps, they thought, nature is not really fuzzy, and a theory that goes beyond quantum mechanics could exactly predict the behavior of particles. Then it would be possible to foresee the outcome of a measurement of the spin of an electron in the same way it is possible to know exactly how a coin will land if you have enough information.

Schrödinger introduced the idea of entanglement (Verschränkung in German) to describe quantum fuzziness spread across two or more particles. According to quantum theory, properties of particles can be entangled such that their joint value is precisely known, but the individual values remain completely uncertain. An analogy would be two dice that, when rolled, would each yield a random result but together always add up to 7. Schrödinger used the idea of entanglement in a famous thought experiment in which the fuzziness of the state of an atom becomes entangled with a cat being dead or alive. Surely any cat is either dead or alive and not in an absurd limbo in between, Schrödinger reasoned, and therefore we should question the notion that atoms can be fuzzy at all.
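
The dice analogy can be captured in two lines of code. Tellingly, the classical simulation below works only by fixing both values in advance at the "source," which is exactly the hidden variable strategy, discussed next, that fails for real entangled particles:

```python
import random

def roll_entangled_dice():
    """Analogy only: each die alone is uniformly random, yet the pair always sums to 7."""
    a = random.randint(1, 6)   # individually random...
    b = 7 - a                  # ...but jointly constrained at the "source"
    return a, b

print([roll_entangled_dice() for _ in range(5)])
```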

Einstein, with his collaborators Boris Podolsky and Nathan Rosen (known together as EPR), took the argument a step further by analyzing two entangled electrons that are far apart. Imagine that the spins of the particles are entangled such that when they are measured along the same orientation, opposite values will always result. For instance, if scientists measure one electron spin and find it to be pointing up, the other will point down. Such correlations are certainly surprising when the electrons are far enough apart that no signal traveling at the speed of light could pass between them before their individual spins are measured. How does the second particle know that the first one was up? Einstein famously called this synchronization “spooky action at a distance.”

The EPR analysis of this case, published in 1935 in a now classic paper, started from two very reasonable assumptions. First, if scientists can predict a measurement outcome with certainty, there must be some property in nature that corresponds to this outcome. Einstein named these properties “elements of reality.” For example, if we know that an electron’s spin is up, we can predict with certainty that if it travels through an appropriate magnetic field it will always be deflected upward. In this situation, the electron’s spin would be an element of reality because it is well defined and not fuzzy. Second, an event in one place cannot instantaneously affect a faraway event; influences cannot travel faster than the speed of light.

Taking these assumptions, let us analyze two entangled electrons held at distant places by two people, Alice and Bob. Suppose Alice measures her electron spin along the z direction. Because of the perfect anticorrelation, she immediately knows what the outcome will be if Bob measures his electron spin along z as well. According to EPR, the z component of Bob’s electron spin would thus be an element of reality. Similarly, if Alice decides to measure the spin along the x direction, she would know with certainty the outcome of a measurement on Bob’s electron spin along x. In this case, the x component of Bob’s electron spin would be an element of reality. But because Alice and Bob are far apart, Alice’s decision to measure along the z direction or the x direction cannot influence what happens at Bob’s location. Therefore, to account for the perfect anticorrelations predicted by quantum theory, the value of Bob’s electron spin must be perfectly predictable along both the z direction and the x direction. This appears to contradict quantum theory, which states, through the so-called Heisenberg uncertainty principle, that the spin can have a well-defined value along a single direction only and must be fuzzy along the others.
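
For readers who want to see the state itself: the perfectly anticorrelated two-electron state EPR analyzed, known as the singlet, takes the same form (up to an overall phase) whether it is written in the z basis or the x basis, which is why the anticorrelation holds along either axis:

```latex
\[
  |\psi^{-}\rangle
  \;=\; \frac{1}{\sqrt{2}}\bigl(|\!\uparrow_z\rangle|\!\downarrow_z\rangle
        - |\!\downarrow_z\rangle|\!\uparrow_z\rangle\bigr)
  \;\propto\; \frac{1}{\sqrt{2}}\bigl(|\!\uparrow_x\rangle|\!\downarrow_x\rangle
        - |\!\downarrow_x\rangle|\!\uparrow_x\rangle\bigr).
\]
```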

This conflict led EPR to conclude that quantum theory is incomplete. They suggested that it might be possible to resolve the contradiction by supplementing the theory with extra variables. In other words, there might be a deeper theory that goes beyond quantum mechanics in which the electrons possess extra properties that describe how they will behave when jointly measured. These extra variables might be hidden from us, but if we had access to them, we could predict exactly what would happen to the electrons. The apparent fuzziness of quantum particles is a result of our ignorance. Physicists call any such successor to quantum mechanics that contains these hidden variables a “local hidden variable theory.” The “local” here refers to the requirement that no influence, hidden variables included, can travel faster than the speed of light.

Bell’s Twist

Einstein did not question the predictions of quantum mechanics itself; rather he believed there was a deeper truth in the form of hidden variables that govern reality. After the 1935 EPR paper, interest in these foundational issues in quantum mechanics died down. The possibility of hidden variables was seen as a philosophical question without any practical value—the predictions of theories with and without hidden variables appeared to be identical. But that changed in 1964, when Bell startlingly showed that in certain circumstances hidden variable theories and quantum mechanics predict different things. This revelation meant it is possible to test experimentally whether local hidden variable theories—and thus Einstein’s hoped-for deeper truth of nature—can really exist.

Bell analyzed the EPR thought experiment but with one twist: he let Alice and Bob measure their electron spins along any possible direction. In the traditional experiment, Alice and Bob must measure along the same direction and therefore find that their results are perfectly anticorrelated: if Alice measures her spin as up, then Bob always measures down. But if Alice and Bob are sometimes measuring along different axes, sometimes their outcomes are not synchronized, and that is where the differences between quantum theory and hidden variable theories come in. Bell showed that for certain sets of directions, the correlations between the outcomes of Alice’s and Bob’s measurements would be stronger according to quantum theory than according to any local hidden variable theory—a difference known as Bell’s inequality. These differences arise because the hidden variables cannot influence one another faster than the speed of light and therefore are limited in how they can coordinate their efforts. In contrast, quantum mechanics allows the two electrons’ spins to exist jointly in a single entangled fuzzy state that can stretch over vast distances. Entanglement causes quantum theory to predict correlations that are up to 40 percent stronger.
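
The most commonly tested version of this bound (the CHSH form, introduced below) makes the figure precise. With two measurement settings per side and outcomes recorded as +1 or -1, and with E denoting the average product of the two outcomes, local hidden variable theories obey a limit that quantum mechanics can exceed by a factor of roughly 1.41, the "40 percent" quoted above:

```latex
\[
  S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b') \;\le\; 2
  \quad \text{(any local hidden variable theory)},
\]
\[
  S_{\max}^{\mathrm{QM}} \;=\; 2\sqrt{2} \;\approx\; 2.83,
  \qquad
  \frac{2\sqrt{2}}{2} \;\approx\; 1.41 .
\]
```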

Bell’s theorem completely changed physicists’ thinking. It showed a mathematical conflict between Einstein’s view and quantum theory and outlined a powerful way for experimentally testing the two. Because Bell’s theorem is a mathematical inequality that limits how high correlations can be under any local hidden variable theory, experimental data that exceed these bounds—in other words, that “violate” Bell’s inequality—will show that local hidden variable theories cannot describe nature.

Soon after Bell’s publication, physicists John Clauser, Michael Horne, Abner Shimony and Richard Holt (known as CHSH) found similar inequalities that were easier to test in experiments. Researchers performed the first tests in the early 1970s, and since then experiments have come closer and closer to the ideal of Bell’s proposed setup. The experiments have found correlations that violate Bell’s inequality and seemingly cannot be explained by local hidden variable theories. Until 2015, though, all experiments necessarily relied on one or more additional assumptions because of imperfections in the setups. These assumptions provide loopholes that local hidden variable theories could in principle use to pass the test.

In virtually all such experiments in the 20th century, scientists generated entangled photons at a source and sent them to measurement stations (standing in for Alice and Bob). The Alice and Bob stations each measured their respective photon along one of two orientations, noting its polarization—the direction in which the photon’s electric field oscillates (polarization can be thought of as the spin of a photon). The scientists then calculated the average correlations between the two stations’ outcomes and plugged those into Bell’s equation to check whether the results violated the inequality.
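
Once the outcome records exist, the analysis step is a short computation. The Python sketch below is schematic: it samples outcomes from the quantum prediction for a singlet rather than from real detectors, and the helper names are ours. Note that the sampler "cheats" by letting Bob's outcome depend directly on Alice's, a shortcut no local hidden variable model is allowed:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def sample_outcomes(angle_diff, n):
    """Sample n pairs of +/-1 outcomes whose correlation matches the
    quantum prediction for a singlet, E = -cos(angle_diff)."""
    p_same = (1 - np.cos(angle_diff)) / 2    # probability the two outcomes agree
    alice = rng.choice([-1, 1], size=n)      # Alice's side: uniformly random
    agree = rng.random(n) < p_same
    bob = np.where(agree, alice, -alice)     # nonlocal shortcut: uses Alice's result
    return alice, bob

def E(a_setting, b_setting, n=200_000):
    """Estimate the correlation E for one pair of analyzer settings."""
    alice, bob = sample_outcomes(a_setting - b_setting, n)
    return np.mean(alice * bob)

# Settings that maximize the quantum CHSH value for a singlet
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # about 2.83 = 2*sqrt(2); any local hidden variable theory gives <= 2
```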

[Box graphic: a loophole-free Bell test. Credit: Matthew Twombly; source: modified from “TU Delft—A Loophole-Free Bell Test” (based on text by Michel van Baal and graphics by Scixel), TU Delft, 2015]

Tests with Caveats

The first series of experiments used fixed measurement directions. In these cases, there was ample time for hidden variables (using knowledge of the measurement directions on either side) to influence the outcomes. That is, hidden signals could tell Bob which direction Alice used to measure her photon without traveling faster than light. This so-called locality loophole means that a hidden variable theory could match the quantum correlations. In 1982 French physicist Alain Aspect and his co-workers performed a test in which the photons were sent to opposite ends of a large room and their polarization was measured. While these entangled photons were in flight, the polarization angle of the measurement device changed periodically. In the late 1990s Anton Zeilinger, now at the University of Vienna, and his colleagues further improved this strategy by using truly random (as opposed to periodic) polarization-measurement directions. In addition, these measurement directions were determined very shortly before the measurements took place, so hidden signals would have had to travel faster than light to affect this experiment. The locality loophole was firmly closed.

These experiments had one drawback, however: photons are hard to work with. Most of the time the tests got no answer at all, simply because the photons were not created in the first place or were lost along the way. The experimenters were forced to assume that the trials that worked were representative of the full trial set (the “fair sampling assumption”). If this assumption were dropped, the results would no longer rule out local hidden variable theories: something different could have been happening in the trials where photons were lost, and if their data had been included, the conflict might have disappeared. Scientists were able to close this so-called detection loophole in this century by giving up photons and using matter, such as trapped ions, atoms, superconducting circuits and nuclei in diamond, which can all be entangled and measured with high efficiency. The problem is that in these cases the particles were all located extremely close to one another, leaving the locality loophole open. Thus, although these Bell tests were ingenious, they could all, at least in principle, be explained by a local hidden variable theory. A Bell test with all the loopholes closed simultaneously became one of the grandest challenges in quantum science.

Thanks to rapid progress in scientists’ ability to control and measure quantum systems, it became possible in 2015, 80 years after the EPR paper and 51 years after Bell’s equation, to carry out a Bell test in the ideal setting, often referred to as a loophole-free Bell test. In fact, within a short span of time, four different groups found results that violated Bell’s inequality with all loopholes closed—providing ironclad evidence against local hidden variable theories.

Closing the Loopholes

One of us (Hanson) and his collaborators performed the first experiment to close all loopholes at the Delft University of Technology in the Netherlands using a setup [see box graphic above] that closely resembles the original EPR concept. We entangled the spins of two electrons contained inside a diamond, in a space called a defect center, where a carbon atom should have been but was missing. The two entangled electrons were in different laboratories across campus, and to make sure no communication was possible between them, we used a fast random-number generator to pick the direction of measurement. This measurement was finished and locally recorded on a hard drive before any information from the measurement on the other side could have arrived at light speed. A hidden signal telling one measuring station which direction the other had used would not have had time to travel between the labs, so the locality loophole was firmly closed.

These strict timing conditions required us to separate the two electrons by more than a kilometer, about two orders of magnitude farther apart than the previous world record for entangled matter systems. We achieved this separation by using a technique called entanglement swapping, in which we first entangle each electron with a photon. We then send the photons to meet halfway between the two labs on a semitransparent mirror where we have placed detectors on either side. If we detect the photons on different sides of the mirror, then the spins of the electrons entangled with each photon become entangled themselves. In other words, the entanglement between the electrons and the photons is transferred to the two electrons. This process is prone to failure—photons can be lost between the diamonds and the mirror, just as in the earlier photon-based experiments. But we start a Bell trial only if both photons are detected; thus, we deal with photon loss beforehand. In this way, we close the detection loophole because we do not exclude the findings of any Bell test trials from our final results. Although the photon loss related to the large separation in our case does not limit the quality of the entanglement, it does severely restrict the rate at which we can conduct Bell trials—just a few per hour.
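
The quantum mechanics of entanglement swapping fits in a few lines of linear algebra. Below is an idealized numpy sketch (loss, noise and the actual beam-splitter optics are omitted, and the choice of singlet states is our simplification): two electron-photon pairs start out entangled, the two photons are projected onto an entangled state, and the two electrons, which never interacted, end up entangled with each other.

```python
import numpy as np

# Single-qubit basis states
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Each electron-photon pair starts in the singlet state (|01> - |10>)/sqrt(2)
singlet = (np.kron(zero, one) - np.kron(one, zero)) / np.sqrt(2)

# Two independent pairs; qubit order: electron A, photon A, electron B, photon B
state = np.kron(singlet, singlet).reshape(2, 2, 2, 2)

# Joint measurement on the two photons: project them onto the singlet.
# Contract the photon indices (axes 1 and 3) with the singlet amplitudes.
photons = singlet.reshape(2, 2)
electrons = np.einsum('pq,apbq->ab', photons.conj(), state)

# This particular measurement outcome occurs with probability 1/4
p_success = np.sum(np.abs(electrons) ** 2)
electrons = electrons / np.sqrt(p_success)

# The remote electrons are now themselves in a singlet state
fidelity = abs(np.vdot(singlet, electrons.reshape(4))) ** 2
print(p_success, fidelity)  # 0.25 1.0 (up to floating-point rounding)
```

In the real experiment the projection is implemented by interfering the photons on the semitransparent mirror and detecting one on each side; the sketch captures only the bookkeeping of the states.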

After running the experiment nonstop for several weeks in June 2015, we found Bell’s inequality was violated by as much as 20 percent, in full agreement with the predictions of quantum theory. The probability that such results could have arisen in any local hidden variable model—even allowing the devices to have maliciously conspired using all available data—was 0.039. A second experimental run conducted in December 2015 found a similar violation of Bell’s inequalities.
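
In CHSH terms, and taking the value published in the 2015 Nature paper listed below (S of about 2.42), the 20 percent figure is simply the measured value relative to the local bound of 2:

```latex
\[
  S_{\mathrm{Delft}} \;\approx\; 2.42 \;>\; 2,
  \qquad
  \frac{2.42}{2} \;=\; 1.21 .
\]
```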

In the same year, three other groups performed loophole-free Bell tests. In September physicists at the National Institute of Standards and Technology (NIST) and their colleagues, led by one of us (Shalm), used entangled photons, and in the same month Zeilinger’s group did so as well. Not too long after, Harald Weinfurter of Ludwig Maximilian University of Munich and his team used rubidium atoms separated by 400 meters in a scheme similar to that of the Hanson group (the results were published in 2017).

Both the NIST and Vienna teams entangled the polarization state of two photons by using intense lasers to excite a special crystalline material. Very rarely, about one in a billion of the laser photons entering the crystal underwent a transformation and split into a pair of daughter photons whose polarization states were entangled. With powerful enough lasers, it was possible to generate tens of thousands of entangled photon pairs per second. We then sent these photons to distant stations (separated by 184 meters in the NIST experiment and 60 meters in the Vienna experiment) where we measured the polarization states. While the photons were in flight toward the measurement stations, our system decided in which direction to measure their polarization such that it would be impossible for any hidden variables to influence the results. The locality loophole was therefore closed. The most challenging aspect of using photons is preventing them from being lost because we must detect more than two thirds of the photons we create in our setup to avoid the detection loophole. Most conventional single-photon detectors operate at around 60 percent efficiency—a nonstarter for this test. But at NIST we developed special single-photon detectors, made of cold superconducting materials, capable of observing more than 90 percent of the photons that reach them. Thus, we closed the detection loophole as well.
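
The two-thirds threshold is a known analytic result rather than a rule of thumb: for CHSH-type tests, Philippe Eberhard showed in the early 1990s that with suitably chosen (non-maximally entangled) states a violation can survive without the fair-sampling assumption only if the overall detection efficiency η on each side exceeds 2/3; with maximally entangled states the requirement is steeper:

```latex
\[
  \eta \;>\; \tfrac{2}{3} \;\approx\; 66.7\,\%
  \quad \text{(non-maximally entangled states)},
  \qquad
  \eta \;>\; \tfrac{2}{1+\sqrt{2}} \;\approx\; 82.8\,\%
  \quad \text{(maximally entangled states)}.
\]
```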

Repeating these polarization measurements on many different entangled photon pairs more than 100,000 times per second, we were able to quickly accumulate statistics on the correlations between the photon polarization states. The correlations observed in both experiments were much stronger than those predicted by hidden variable theories. In fact, the probability that the NIST results could have arisen by chance is on the order of one in a billion (even less likely than winning the Powerball lottery), and the chances are even smaller for the Vienna experiment. Today our NIST group regularly uses an improved version of our setup to violate Bell’s inequalities to a similar degree in less than a minute, and future improvements will speed this up by two orders of magnitude.

Harnessing Entanglement

These experiments force us to conclude that any local hidden variable model, such as those Einstein advocated, is incompatible with nature. The correlations between particles we have observed defy our intuition, showing that spooky action does indeed take place.

Our results also hint at the remarkable power contained in entanglement that we may be able to put to use. One near-term application of loophole-free Bell tests is the generation of randomness. Random numbers are a critical resource in many cryptographic and security techniques. If you can predict the next number a random-number generator will produce, you can hack many financial and communications systems. A good source of randomness that cannot be predicted is therefore of vital importance. Two of the most common ways to generate randomness are through mathematical algorithms and by using physical processes. With mathematical algorithms, if you know the conditions used as a “seed,” you can often predict the output perfectly. With physical processes, a detailed understanding of the underlying physics of the system is required. Miss even a single detail, and a hacker can exploit or control the randomness. The history of cryptography is littered with examples of both types of random-number generators being broken.
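
The weakness of purely algorithmic generators is easy to demonstrate: anyone who learns the seed can replay the entire "random" stream. A minimal Python illustration, using only the standard library:

```python
import random

victim = random.Random(42)    # a deterministic generator seeded with 42
attacker = random.Random(42)  # an attacker who has learned the seed

print([victim.randrange(100) for _ in range(5)])
print([attacker.randrange(100) for _ in range(5)])  # identical: fully predictable
```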

Quantum mechanics has handed us a gift, though. It is possible to “extract” the randomness inherent in quantum processes to produce true randomness. The correlations measured in a loophole-free Bell test can be distilled into a certifiably random string. Remarkably, it is possible to hand part of the experimental apparatus (the generation of the entangled particles) to a potential hacker to control. Even in this extreme case, it is possible to produce numbers that are as random as nature allows. In early 2018 our team at NIST was able to use our loophole-free Bell setup to extract 1,024 truly random bits from 10 minutes of experimental data. These bits were certified as random to better than a part in one trillion. In contrast, it would take a conventional random-number generator several hundred thousand years to acquire enough data to directly measure the quality of its randomness to this level. We are working now to incorporate our random-number generator into a public randomness beacon. This tool could act as a time-stamped source of random numbers that is broadcast over the Internet at fixed intervals and can be used in security applications by anyone who needs it.

On a more general level, the techniques developed in loophole-free Bell experiments may enable fundamentally new types of communications networks. Such a network, often referred to as a quantum Internet, could perform tasks that are out of reach of classical information networks. A quantum Internet could enable secure communication, clock synchronization, quantum-sensor networks, and access to remote quantum computers in the cloud. Another goal is “device-independent cryptography,” in which (in close analogy to the randomness beacon) users can validate the secrecy of a shared key through a violation of Bell’s inequalities.

The backbone of a future quantum Internet will be formed by entanglement links precisely like the setups used to test Bell’s inequalities with diamond defect centers, trapped atoms and photons. In 2017 our team at Delft demonstrated a method to boost the quality of remote entangled spins, and in 2018 we improved the entangling rates by three orders of magnitude. Based on this progress, researchers began working toward a first rudimentary version of a quantum Internet.

Eight decades ago when quantum theory was being written, skeptics chafed at its apparent contradiction of the centuries of physical intuition that had been developed; now four experiments have dealt the final blow to that intuition. At the same time, these results have opened the door to exploit nature in ways that Einstein and Bell could not have foreseen. The quiet revolution that John Bell kicked off is now in full swing.

MORE TO EXPLORE

Loophole-Free Bell Inequality Violation Using Electron Spins Separated by 1.3 Kilometres. B. Hensen et al. in Nature, Vol. 526; pages 682–686; October 29, 2015.

Significant-Loophole-Free Test of Bell’s Theorem with Entangled Photons. Marissa Giustina et al. in Physical Review Letters, Vol. 115, Article No. 250401. Published online December 16, 2015.

Strong Loophole-Free Test of Local Realism. Lynden K. Shalm et al. in Physical Review Letters, Vol. 115, Article No. 250402. Published online December 16, 2015.

Ronald Hanson is a physicist at the Delft University of Technology and scientific director of its QuTech research center, a collaboration with the Netherlands Organization for Applied Scientific Research (TNO), focused on quantum computing and quantum Internet technology.

Krister Shalm is a physicist at the National Institute of Standards and Technology and the University of Colorado Boulder, where he develops tools to test foundational issues in quantum mechanics.

This article was originally published with the title “Spooky Action” in Scientific American Vol. 319 No. 6 (December 2018), p. 58.
doi:10.1038/scientificamerican1218-58