
I recently finished Quantum Computing for Everyone, which is a great intro to the basic ideas and math behind quantum computing. I particularly enjoyed its account of Bell’s inequality, which offers a great lesson in the history of science.
In the early 20th century, physicists worked out the core of quantum mechanics. They developed a theory that explained experimental evidence and made useful predictions. However, it also seemed to predict that particles could become entangled, behaving in a correlated way when measured even without any apparent physical connection between them. Albert Einstein in particular found this implication implausible and unaesthetic. He dubbed the phenomenon “spooky action at a distance” and believed a proper accounting of quantum mechanics would explain away the apparent spookiness.
This was not Einstein’s first rodeo with debunking spookiness. In the 1600s, Newton came up with the concept of gravity and derived a theory of mechanics that accurately predicted how objects moved. Although Newton’s equations worked, people didn’t understand why. We could see that objects move towards each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. But no one could explain how objects come to “know” the masses of other objects and thereby how to move. For hundreds of years, it seemed physics permitted some spookiness. Einstein finally solved the mystery through his discovery of general relativity. General relativity shows that objects don’t act on each other from a distance; rather, they warp the spacetime around themselves, which changes the path of other objects through spacetime.
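In symbols, the inverse-square law described above is the familiar

```latex
F = G \frac{m_1 m_2}{r^2}
```

where $G$ is the gravitational constant, $m_1$ and $m_2$ are the two masses, and $r$ is the distance between them. The equation tells you the force exactly, but says nothing about the mechanism, which is precisely what bothered people.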
Recalling this experience, Einstein was convinced that we would debunk the apparent spooky action at a distance in quantum mechanics just as he had in Newtonian mechanics. Perhaps entangled particles “pre-agree” on how to behave when measured later, far apart, giving the appearance of spooky action at a distance without violating our intuitions about how physics should work.
A physicist named John Stewart Bell eventually came up with a way to test this hypothesis. He argued that, if particles merely “pre-agreed” on how to behave when measured, there would be a strict upper bound on how correlated they could be when randomly measured. But in practice, physicists found correlations that exceeded the limit any pre-agreement strategy would allow. This result suggests that Einstein was wrong. Quantum mechanics is just spooky.
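The flavor of the argument can be sketched in a few lines of code. The snippet below uses the standard CHSH form of Bell’s inequality (a common textbook formulation, not necessarily the exact version the book presents): if each particle pre-assigns a fixed ±1 outcome to each of two possible measurement settings, a particular combination of correlations can never exceed 2. Quantum mechanics, which predicts a correlation of −cos(a − b) for a singlet pair measured at angles a and b, reaches 2√2. The angle choices at the bottom are the conventional ones, picked purely for illustration.

```python
import itertools
import math

def classical_max_chsh():
    """Largest CHSH value any 'pre-agreement' strategy can achieve.

    A local strategy assigns a fixed outcome (+1 or -1) to each of the
    two settings on each side: A0, A1 for one particle, B0, B1 for the
    other. We brute-force all 16 deterministic strategies.
    """
    best = 0
    for A0, A1, B0, B1 in itertools.product([+1, -1], repeat=4):
        S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
        best = max(best, abs(S))
    return best

def quantum_chsh(a0, a1, b0, b1):
    """CHSH value predicted by quantum mechanics for a singlet pair.

    For measurement angles a and b, the predicted correlation is
    E(a, b) = -cos(a - b).
    """
    E = lambda a, b: -math.cos(a - b)
    return abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))

print(classical_max_chsh())  # 2: no pre-agreement can do better
print(quantum_chsh(0, math.pi / 2, math.pi / 4, -math.pi / 4))  # ~2.828
```

No enumeration of strategies can beat 2, yet experiments agree with the quantum prediction of 2√2 ≈ 2.83, which is what rules out the pre-agreement picture.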
The story is fun in part because Bell’s experiment is so satisfying to trace through (the actual math is much more interesting than my highly anthropomorphized and condensed recounting). But what I find really fascinating are the broader lessons. Einstein had an incredibly elegant intuition that entanglement would be debunked. The pattern matching to gravity seemed so promising. And yet he was totally wrong.
Einstein’s historical understanding helped him ask the exact right question, even though it led him astray on the substance. Bell’s inequality nicely illustrates how science advances through a balance of a priori hypotheses and empirical feedback. And maybe even more importantly, it highlights that the individuals best positioned to have transformative ideas have an understanding of what’s come before while staying open to total discontinuities in how things might progress.