Why have so many physicists shrugged off the paradoxes of quantum mechanics?
No other scientific theory can match the depth, range, and accuracy of quantum mechanics. It sheds light on deep theoretical questions — such as why matter doesn’t collapse — and abounds with practical applications — transistors, lasers, MRI scans. It has been validated by empirical tests with astonishing precision, comparable to predicting the distance between Los Angeles and New York to within the width of a human hair.
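(A back-of-the-envelope check of that analogy, assuming a hair about a tenth of a millimeter wide and a straight-line distance of roughly 4,000 kilometers:

$$
\frac{10^{-4}\ \text{m}}{4\times 10^{6}\ \text{m}} \;\approx\; 2.5\times 10^{-11},
$$

a few parts in a hundred billion, which is about the level of precision reached by the most exacting tests of the theory, such as measurements of the electron's magnetic moment.)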
And no other theory is so weird: Light, electrons, and other fundamental constituents of the world sometimes behave as waves, spread out over space, and other times as particles, each localized to a certain place. These models are incompatible, and which one the world seems to reveal will be determined by what question is asked of it. The uncertainty principle says that trying to measure one property of an object more precisely will make measurements of other properties less precise. And the dominant interpretation of quantum mechanics says that those properties don’t even exist until they’re observed — the observation is what brings them about.
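(For position and momentum, the quantitative form of that trade-off is Heisenberg's inequality,

$$
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
$$

where $\Delta x$ and $\Delta p$ are the spreads in measured position and momentum and $\hbar$ is the reduced Planck constant: narrowing one spread forces the other to widen.)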
“I think I can safely say,” wrote Richard Feynman, one of the subject’s masters, “that nobody understands quantum mechanics.” He went on to add, “Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain,’ into a blind alley from which nobody has yet escaped.” Understandably, most working scientists would rather apply their highly successful tools than probe the perplexing question of what those tools mean.
The prevailing answer to that question has been the so-called Copenhagen interpretation, developed in the circle led by Niels Bohr, one of the founders of quantum mechanics. About this orthodoxy N. David Mermin, some intellectual generations removed from Bohr, famously complained, “If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be ‘Shut up and calculate!’” It works. Stop kvetching. Why fix what ain’t broke? Mermin later regretted sounding snotty, but re-emphasized that the question of meaning is important and remains open. The physicist Roderich Tumulka, as quoted in a 2016 interview, is more pugnacious: “Ptolemy’s theory” — of an earth-centered universe — “made perfect sense. It just happened not to be right. But Copenhagen quantum mechanics is incoherent, and thus is not even a reasonable theory to begin with.” This, you will not be surprised to learn, has been disputed.
In What Is Real? the physicist and science writer Adam Becker offers a history of what his subtitle calls “the unfinished quest for the meaning of quantum physics.” Although it is certainly unfinished, it is, as quests go, a few knights short of a Round Table. After the generation of pioneers, foundational work in quantum mechanics became stigmatized as a fringe pursuit, a career killer. So Becker’s well-written book is part science, part sociology (a study of the extrascientific forces that helped solidify the orthodoxy), and part drama (a story of the ideas and often vivid personalities of some dissenters and the shabby treatment they have often received).
The publisher’s blurb breathlessly promises “the untold story of the heretical thinkers who dared to question the nature of our quantum universe” and a “gripping story of this battle of ideas and the courageous scientists who dared to stand up for truth.” But What Is Real? doesn’t live down to that lurid black-and-white logline. It does make a heartfelt and persuasive case that serious problems with the foundations of quantum mechanics have been persistently, even disgracefully, swept under the carpet. (...)
At the end of the nineteenth century, fundamental physics modeled the constituents of the world as particles (discrete lumps of stuff localized in space) and fields (gravity and electromagnetism, continuous and spread throughout space). Particles traveled through the fields, interacting with them and with each other. Light was a wave rippling through the electromagnetic field.
Quantum mechanics arose when certain puzzling phenomena seemed explicable only by supposing that light, firmly established by Maxwell’s theory of electromagnetism as a wave, was acting as if composed of particles. French physicist Louis de Broglie then postulated that all the things believed to be particles could at times behave like waves.
Consider the famous “double-slit” experiment. The experimental apparatus consists of a device that sends electrons, one at a time, toward a barrier with a slit in it and, at some distance behind the barrier, a screen that glows wherever an electron strikes it. The journey of each electron can be usefully thought of in two parts. In the first, the electron either hits the barrier and stops, or it passes through the slit. In the second, if the electron does pass through the slit, it continues on to the screen. The flashes seen on the screen line up with the gun and slit, just as we’d expect from a particle fired like a bullet from the electron gun.
But if we now cut another slit in the barrier, it turns out that its mere existence somehow affects the second part of an electron’s journey. The screen lights up in unexpected places, not always lined up with either of the slits — as if, on reaching one slit, an electron checks whether it had the option of going through the other one and, if so, acquires permission to go anywhere it likes. Well, not quite anywhere: Although we can’t predict where any particular shot will strike the screen, we can statistically predict the overall results of many shots. Their accumulation produces a pattern that looks like the pattern formed by two waves meeting on the surface of a pond. Waves interfere with one another: When two crests or two troughs meet, they reinforce by making a taller crest or deeper trough; when a crest meets a trough, they cancel and leave the surface undisturbed. In the pattern that accumulates on the screen, bright places correspond to reinforcement, dim places to cancellation.
We rethink. Perhaps, taking the pattern as a clue, an electron is really like a wave, a ripple in some field. When the electron wave reaches the barrier, part of it passes through one slit, part through the other, and the pattern we see results from their interference.
There’s an obvious problem: Maybe a stream of electrons can act like a wave (as a stream of water molecules makes up a water wave), but our apparatus sends electrons one at a time. The electron-as-wave model thus requires that firing a single electron causes something to pass through both slits. To check that, we place beside each slit a monitor that will signal when it sees something pass. What we find on firing the gun is that one monitor or the other may signal, but never both; a single electron doesn’t go through both slits. Even worse, when the monitors are in place, no interference pattern forms on the screen. This attempt to observe directly how the pattern arose eliminates what we’re trying to explain. We have to rethink again.
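The arithmetic behind these two outcomes can be sketched in a few lines. The toy simulation below is mine, not Becker's, and its wavelength, slit spacing, and screen distance are made-up numbers chosen only to make the fringes easy to see: it adds the two slit waves as amplitudes when nothing identifies the slit, and as probabilities when something does.

```python
# Toy two-slit statistics. Illustrative numbers only, not a real experiment.
import numpy as np

wavelength = 1.0         # assumed de Broglie wavelength (arbitrary units)
slit_gap = 10.0          # assumed distance between the two slits
screen_distance = 100.0  # assumed barrier-to-screen distance
k = 2 * np.pi / wavelength

x = np.linspace(-30, 30, 1201)        # positions along the screen
slits = (-slit_gap / 2, +slit_gap / 2)

# Path length from each slit to each point on the screen,
# and the wave each slit contributes there.
paths = [np.hypot(screen_distance, x - s) for s in slits]
waves = [np.exp(1j * k * r) / np.sqrt(r) for r in paths]

# Slits unwatched: add the waves first, then square. Crests meeting crests
# reinforce, crests meeting troughs cancel: bright and dim fringes.
unwatched = np.abs(waves[0] + waves[1]) ** 2

# Monitors at the slits: each electron is known to take one path or the other,
# so we add the two single-slit probabilities instead. No fringes.
watched = np.abs(waves[0]) ** 2 + np.abs(waves[1]) ** 2

def contrast(intensity):
    """Depth of the fringes: 1.0 means full bright-to-dark swings."""
    return (intensity.max() - intensity.min()) / intensity.max()

print(f"fringe contrast, slits unwatched: {contrast(unwatched):.2f}")  # about 1.00
print(f"fringe contrast, slits watched:   {contrast(watched):.2f}")    # about 0.04
```

Adding amplitudes gives near-total fringe contrast; adding probabilities gives almost none, the pattern-versus-no-pattern difference just described.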
At which point Copenhagen says: Stop! This is puzzling enough without creating unnecessary difficulties. All we actually observe is where an electron strikes the screen — or, if the monitors have been installed, which slit it passes through. If we insist on a theory that accounts for the electron’s journey — the purely hypothetical track of locations it passes through on the way to where it’s actually seen — that theory will be forced to account for where it is when we’re not looking. Pascual Jordan, an important member of Bohr’s circle, cut the Gordian knot: An electron does not have a position until it is observed; the observation is what compels it to assume one. Quantum mechanics makes statistical predictions about where it is more or less likely to be observed.
That move eliminates some awkward questions but sounds uncomfortably like an old joke: The patient lifts his arm and says, “Doc, it hurts when I do this.” The doctor responds, “So don’t do that.” But Jordan’s assertion was not gratuitous. The best available theory did not make it possible to refer to the current location of an unobserved electron, yet that did not prevent it from explaining experimental data or making accurate and testable predictions. Further, there seemed to be no obvious way to incorporate such references, and it was widely believed that it would be impossible to do so (about which more later). It seemed natural, if not quite logically obligatory, to take the leap of asserting that there is no such thing as the location of an electron that is not being observed. For many, this hardened into dogma — that quantum mechanics was a complete and final theory, and attempts to incorporate allegedly missing information were dangerously wrongheaded.
But what is an observation, and what gives it such magical power that it can force a particle to have a location? Is there something special about an observation that distinguishes it from any other physical interaction? Does an observation require an observer? (If so, what was the universe doing before we showed up to observe it?) This constellation of puzzles has come to be called “the measurement problem.”
Bohr postulated a distinction between the quantum world and the world of everyday objects. A “classical” object is an object of everyday experience. It has, for example, a definite position and momentum, whether observed or not. A “quantum” object, such as an electron, has a different status; it’s an abstraction. Some properties, such as electrical charge, belong to the electron abstraction intrinsically, but others can be said to exist only when they are measured or observed. An observation is an event that occurs when the two worlds interact: A quantum-mechanical measurement takes place at the boundary, when a (very small) quantum object interacts with a (much larger) classical object such as a measuring device in a lab.
Experiments have steadily pushed the boundary outward: double-slit interference has been demonstrated not only with photons and electrons but also with atoms and even with large molecules consisting of hundreds of atoms, millions of times more massive than an electron. Why shouldn’t the same laws of physics apply even to large, classical objects?
Enter Schrödinger’s cat...
by David Guaspari, The New Atlantis | Read more:
Image: Shutterstock