The Classic Quantum Experiments

Discussions of the history and development of quantum mechanics often mention key experiments that led physicists to believe that the fundamental framework of classical physics needed to be abandoned. However, most treatments only mention one or two of these experiments, and often different ones in different treatments, so here I want to collect discussions of all the major ones in one place. Also, discussions of these experiments usually talk about how quantum mechanics explains the results, but they usually don't talk about just how classical physics failed to explain them; here I have included expositions of what classical physics predicted about the outcomes of these experiments, so that it's clear how wrong it was. I have put the discussions in rough chronological order according to when the experiments were first used to argue for adopting one or more key features of quantum mechanics.

The Blackbody Radiation Spectrum: A "black body" is an object that absorbs all frequencies of electromagnetic radiation with equal ease, and when it is heated, it also emits all frequencies with equal ease. (Yes, the "with equal ease" there is a bit vague, and that's the point--it will turn out that the classical meaning of that term is quite different from the quantum one.) There are no absolutely perfect black bodies in nature, and most ordinary substances are very far from being black bodies, but there are some objects that come extremely close, close enough to test theoretical predictions about what the radiation emitted by one should look like. Physicists did that towards the end of the nineteenth century, and what they found provided the first opportunity for a quantum concept to enter physics.

When physicists observed the spectrum of radiation from a black body (the intensity of the radiation as a function of frequency), they found that it had a peak at a certain frequency that depended on the absolute temperature of the body. As the temperature went up, the peak frequency went up too. This fit in perfectly well with common experience: we see heated objects emit first red light as they get just hot enough to radiate in the visible range, and then orange, yellow, and eventually bluish-white as they heat up further (the peak frequency keeps rising through green and blue, but the broad spread of frequencies around the peak makes very hot objects look white rather than green or violet). Red light is the lowest frequency of visible light and violet is the highest, and Maxwell's theory of electromagnetism made it clear that there are also frequencies lower than those of light (infrared, microwaves, and radio waves) and higher (ultraviolet, X-rays, and gamma rays), and all of these were also observed in the appropriate intensities as black bodies were heated and their spectra were measured. These observations also fit in with measurements of the surface temperatures and spectra of stars--stars actually come pretty close to emitting radiation like black bodies (this is one reason why the name "black body" is not a very good one, but unfortunately physicists are stuck with it).

The problem was that classical physics--Maxwell's theory of electromagnetism combined with classical statistical mechanics--predicted something completely different, and in fact absurd. It predicted that any black body, even one cooled down to almost absolute zero, should radiate arbitrarily high amounts of high-frequency radiation--ultraviolet, X-rays, and gamma rays. Any object we see should blind us, destroy the retinas of our eyes, and give us skin cancer with its radiation. Every star, including the Sun, should look like a blue-violet star. Physicists called this disastrous prediction the "ultraviolet catastrophe".

Physicists tried mightily to find some way around the ultraviolet catastrophe within the framework of classical physics, but with no success. Then, in 1900, Max Planck saw that one particular feature of classical physics was causing the trouble: that the energy carried by radiation could come in arbitrarily small amounts, regardless of the frequency. This meant that every mode of vibration of the electromagnetic field should carry, on average, the same amount of energy--and since there are many more modes at high frequencies than at low ones, the high frequencies should dominate the spectrum.

Planck's hypothesis then was simple: suppose that the energy carried by radiation at a given frequency cannot take arbitrary values, but must be an integral multiple of a particular value, which gets larger as the frequency gets higher. That is, the energy in radiation comes in little "packets", which have a definite size that goes up with the frequency--in fact, the energy per packet is simply a new physical constant, called "Planck's constant", multiplied by the frequency. With this hypothesis, it would no longer be true that a black body was equally likely to emit radiation at any frequency, because the emitting would have to be done by the atoms in the black body, and the average energy per atom in any object depends on its temperature; cool objects have less energy per atom, on average, than hot ones do. So it would be much harder for a cool black body to emit high-frequency radiation than a hot one, because it would be much less likely that an atom in the body would have enough energy to supply a quantum (the word that Planck used to describe the little packets of energy) of radiation at high frequency.

By a suitable choice of value for his new constant, Planck was able to derive a formula for the spectrum of a black body at a given temperature that matched the experimental results. This was the first success of what came to be known as the "quantum hypothesis" in physics.
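As an aside for readers who want to see the numbers, here is a short sketch (in Python, with rounded values for the physical constants) comparing Planck's formula with the classical prediction. Note how the two agree at low frequencies but the classical curve blows up at high ones:

```python
import math

# Physical constants in SI units (rounded)
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def planck(nu, T):
    """Planck's formula: spectral radiance of a black body at frequency nu, temperature T."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def classical(nu, T):
    """The classical prediction: every mode carries average energy kT,
    so the radiance grows without bound as the frequency rises."""
    return 2 * nu**2 * k * T / c**2

T = 5800.0  # roughly the surface temperature of the Sun, in kelvin
for nu in (1e12, 1e14, 1e15, 1e16):
    print(f"nu = {nu:.0e} Hz: Planck = {planck(nu, T):.3e}, classical = {classical(nu, T):.3e}")
```

At radio and infrared frequencies the two formulas are nearly identical; by the ultraviolet the classical value is astronomically larger than Planck's, which is the catastrophe in numerical form.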

The Photoelectric Effect: For five years after Planck's solution of the black body radiation problem, nobody paid much attention to the quantum hypothesis; it was considered an odd little fact about black body radiation, and that was all. Then, in 1905, Albert Einstein published a paper which used exactly the same hypothesis to explain the results of experiments on the photoelectric effect, which was the observation of an electric current in metals when light was shone on them. The problem, until Einstein came along, was that, although classical physics did predict that light could cause electric currents in metals, its predictions for how the current should behave were just the opposite of what was actually observed.

Maxwell's theory of electromagnetism was again the classical theory being used, and it modeled light as electromagnetic waves. Electric currents are caused by electrons escaping from the atoms in a substance and moving freely through it, and the basic picture of the photoelectric effect in Maxwell's theory was that the waves of light swept the electrons out of the atoms in much the same way as a wave over a sand bar would sweep off objects that might be resting on it. Once the electrons were free of the atoms, they could move freely and would cause an electric current.

This was all very well, but problems began to arise when more detailed predictions were made, about how the current would behave as a function of the frequency and intensity of the light. Think of the objects on the sand bar again, and suppose that each one is embedded in the sand to some extent, so it will take a certain amount of effort to dislodge it before it can be swept away by a wave. Electrons in atoms also can't be dislodged without a certain amount of effort, called the ionization energy. A wave sweeping over a sand bar will find it easier to dislodge objects if it is taller--that is, if its amplitude, or intensity, is higher. And once dislodged, the objects will have more energy if the wave is of higher amplitude. Conversely, the number of objects dislodged in a given time should depend on the number of waves passing the sand bar--in other words, on the frequency of the wave.

Translating the above into the language of Maxwell's theory and its predictions, physicists expected that the voltage observed when light was shone on a substance producing the photoelectric effect would depend on the intensity of the light, because the voltage measures the energy carried by each dislodged electron. And they expected that the current observed would depend on the frequency of the light (its color), because that would determine how many electrons got dislodged from atoms per unit time.

Unfortunately, the actual experiments showed the exact opposite: the voltage observed depended on the frequency of the light, and the current depended on its intensity. In fact, if the light was below a certain "cutoff" frequency, whose exact value depended on the substance being used, there would be no current at all, and no voltage observed; but as soon as the light went the least bit above the cutoff frequency, the current would suddenly jump to a value that depended on the intensity of the light, and the voltage would slowly rise as the frequency rose. Everybody was stumped: there was simply no way to get any such prediction out of Maxwell's theory.

Then Einstein realized that adopting Planck's quantum hypothesis would fix this problem too. If light comes in little "packets", each with a definite energy that depends on the frequency, then the electrons in the atoms can only pick up energy one packet at a time. If the packet doesn't have enough energy to dislodge the electron from the atom (i.e., the light frequency is below the cutoff), then the electron remains bound and there is no current. If the packet does have enough energy to dislodge the electron, then the energy left over after the electron leaves the atom and can move freely (the voltage) will depend on the frequency (above the cutoff). And since the intensity of the light is just the number of packets arriving per unit time, obviously this will determine the current, which is just the number of electrons flowing per unit time. By using the same value of Planck's constant as Planck had used, Einstein was able to predict quantitatively the frequency-voltage and intensity-current relations that had been observed (and he was also able to check that the observed cutoff frequencies, when converted into energies using Planck's formula, matched the observed ionization energies of the substances used).
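Einstein's relation can be stated in one line: the energy of each escaping electron (measured as a voltage) is the photon energy minus the energy needed to dislodge the electron. Here is a minimal sketch; the binding energy of 2.3 electron-volts is just an illustrative value, not a measurement for any particular substance:

```python
h = 6.626e-34   # Planck's constant, J*s
e = 1.602e-19   # elementary charge, C

def stopping_voltage(freq_hz, binding_energy_ev):
    """Einstein's photoelectric relation: e*V = h*nu - W.
    Returns the voltage in volts, or 0.0 below the cutoff frequency."""
    photon_ev = h * freq_hz / e          # photon energy in electron-volts
    return max(0.0, photon_ev - binding_energy_ev)

W = 2.3                      # illustrative binding energy in eV
cutoff_hz = W * e / h        # below this frequency there is no current at all
print(cutoff_hz)             # ~5.6e14 Hz for this illustrative W
print(stopping_voltage(1e15, W))  # voltage rises linearly with frequency above cutoff
```

Below the cutoff the function returns zero no matter how intense the light is, which is exactly the behavior that stumped classical physics.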

There is a historical irony here. When Einstein published his paper, he went on to take Planck's hypothesis to its logical conclusion: the little "packets" of energy in light are not just hypothetical entities used to explain certain effects; they are real particles (Einstein called them "light quanta"; the name "photons" came later), and they can knock electrons out of atoms just the way one billiard ball can knock another across a table. Many physicists were skeptical, and it took another twenty years before quantum mechanics had really overthrown the assumptions of classical physics. Yet during that same period Einstein, who had led the way, decided that he didn't like the implications of quantum mechanics, and became a skeptic himself--not necessarily about photons per se, but about whether quantum mechanics, however well it might describe the results of experiments, was really a satisfactory theory of ultimate reality. The philosophical issues involved in that question are still not fully resolved today.

Spectroscopy and the Stability of Atoms: When Planck first proposed his quantum hypothesis to explain the spectrum of black body radiation, heating things up and looking at the light they emitted was by no means a new idea. People had been doing that for the better part of the nineteenth century, and they had found, as I mentioned above, that most ordinary substances were very far from being black bodies. In fact, most ordinary substances, when heated, gave off light only at a few specific frequencies. When the light was looked at in spectroscopes (which are high-powered versions of the prisms with which Newton first showed that "white" light is composed of various colors), most of the spectrum was dark--there were just a few thin bright lines at specific points in the spectrum that were like a "signature" of the specific substance being heated.

When the light from stars was looked at in spectroscopes, the opposite was found: most of the spectrum was bright (with the intensity depending on the frequency roughly according to the black-body formula that Planck would later derive, as mentioned above) but with some thin, dark lines in it. It was soon realized that the frequencies of these dark lines were the same as the frequencies of the bright lines in the spectra obtained when various chemical elements were heated in laboratories on Earth. The obvious explanation was that there were atoms of those elements in the outer envelopes of gas at the surfaces of stars, and these atoms were absorbing the light from the stars at the specific frequencies characteristic of the atoms. The crowning achievement of spectroscopy during this period was to discover dark lines in the Sun's spectrum that did not correspond to any known chemical element; it was then claimed that the lines were due to a new element, named "helium" (from the Greek Sun god Helios), and before long scientists had found a very light gas on Earth and, on heating it, showed that its spectrum was that of helium.

Scientists soon had catalogues of the spectral line patterns of many substances, and were able to find various formulas that matched the patterns, at least some of the time; but nobody really understood why the lines should be there. Classical physics predicted that, whatever the frequency spectrum of a substance, it ought to at least be a "smooth" one, with relatively gentle variation of intensity with frequency. Nobody had the least idea how to get something like sharp spectral lines from classical physics--much less how to try to predict the detailed patterns of lines that were observed from different elements.

In 1913 Niels Bohr came up with a way to use the quantum hypothesis to solve this problem. Bohr was also trying to solve another, related problem: according to classical physics, atoms shouldn't even be stable! By this time physicists knew that atoms consisted of a positively charged nucleus (whose composition was still poorly understood) surrounded by negatively charged electrons. The electrons would be attracted to the nucleus by the ordinary electrostatic force between charged particles. The classical equations for this situation were well known: they were Maxwell's equations governing the electric and magnetic fields produced by charged matter, and the Lorentz force law for how charged particles would move given the electric and magnetic fields. These equations predicted that the electrons would orbit the nucleus in much the same way as the planets orbit the Sun. However, they also predicted that the orbiting electrons would radiate electromagnetic waves, giving up energy and moving closer and closer to the nucleus, and causing atoms to collapse.

Of course, the experimental evidence (and common observation) was that atoms did not collapse; they appeared to be stable indefinitely at an approximate size which was known to be on the order of a hundred millionth of a centimeter. Nor did atoms radiate electromagnetic waves indiscriminately, as we have already seen. Bohr's hypothesis was that this couldn't happen because, just like the photons that made up light, electrons in atoms could only have specific, discrete energies. The difference was that, whereas the energy in light of a given frequency came in integer multiples of Planck's constant times that frequency, in Bohr's model the angular momentum of each electron in an atom could only be an integer multiple of Planck's constant divided by 2π. This led to a more complicated formula for the allowed energies, which could not be directly compared with experiment, because there was no way to measure the energy of an electron in an atom in a single, stable state. However, it was quickly realized that, by Bohr's formula, the differences in energies between the various levels could be calculated for the hydrogen atom, and agreed with the photon energies corresponding to the various spectral lines for hydrogen.

Bohr's model, then, said that atoms could absorb photons of certain definite energies, which would kick electrons from lower to higher energy levels; conversely, electrons in higher energy levels could emit photons of the same definite energies and drop back to lower energy levels. The photon energies for absorption and emission depended on the energy differences between the levels and were therefore the same; and electrons in the lowest energy level (the one with angular momentum equal to Planck's constant divided by 2π) could not emit any photons at all, because there was no lower energy level for them to go to. So both spectral lines and the stability of atoms were explained.
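For readers who want to check Bohr's formula against a real spectral line, here is a short sketch; the ground-state energy of about -13.6 electron-volts is the value that comes out of Bohr's quantization condition for hydrogen:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
e = 1.602e-19    # elementary charge, C
E1 = -13.606     # hydrogen ground-state energy in eV, from Bohr's formula

def energy_level(n):
    """Bohr's allowed energies for hydrogen: E_n = E1 / n^2."""
    return E1 / n**2

def line_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when an electron drops from one level to another."""
    delta_ev = energy_level(n_upper) - energy_level(n_lower)   # positive for a downward jump
    return h * c / (delta_ev * e) * 1e9

# The n=3 -> n=2 jump gives the red Balmer line of hydrogen, about 656 nm
print(line_wavelength_nm(3, 2))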

I should emphasize that Bohr's model was a very crude one, what a computer programmer would call a kludge. It "explained" why atoms were stable and why they could only emit or absorb light of certain definite frequencies; but it did so in a way that seemed to be largely ad hoc and not very coherent physically. Also, it did not work for any atoms other than hydrogen, and it did not predict anything else about the spectrum of hydrogen except the frequencies of the spectral lines. Over the next decade, additional experimental evidence would be collected that could not be handled by Bohr's model, and eventually it would be supplanted by the "new quantum theory" of Schrodinger, Heisenberg, and Born in 1925-1926. However, Bohr's model was still the first use of the quantum hypothesis to explain the properties of atoms as well as those of light.

Spin Quantization: One particular feature of electrons in atoms that Bohr's model did not comprehend was their intrinsic angular momentum, or "spin", and their consequent magnetic moments (which we will meet again below). By "magnetic moment" we mean that the electrons act like tiny bar magnets, with north and south poles oriented along a particular direction. It was expected that the electrons in an atom would have angular momentum due to their orbital motion about the nucleus and would consequently create a magnetic moment whose direction would be along the axis of the orbit; the fact that circulating electric charges created magnetic fields had been discovered by Ampere a century before. However, that was thought to be the only angular momentum in the picture; it was not yet understood that electrons also possess "intrinsic" spin, something like that of the Earth spinning on its axis in addition to orbiting about the Sun. So when experimenters tried to measure the magnetism created by orbiting electrons, the results were not quite what they had expected.

The motivation of Stern and Gerlach for doing the first of these experiments on the magnetism of electrons in atoms was to determine whether the magnetism due to the electrons' orbital motions was quantized, as would be expected on the Bohr model of the atom where the electrons' angular momentum was quantized. Stern and Gerlach used electrically neutral silver atoms for their experiment, so that there would be no net electric charge to provide confounding forces, and because silver had one extra electron over and above a series of filled energy "shells" (essentially the energy levels that Bohr had predicted for the hydrogen atom, but gradually filled level by level as the atomic number of the atom, and hence the number of electrons orbiting the nucleus, increased). The electrons in the shells came in pairs with opposite orbital angular momentum, so that their magnetic moments would all cancel, leaving only the odd electron to contribute to any externally measurable effect.

Classical physics, in calculations done by Larmor, predicted that if a beam of such silver atoms were fired at an inhomogeneous magnetic field (one whose strength varied along a single direction--up/down, left/right, etc.), with the beam direction at right angles to the direction of field variation, the single beam should spread out along the direction in which the magnetic field varied, with the peak of intensity in the center. This was because the directions of the net magnetic moments of each atom would be expected to be random with respect to the direction of field variation, and the angle between the two directions for any given atom would determine the angle by which that atom would be deflected by the field. So one would expect to see a random distribution of deflections around a mean of more or less zero.

Sommerfeld, who had worked with Bohr on his model of the hydrogen atom, said that, in contrast to Larmor's calculation, what should actually be observed would be a simple split of one beam into two. This was because, according to the Bohr-Sommerfeld model, the component of the odd electron's orbital magnetic moment along the direction of field variation would be quantized: it could have only two values, which we can just think of as "+" and "-", or "up" and "down". Any given atom in the beam would have essentially a 50-50 chance for either outcome, and so the beam would split into two, an "up" beam and a "down" beam, each with approximately half the intensity of the original beam. The separation between the two beams would be proportional to how rapidly the field strength varied--that is, to the field gradient.

Stern and Gerlach's first experiment in 1921 did not conclusively show a split of the beam, but it did show a broadening of the original beam, with what looked like an intensity minimum in the center. This was not enough to decisively confirm Sommerfeld's predictions, but it was enough to cast strong doubt on the classical theory and Larmor's calculations, since those quite clearly required an intensity maximum in the center. Refinements were made in the experimental apparatus, and in 1922 a repeat of the experiment was done which clearly showed a split into two beams. Sommerfeld's theory was apparently confirmed, and the quantum model of the atom with it.

Unfortunately, everyone had overlooked something. Sommerfeld, and Stern and Gerlach, had assumed that the silver atoms in their beam were in a particular state of overall angular momentum called L = 1. On the Bohr-Sommerfeld model this had seemed to be the only possibility, but when that model was overthrown by the "new quantum theory" of Heisenberg and Schrodinger in 1925-1926, it became apparent that the silver atoms actually would not have been in the L = 1 state, but in a state called L = 0, with no overall angular momentum. (On the Bohr-Sommerfeld model this was impossible with an odd electron like that of silver, but refinements in the new quantum theory showed that some of the possible states of atoms like silver could have values for quantities like angular momentum that were incompatible with the "classical" viewpoint of electrons orbiting the nucleus like planets.) In the L = 0 state, the beam of silver atoms should not have split at all! Furthermore, even if the atoms had been in the L = 1 state, on the new quantum theory this would have resulted in a split into three beams, not two!

Suddenly the results of the Stern-Gerlach experiment, instead of confirming quantum theory, were a problem for it; but a way out was soon proposed. What if the electrons in the silver atom all had an intrinsic spin with a magnitude equal to Planck's constant divided by 4π? This would give the electron an intrinsic magnetic moment over and above that due to its orbital motion about the nucleus. It turned out that, in the new quantum theory, all of the paired electrons would have opposite intrinsic spins, just as their orbital angular momenta had been opposite in the Bohr model, and so their intrinsic magnetic moments would cancel; but the odd electron left over would have an uncancelled magnetic moment that, lo and behold, would act just the way that Sommerfeld's theory had predicted for the orbital magnetic moment: it would cause the beam of atoms to split into two beams, just as Stern and Gerlach had observed. And the result was still quite different from what the classical theory would have predicted.

It should be emphasized that the proposal that electrons have intrinsic spin was not made solely to solve the problem with the results of the Stern-Gerlach experiment. As we noted above, by that time there were a number of problems with the old Bohr model of the atom, and the idea of particles like the electron having intrinsic spins was an essential part of the new quantum theory's correct prediction of a number of observed effects that the Bohr model could not predict. The key feature of the Stern-Gerlach experiment is really the simple and obvious contradiction of the classical prediction: instead of a smooth spreading of the beam, what is observed is a discrete split into separate beams, each of which does not spread at all. This means that, although it is sometimes useful to think of electrons and other particles as little spinning balls, the way classical physics would have liked to view them, the Stern-Gerlach experiment and its quantized result shows that that cannot be the way the particles really are.

The Compton Effect: As we have seen, Einstein, after using Planck's quantum hypothesis to explain the photoelectric effect, claimed that this showed that light was composed of real particles, called photons, but this claim was resisted by many physicists. The position defended by these skeptics was basically that, while the quantum hypothesis might help to explain light being emitted or absorbed by atoms, it had nothing to say about light passing freely through space--that, on their view, was still fully satisfactorily explained by Maxwell's theory. However, in 1923, Arthur Compton showed that light could interact with the electrons in atoms without being emitted or absorbed, and his results looked for all the world like particles of light bouncing off electrons like little billiard balls.

What Compton did was to shine X-rays at atoms of various light elements and measure their frequencies and their directions before and after they passed through the atoms. On the classical Maxwell theory of light, the direction of the incident X-rays might possibly change, but the frequency should remain the same, just as for, say, water waves going around an obstacle. However, Compton found that both the direction and the frequency of the X-rays were changed by passing through the atoms, and using the quantum formula for the energy of photons as a function of frequency, he could show that the electrons and the X-ray photons obeyed the same equations as two billiard balls colliding, with the X-ray photons acting like cue balls and the electrons acting like stationary balls hit by the cue balls and deflecting them as they recoiled.

After Compton's discovery, the reality of photons as particles of light was regarded as firmly established. It is also noteworthy that Compton used not just quantum mechanics but special relativity in deriving his equation for the frequency shift of the X-rays as a function of scattering angle. This was one of the first attempts to combine both special relativity and the quantum hypothesis to explain an experimental result.
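Compton's formula is simple to state: the wavelength of the scattered photon increases by (h / mc)(1 - cos θ), where θ is the scattering angle and h/mc, about 2.43 picometers for the electron, is now called the electron's Compton wavelength. A quick numerical sketch:

```python
import math

h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg

def compton_shift_pm(theta_deg):
    """Wavelength gained by a photon scattering off a stationary electron
    through angle theta, in picometers: (h / m_e c) * (1 - cos theta)."""
    return (h / (m_e * c)) * (1 - math.cos(math.radians(theta_deg))) * 1e12

for angle in (0, 90, 180):
    print(angle, compton_shift_pm(angle))
```

The shift is zero in the forward direction and largest for a photon bounced straight back, exactly as for a cue ball glancing off or hitting a target head-on.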

Electron Diffraction: Of course once you have shown that light, which everyone thought consisted of waves, is actually particles, the obvious next question is whether particles of matter like the electron can have wave properties. This hypothesis was made by De Broglie in 1924; he published a paper suggesting that associated with every particle of matter would be a wave whose wavelength would be equal to Planck's constant divided by its momentum. (He chose this formula because it is the same as the formula for photons that we saw above, once you use the speed of light to convert from photon energy to photon momentum.) He then showed that Bohr's condition for the angular momentum of electrons in atoms could be derived from his own equation for the wavelength of the electron.
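De Broglie's formula is easy to evaluate numerically. Here is a sketch for an electron accelerated through a modest voltage, using the non-relativistic momentum p = sqrt(2meV); at a few tens of volts the wavelength comes out comparable to the spacing between atoms in a crystal, which is why crystals turned out to be the right diffraction gratings:

```python
import math

h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C

def de_broglie_nm(volts):
    """De Broglie wavelength of an electron accelerated through the given voltage,
    in nanometers: lambda = h / p, with p = sqrt(2 m e V) (non-relativistic)."""
    p = math.sqrt(2 * m_e * e * volts)
    return h / p * 1e9

# An electron at a few tens of volts has a wavelength of a few tenths
# of a nanometer, comparable to interatomic spacings in a crystal
print(de_broglie_nm(54))
```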

De Broglie's hypothesis seemed just as ad hoc as the Bohr model to many physicists, but it had consequences that could be tested. After all, if electrons and other particles of matter have wave properties, they should exhibit the appropriate wave phenomena under the right conditions, just as light does. For example, if a beam of electrons is fired past an obstacle that is small enough compared to the electron wavelength, the beam should diffract around it--instead of the obstacle casting a single sharp "shadow", there should be a characteristic pattern of light and dark bands called an "interference pattern". A similar phenomenon should be observed if a beam of electrons was fired through two sufficiently narrow and closely spaced slits in a barrier--on the other side, instead of two bright spots where the electrons went through the slits, there should again be an interference pattern whose amplitude variation in space could be calculated and compared with experiment.

More than a century before, Thomas Young had done experiments with light that showed just these effects, and it was those experiments that had convinced scientists that light consisted of waves. Within a few years of De Broglie's paper, scientists had done similar experiments with electron beams, and indeed, they showed the same effects. The difficulty in doing the experiments, and the reason why such effects had not been noticed before, was that the electron wavelengths were much shorter than those of visible light, so the obstacles or slits used to diffract the electrons had to be correspondingly smaller. However, experiments on X-ray diffraction by crystals had been done a decade before, and it was known that the X-ray wavelengths were similar to those predicted for electrons by De Broglie's formula; so experimenters fired electron beams through crystals of the same kinds of materials as had been used in the X-ray experiments, and it worked.

By the time the experiments were done confirming De Broglie's hypothesis, it had already been fitted into a much more general scheme with the full formulation of quantum mechanics by Heisenberg, Schrodinger, and Born. That formulation made it clear that every quantum system would exhibit both wave-like and particle-like properties under the appropriate conditions. By now this has been tested for many other quantum systems besides electrons and photons, and it has always been confirmed.

The Discovery of Antiparticles: Once the basic theory of quantum mechanics had been formulated, the next step was to try to make it consistent with Einstein's special theory of relativity. Schrodinger had originally tried to formulate his fundamental equation of quantum mechanics as a relativistic equation, but had run into difficulties; his 1926 paper presented only the non-relativistic version of the equation because he could not figure out how to solve the problems with the relativistic one. Over the next few years there were several attempts to come up with relativistic versions of the Schrodinger equation.

Part of the problem was that, as we saw above, particles like the electron were known by this time to have a property called "intrinsic spin", which added an additional "quantum number" to those that had to be kept track of in the equations describing their behavior. Klein and Gordon managed to come up with a relativistic equation for a quantum particle without spin, which is still called the Klein-Gordon equation; but they could not figure out how to add spin to it. Then, in 1928, Dirac published his famous wave equation for the electron, which included the effects of spin and was fully consistent with special relativity as well.

But Dirac's equation had an unusual feature: there were two sets of solutions. One set described ordinary electrons; the other set appeared to describe electrons that had negative energy. However, Dirac soon came up with a hypothesis, called "hole theory", which said that, in fact, these negative energy states are almost always filled in the real world, and don't interact with anything else, so we don't see them. This was possible because electrons obey the Pauli exclusion principle, which says that no two fermions (particles like the electron whose spins come in odd multiples of 1/2 of Planck's constant divided by 2π) can be in the same state. Thus, a real positive energy electron like those we see can't fall into one of these negative energy states because they are already filled.

However, Dirac said, sometimes one of these negative energy states isn't filled; there is a "hole" there which is not occupied by a negative energy electron. He then realized that, to us, such a "hole" would look like a particle that was identical to the electron in every respect, except that it would have a unit positive charge instead of a negative one. This "antiparticle" to the electron would not last for very long, because sooner or later a positive energy electron would fall into the "hole"; to us, this would look like the electron and the "antiparticle" coming together and annihilating each other, turning into photons. But it was possible that an "antielectron" could still be detected experimentally, so Dirac predicted that such particles exist and might be observed.

In 1932, Anderson observed particle tracks in a cloud chamber that looked like electron tracks, except that they bent the wrong way in a magnetic field. He realized that he had discovered Dirac's antiparticle to the electron, which he called the "positron". By this time, Dirac's equation was being reformulated in the context of quantum field theory, and it was becoming clear that any consistent combination of quantum mechanics and special relativity predicted that every particle should have an antiparticle. Dirac's "hole theory" turned out to be nothing more than a curiosity: antiparticles were not "holes" in a sea of negative energy states. (For one thing, quantum field theory predicted that bosons would have antiparticles too, even though bosons do not obey the Pauli exclusion principle, so "hole theory" could not work for them.) Antiparticles are particles every bit as real as their "particle" counterparts, though much rarer, at least in the universe we can see.
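The "wrong way" bending reflects a sign flip in the magnetic part of the Lorentz force, F = q (v × B): flip the sign of the charge and the track curves the other way. Here is a minimal sketch in arbitrary units (the specific numbers are illustrative, not from Anderson's experiment):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def magnetic_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q (v x B)."""
    return tuple(q * c for c in cross(v, B))

v = (1.0, 0.0, 0.0)   # particle moving along +x
B = (0.0, 0.0, 1.0)   # magnetic field along +z

f_electron = magnetic_force(-1.0, v, B)   # charge -1
f_positron = magnetic_force(+1.0, v, B)   # charge +1

# Same speed, same field -- opposite deflection:
print("electron force along y:", f_electron[1])
print("positron force along y:", f_positron[1])
```

Two otherwise identical tracks curving in opposite directions is exactly the signature Anderson saw.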

In terms of trying to compare the predictions of classical physics with those of quantum physics, antiparticles are not so much incorrectly dealt with by classical physics as completely unexpected. It was only the clue supplied by the presence of two sets of solutions to the Dirac equation that suggested that antiparticles might exist. By this time in the development of quantum theory, physicists had pretty much accepted that the world of elementary particles was a quantum world.

The Lamb Shift and the Electron Magnetic Moment: During the 1930's, many physicists worked on developing quantum field theory and trying to use it to calculate various quantities. Many of these calculations involved small corrections to values that had been previously calculated and measured, as experiments got more accurate and revealed finer layers of structure in things like atomic spectra. For example, as the spectrum of hydrogen was examined in greater and greater detail, it was discovered that in many places, what had once appeared as a single spectral line now turned out to be multiple lines with very closely spaced frequencies. (This was distinct from the multiplicity of lines that was already known to appear when atoms were put in magnetic or electric fields--these multiple lines appeared even when there were no external fields present.) The only way to account for these multiple lines was to suppose that certain energy levels of hydrogen which had been thought to be "degenerate" (meaning of equal energy) actually differed slightly in energy. The actual frequency differences were very difficult to measure accurately, so this effect was not given a lot of attention at the time.

It had also been observed, as we noted earlier, that electrically charged fundamental particles like the electron had tiny magnetic moments; they acted like very small bar magnets, which was why they could be deflected by inhomogeneous magnetic fields, as in the Stern-Gerlach experiment. (The curving of particle tracks in the cloud chamber experiments that were used to discover the positron was a different effect, due to the magnetic force on the particle's charge.) Dirac had calculated a value for the electron's magnetic moment that matched the experimentally measured one fairly well, but as the measurements became more accurate it was found that the actual value was just a bit higher than Dirac's.

Phenomena like these were explained by quantum field theory in the following way. Quantum field theory uses fields, not particles, as its fundamental entities; what we see as "particles" are just bundles of the energy contained in the fields, in much the same way as the photon was viewed as a "packet" of the energy in the electromagnetic field. But now these are quantum fields, which means that we cannot exactly pin down just how much energy is in the field at a given point of spacetime, because of the uncertainty principle. The energy in any quantum field is constantly fluctuating around an average value which reflects how many actual particles are present. Since bundles of energy of the quantum field appear to us as particles, these fluctuations of quantum fields can act like "virtual particles".

What this means is that any quantum system we observe acts like it is surrounded by a "cloud" of virtual particles of various types. For the case of electrons in atoms or in magnetic fields in cloud chambers, the important quantum fields involved are the electron field (the field whose energy appears to us in bundles as electrons) and the electromagnetic field (whose energy bundles appear to us as photons). Then quantum field theory says that every electron and every photon acts like it is surrounded by a cloud of virtual electrons and photons, which are constantly winking in and out of existence as the field energies fluctuate within the limits of the uncertainty principle. While the virtual particles are in existence, however, they can interact with any real particles present and with other virtual particles, and these interactions can have measurable effects. These effects are the source of the small corrections to quantities like the energy levels of electrons in hydrogen or the electron magnetic moment.

Qualitatively, the above picture was agreeable to quantum field theorists; but the trouble came when they tried to calculate quantitative answers in order to compare them with the measured values. Instead of giving the desired small corrections, the calculations started coming out with absurd answers like infinity! The problem was that, in principle, there was no limit to the size of the energy fluctuations in the quantum fields, as long as they lasted for a short enough time; the shorter the time, the larger the energy fluctuation could be. This meant that, when taking into account the corrections from virtual particle interactions, the calculations had to include contributions from virtual particles of unlimited energy, and this made the sums or integrals involved diverge. For some calculations, one could get around this by imposing an arbitrary cutoff in the virtual particle energies considered; but this was not considered acceptable as a fundamental theory.
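This tradeoff between the size of a fluctuation and how long it can last is the energy-time form of the uncertainty principle:

```latex
% A fluctuation of energy \Delta E can persist for at most
% a time \Delta t, where
\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}
```

As the time interval shrinks toward zero there is no upper bound on the energy of the fluctuation, which is why the sums over virtual particle contributions run over unlimited energies and diverge.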

Because of these problems with infinities, and because the experiments in the 1930's were still fairly crude, the issues involved were pretty much ignored, while physicists turned their attention to problems that could be solved with the techniques available. Then World War II came along and basic research was temporarily abandoned. Not until after the war, in 1947, did physicists come together again to talk about quantum field theory. At the Shelter Island Conference in that year, Lamb revealed that he had been able to experimentally measure the "shift" in energy between two important excited states of hydrogen, states which Dirac's equation (like Bohr's original model) had predicted to have exactly the same energy, so that what should have been a single spectral line showed up as multiple closely spaced lines. Under the pressure of Lamb's result, other physicists finally came up with ways to get rid of the infinities in quantum field theory, at least for the theory of electrons and photons, and came up with a calculated value for the Lamb shift that was close to the experimental one. (Later calculations and experiments have refined both values, and they are in excellent agreement.) For a while after this, referring to the fact that before Lamb's experiments everyone had simply ignored the energy shift because they couldn't calculate it, physicists went around joking that "just because something is infinite does not mean it is zero."

Soon after the calculation of the Lamb shift using the new techniques, Schwinger used them to calculate the lowest order correction to the electron's magnetic moment, and found that it agreed closely with the current measured value. (Both this calculation and the experimental value have also been refined a number of times since then, and currently are in agreement to about eleven decimal places.) At the same time, Feynman developed his famous diagrams to help keep track of calculations using the new techniques by pictorially representing the different possible classes of virtual particle interactions. Feynman diagrams are now ubiquitous in particle physics, and are visual testimony to how much quantum experiments have changed our ideas about physics: we now take for granted that these unobservable virtual particles can affect all sorts of measured results, and in that sense they are as real as anything we see.
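As a rough numerical check of Schwinger's lowest order result: his correction says the electron's "anomaly" a = (g - 2)/2 should equal α/2π, where α is the fine-structure constant. The numbers below are approximate modern values supplied for illustration; they are not figures quoted in the text above.

```python
import math

# Approximate modern value of the fine-structure constant
# (an assumption of this sketch, not a figure from the text)
alpha = 1 / 137.035999

# Schwinger's lowest-order correction to the electron magnetic moment:
# the anomaly a_e = (g - 2) / 2 = alpha / (2 pi)
a_schwinger = alpha / (2 * math.pi)

# Approximate modern measured value of the anomaly
a_measured = 0.00115965218

print(f"Schwinger term: {a_schwinger:.8f}")
print(f"Measured value: {a_measured:.8f}")
print(f"Relative error: {abs(a_schwinger - a_measured) / a_measured:.2%}")
```

Even this single lowest-order term lands within a fraction of a percent of the measured anomaly; the remaining discrepancy is accounted for by the higher-order corrections computed since.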

These phenomena are not, strictly speaking, evidence against classical physics the way the black body radiation spectrum or the stability of atoms or the electron diffraction experiments were. But they were, and are, important in forcing physicists to overturn classical notions that still linger on even after the basic necessity of quantum physics has been accepted. Since the measurement and calculation of the Lamb shift, this overturning of classical concepts has continued to expand--at this point, in fact, it is fair to say that no classical concept has survived untouched by it. Not just properties of particles, like position or spin, but the very existence of particles has become "quantized", in the sense that one can no longer speak in general of states of a system as having a definite number of particles, any more than we can in general speak of them as having a definite position or a definite spin.

In fact, on the current view of quantum mechanics, it is difficult to say anything definite about a quantum system's state. Sometimes it is possible to pin down the state to a particular "vector" in the abstract space in which quantum states live; more often it is only possible to pin it down to some subspace of the full abstract space. And there is no definite intuitive picture of what a quantum state vector, or a subspace of the state space, "is". All of the intuitive pictures I gave above, such as the picture of fluctuations of quantum fields as virtual particles, are only approximations, analogies which are useful in important contexts but which break down when stretched too far. It is in this sense that, as Feynman once commented, "Nobody understands quantum mechanics." What we do understand, though, is that we do not know of any way of explaining what is going on in the experiments discussed in this article, and many others besides, without quantum mechanics, because whenever we try to do so, we get predictions that don't match what we actually observe. So it looks like we're stuck with it. :)


One of the repeating patterns in quantum mechanics has been the way that it allows us to experimentally test propositions that many people claimed we would never be able to experimentally test. Bell's Theorem provides a good example: it gave us a way to experimentally test "locality", which had previously been thought to be a completely philosophical or metaphysical proposition, not something accessible to experiment.

Another example is the question "Are two elementary particles of the same kind (e.g., electrons) really identical?" Once again, this has long been claimed to be a purely philosophical or metaphysical question; but quantum mechanics gives us a way to actually test it. The answer, by the way, is "yes, they are", which has implications for lots of other "philosophical" questions, such as questions about the meaning of identity.

So another moral of the story, besides what we saw above ("everything ends up being quantized, even stuff you never thought would be"), might be: "Never say that a proposition cannot, in principle, be experimentally tested." Sooner or later, the odds are that someone will figure out a way to test it. I'll refrain from the obvious comment about philosophy's claims to prove things a priori, since we're in the science section of this website. :)


I've collected links here on the phenomena discussed in this article rather than put them in the individual entries.

The Planck Law -- Eric Weisstein's World of Physics. Also see the entry here on the Rayleigh-Jeans Law, which was the classical physics prediction for black body radiation.

Black Body Radiation -- Wikipedia. There are also Wikipedia entries on the Rayleigh-Jeans Law and the Ultraviolet Catastrophe.

The Photoelectric Effect -- Eric Weisstein's World of Physics.

The Photoelectric Effect -- Wikipedia.

The Photoelectric Effect -- Physics 2000.

Spectroscopy -- Wikipedia. There is also an entry here on the Bohr Model.

The Bohr Model of the Hydrogen Atom -- Eric Weisstein's World of Physics.

Spectral Lines -- Physics 2000.

The Stern-Gerlach Experiment -- Wikipedia.

The Stern-Gerlach Experiment -- Stanford Encyclopedia of Philosophy.

Spin at John Baez' Stuff: A good introduction to the basic quantum mechanics of electron spin, including a number of other experimental effects that I don't discuss here.

The Compton Effect -- Eric Weisstein's World of Physics.

Compton Scattering -- Wikipedia.

The Compton Effect -- Boston University Physics site; has a nice little applet to help visualize the effect.

Electron Diffraction -- Wikipedia. There is also a Wikipedia entry on the De Broglie hypothesis.

Electron Interference -- Physics 2000.

The Feynman Double Slit -- an online discussion based on Feynman's treatment of the double slit experiment in The Character of Physical Law.

The Discovery of Antimatter -- from the same site as the previous link.

Antimatter -- Wikipedia. There are also Wikipedia articles on the Dirac Equation and the Klein-Gordon Equation.

The Lamb Shift -- Eric Weisstein's World of Physics.

The Lamb Shift -- Wikipedia.

The Electron Magnetic Moment -- Wikipedia.

The Electron Gyromagnetic Ratio -- Eric Weisstein's World of Physics.