Quantum Paradigms: Intuitive Models for the Quantum Wave Function Ψ

1. Historic Roots of Quantum Physics.

Quantum physics can trace its roots at least as far back as the mid-18^{th} century. In 1752 Melville showed that an incandescent gas radiates at only discrete wavelengths. This behavior of the relatively isolated atoms or molecules in a gas contrasted mysteriously with the radiation emitted by a solid, which radiates energy at all wavelengths. Following Melville's discovery, it was found that gas atoms/molecules also absorb radiant energy only at discrete wavelengths. Such relatively isolated atoms appear to be oblivious to radiation at other wavelengths. No one could explain why gases should be so selective about emitted and absorbed wavelengths, and a full century slid by. Then in 1859 Kirchhoff picked up the thread and showed that, for a collection of same-kind gas atoms (i.e. for a given gaseous element), the emission and absorption wavelengths are identical. Still, the explanation for this selectivity remained a mystery. In 1885 Balmer worked out the first formula for a set of Hydrogen wavelengths. His discovery and similar formulas (subsequently worked out by others) suggested that there must be underlying reasons for such discrete wavelength sets. Yet the reasons remained elusive.

As mentioned above, the situation for solids (or for large numbers of atoms bound to one another) was different. It was known that any solid, at a temperature above absolute zero, radiates energy at all wavelengths (the fraction of the total emitted power in a given small range of wavelengths depending upon the solid's temperature). One class of solids of particular interest was the so-called "black bodies." A black body by definition absorbs all of the radiation incident upon it. (And of course it must also radiate energy, if its temperature is to remain constant.) In 1899 Lummer and Pringsheim measured the emissive power for black bodies.
(The emissive power, R(λ,T)dλ, is defined to be the power radiated per unit of surface area in the wavelength range λ to λ+dλ at temperature T.) They found that virtually no power is radiated at very short wavelengths. As one looks at longer and longer wavelengths, the emitted power increases to some maximum; at wavelengths longer than that, the power falls off again. The curves of emissive power vs. wavelength, obtained by Lummer and Pringsheim (and others), resembled the curves for molecular speeds in a gas (worked out earlier by Maxwell and Boltzmann). In one sense such curves made good sense: quite as one might expect, the integral (over all wavelengths) of a Lummer-Pringsheim curve … i.e. the total power emitted … is perfectly finite.

The theorists went to work attempting to explain the black body emissive spectrum. Rayleigh and Jeans made the plausible assumption that the radiation within a cavity must consist of standing waves. (The energy density in such a cavity is proportional to the emissive power of the cavity's walls. Thus the energy density must also be perfectly finite in a cavity.) Rayleigh and Jeans demonstrated mathematically that the number of possible electromagnetic radiation modes per unit volume, in the wavelength range λ to λ+dλ, must be

n(λ) dλ = (8π / λ^{4}) dλ. (1.1)

This number presumably correlated with the number of possible dipole oscillator modes in the cavity walls. However, Boltzmann's work and the equipartition theorem held that energy in a connected system (such as the dipole oscillators in the cavity walls) is equally divided among all of its modes, with an average energy of kT in each mode (kT/2 per quadratic degree of freedom). Thus the actual emissive spectrum for black bodies implied some very disquieting results. For in the Rayleigh-Jeans formula (Eq. 1.1) the number of possible modes increases without bound as λ approaches zero (i.e. as the radiation moves into the ultraviolet end of the spectrum and beyond).
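A short numerical sketch of Eq. 1.1 makes the divergence concrete (Python; the sample wavelengths are arbitrary illustrative values):

```python
import math

def mode_density(wavelength_m):
    """Rayleigh-Jeans mode density n(lambda) = 8*pi / lambda^4 (modes per m^3 per m), Eq. 1.1."""
    return 8 * math.pi / wavelength_m**4

# The mode density explodes as the wavelength shrinks toward the ultraviolet:
for lam in (1e-5, 1e-6, 1e-7):  # 10 um, 1 um, 0.1 um
    print(f"lambda = {lam:.0e} m  ->  n = {mode_density(lam):.3e} modes/m^3 per m")

# Classical equipartition assigns each mode an average energy kT, so the
# integrated energy density diverges at the short-wavelength end:
# the "ultraviolet catastrophe".
```

Halving the wavelength multiplies the mode density by sixteen, so the integral of kT·n(λ) over all λ diverges at the λ→0 end.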
According to classical theory, therefore, it seemed that the cavity energy at any T>0 should be infinite! But this was clearly not the case. The disconnect between classical theory and physical reality was so disconcerting that it was dubbed "the ultraviolet catastrophe!"

Several attempts (including that of Rayleigh and Jeans) were made to work out a formula that fit the Lummer-Pringsheim curves. But each of them was successful only over a portion of the curve. In some cases no theoretical explanation accompanied a formula; the formula was simply suggested, a la Balmer, as a mathematical curve-fitting exercise. In 1900 Planck (a) advanced a formula that fit the Lummer-Pringsheim curves, and (b) provided a physical basis for the formula. Like Rayleigh and Jeans, he assumed that the radiation in a black body cavity was absorbed and emitted by vibrating atomic dipoles in the cavity walls. But he further assumed that a given dipole cannot absorb or emit radiation in arbitrary amounts. Rather, he theorized that energy could only be absorbed and emitted in small but finite amounts, dubbed "quanta" (plural of "quantum"). Other considerations indicated that the energy of a quantum must be proportional to the vibrating system's frequency:

E = hν. (1.2)

The constant of proportionality (h) in Eq. 1.2 is very tiny, and is called Planck's constant. The relevance of Planck's "quantum" hypothesis to the ultraviolet catastrophe was clear: the energy of extremely short wavelength (or extremely high frequency) modes of radiation could only be absorbed and emitted in relatively large amounts … an unlikely scenario for oscillating atomic dipoles.

While Planck was busy working out a formula and theoretical basis for black body radiation, Lenard was investigating the manner in which electrons are ejected from an irradiated metallic surface. This so-called photoelectric effect had been known for some time. But in 1900 Lenard reported three rather "unclassical" behaviors.
First he found that, when irradiated with light below a certain threshold frequency, a surface ejects no electrons, no matter how intense the radiation is! Secondly, for shorter wavelengths (higher frequencies) he found that the kinetic energy of the ejected electrons increases linearly with the frequency, and does not depend on the intensity of the incident radiation! (Increasing the radiation's intensity only causes more electrons to be ejected.) Lastly, experimenting with very low intensities, he found that some photoelectrons are ejected practically the instant a metal is irradiated. Classically it might be expected in such low-intensity cases that some time would elapse before an electron could absorb enough energy to be ejected.

In 1905 Einstein extended Planck's hypothesis from absorbing/radiating matter to light itself. He suggested that radiation of frequency ν interacts with matter in discrete quanta, again of energy E = hν. Lenard's observations indicated that these interactions occur practically instantaneously … an observation suggesting that radiant energy is particulate and not spread out continuously in space (as classical Maxwellian theory supposes). In the case of light, the particles were dubbed "photons." Einstein's insight not only provided a theoretical basis for Lenard's discoveries, but helped explain subsequent experimental results, including (1) the demonstration (by Franck and Hertz in 1914) that free-flowing electrons give up their kinetic energy, when they collide with gas atoms, only in discrete and finite energy quanta, and (2) Compton's demonstration (in 1923) that high frequency light scatters loosely bound electrons in a way that can best be described by collision theory among particles. In 1907 Einstein retrenched, extending Planck's hypothesis, about how matter only absorbs/emits radiant energy in quanta, to the absorption/emission of heat energy.
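Lenard's first two observations follow from the photon picture: each electron absorbs one quantum hν and pays an escape cost W (the metal's work function), so K_max = hν − W. A minimal sketch, with an illustrative (assumed) work function of 2.3 eV:

```python
# Einstein's photoelectric relation K_max = h*nu - W.  The work function
# value here is an assumption chosen purely for demonstration.
H = 6.626e-34          # Planck's constant, J*s
EV = 1.602e-19         # joules per electron-volt

def k_max_eV(freq_hz, work_function_eV=2.3):
    """Max kinetic energy (eV) of ejected electrons; None below threshold."""
    k = H * freq_hz / EV - work_function_eV
    return k if k > 0 else None

threshold = 2.3 * EV / H   # frequency at which electrons first escape
print(f"threshold frequency ~ {threshold:.2e} Hz")
print(k_max_eV(4.0e14))    # below threshold -> no electrons, however intense
print(k_max_eV(8.0e14))    # above threshold -> K grows linearly with frequency
```

Intensity never appears in the relation: it sets how many photons arrive, hence how many electrons leave, but not their individual energies.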
Einstein thus provided a theoretical basis for the experimentally measured specific heats of various solids. The newly discovered behaviors of nature on atomic scales were beginning to make some sort of sense. Quantum theory and its fundamental tenet, that matter and radiation (and heat and …) interact instantaneously in discrete (if tiny) quanta of energy, was taking shape.

In 1913 Bohr used the Planck/Einstein ideas to explain the long-puzzling wavelength spectra of Hydrogen atoms. He used the atomic model suggested by Rutherford: the Hydrogen atom resembles a tiny star/planet system, with the planet orbiting the much more massive, practically resting star. Of course in the case of an atom the "planet" is an electron, which carries an electric charge. Now classically a charge going in a circle radiates energy and, lacking a tangential driving force, an atomic electron should quickly lose energy and spiral in toward the nucleus. Furthermore, the emitted radiation should have the same frequency as the orbiting electron. Since atoms don't collapse to the size of their nuclei but persist indefinitely, and since there were excellent reasons to believe that the Rutherford atomic model was more or less correct, Bohr concluded that classical theory isn't right in the case of atoms. Orbiting electrons clearly do not radiate constantly. Furthermore, when they do radiate, the frequency is not the electronic orbital frequency called for by Newtonian mechanics. Bohr postulated that the radiation frequency must be proportional to the difference between the energies of two quasi-stable electron orbitals:

ν = (E_{b} – E_{a}) / h. (1.3)

He deduced that in order for this to be so (and for the radiated frequencies to agree with the known discrete spectra of Hydrogen), the electron's angular momentum around the nucleus can only assume discrete values:

L = nh / (2π), n = 1, 2, … (1.4)

There was great excitement about Bohr's ability to explain so many empirical results.
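Eqs. 1.3 and 1.4 together yield the standard Hydrogen energy levels E_{n} = –13.6 eV / n^{2} (a textbook result, quoted here rather than derived), and with them Balmer's wavelengths drop out. A quick sketch:

```python
# Bohr's quantization (Eq. 1.4) leads to the standard hydrogen levels
# E_n = -13.6 eV / n^2; Eq. 1.3 then gives the emitted wavelengths.
E1 = -13.6            # ground-state energy, eV (standard value)
HC = 1239.8           # h*c in eV*nm

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted in the n_upper -> n_lower jump."""
    delta_e = E1 / n_upper**2 - E1 / n_lower**2    # positive for emission
    return HC / delta_e

# Balmer series (jumps down to n=2) -- these reproduce Balmer's formula:
for n in (3, 4, 5):
    print(f"{n} -> 2 : {transition_wavelength_nm(n, 2):.1f} nm")
```

The n=3→2 line comes out near 656 nm (the red Hα line), the n=4→2 line near 486 nm, exactly the discrete set Balmer had fit empirically in 1885.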
In 1923 deBroglie advanced a sweeping generalization. He suggested that some sort of wave is associated with every particle (whether that particle be a photon of light or a material particle like an electron). And he proposed that the wavelength and frequency of this new kind of wave are related to the particle's momentum (p) and its energy (E) in the following ways:

p = h/λ, (1.5)

E = hν. (1.6)

It turned out that deBroglie didn't have things quite right. His new kind of wave would prove not to be an attribute of any given particle, but rather an attribute of an ensemble of particles (e.g., a very large number of particles with a common momentum). In 1927 Davisson and Germer provided a stunning corroboration of deBroglie's ideas. In the course of scattering ensembles of electrons off the layers of atoms in a crystal, they discovered that the electrons scatter in particular directions. These directions are fully consistent with the interference of waves being scattered by the crystal's atomic planes … a phenomenon previously investigated for light by Bragg. Light had of course long been known to be a wave phenomenon. (The new development with light was that it also exhibits particulate behavior.) The discovery that material particles (e.g. electrons) exhibit wavelike behavior took the physics world by storm! Among other things, electrons seem to go where deBroglie's waves interfere constructively, and to shun those places where destructive interference occurs.

Meanwhile, Heisenberg was busy reflecting upon how accurately certain complementary pairs of physical quantities can be determined. He deduced that a particle's position and momentum (for example) cannot simultaneously be determined with absolute accuracy. Now classically it had always been assumed that such quantities could be perfectly known at some selected instant in time.
For example, it was believed that an observer could precisely determine a system's initial state and, having done this, could perfectly predict the system's future states using the classical laws. However, Heisenberg concluded that the more accurate our knowledge of a particle's position is at some moment, the less accurate our knowledge of its momentum must be (and vice versa). According to Heisenberg's Uncertainty Principle the classical assumption, that a system's initial state can be precisely known at some time t=0, is flawed. In the case of systems containing macroscopic (massive) particles it seems that we can determine the initial states precisely, because the uncertainties are relatively insignificant. But rigorously speaking the imprecision is there, and it becomes more and more significant as the system's constituent particles become microscopic. All we can say is that the momenta and/or positions of the particles, in an ensemble of "identically prepared" systems, have values within certain ranges. Of course this means that the "identically prepared" systems of an ensemble are not truly all identical at t=0, and their states at later times can be expected to be prone to the same uncertainties.

Heisenberg, Born and Jordan developed a new kind of mechanics that took these uncertainties into account. Mathematically their theory was based on matrices and is known as matrix mechanics. Meanwhile Schroedinger was pondering the demonstrated wave behavior of microscopic particles. In 1925 he advanced a differential equation for deBroglie's new "quantum wave function" (dubbed "psi" or Ψ). The statistical behaviors predicted by this differential equation were quickly dubbed "wave mechanics." One of the distinguishing features of Schroedinger's Ψ is that it is complex, with real and imaginary parts. In 1926 Schroedinger demonstrated that the new wave mechanics is equivalent to the matrix mechanics of Heisenberg et al.
Dirac subsequently went on to show (in 1930) that matrix mechanics and wave mechanics are two forms of a more general formulation of quantum mechanics.

2. Born's Interpretation of Ψ(r,t).

While collaborating with Heisenberg and Jordan in the formulation of matrix mechanics, Born was also closely following developments in wave mechanics. Everyone wondered about the nature of the new quantum wave function, Ψ, and what its value at a given point in space and time might signify. Born was well aware of the fact (courtesy of Davisson and Germer) that particles are detected where quantum waves interfere constructively, and are more rarely detected in regions of destructive interference. Since wave amplitudes can typically assume positive and negative values (and since a complex quantity cannot itself map to a real physical quantity), he suggested in 1926 that the squared amplitude of Ψ (i.e., Ψ(r,t)Ψ*(r,t), or |Ψ(r,t)|^{2}) is proportional to the probability that a particle will be detected in the volume element dV, centered on position r, in the time interval dt, centered on time t.

It is important … very important … to bear in mind that the quantum wave function does not refer to a particular particle. Rather it refers to an ensemble of "identically prepared" systems containing a particular kind of particle (say an electron). In the determinate world of classical physics the focus was usually on a particular system. By applying physical laws, it was believed that the future state of any system could be absolutely predicted, given a precise knowledge of its initial state. But thanks to Heisenberg, it became clear that the initial state of a system cannot be precisely known. More generally, the "identically prepared" systems comprising an ensemble necessarily vary somewhat because of intrinsic uncertainties in pairs of complementary variables (such as position and momentum).
Thus it made sense that the positions and momenta of the particles in such an ensemble would vary if the systems were observed at some later time. One can at best only say, a priori, that there is a certain probability a particle will be found in a particular volume element during a particular increment of time, when we examine one of the systems in an ensemble (and similarly for the particle's momentum).

Let us take a closer look at what Born meant by "the probability that a particle will be detected…". We shall consider an ensemble of systems, each containing a single particle (say an electron). We start out at time t=0 with an ensemble of N such "identically prepared" systems, where N is arbitrarily large (or infinite). At some later time t_{1}>0 we look for the particle in each member of an arbitrarily large subset of the ensemble (containing, say, n<N systems, where n is also very large). Let us for openers suppose that we successfully locate the particle in each and every system we look at. "At" time t_{1} we find the particle "at" r_{1} (i.e. in a volume element dV centered on r_{1}) F(r_{1},t_{1}) times, we find the particle "at" r_{2} F(r_{2},t_{1}) times, etc. Then by definition, the probability of finding the particle "at" r_{1}, "at" time t_{1}, is

P(r_{1},t_{1})dVdt = F(r_{1},t_{1}) / n. (2.1)

And, according to Born,

P(r_{1},t_{1})dVdt = |Ψ(r_{1},t_{1})|^{2}dVdt. (2.2)

We can repeat the procedure at times t_{2}, t_{3}, etc., using additional systems. Thus we can in principle work up values of P(r,t) for all r and all t>0. Of course we never actually do things this way! Rather we use quantum theory to determine Ψ(r,t) and hence P(r,t). Rigorously speaking we would have to perform the above-described experiment for infinite N and n (i.e. over all of space and for all times t>0) in order to exhaustively test our theoretical function Ψ. It should be borne in mind that P(r,t) may be found to be zero throughout much of space.
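The frequency interpretation of Eqs. 2.1-2.2 is easy to mimic numerically: sample detection positions from a law P ~ |Ψ|^{2} and watch the observed frequencies F/n approach P as n grows. The sin^{2} profile on a ten-cell grid below is purely illustrative:

```python
import random, math

# Sketch of Born's frequency interpretation: draw particle positions from a
# probability law P(x) ~ |Psi(x)|^2 and compare observed frequencies F/n
# against P.  The distribution used here is an arbitrary illustration.
random.seed(0)
cells = list(range(10))
weights = [math.sin(math.pi * (i + 0.5) / 10) ** 2 for i in cells]
total = sum(weights)
P = [w / total for w in weights]            # theoretical probabilities

n = 100_000                                 # number of "systems" examined
counts = [0] * 10                           # F(r_i): detections per cell
for _ in range(n):
    counts[random.choices(cells, weights=P)[0]] += 1

for i in cells:
    print(f"cell {i}: F/n = {counts[i]/n:.4f}   P = {P[i]:.4f}")
```

With 10^5 samples the frequencies agree with P to a few parts in a thousand; the agreement sharpens as n grows, which is the operational content of Eq. 2.1.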
For example, if we know that our ensemble is initially contained in volume V_{initial}, then at times t>0 we can count on the particle not being outside of some expanded, finite volume (assuming the speed of every particle in the ensemble is < c). Outside of that expanded volume we should find P(r,t) to be zero, whether we calculate P a priori (using quantum theory) or actually check all of infinite space (not likely!).

There is a second, somewhat mysterious case where P(r,t)dVdt will turn out to be zero. This occurs within the volumes that can contain the particle at times t>0. Classically we might be inclined to expect that, within any one of these volumes, the particle will be found at least once in every volume element dV (provided only that n is large enough). In reality this may not turn out to be the case, owing to the fact that particles may not invariably interact with their environment (and specifically with us) at each and every position in space and time. Let us say that, sometime during the period t to t+dt, we find the particle (somewhere) in m of the systems, where m<n. Then the probability of finding the particle somewhere "at" time t is m/n. And of course the probability of not finding the particle anywhere "at" time t is (n–m)/n. In these cases we again have (after all the results have been tallied) P(r,t)dVdt = F(r,t)/n. But once all the results are in, there may be values of (r,t) where the particle wasn't detected a single time (i.e. there may be values of (r,t) where F(r,t) remains unchanged from its initial setting of zero). At one of these positions P(r,t)dVdt = 0 … a result consistent with the possibility that Ψ(r,t) may equal zero at certain values of (r,t). Despite the discoveries of Franck, Hertz et al, there are some who suggest that a particle is never where Ψ(r,t)=0. We shall hold to the less radical view that, in some cases, a particle simply will not interact with its environment (including us) where and when Ψ(r,t)=0. 3.
Spinning Spirals.

In this section we address the question of how Ψ might be visualized in space and time. Let us begin by supposing that at time t=0 we have an ensemble of systems, each containing a single electron, and that all of the electrons have a common, constant velocity that we know exactly. For convenience we shall suppose that v<<c and that v points in the positive x-direction. Since we also know that v_{y} and v_{z} precisely equal zero, Heisenberg implies that, in any given system, the electron could initially be found anywhere in space. What can we postulate about Ψ, the single wave function for this ensemble? It is reasonable to suppose that Ψ's direction of propagation is also the positive x-direction (though not necessarily with v_{phase} equaling v). Indeed Ψ must be a plane wave (with same-phase planes parallel to the yz-plane). Thus Ψ is a function only of x and t.

In considering a form for Ψ(x,t) we might be inclined to consider known physical waves. For example, we might consider a plane polarized electromagnetic wave. Might Ψ be like a sine wave at some given instant, oscillating sinusoidally in time as it passes some fixed point in space? If so, then the probability of finding an electron, "at" some observation time t, would vary with x. And "at" some fixed x the probability would vary in time. A priori, however, there is no reason to suppose that some values of x or t should be more heavily weighted than others.

Let us next consider a circularly polarized wave. In this case the wave amplitude is the same at all x. If we move along a ray (parallel to the x-axis) at some instant in time, with the tail of a field vector anchored to the ray, then the vector head traces a spiral around the ray. And if we observe things over a period of time, at some fixed x, then the vector spins around the ray. A circularly polarized wave thus satisfies our expectation that P(x,t)dx be the same at all x and t.
The projection of Ψ on the xy and xz planes produces the "components" of Ψ. If Ψ is a vector, then we have vector components. But there is a second possibility. Suppose that, at any given x, we picture a complex plane parallel to the yz-plane, with its imaginary axis parallel to the y-axis and its real axis parallel to the z-axis. Then Ψ might be complex, with Ψ_{y} = Im Ψ and Ψ_{z} = Re Ψ. Of course |Ψ|^{2} = ΨΨ* is real (quite as, by definition, the probability of finding the particle "at" a particular point in spacetime is real).

We begin, therefore, by supposing that Ψ can be graphically pictured in space by using a spiral. For simplicity we consider points on the x-axis, with the spiral curling around that axis. Let us position the origin (of Ψ) such that, at time t=0, Ψ is real at x=0. Looking down the x-axis at that moment, in the positive x-direction, there are two possibilities: a clockwise (or right-handed) and a counterclockwise (or left-handed) spiral. Since a particle can move "forward" in space (positive x-direction) and "backward" in space (negative x-direction), we can use the clockwise spiral for one direction and the counterclockwise for the other.

How about time? We shall suppose that a particle can only move forward in time. If both spirals spin around the axis in a clockwise direction, with their angular speed correlating to particle energy, then the counterclockwise (left-handed) spiral's threads move in the positive x-direction, and the clockwise (right-handed) spiral's threads advance in the negative x-direction. The stage is now set for relating spiral geometry and kinematics to mechanical variables using deBroglie's relations. According to him, the spiral wavelength (i.e. the distance over which a given value of Ψ repeats) is related to the magnitude of a particle's momentum by p = h/λ. Similarly the particle's energy is related to the spiral's frequency of rotation by E = hν.
Or, letting k = 2π/λ and ω = 2πν, deBroglie's relations become

E = hω/2π (3.1)

and

p = hk/2π. (3.2)

(Like p, k is a vector. Its magnitude is often referred to as the wave number.) We can deduce how λ (the spiral wavelength) relates to v, the particle's speed, as follows:

p = mv. (3.3)

But from deBroglie,

p = h/λ. (3.4)

Thus

λ = h/mv, (3.5)

or

k = 2πmv/h. (3.6)

In the case of particle energy and spiral spin frequency, we shall say that E is the particle's kinetic energy. Then

mv^{2}/2 = hν, (3.7)

or

ω = πmv^{2}/h = πp^{2}/mh. (3.8)

Now the spiral phase velocity has magnitude

v_{phase} = λν = ω/k. (3.9)

Therefore

v_{phase} = v/2. (3.10)

What is the wave equation for the counterclockwise (or left-handed) spiral? Its threads advance in the positive x-direction as it spins clockwise around the x-axis. Thus

Ψ_{+}(x,t) = Ψ_{o+} cos(kx – ωt) + iΨ_{o+} sin(kx – ωt), (3.11)
where Ψ_{o+} = Ψ_{+}(0,0) can conveniently be assumed to be real and positive, and the "+" subscript signifies a spiral (and particles) moving in the positive x-direction. Similarly, an ensemble of particles moving in the negative x-direction would be represented by the right-handed spiral, with formula

Ψ_{–}(x,t) = Ψ_{o–} cos(kx + ωt) – iΨ_{o–} sin(kx + ωt). (3.12)

Or, in slightly more compact notation (see Appendix A if needed),

Ψ_{+}(x,t) = Ψ_{o+}e^{i(kx – ωt)}, (3.13)

Ψ_{–}(x,t) = Ψ_{o–}e^{–i(kx + ωt)}. (3.14)

It should be borne in mind that, in the case of an ensemble of particles moving either in the positive or negative x-direction with precisely known velocity, a single wave function specifies the entire ensemble. There are no interference effects of this single wave function with itself. (In the next section we will consider cases where two Ψ's interfere constructively and destructively.)

Before leaving the present case, we should consider Ψ_{o}, the amplitude of Ψ_{+} or Ψ_{–}. Any time we look for and find the particle, it could a priori be anywhere. That is, we expect Ψ_{+} or Ψ_{–} to have the same amplitude everywhere. Since ΨΨ* > 0 at all x (i.e., since it is a certainty that we'll find the particle somewhere every time we look for it), we have the normalization requirement that

∫_{–∞}^{+∞} ΨΨ* dx = 1. (3.15)

In view of Eqs. 3.11 and 3.12, Ψ_{o} must be infinitesimal in the present case. Thus even in some finite interval, Δx, the probability of finding the particle, when we check any given system in the ensemble, is infinitesimal:

Ψ(x,t)Ψ*(x,t)Δx ~ 0. (3.16)

It is only when we look everywhere (i.e. when Δx is infinite) that ΨΨ*Δx becomes nonzero (namely unity).

4. Interference.

Davisson and Germer were the first to demonstrate the effects of interfering deBroglie "matter waves." (G.P. Thomson demonstrated the same thing, using an apparatus patterned after one used by Debye and Scherrer to investigate X-ray diffraction.)
An ideal setup would have been one like Young's double slit experiment with light (performed way back in 1803). But owing to the extremely short wavelengths of "matter waves," no one could at first engineer adequately narrow slits. In 1961 Jonsson solved the problem using minuscule slits in copper foil. Instead of shining a plane electromagnetic wave on the foil (a la Young), however, he used a stream of monoenergetic electrons. On the incident side of the foil this stream approximated a single ensemble, represented by a single wave function. When the electrons pass through the slits, however, two new ensembles are created, each with its own wave function (say Ψ_{1} and Ψ_{2}). These two new wave functions have common wavelengths and frequencies. The beauty of it is that the two new Ψ waves are in phase at the foil (or at least their phase difference remains constant in time). Owing to the tiny slit widths, each new wave function spreads out in circular wave fronts, and the two Ψ's begin to interfere beyond the copper screen. Using an array of electron detectors, in a plane parallel to the copper sheet, Jonsson obtained the same patterns of constructive and destructive interference as Young had obtained with light so many years earlier.

If one of the slits in the copper sheet is covered, then the interference pattern at the detectors is lost. For example, with both slits open the major peak in the interference pattern typically occurs on a horizontal line midway between the slits and perpendicular to the copper foil. But when one slit is covered this peak diminishes measurably. Indeed one could alternately cover and uncover one of the slits to communicate with the central detector in dot/dash fashion. It is interesting to speculate about how long after a slit is covered an attenuation is sensed at the central detector.
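The two-slit pattern itself is easy to sketch: add the two constituent waves (in the e^{ikr} form of Eq. 3.13) with phases set by the path lengths from each slit, then take |Ψ_{1} + Ψ_{2}|^{2}. All geometry values below are illustrative assumptions, not Jonsson's actual dimensions:

```python
import cmath, math

# Two-slit matter-wave interference: |Psi1 + Psi2|^2 on the detector plane.
# Wavelength, slit separation, and screen distance are assumed values.
lam = 5.0e-11                  # de Broglie wavelength, m (assumed)
k = 2 * math.pi / lam
d = 2.0e-6                     # slit separation, m (assumed)
L = 0.35                       # foil-to-detector distance, m (assumed)

def detection_rate(y):
    """Relative |Psi1 + Psi2|^2 at height y on the detector plane."""
    r1 = math.hypot(L, y - d / 2)     # path length from slit 1
    r2 = math.hypot(L, y + d / 2)     # path length from slit 2
    psi = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(psi) ** 2

print(detection_rate(0.0))                 # central maximum (constructive)
# First null: path difference lam/2 occurs near y = lam*L/(2*d)
print(detection_rate(lam * L / (2 * d)))   # ~0 (destructive)
```

On the midline the two paths are equal, so the amplitudes add (rate 4, in units of one slit's |Ψ|^{2}); where the paths differ by λ/2 they cancel, which is where electrons "shun" the detector.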
Since there is a direct connection between the number of electrons sensed per unit time and |Ψ_{1} + Ψ_{2}|^{2}, it is plausible that the time lapse is L/v_{phase}, where L is the distance from either slit to the central detector. Classically we might expect the interference pattern to vanish after a time lapse of L/v. But if v_{phase} = v/2, then wave mechanics indicates that the time lapse will be twice as long!
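The factor of two traces back to Eqs. 3.6-3.10; a quick numerical check (the electron mass is used here, though any mass cancels out of the ratio):

```python
import math

# Verifying Eqs. 3.6-3.10: for a free particle, k = 2*pi*m*v/h and
# omega = pi*m*v^2/h, so the phase velocity omega/k comes out to v/2.
H = 6.626e-34      # Planck's constant, J*s
M = 9.109e-31      # electron mass, kg (cancels in the ratio)

def phase_velocity(v):
    k = 2 * math.pi * M * v / H          # Eq. 3.6
    omega = math.pi * M * v**2 / H       # Eq. 3.8
    return omega / k                     # Eq. 3.9

v = 1.0e6                                # 10^6 m/s, well below c
print(phase_velocity(v))                 # -> v/2 = 5.0e5 m/s
```

Every constant cancels in ω/k, leaving exactly v/2 regardless of mass or speed, which is Eq. 3.10.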
5. Reflected Ψ.
Fig. 5.1 depicts the case where a monoenergetic stream of electrons is reversed upon encountering an infinite potential energy barrier at x=0.

Figure 5.1 Reflected Matter Waves

The electrons moving toward negative x constitute one ensemble, and those moving toward positive x constitute another. Each ensemble is characterized by a wave function, say Ψ_{–} and Ψ_{+}. It is a simple matter to calculate Ψ_{–}(x,t) + Ψ_{+}(x,t), provided one knows what phase shift (if any) occurs at the barrier. In the case of light, there is a phase reversal upon reflection. And indications are that this is also the case with matter waves. Assuming Ψ_{+}(0,0) = Ψ_{o} and (thanks to the phase reversal) Ψ_{–}(0,0) = –Ψ_{o}, where Ψ_{o} is real and positive, we have

Ψ(0,0) = Ψ_{–}(0,0) + Ψ_{+}(0,0) = 0. (5.1)

Thus

Ψ(0,0)Ψ*(0,0) = 0. (5.2)

Furthermore, Ψ_{–}(0,0) and Ψ_{+}(0,0) both rotate clockwise at the same rate. Thus more generally,

Ψ(0,t)Ψ*(0,t) = 0. (5.3)

Evidently we will never find electrons right at x=0 (although they may be there at certain discrete instants). At values of x other than zero, we have (at time t=0)

Ψ_{–}(x,0) = –Ψ_{o}[cos(kx) – i sin(kx)], (5.4)

Ψ_{+}(x,0) = Ψ_{o}[cos(kx) + i sin(kx)]. (5.5)

Thus

Ψ(x,0) = Ψ_{–}(x,0) + Ψ_{+}(x,0) = 2iΨ_{o} sin(kx). (5.6)
Ψ(x,0) is purely imaginary at all x. It lies entirely in a plane, and does not spiral around the x-axis. For P(x,0) we have (courtesy of Born)

P(x,0) = Ψ(x,0)Ψ*(x,0) = 4Ψ_{o}^{2} sin^{2}(kx). (5.7)
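Eq. 5.7 in numbers, with an arbitrary illustrative amplitude and wavelength:

```python
import math

# Standing matter wave against an infinite barrier (Eq. 5.7):
# P(x) = 4*Psi0^2 * sin^2(k*x), nodes at multiples of lambda/2,
# maxima of 4*Psi0^2 at odd multiples of lambda/4.
PSI0 = 1.0          # amplitude (illustrative)
LAM = 1.0e-10       # wavelength, m (illustrative)
K = 2 * math.pi / LAM

def P(x):
    return 4 * PSI0**2 * math.sin(K * x) ** 2

print(P(0.0))           # node at the barrier: 0
print(P(LAM / 4))       # first maximum: 4*Psi0^2
print(P(LAM / 2))       # back to a node
```

Note that the maximum, 4Ψ_{o}^{2}, is four times the single-wave value Ψ_{o}^{2}: the two counter-propagating waves add fully in phase there.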
Fig. 5.2 illustrates.

Figure 5.2 P(x), Infinite Barrier at x=0

The probability of finding an electron, which is zero at x=0, rises to a maximum value of 4Ψ_{o}^{2} at x=λ/4. It then drops back to zero at x=λ/2, rises again to 4Ψ_{o}^{2} at x=3λ/4, etc. This behavior is certainly not what we would classically expect! As in the case of electrons that all move with one common velocity, we might classically expect P(x,0) to be uniform in x. But in quantum theory P(x,0) rises and falls as sin^{2}(kx). We do not see this behavior in macroscopic cases because λ is so small (typically less than the diameter of an atomic nucleus). It is only when we are dealing with microscopic particles (and longer wavelengths) that the fluctuations in P(x,0) become significant. It is worth pointing out that Ψ_{–} and Ψ_{+} rotate clockwise at every x (quite as they do at x=0). Thus in general,

P = 4Ψ_{o}^{2} sin^{2}(kx). (5.8)

Rigorously speaking, P depends only on x. Such cases are referred to as "stationary states" (i.e., stationary in time). The result that P(x) is not uniform, but repeatedly rises from zero to a maximum and back again to zero, is the stunning new implication of wave mechanics. It doesn't have anything to do with interactions between the electrons (or neutrons, or …). Indeed we can let just one electron through to the barrier at a time. After enough have been reflected and subsequently located, we still get the rise and fall in P. This rise and fall is evidently attributable to the way Ψ_{–} and Ψ_{+} interfere. The ramifications are far-reaching. It was only when physicists had developed techniques for observing nature on a microscopic scale that wave function interference effects became manifestly evident.

6. The Infinite Square Well.

In Sect. 5 we could as well have had plane wave Ψ_{+} incident on the left side of an infinite barrier. We still would have obtained the sin^{2}(kx) form for |Ψ_{–} + Ψ_{+}|^{2}. In Fig.
6.1 we have two infinite barriers, with plane matter waves reflecting off both barriers. This situation corresponds to an ensemble of bound electrons (i.e., bound to the region 0 < x < L).

Figure 6.1 Bound Matter Waves

In effect, if Ψ_{–} is propagating to the left and Ψ_{+} is propagating to the right, then we again have interference. (We might think of there being a single wave, reversing phase at each reflection and winding back around on itself.) Such setups are well known in the case of electromagnetic waves. And there is an important constraint: the wavelength of the contained radiation must be such that there are electric field nodes at x=0 and x=L. In other words, λ must be such that

nλ/2 = L, n = 1, 2, … (6.1)

Wavelengths that satisfy Eq. 6.1 are said to be resonant. The same constraint holds in the case of matter waves. Thus (according to deBroglie) only discrete particle momenta and energies are resonant. Here again, classically we might expect that a particle of any energy could be contained. However, quantum theory requires resonant momenta/energies. Macroscopically it might seem that a continuum of energies can be bound, owing to the fact that the differences between the resonant energies are too small to detect. In microscopic cases, however, the differences between resonant energies are much more significant, and the fact that only discrete energies can be bound is of fundamental importance.

7. Wave Groups.

In previous sections we considered the highly idealized cases where all of the particles in an ensemble of single-particle systems had a single-valued speed (or momentum magnitude). In more realistic situations the particles in a "grand" ensemble of "identically prepared" systems will have a range of momenta. We might do our best always to give the particles the same initial momentum. But the fact that we prepare our systems in some finite volume of space, if nothing else, ensures that the particles' momenta will take on a spread of initial values.
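Before pursuing momentum spreads, the discreteness claimed in Sect. 6 is worth a number: combining the resonance condition (Eq. 6.1) with deBroglie's p = h/λ gives p_{n} = nh/(2L) and hence E_{n} = n^{2}h^{2}/(8mL^{2}). A sketch, with an electron in an assumed 1 nm well:

```python
import math

# Bound energies of the infinite square well, from Eq. 6.1 plus p = h/lambda:
# lambda_n = 2L/n  ->  p_n = n*h/(2L)  ->  E_n = n^2 * h^2 / (8*m*L^2).
H = 6.626e-34      # Planck's constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electron-volt

def energy_eV(n, L):
    return (n * H / (2 * L)) ** 2 / (2 * M_E) / EV

L = 1.0e-9         # well width: 1 nm (illustrative)
for n in (1, 2, 3):
    print(f"E_{n} = {energy_eV(n, L):.3f} eV")
# Note the n^2 spacing: E_2 = 4*E_1, E_3 = 9*E_1.
```

For a nanometer-scale well the level spacing is a sizable fraction of an eV, easily measurable; for a macroscopic L the same formula gives spacings so tiny that the spectrum looks continuous, as the text notes.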
Now the set of particles with any particular momentum magnitude in the range p to p+dp can be considered to be an ensemble in its own right, with its own, simple wave function. This means that the "grand" ensemble's wave function will be a composite (or sum) of the many simple, constituent wave functions (quite as Ψ equaled Ψ₋ + Ψ₊ in previous sections). Ordinarily the amplitudes of the constituent wave functions will vary as the numbers of particles with different momenta vary. For simplicity, however, let us assume that all of the constituent wave functions have one common amplitude, say unity. In order to sum all of the constituent wave functions, over some range of x and at times t>0, we must know their phase relationships at some x and at initial time t=0. In the case of the infinite potential barrier we opted for Ψ₋ to be real at the barrier, at time t=0. And we invoked the phase reversal rule for Ψ₊ at the same position and time. In the case of the square well we made the same assumption. (And the constraint that L = nλ/2 gave us the same phase relationship at the other wall.) In the present case, of particles all moving in the positive x-direction with a range of momenta, we shall assume that each constituent Ψ wave is real and positive at x=0 and t=0. (Other sets of phase relations at x=0 and t=0 may not produce a recognizable wave group in the range of x considered.) Having adopted this rule, the constituent wave functions can be summed at that moment to get the resultant wave function at other values of x. After summing all constituent wave functions to get the resultant Ψ(x,0), P(x,0) = Ψ(x,0)Ψ*(x,0) can be computed. This probability density is plotted in Fig. 7.1. Figure 7.1 P(x,0), Range of Equally Weighted Momenta The perhaps not altogether unexpected result in Fig. 7.1 is that P(x,0) is not a simple Gaussian. There are several values of x where P(x,0) goes to zero, but then rises above zero again.
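The construction just described can be sketched directly. In the Python sketch below, the number of constituents, the particle mass, and the speed spread are all assumed for illustration; each wave has unit amplitude and is real and positive at x=0, t=0, exactly as stipulated above.

```python
import cmath, math

# N equally weighted constituent waves summed to give Psi(x,0) and P = |Psi|^2.
m = 9.11e-31
h = 6.626e-34
speeds = [1.0e4 * (0.8 + 0.4 * i / 20) for i in range(21)]   # spread about 1e4 m/s

def Psi(x, t=0.0):
    total = 0j
    for v in speeds:
        k = 2 * math.pi * m * v / h          # de Broglie wave number
        w = math.pi * m * v**2 / h           # rotation rate, Eq. 8.6 (U = 0)
        total += cmath.exp(1j * (k * x - w * t))   # unit amplitude; real & positive at x=0, t=0
    return total

def P(x, t=0.0):
    return abs(Psi(x, t)) ** 2

# At x=0, t=0 every constituent equals exactly 1, so P(0,0) = N^2: the group's
# central peak. Away from x=0 the constituents dephase and P collapses, with
# the repeated zeros and side lobes seen in Fig. 7.1.
```

Scanning P(x, 0) over a range of x reproduces the qualitative shape of Fig. 7.1: a dominant peak at the origin flanked by lobes separated by zeros.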
Does this mean that there aren't any particles initially at the "zero" x's? Or does it mean that any particles at these points won't interact with their environment (including us)? Of course if our uncertainty in initial particle position is limited to −.0006 < x(0) < .0006, then P(x,0) is expected to be zero at |x| > .0006. The "quantum surprise" is the zero probabilities of finding a particle at certain points in the range −.0006 < x < .0006. In previous sections we have seen how wave functions for monoenergetic particles, traveling in opposite directions, interfere to produce stationary states. In Fig. 7.1 it is clear that multiple constituent waves, for particles with a range of energies and all traveling in the same direction, also interfere. However, in this case P(x,t) is not stationary … it will evolve in time. It might be expected that the center of the group will move in the positive x-direction with a group velocity equal to the mean particle velocity. Particles with velocities less than this mean will tend to fall behind, and those with velocities greater than the mean will tend to move out further in front. Thus we expect the group in Fig. 7.1 both to translate to the right and to broaden as time passes. Of course the shape of the plot will depend on the details of how the constituent wave functions (each with its own phase velocity and rotation rate) combine. Figs. 7.2 – 7.6 show the computed P(x,t) for successively greater times. ("v" in the captions stands for the mean speed.) Note how the group translates to the right and broadens out with the passage of time. Also noteworthy is how the curve for P(x,t) changes shape as time progresses. Figure 7.2 P(x, h/2mv²) Figure 7.3 P(x, h/mv²) Figure 7.4 P(x, 2h/mv²) Figure 7.5 P(x, 5h/mv²) Figure 7.6 P(x, 10h/mv²) 8. Nonzero Potential Energy. Up until now we have implicitly considered cases where particle ensembles traveled through regions of constant, zero potential energy (U=0).
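The translation claim above can be checked with the same kind of constituent-wave sum. In this Python sketch (mass, speed spread, and sample time all assumed), the peak sits at x=0 at t=0; a short time later the probability at x=0 has collapsed and the group is found near x = v̄t, where v̄ is the mean particle speed.

```python
import cmath, math

# The group of Sect. 7, evolved in time: it translates at roughly the mean speed.
m, h = 9.11e-31, 6.626e-34
speeds = [1.0e4 * (0.9 + 0.2 * i / 10) for i in range(11)]   # mean speed 1e4 m/s
vbar = sum(speeds) / len(speeds)

def P(x, t):
    """|sum of constituent waves|^2, each wave real and positive at x=0, t=0."""
    total = 0j
    for v in speeds:
        k = 2 * math.pi * m * v / h        # de Broglie wave number
        w = math.pi * m * v**2 / h         # Eq. 8.6 (U = 0)
        total += cmath.exp(1j * (k * x - w * t))
    return abs(total) ** 2

t1 = 1.0e-10
# At t=0 the peak sits at x=0; by t1 it has moved to roughly x = vbar * t1
# (here 1e-6 m), while the probability remaining at x=0 has collapsed. At t1 the
# peak is also slightly lower than at t=0: the onset of the broadening seen in
# Figs. 7.2 - 7.6.
```

Sampling P on a grid at successively larger t would reproduce the spreading of Figs. 7.2 – 7.6 as well; the dispersion (each constituent having its own phase velocity) is what reshapes the curve.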
When the potential energy changed from zero, it went to infinity and constituted an impenetrable barrier. In classical mechanics the concept of a potential energy that depends only on position is particularly useful. The force experienced by a particle is then related (in one dimension) to the spatial rate of change of the potential energy: F_x = −dU/dx. (8.1) U(x) can have many forms. The potential energy of a stretched spring varies as x² (x being the amount the spring is stretched or compressed): U(x) = kx²/2. (8.2) (In Eq. 8.2 "k" is the spring constant.) The potential energy of a particle of mass m, in a constant gravitational field of strength g, is U(h) = mgh. (8.3) (In Eq. 8.3 "h" is the distance above a point where U is defined to be zero.) Quite often the potential energy is considered to be an attribute of the particle experiencing the force. For example, U(x) in Eq. 8.2 is often said to be the potential energy of a particle on the end of a spring. When U is nonzero, the particle's energy is usually expanded to include U: E = mv²/2 + U. (8.4) The utility of Eq. 8.4 lies in the idea that, when U depends only on position, then E is constant. Thus a knowledge of U(x) allows us to calculate v as a function of x. The point (or region) where U=0 is usually left up to the discretion of the analyst. For example, we implicitly chose the region inside the infinite square well (Sect. 6) to be the zone where U equaled zero. The rate at which the spiral for Ψ spun was then set proportional solely to the kinetic energy: ω = πmv²/h. (8.5) Of course we might have chosen the top of the well to be the region where U equaled zero. That would have meant that, inside the well, U would have been infinitely negative. Would that have impacted Ψ? If we include U in a particle's energy, then it certainly would have impacted the constituents, Ψ₋ and Ψ₊.
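Whether a constant U offset impacts P can be tested numerically before working out the algebra. The Python sketch below builds the Sect. 7 wave group twice, once with U=0 and once with a huge constant U (all numbers assumed): a constant U simply adds one common rotation rate, 2πU/h, to every constituent, and that common phase factor cancels in ΨΨ*.

```python
import cmath, math

# P(x,t) for the wave group, with an optional constant potential energy U.
m, h = 9.11e-31, 6.626e-34
speeds = [1.0e4 * (0.9 + 0.2 * i / 10) for i in range(11)]

def P(x, t, U=0.0):
    total = 0j
    for v in speeds:
        k = 2 * math.pi * m * v / h
        w = math.pi * m * v**2 / h + 2 * math.pi * U / h   # constant U adds a common rate
        total += cmath.exp(1j * (k * x - w * t))
    return abs(total) ** 2

# The common factor exp(-i 2*pi*U*t/h) has magnitude 1, so it cancels in Psi Psi*:
# P with U = 1e-15 J (enormously larger than the kinetic energies) overlays P with U = 0.
```

This is exactly the "virtual overlay" behavior reported for Fig. 8.1 further on.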
For each of them would presumably have rotated counterclockwise at an infinite rate (looking down the positive x-axis). And it would then have been the clockwise (or right-handed) spiral that correlated to the particle moving in the positive x-direction, etc. Would an infinite, negative ω have affected the magnitude of Ψ = Ψ₋ + Ψ₊? No! The phase difference between Ψ₋ and Ψ₊, at any given x, is constant in time (assuming both spirals spin at a common rate). P still turns out to be stationary in time. How about the wave group discussed in Sect. 7? There again U was implicitly taken to be zero so that, for particles with speed v_i, we had ω_i = πmv_i²/h. (8.6) What if we had assumed that U had some constant, nonzero value? Then we would have found ω_i = πmv_i²/h + 2πU/h. (8.7) Here again we can compute the sum of all the Ψ_i's. Fig. 8.1 plots P vs. x at the same instant used in Fig. 7.6. (The constant potential energy was set to more than 10^{10} times the maximum kinetic energy.) The two plots are virtual overlays. Evidently Ψ is unaffected by a constant U offset. Figure 8.1 P(x, 10h/mv²), U=10^{15} 9. When U Varies with x and t. It might be said that one of the goals in theoretical physics is to come up with differential equations that describe broad classes of phenomena. For example, Newton's Second Law theorizes (in one dimension) that F_x = m (d²x/dt²). (9.1) To the extent we can specify the force acting on a particle, we theoretically know its acceleration and can compute or analytically determine x(t), etc. Maxwell's Equations are also differential equations. By applying some vector algebra to them, their author found that in charge-free space, ∂²E_y/∂x² = μ₀ε₀ (∂²E_y/∂t²) (etc.), (9.2) where μ₀ and ε₀ were constants known from experiment. Maxwell realized that Eq.
9.2 resembles the general equation for waves (propagating along the x-axis): ∂²f/∂x² = (1/v²_phase) ∂²f/∂t². (9.3) v_phase in Eq. 9.3 is the wave's phase velocity, and f is "the thing that undulates." Maxwell also noted that μ₀ε₀ equals 1/c², c being the measured speed of light. Thus Eq. 9.2 was seen to be a particular form of Eq. 9.3, and he concluded that light is an electromagnetic wave. At the time, this insight caused a great stir in the scientific community! Most of "the things that undulated" in classical physics were real physical quantities. Thus a general solution to Eq. 9.3 would be f(x,t) = f₀ cos(kx ± ωt). (9.4)
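The numerical coincidence Maxwell noticed is easy to reproduce. A minimal Python sketch, using the standard SI values of the two constants:

```python
import math

# Comparing Eq. 9.2 with Eq. 9.3 identifies v_phase = 1/sqrt(mu0 * eps0).
mu0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A (pre-2019 defined value)
eps0 = 8.8541878128e-12         # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)   # comes out to about 2.998e8 m/s: the speed of light
```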
Naturally when the wave properties of material particles were discovered, the theorists went to work trying to figure out a general differential equation that governs Ψ. In previous sections we have seen that, when U is constant, Ψ₊(x,t) can conveniently be visualized as a spinning spiral, with equation Ψ₊(x,t) = Ψ₀ [cos(kx − ωt) + i sin(kx − ωt)] = Ψ₀ e^{i(kx − ωt)}. (9.5) By inspection ∂²Ψ₊/∂x² = −k² Ψ₊ (9.6a) and ∂²Ψ₊/∂t² = −ω² Ψ₊. (9.6b) Therefore ∂²Ψ₊/∂x² = (k²/ω²) ∂²Ψ₊/∂t² (9.7) or, since v_phase = ω/k, ∂²Ψ₊/∂x² = (1/v²_phase) ∂²Ψ₊/∂t². (9.8) Eq. 9.8 is the wave equation for photons of light traveling in the positive x-direction. In the case of light, E and p are proportional. For if, as de Broglie suggested, E = hω/2π (9.9a) and p = hk/2π, (9.9b) then (since ω/k = c = v_phase) we have, in the case of photons, E = pc. (9.10) In the case of material particles the kinetic energy (say T) is proportional to the square of the particle's speed: T = mv²/2 = p²/2m. (9.11) If U=0 (so that E=T) then we have ω = hk²/4πm. (9.12) Evidently (in the case of material particles) it must be the first time derivative of Ψ₊ that is proportional to the second space derivative. For example, differentiating Eq. 9.5 produces ∂Ψ₊/∂t = −iω Ψ₊ (9.13a) and ∂²Ψ₊/∂x² = −k² Ψ₊. (9.13b) Eqs. 9.13a and b are consistent with Eq. 9.12 provided (ih/2π) ∂Ψ₊/∂t = −(h²/8π²m) ∂²Ψ₊/∂x². (9.14) Eq. 9.14, first worked out by Schroedinger, is the wave equation for an ensemble of material particles, traveling in the positive x-direction, when U=0. When U is constant and nonzero, then E = p²/2m + U, (9.15) and ω = hk²/4πm + 2πU/h. (9.16) Eq. 9.16 is satisfied provided (ih/2π) ∂Ψ₊/∂t = −(h²/8π²m) ∂²Ψ₊/∂x² + UΨ₊. (9.17) Eq. 9.17 is the Schroedinger wave equation for material particles traveling in the positive x-direction when U is constant. Having worked out Eq. 9.17, Schroedinger made an intuitive leap. When U is a function of x and t, he theorized that the wave equation for Ψ₊ is (ih/2π) ∂Ψ₊/∂t = −(h²/8π²m) ∂²Ψ₊/∂x² + U(x,t)Ψ₊. (9.18)
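Eq. 9.17 can be checked numerically: substitute the spiral of Eq. 9.5, with ω from Eq. 9.16, and verify that the two sides agree. In the Python sketch below the derivatives are replaced by central finite differences, and the mass, speed, and constant U are assumed values.

```python
import cmath, math

# Finite-difference check that Psi+ = exp(i(kx - wt)), with w from Eq. 9.16,
# satisfies Eq. 9.17 for a constant U.
m, h = 9.11e-31, 6.626e-34
v, U = 1.0e4, 2.0e-23

k = 2 * math.pi * m * v / h                                # de Broglie wave number
w = h * k**2 / (4 * math.pi * m) + 2 * math.pi * U / h     # Eq. 9.16

def Psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

def residual(x, t, dx=1e-11, dt=1e-14):
    """Relative mismatch between the two sides of Eq. 9.17 at (x, t)."""
    dPsi_dt = (Psi(x, t + dt) - Psi(x, t - dt)) / (2 * dt)
    d2Psi_dx2 = (Psi(x + dx, t) - 2 * Psi(x, t) + Psi(x - dx, t)) / dx**2
    lhs = (1j * h / (2 * math.pi)) * dPsi_dt
    rhs = -(h**2 / (8 * math.pi**2 * m)) * d2Psi_dx2 + U * Psi(x, t)
    return abs(lhs - rhs) / abs(rhs)

# The residual is limited only by the finite-difference step sizes.
```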
Of course only experiment could validate such an intuitive leap. Indications to date are that Schroedinger was right! Eq. 9.18 put atomic physics on a firm footing for the first time. Granted, given some U(x,t), finding Ψ₊(x,t) solutions can be mathematically challenging. And of course the task is even more difficult when U is a function of all three coordinates in space, as well as time. Often computers must be used to approximate a solution. In previous sections we have considered cases where Ψ₋ and Ψ₊ are both present. In these cases presumably Ψ = Ψ₋ + Ψ₊. An important feature of the Schroedinger equation (Eq. 9.18) is that it is linear. That is, if Ψ₋ and Ψ₊ are solutions, then so is Ψ = Ψ₋ + Ψ₊. Actually Schroedinger wrote his equation for Ψ, and not for Ψ₋ and/or Ψ₊. We have chosen to start with the "fundamental" constituents, Ψ₋ and Ψ₊, and to build from there. It is interesting to visualize Ψ = Ψ₋ + Ψ₊ in the case of an infinite well. Although Ψ₋ and Ψ₊ are visualized as spirals at any given moment, their sum lies all in a plane. In the case where we opted to have Ψ₊(0,0) and Ψ₋(0,0) be real (with Ψ₋(0,0) = −Ψ₊(0,0), Sect. 6), Ψ(x,0) = Ψ₋(x,0) + Ψ₊(x,0) is purely imaginary at all x between 0 and L. Like the spirals for Ψ₋ and Ψ₊, this plane rotates around the x-axis. At time t = π/2ω, Ψ(x,t) will be purely real at all x between 0 and L, etc. Of course as far as P(x,t) is concerned, it is irrelevant whether Ψ₋(x,t) + Ψ₊(x,t) is purely imaginary, purely real, or complex at time t. P(x,t) = Ψ(x,t)Ψ*(x,t) is always real. 10. The Time-Independent Schroedinger Equation. In Sect. 5 we first met a stationary state, where P depends only on x. True enough, Ψ(x,t) was time-dependent, spinning around the x-axis at rate ω. But at any given x, |Ψ| (the magnitude of Ψ) depended only upon x. We encountered the same phenomenon in Sect. 6, where the infinite square well was introduced.
In the lowest energy state, where the whole distance between the well walls (L) is occupied by Ψ₊ and Ψ₋ (with a common wavelength of λ = 2L), the resultant Ψ (i.e. Ψ₊ + Ψ₋) can be visualized as a playground jump rope, being swung around and around at frequency ν. Indeed we could express Ψ(x,t) as Ψ(x,t) = ψ(x) e^{−iωt}, (10.1) where ψ(x) is a solution of the so-called time-independent Schroedinger equation at time t=0. Stationary states typically occur where a particle is bound in a symmetrical potential that depends only on position (U = U(x)). They are of such practical interest that it is worth deriving the time-independent Schroedinger equation and solving it for a simple case. Since the two factors, to the right of the equal sign in Eq. 10.1, depend only on x and t respectively, the Schroedinger equation (Eq. 9.18 applied to Ψ = Ψ₋ + Ψ₊), in a potential that depends only on x, becomes (ih/2π) ψ(x) (−i) ω e^{−iωt} = −(h²/8π²m) (d²ψ/dx²) e^{−iωt} + U(x) ψ(x) e^{−iωt}. (10.2) Or, dividing through by ψ(x) e^{−iωt}, hω/2π = −(h²/8π²m) (d²ψ/dx²) (1/ψ) + U. (10.3)
Since E = hω/2π, we may rephrase Eq. 10.3 as Eψ = −(h²/8π²m) d²ψ/dx² + Uψ. (10.4) Eq. 10.4 is the time-independent Schroedinger equation for ψ(x). The solution generally depends on the form of U(x). Once we have found it, we have Ψ(x,t) = ψ(x) e^{−iωt}. (10.5) And conveniently enough, ΨΨ* = ψe^{−iωt} ψ*e^{iωt} = ψψ*. (10.6) Clearly P is independent of time. Let us solve Eq. 10.4 for ψ in the infinite square well. U is constant everywhere inside the well, and we lose no rigor if we set U equal to zero. In this case h²/8π²m = E/k² and Eq. 10.4 simplifies to d²ψ/dx² = −k²ψ. (10.7) Two solutions to Eq. 10.7 are ψ₊(x) = N^{1/2}e^{ikx}, (10.8a) ψ₋(x) = −N^{1/2}e^{−ikx}, (10.8b) where (since ψ₊(0) + ψ₋(0) must equal zero) we have set ψ₋(0) equal to −N^{1/2}. (N^{1/2} is a normalization constant to be determined.) The resultant ψ is just ψ₊ + ψ₋: ψ(x) = N^{1/2} (e^{ikx} − e^{−ikx}). (10.9) And, as shown in Appendix A, e^{ikx} = cos(kx) + i sin(kx), (10.10a) e^{−ikx} = cos(kx) − i sin(kx). (10.10b) Thus ψ(x) = 2iN^{1/2} sin(kx). (10.11) In this particular case, ψ(x) is purely imaginary at all x in the range 0<x<L. For the probability density we have P = ψψ* = 4N sin²(kx). (10.12) Normalization requires that P dx integrate to unity in the range 0 to L (or in the range 0 to π/k). That is, normalization requires that (4N)(π/2k) = 1, (10.13) and thus N = 1/λ. (10.14) Substituting back in Eq. 10.11, and bearing in mind that λ = 2L, ψ(x) = 2i (1/2L)^{1/2} sin(kx) (10.15) = i (2/L)^{1/2} sin(kx). Eq. 10.15 is the solution to the time-independent Schroedinger equation (Eq. 10.4) in a square well of width L, where U=0. The magnitude of ψ(x) is of course |ψ(x)| = (2/L)^{1/2} sin(kx), (10.16) and the integral of |ψ(x)|² dx, from zero to L, is unity. Knowing ψ(x), we may immediately write down the time-dependent wave function: Ψ(x,t) = ψ(x) e^{−iωt} (10.17) = i (2/L)^{1/2} sin(kx) e^{−iωt}.
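The normalization can be confirmed numerically. A short Python sketch, with an assumed well width L and the ground state wave number k = π/L:

```python
import math

# Check that |psi(x)|^2 = (2/L) sin^2(kx), from Eq. 10.15, integrates to one
# across the well (midpoint rule).
L = 1.0e-9          # well width, m (assumed)
k = math.pi / L     # ground state: lambda = 2L

def P(x):
    return (2.0 / L) * math.sin(k * x) ** 2     # psi psi*, Eq. 10.12 with N = 1/(2L)

n = 10000
dx = L / n
total = sum(P((i + 0.5) * dx) for i in range(n)) * dx
# total comes out to 1 to within quadrature and roundoff error.
```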
Ψ(x,t) has the same magnitude as ψ(x), and thus ΨΨ* dx also integrates to unity. The Real and Imaginary parts of Ψ(x,t) can be represented as surfaces above the xt plane. The time-dependent Schroedinger equation is a partial differential equation, whereas the time-independent Schroedinger equation is an ordinary differential equation (containing only whole derivatives). In general, ordinary differential equations are easier to solve than partial differential equations. Thus the time-independent Schroedinger equation is of great utility in cases where particles are bound in symmetrical potentials. 11. A Finite Potential Barrier. Fig. 11.1 depicts a stream of monoenergetic electrons, moving with a common speed v₁ in the positive x-direction and encountering a finite potential barrier at x=0. Since mv₁²/2 is less than U₂, all of these electrons would classically have their momentum reversed at x=0. No electron would penetrate beyond x=0, and Zone 2 is often called "the (classically) forbidden zone." Figure 11.1 Ψ₊ Waves Incident Upon a Finite Barrier Experiment indicates that electrons (and other microscopic particles) do not behave classically. Some of them penetrate into the forbidden zone. The number found in x to x + dx (where x > 0) drops off with increasing x. It is not unreasonable to suppose that some of the electrons are reflected between 0 and dx, and the rest keep penetrating beyond dx, with some of these being reflected between dx and 2(dx), etc. Let us assume that the time-independent Schroedinger equation (Eq. 10.4) applies in both Zone 1 and Zone 2. As with the infinite barrier (Sect. 5), in Zone 1 we have ψ₊₍₁₎(x) = ψ₊₍₁₎(0) e^{ikx} (11.1) and ψ₋₍₁₎(x) = ψ₋₍₁₎(0) e^{−ikx}. (11.2) Strictly speaking the magnitudes of ψ₊₍₁₎(0) and ψ₋₍₁₎(0) are infinitesimal. But we shall scale them up to finite value ψ₀ for practical purposes.
In the case of the infinite barrier, we had ψ₊₍₁₎(0) = −ψ₋₍₁₎(0), and thus ψ₍₁₎(0) and P₍₁₎(0) were zero. However, in the case of a finite barrier P₍₂₎(x) is greater than zero, and it attenuates with increasing x. Therefore P(0) and |ψ(0)| must be greater than zero. In order for this to be so, ψ₊₍₁₎(0) and ψ₋₍₁₎(0) cannot be completely out of phase. Fig. 11.2a shows trial values for ψ₊₍₁₎(0) and ψ₋₍₁₎(0). Note that ψ₍₁₎(0) is completely imaginary for this choice: ψ₍₁₎(0) = −2i ψ₀ sin(θ), (11.3) where ψ₀ = |ψ₊₍₁₎(0)| = |ψ₋₍₁₎(0)|. Figure 11.2a Suggested ψ₋₍₁₎(0) and ψ₊₍₁₎(0), Finite Well Now it is clear in Fig. 11.2a that ψ₋₍₁₎(0) = −ψ₊₍₁₎(0) e^{2iθ}. (11.4) Or, since ψ₊₍₁₎(0) = ψ₀ e^{−iθ}, (11.5) it follows that ψ₋₍₁₎(0) = −ψ₀ e^{iθ}. (11.6) Substituting in Eq. 11.2, and adding the result to Eq. 11.1, we obtain ψ₍₁₎(x) = ψ₀ (e^{i(k₍₁₎x − θ)} − e^{−i(k₍₁₎x − θ)}) (11.7) = 2i ψ₀ sin(k₍₁₎x − θ). Hence P₍₁₎(x) = 4 ψ₀² sin²(k₍₁₎x − θ), (11.8a) dP₍₁₎(x)/dx = 4 k₍₁₎ ψ₀² sin[2(k₍₁₎x − θ)]. (11.8b) Let us now turn our attention to Zone 2. Since no particles are turned back right at x=0, we require that ψ₊₍₂₎(0) ψ*₊₍₂₎(0) = ψ₊₍₁₎(0) ψ*₊₍₁₎(0) (11.9a) and ψ₋₍₂₎(0) ψ*₋₍₂₎(0) = ψ₋₍₁₎(0) ψ*₋₍₁₎(0). (11.9b) The correct way to accomplish this (i.e. the way that doesn't lead to mathematical impasses) is to specify that ψ₊₍₂₎(0) = −ψ₊₍₁₎(0) (11.10a) and ψ₋₍₂₎(0) = −ψ₋₍₁₎(0). (11.10b) In other words, ψ₊ and ψ₋ undergo complete phase reversals in going from Zone 1 to Zone 2. Fig. 11.2b repeats Fig. 11.2a, with ψ₊₍₂₎(0) and ψ₋₍₂₎(0) included. Figure 11.2b ψ₊(0) and ψ₋(0) in Zones 1 and 2 Now since in general mv²/2 = E−U, we see in Zone 2 that mv₍₂₎²/2 is negative: mv₍₂₎²/2 = E−U < 0. (11.11) Evidently v₍₂₎ is imaginary: v₍₂₎ = ±i|v₍₂₎|. (11.12) The momentum and wave number are thus also imaginary in Zone 2: mv₍₂₎ = ±im|v₍₂₎|, (11.13) k₍₂₎ = ±i 2πm|v₍₂₎|/h. (11.14) Substituting in Eqs. 11.1 and 11.2, we have in Zone 2: ψ₊₍₂₎(x) = ψ₊₍₂₎(0) e^{ix[i2πm|v|/h]}, (11.15a) ψ₋₍₂₎(x) = ψ₋₍₂₎(0) e^{−ix[−i2πm|v|/h]}, (11.15b) where the exponent signs have been chosen so that both ψ₊₍₂₎(x) and ψ₋₍₂₎(x) attenuate in Zone 2. It is clear in Eqs. 11.15a and b that ψ₊₍₂₎(x) and ψ₋₍₂₎(x) do not spiral in Zone 2; they each lie all in a plane: ψ₊₍₂₎(x) = −ψ₀ e^{−iθ} e^{−αx}, (11.16a) ψ₋₍₂₎(x) = ψ₀ e^{iθ} e^{−αx}, (11.16b) where α = 2πm|v₍₂₎|/h (11.16c) = 2πm [2(U−E)/m]^{1/2}/h. Thus ψ₍₂₎(x) = 2i ψ₀ sin(θ) e^{−αx}, (11.17a) P₍₂₎(x) = 4 ψ₀² sin²(θ) e^{−2αx}, (11.17b) dP₍₂₎(x)/dx = −8α ψ₀² sin²(θ) e^{−2αx}. (11.17c) Eqs. 11.8a and 11.17b show that P₍₂₎(0) = P₍₁₎(0). (11.18) And requiring that dP₍₁₎(0)/dx (see Eq. 11.8b) equal dP₍₂₎(0)/dx (see Eq. 11.17c) results in sin(2θ)/sin²(θ) = 2 [(U−E)/E]^{1/2}, (11.19a) or tan(θ) = [E/(U−E)]^{1/2} (11.19b) and θ = tan⁻¹{[E/(U−E)]^{1/2}}. (11.19c) (Note that θ=0 when U is infinite.) The relative size of |ψ₍₁₎(0)| = |ψ₍₂₎(0)| is seen in Fig. 11.2b to be |ψ₍₁ or ₂₎(0)| = 2 ψ₀ sin(θ). (11.20) But ψ₍₁₎(0) and ψ₍₂₎(0) are π out of phase. (i.e., like ψ₊₍₁₎(0) and ψ₋₍₁₎(0), the resultant ψ undergoes a full phase reversal on going from Zone 1 to Zone 2.) Rigorously speaking |ψ₍₁ or ₂₎(0)| and ψ₀ are infinitesimal in the present case. The physics lies in how θ depends on (U−E).
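The attenuation constant α of Eq. 11.16c sets the scale of the penetration. A Python sketch of the comparison between microscopic and macroscopic particles: 1/(2α) is the distance over which P falls by a factor of e (a conventional measure; the figures further on quote a somewhat larger penetration distance based on a different criterion). Both particles face a barrier of height U = 2E.

```python
import math

# 1/e attenuation length of P(x) in the forbidden zone, from Eq. 11.16c.
h = 6.626e-34

def depth(m, v):
    E = 0.5 * m * v**2
    U = 2 * E                                                # barrier height 2E
    alpha = 2 * math.pi * math.sqrt(2 * m * (U - E)) / h     # Eq. 11.16c
    return 1 / (2 * alpha)                                   # P ~ exp(-2*alpha*x)

d_electron = depth(9.11e-31, 1.0e4)   # on the order of 1e-8 m: comparable to an atom
d_macro = depth(1.0e-5, 1.0e4)        # on the order of 1e-34 m: effectively classical
```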
Let us summarize: θ = tan⁻¹{[E/(U−E)]^{1/2}}, (11.21a) ψ₍₁₎(x) = 2i ψ₀ sin(k₍₁₎x − θ), (11.21b) k₍₁₎ = 2π(2mE)^{1/2}/h, (11.21c) ψ₍₂₎(x) = 2i ψ₀ sin(θ) e^{−αx}, (11.21d) α = 2π[2m(U−E)]^{1/2}/h. (11.21e) Fig. 11.3a shows the case for electrons (mass = 9.11E−31 kg), with v₍₁₎ = 10⁴ m/sec (or E = 4.555E−23 joules) encountering a barrier of height 2E. Fig. 11.3b shows the case for 10⁻⁵ kg particles, again with v₍₁₎ = 10⁴ m/sec (or E = 500 joules) encountering a barrier of height 2E. Note the quantum behavior (i.e. the 3E−8 meter penetration into the forbidden zone) of the microscopic particles in Fig. 11.3a, and the essentially classic behavior (i.e. the 3E−33 meter penetration into the forbidden zone) of the macroscopic particles in Fig. 11.3b. To put these two numbers into perspective, an atom's diameter is approximately 10⁻¹⁰ meters, and an atomic nuclear diameter is approximately 10⁻¹⁵ meters. Figure 11.3a Microscopic Particles Penetrating a Finite Barrier Figure 11.3b Macroscopic Particles Behaving Classically 12. The Finite Square Well. Fig. 12.1 shows a finite square well. Since E = mv₍₁₎²/2 < U, a particle in the well is bound. Classically its momentum would be reversed at x = −L and at x = 0; it would never penetrate into the "forbidden" zones. However, experiment indicates, in the case of microscopic particles, that there is a finite probability of finding the particle at x > 0 (and at x < −L). Figure 12.1
A Bound Particle in a Finite Square Well As in the finite barrier case (Sect. 11), P₍₂₎(x) is maximum at x = 0, and falls off with increasing x. Thus here again P₍₁₎(0) and |ψ₍₁₎(0)| must be greater than zero. θ (see Fig. 12.2) depends only on the relative difference between E and U: θ = tan⁻¹{[E/(U−E)]^{1/2}}. (12.1) Figure 12.2 ψ₊₍₁ and ₂₎, Finite Square Well In the finite barrier case E could assume any value. However, in the square well case E is limited to discrete values. In determining the ground state energy for a given relative difference of (U−E)/E, it will be instructive to reconsider the infinite square well. In that case, θ (see Eq. 12.1 and Fig. 12.2) is zero. Let us start out with ψ₊(−L) and translate to x=0. We shall suppose that ψ₊(−L) is purely real and positive. As we move along from x = −L to x = 0, the "tip" of ψ₊ moves along a left-handed spiral. At x = 0, ψ₊(0) has a full phase reversal to ψ₋(0) and travels back to x = −L along a right-handed spiral. At x = −L, ψ₋(−L) has another full phase reversal, back to ψ₊(−L), and the process repeats. The longest wavelength (and lowest energy) that fits this scenario satisfies: λ/2 = L. (12.2) Fig. 12.3 shows the situation at x = −L and at x = 0 (looking down from "above"). Figure 12.3 ψ₊ at Infinite Square Well Walls Now in Fig. 12.2 we have the same sequence at the finite square well walls. The difference is that ψ₊ does not rotate a full π radians in going from x = −L to x = 0 (or back from x = 0 to x = −L). In the finite well case, ψ₊ rotates only (π − 2θ) radians. Thus L corresponds to only a partial half-wavelength: L = [(π − 2θ)/π] (λ/2). (12.3) Or, λ = [2π/(π − 2θ)] L. (12.4) And since λ = 2π/k, we find (in the ground state) k = (π − 2θ)/L. (12.5) Furthermore, since E = h²k²/(8π²m), the finite well ground state energy is: E = h²(π − 2θ)²/(8π²mL²). (12.6) Fig.
12.4a plots P(x) for a ground state electron over the range −2L < x < L, using ψ₀ = 1 and L = 1E−10. Numerically integrating P(x) over this range and normalizing indicates that the normalized value for ψ₀ is ψ₀ = 1.987E7. (12.7) Fig. 12.4b plots the normalized P(x). Figure 12.4a P(x), ψ₀ Set to Unity Figure 12.4b Normalized P(x) 13. Tunneling. Fig. 13.1 depicts a stream of monoenergetic electrons, of energy E = mv₁²/2 < U₂, incident upon a potential barrier of finite thickness. Figure 13.1 A Limited-Extent Barrier Since E<U, all of the particles would classically be reflected at x=0. In the case of microscopic particles, however, significant penetration beyond x=0 may occur. As we saw in Sect. 11, |ψ(x)| attenuates exponentially in Zone 2. But it may still be significantly greater than zero at x=L. Since P(x) is presumably continuous in space, we can expect to (and do) find particles beyond x=L. This decidedly unclassical behavior is known as tunneling. It is of particular interest in areas such as nuclear fusion, where it is desirable to get relatively low energy particles through high potential energy barriers. It is also of interest in getting relatively low energy particles out of potential wells whose walls are of finite thickness. The nuclear force can be modeled as such a well, and in 1928 Gamow demonstrated how tunneling accounts for the tremendous range of half-lives exhibited by different radioactive nuclei. In Zone 3, ψ₊₍₃₎(x) is the same spiral as in Zone 1 (except for a lesser magnitude, due to the lesser density of particles moving to the right in Zone 3). There isn't any ψ₋₍₃₎(x) spiral in Zone 3. In Zone 1 we should use two values for ψ₀ (say ψ₀₊ and ψ₀₋) since more particles are moving to the right than to the left. A suggested strategy for solving this problem is to solve for ψ₋₍₂₎(L) using ψ₀ for openers. We can then set ψ₀₋ equal to (ψ₀ − |ψ₋₍₂₎(L)|). (ψ₀₊ remains the same as ψ₀.)
ψ₋₍₁₎(x) is then computed in the usual way, but using ψ₀₋ in place of ψ₀. In Zone 2, ψ₊₍₂₎(x) and ψ₋₍₂₎(x) are computed in the usual way (using ψ₀ in each case), but then ψ₋₍₂₎(L) is subtracted from every value of ψ₋₍₂₎(x) (thereby driving ψ₋₍₂₎(L) to zero, among other things). In Zone 3 we would use ψ₀₍₃₎ = |ψ₋₍₂₎(L)| to compute ψ₊₍₃₎(x), and of course ψ₋₍₃₎(x) would be set to zero. Fig. 13.2 plots the computed P(x) in all three zones. (Electrons, with speeds of 1E4 m/sec in Zones 1 and 3, were assumed and ψ₀ = |ψ₊₍₁₎(0)| was set to unity.) Note how, in Zone 1, P(x) never goes all the way to zero … a consequence of the fact that |ψ₋₍₁₎(x)| < |ψ₊₍₁₎(x)| at all x<0. In Zone 2, P(x) attenuates in the usual way. And in Zone 3, P(x) is constant (there being no ψ₋ wave to interfere with ψ₊₍₃₎(x)). Figure 13.2 P(x), Tunneling 14. U: Ultimately a Step Function? The classical picture of force … as a quantity that varies continuously in space and time … is inconsistent with the idea that a particle interacts in finite quanta with its environment at points in spacetime. On a fine enough scale the evidence indicates that particles interact with their environment impulsively. In effect, particles appear to exchange energy and momentum with their environments instantaneously, and in finite sized quanta. To the extent F_x = −dU/dx, U must ultimately be a step function. The steps may be very small, and macroscopically U may appear to be a smooth function of x. But on a fine enough scale it can be argued that U is stepped. Fig. 14.1 illustrates a possible situation at the center of a binding well. Figure 14.1 A Stepped U at the Center of a Well A good example of such a stepped U is the potential of a spring, U = kx²/2 (where k is the spring constant, and x is the amount the spring is stretched or compressed). The energy levels in this potential differ by hν, and the lowest (ground state) energy level is hν/2.
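A cruder estimate than the zone-by-zone construction of Sect. 13 is worth noting: for a wide, high barrier the transmitted fraction is commonly approximated by T ≈ e^{−2αL}, with α from Eq. 11.21e. The Python sketch below is this standard approximation, not the text's full procedure; the barrier thicknesses are assumed. It still shows the essential point: T falls exponentially with thickness.

```python
import math

# Rough transmission estimate T ~ exp(-2*alpha*L) for a barrier of thickness L.
h = 6.626e-34

def transmission_estimate(m, E, U, L):
    alpha = 2 * math.pi * math.sqrt(2 * m * (U - E)) / h   # Eq. 11.21e
    return math.exp(-2 * alpha * L)

m_e = 9.11e-31
E = 0.5 * m_e * (1.0e4) ** 2        # electron at 1E4 m/sec, as in Fig. 13.2
T_thin = transmission_estimate(m_e, E, 2 * E, 1.0e-9)    # 1 nm barrier (assumed)
T_thick = transmission_estimate(m_e, E, 2 * E, 1.0e-8)   # 10 nm barrier (assumed)
# Tenfold thickness cuts the transmitted fraction by far more than a factor of ten.
```

This exponential sensitivity to barrier width is exactly what lets tunneling explain the enormous spread of radioactive half-lives Gamow accounted for.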
In the ground state, ψ(x) would be mostly confined to Zone 1, but would attenuate exponentially in Zones 2, 3, etc. In Zone 1 P(x) would be the middle part of a half cos²(kx) curve. As we saw in Sect. 12, k will be somewhat smaller than in an infinite square well. In the first excited state, P(x) would be part of a full sin²(kx) curve, with a zero at the well center. The curve would attenuate exponentially in Zones 3, 4, etc. Similarly for higher energy states. Solving for ψ₊ and ψ₋ in a stepped potential may pave the way for easier solutions to the Schroedinger equation, given arbitrary U(x)'s. It is an approach that warrants further study, perhaps in a thesis. A general solution will not be attempted here. 15. "Space" Quantization. Many elementary particles (e.g. electrons) have magnetic dipole moments. They behave like tiny magnets. Nominally their dipole moments presumably point every which way in space. When a magnet is held at some angle in an external magnetic field, it may have potential energy, depending on whether it is aligned with the external field or not. Fig. 15.1 illustrates. The potential energy is zero when θ=0, and it is at a maximum when θ=π. In other words, work would have to be done to change θ from an initial value of zero to a final value of π radians. Figure 15.1 A Magnet with Potential Energy If the external magnetic field in Fig. 15.1 is nonuniform … for example, if it grows stronger with increasing y … then the depicted magnet will also experience a net force up or down. The size and direction of this force (like the torque) depends on θ. Thus if the magnet is coasting into the page, with θ held constant in Fig. 15.1, it will be deflected up. A stream of electrons could create an image on a collector plate somewhere behind the page. Assuming the electrons' dipole moments point every which way, we would expect to obtain a continuous smear on the collector plate.
But here is an interesting possibility: energy quantization seems to be the rule in quantum phenomena. What if each tiny electron-magnet can assume only certain discrete potential energies? Indeed, what if it can assume only two potential energies: zero for θ=0 and a nonzero energy for θ=π? This being the case, we should get two spots on the collector plate, rather than a continuous smear. Now the orientation of the external field is arbitrary. Thus upon entering the external field, the little electron magnets presumably would "snap" exclusively to values of θ=0 or θ=π. Having done so, those that snapped "up" would experience one common deflecting force, and those that snapped "down" would experience another. Of course if more than two values for the potential energy are allowed … if θ can "snap" to more than two values … then three or more spots should appear on the screen. And if the potential energy is not quantized (no snapping), we should get the (classically expected) continuous smear on the screen. In 1922 Stern and Gerlach put the matter to the test (using a beam of silver atoms, each carrying a single unpaired electron) and essentially obtained two spots. Evidently the electrons' potential energies can take on only two values in an external magnetic field: zero or a maximum. Here, then, is another kind of "resonance." The quantization of an electron's dipole moment to just two directions, aligned with or against an external field, is referred to as "space quantization." Some find the term a bit confusing. What we really have is another form of energy quantization. The "snapping" phenomenon is an excellent example of one of the hard realities we must adapt to when studying the microscopic world. We are parts of the world we observe, and we necessarily interact with and perturb what we study whenever we observe it. Logically it makes sense that, a priori, the magnetic moments of electrons point in all directions. But we can only infer that this is the case on logical grounds.
Whenever we try to find out how the moments point (by an experiment like Stern-Gerlach's), we find only two directions. In the very process of observing which way the moments point, we force them into two diametrically opposed directions! 16. Quantum States. Although quantum theory is historically rooted in Planck's postulate that a system can assume only certain discrete energies, it quickly became clear that energy is not the only quantized quantity. Bohr noted that angular momentum is also limited to discrete values. Gradually it became clear that a given system would always be found in one of a set of possible quantum states, with each state defined by a set of "quantum numbers" (for energy, angular momentum, etc.). In 1925 Pauli proposed that a two-valued quantum number … soon identified with "spin" … must be added to the quantities that delineate a quantum state. In the case of the ubiquitous electron, the spin can assume just two values. The proposed "spin" not only provided a theoretical basis for the space quantization demonstrated by Stern and Gerlach, but it also provided insight into the fine structure of atomic spectra and the arrangement of elements in the periodic table. One of the fascinating (and useful) characteristics of photons is that they like to get into the same quantum state (same direction of propagation, etc.). For example, given a collection of excited atoms emitting photons, the incidence of a photon on an excited atom can induce the emission of a second photon in the same state. Einstein suggested the possibility of light amplification by the stimulated emission of radiation (whence the acronym "laser"). Multiple electrons, on the other hand, avoid being in the same quantum state … a phenomenon called the Pauli Exclusion Principle. It is why the electrons in a Rutherford-like atom occupy "shells," with 2 electrons in the innermost shell, 8 in the next shell, etc. Other calculations linked the absolute temperatures of systems to the numbers of states represented in an ensemble.
At absolute zero all systems (e.g. atoms) are in the ground state, with (among other things) a single, lowest possible resonant energy. As the temperature edges upward, "higher" states appear, and the fraction of the total number of systems in any given state is predicted by quantum theory. Were classical rules to dictate behaviors on a microscopic scale, the world as we know it could not exist. Atoms would collapse into points. Thanks to Heisenberg's Uncertainty Principle, Pauli's Exclusion Principle, and other ideas too numerous to get into here, atoms steadfastly hold their own, even when subjected to enormous pressures. It is a two-way street. It takes a lot to tear an atom apart. But it is also literally impossible to squeeze one down to a point. Atoms have their own tiny (but finite) volumes of space to which they claim exclusive ownership. 17. Beyond Y. One of the more useful tools in the electrical engineer's toolbox is the Fourier transform. Let us suppose that we live in an area served by 15 broadcasting stations, but not all are "on the air" in the wee hours of the morning. Between 2 and 3 AM we record the sum total electromagnetic radiant power incident upon an antenna. It appears to be a horrible mishmash of white noise. Traced on an oscilloscope, the signal appears to be a completely random line of dancing squiggles. We don't have a clue which of the 15 stations the superimposed signals are emanating from. Now given signal data spanning an adequate period of time (engineers call this the "time domain"), we can subject it to a Fourier transform and work up a power spectrum in the frequency domain. That is, we can work up a plot of power vs. frequency. When we do this, what appeared to be random noise (when plotted against time) is transformed into a sequence of sharp, Gaussian-like curves when plotted against frequency.
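This time-to-frequency unscrambling is easy to demonstrate with a discrete Fourier transform on a toy signal. The station count, frequencies, and amplitudes below are invented purely for illustration:

```python
import cmath, math

# Two hypothetical "stations" broadcast at different frequencies; their
# superposition looks like squiggles in the time domain.
N = 512
f1, f2 = 37, 83          # cycles per record (illustrative bin numbers)
signal = [math.sin(2*math.pi*f1*n/N) + 0.5*math.sin(2*math.pi*f2*n/N)
          for n in range(N)]

def dft(x):
    """Plain discrete Fourier transform (O(N^2), fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N))
            for k in range(N)]

power = [abs(c)**2 for c in dft(signal)]

# The two largest peaks in the first half of the spectrum sit exactly
# at the station frequencies.
peaks = sorted(range(N//2), key=lambda k: power[k], reverse=True)[:2]
assert sorted(peaks) == [f1, f2]
```

Virtually all of the power lands in two sharp spectral peaks, one per station, even though the time-domain record looks like noise.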
By seeing which (practically discrete) frequencies the power is carried on, we immediately determine that stations A, D, F, L, P and Z (each with its own assigned frequency) are on the air. We cannot directly take a snapshot of the electromagnetic radiation along a given line (or ray, or axis, or …) in space. But since radiation of all frequencies propagates with the one speed c, we can infer what such a snapshot would have looked like by recording the signal at some fixed location using a directional antenna. Indeed, having inferred what the plot of radiated power vs. x would have looked like, we can run this data through a Fourier transform to get a plot of power vs. wave number (k). In other words, we transform from the space domain into the wave number domain. Again we get a nice sequence of Gaussian-like curves, showing which wave numbers (or, if you prefer, which wavelengths) the power was being transmitted on. An interesting ramification of the Fourier transform is that the more data we record in the time domain, the more precise our knowledge of where the power is concentrated in the frequency domain becomes. Ideally, if we could record a signal for all time and subject the data to a Fourier transform, we'd get a set of spikes in the frequency domain. Of course the whole thing works both ways. If we record a signal over all frequencies, we can transform it into power vs. time, etc. What has all this got to do with quantum physics? Just this: according to de Broglie, particle energy is proportional to Y's frequency, and particle momentum is proportional to Y's wave number. Suppose we apply a Fourier transform to a Y wave over an adequately long period of time. Then we get a plot of probability density against frequency (n), and hence against energy (hn). We can tell which energies are the major contributors. Or, alternatively, we could apply a Fourier transform to Y(x) over an adequate range of x.
When we do so we get a plot in the wave number (k), and hence in the momentum (hk/2p), domain. That is why Heisenberg says that time and energy are complementary variables (and momentum and position are complementary variables). And that is the basis for his Uncertainty Principle: the more accurately we wish to determine the energy (or energies) of an ensemble's (or grand ensemble's) particles, the longer (in time) we must know Y. Or, the more accurately we wish to determine the momentum of an ensemble's particles, the longer (in space) we must know Y. And vice versa. We shall not get into the mathematical details of Fourier transforms here. Suffice it to say that there is a complementary function for y(x). Its squared magnitude gives the probability density for the x-component of momentum, and the function is usually denoted as f. The Fourier transform for going from y(x) to f(p_{x}) is f(p_{x}) = (1/h)^{1/2} ∫_{-∞}^{+∞} y(x) e^{-i(2pp_{x}/h)x} dx. (17.1a) Similarly, y(x) = (1/h)^{1/2} ∫_{-∞}^{+∞} f(p_{x}) e^{i(2pp_{x}/h)x} dp_{x}. (17.1b) (Note that the quantity 2pp_{x}/h in each exponent is just the de Broglie wave number k of Eq. A.5a.) In the integral of Eq. 17.1a, p_{x} is a constant, and the integration ranges over all x. In Eq. 17.1b, x is a constant, and the integration ranges over all p_{x}. Eqs. 17.1a and 17.1b give precise values for f(p_{x}) at a particular p_{x}, and for y(x) at a particular x. (We would have to repeatedly apply the transforms in order to work up plots of f vs. p_{x} and/or y vs. x.) In practice we have knowledge of y and/or f over only limited ranges of x or p_{x}, and our integrals provide less precise values for f and y. What is the "quantum surprise" in Eqs. 17.1a and 17.1b? These transforms tell us that f at any particular p_{x} is a function of y over all x! And y at a particular x is a function of f over all p_{x}. The probability densities for momentum and position are intimately connected in a manner unsuspected during the classical era. Similar probability densities in the time and energy domains can be identified. Thus the position probabilities deriving from y … the focus of most of this book … are only the tip of the iceberg!
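The machinery of Eq. 17.1a, and the Uncertainty Principle it implies, can be sketched numerically. This is not from the text: it transforms a Gaussian y(x) to f(p_{x}) by direct integration, in units where h/2p = 1, with an arbitrary spread sigma and arbitrary grid choices, then checks that the product of the position and momentum spreads is the Heisenberg minimum of 1/2 (in these units):

```python
import cmath, math

# A normalized Gaussian y(x) with position spread sigma, sampled on a grid.
sigma = 1.3
xs = [-10 + 20*i/800 for i in range(801)]
dx = xs[1] - xs[0]
psi = [(2*math.pi*sigma**2)**-0.25 * math.exp(-x*x/(4*sigma**2)) for x in xs]

def phi(p):
    """f(p) = (2*pi)^(-1/2) * integral of y(x) e^(-ipx) dx, with hbar = 1."""
    return sum(ps * cmath.exp(-1j*p*x) for ps, x in zip(psi, xs)) \
        * dx / math.sqrt(2*math.pi)

# Momentum-space probability density |f(p)|^2, and its spread.
ps_grid = [-4 + 8*i/200 for i in range(201)]
dp = ps_grid[1] - ps_grid[0]
dens = [abs(phi(p))**2 for p in ps_grid]
norm = sum(dens) * dp
dp_spread = math.sqrt(sum(p*p*d for p, d in zip(ps_grid, dens)) * dp / norm)

# For a Gaussian, (spread in x)(spread in p) equals the minimum, 1/2.
assert abs(sigma * dp_spread - 0.5) < 0.01
```

Making sigma larger (a wider y) makes dp_spread proportionally smaller, and vice versa: knowing Y over more of space sharpens the momentum distribution, exactly as described above.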
The student contemplating a first course in quantum mechanics may (or may not) encounter these relationships between ensemble positions and momenta, times and energies, etc. But anyone contemplating a career in physics must ultimately grapple with and master the diverse probability distributions inherent in the theory. Quantum physics is one of the most mathematically intense of all the paradigms mankind has invented to describe the physical world. Obtaining a background in calculus, complex algebra, and Fourier transforms (at a minimum) is well advised before plunging into the quantum swim. 18. Concluding Remarks. We have barely scratched the surface of quantum mechanics. Yet hopefully the prospective student who has made it this far will have obtained a toehold, at least gaining some appreciation of what Y is and is not. One of the most difficult tricks for the new student to master is the abandonment of classical paradigms. Certain classical ideas, such as the idea that any object is at a definite point in space at any given moment, seem to be rooted in our very genes. For millions of years our forebears viewed a macroscopic world in which causality seemed to be axiomatic. Even today we cannot say that this is not ultimately the case. What we can say is that, as part of the physical world we study, we must use statistics in order to wring any kind of predictability out of the very small. We must observe ensembles of "identically prepared" systems in order to learn anything meaningful about them. For in the very act of observing any one system, we become "part of the problem," perturbing the system in ways that can only be estimated. And to the extent we interact in instantaneous quantum exchanges, we often get only one brief peek at any given system before effectively eliminating it from the ensemble. Many of the definitions opted for by Schroedinger and other early pioneers may seem somewhat arbitrary.
For example, there is an obvious alternative to using a particle's kinetic energy (plus its potential energy) to calculate de Broglie's w, and that would be to use its total energy, mc^{2}/(1 - v^{2}/c^{2})^{1/2}. It is not difficult to show that using the total energy yields a phase velocity of w/k = E/p = [mc^{2}/(1 - v^{2}/c^{2})^{1/2}] / [mv/(1 - v^{2}/c^{2})^{1/2}] = c^{2}/v … a possibility of obvious interest in communications (although it should be noted that a phase velocity greater than c carries neither matter nor information). To date no one has provided conclusive proof that w = pmv^{2}/h, as de Broglie postulated. Perhaps a repeat of the double slit experiment, in which the time lapse between the covering of one of the slits and the loss of the interference pattern at the detectors is carefully measured, will settle the matter. The visualization of Y as a sum of spinning spirals in space hopefully helps clarify what has for too long been obscured by mathematics. No less a math prodigy than Dirac felt that he didn't really understand what an equation meant until he could picture things in his mind's eye. Our brains have evolved over millions of years to evaluate the world in terms of objects in space, behaving in certain ways with the passage of time. And although Y is not real, we can perhaps lend it a realistic flair by imagining a set of complex planes layered orthogonally along an axis, with y_{+} corkscrewing through them and spinning. Of course the fascinating aspect of Y_{+} is that different spirals/waves of this decidedly unreal quantity interfere in ways the beginner often finds counterintuitive in the real, observable world. Many of the important concepts of quantum theory have not been discussed herein … parity, selection rules, etc. And a good deal of new ground has been broken beyond Schroedinger's wave mechanics. Indeed, although wave mechanics may have been the first word, it is certainly not turning out to be the last. The whole study of nature on very small scales (and also on very large scales) is a rapidly expanding frontier. As Feynman noted, there is always "a lot of dust in the air" on such cutting edges.
Given the implicit uncertainties of the quantum world, this may turn out always to be the case. That is, our view of the very small may always have to be largely inferred, in the mind's eye. We cannot watch electrons (let alone photons) traveling through space without significantly perturbing them. Inferring how they behave … what rules they follow when we aren't observing them … appears ultimately to be the name of the game. The primary goal of this little book has been to provide a few intuitive hooks on which to hang such inferences. If it eases the beginning student's passage from the intuitively satisfying world of classical theory into the sometimes counterintuitive and mathematically abstract world of quantum theory, then it will have met that goal.
***Appendix A*** Useful Math for Beginning Students This appendix provides some mathematical identities that may prove useful to students planning to take a course in quantum theory. In Sect. 3 the idea was first introduced that a simple wave function, Y(x,t), can be visualized as a right- or left-handed spiral, spinning clockwise around the x-axis. The equations for these two spirals are Y_{+} = Y_{o} cos(kx - wt) + iY_{o} sin(kx - wt), (A.1a) and Y_{-} = Y_{o} cos(kx + wt) - iY_{o} sin(kx + wt). (A.1b) The cos (or cosine) and sin (or sine) functions are trigonometric functions. "Trigon" is Greek for "three-sided figure," and trigonometry is the study of triangles. A particularly useful triangle is the right triangle, which has a right angle for one of its angles. Fig. A.1 illustrates. Figure A.1
A Right Triangle In Fig. A.1 the side "c" … the side opposite the right angle … is called the hypotenuse. The angle q (Greek letter "theta") is one of the other two angles (both necessarily smaller than the right angle). The sine of q is defined by sin(q) = a/c. (A.2a) Similarly, cos(q) = b/c. (A.2b) If angles q and -q are subtended from the x-axis, then it is clear that sin(-q) = -sin(q), (A.2c) cos(-q) = cos(q). (A.2d) The Greek mathematician Pythagoras deduced that, for any right triangle, c^{2} = a^{2} + b^{2}. (A.3) Eq. A.3 is called the Pythagorean Theorem. A familiar unit for measuring angles is the degree. In this unit, the right angle in Fig. A.1 measures 90 degrees (or 90^{o}). The angular unit most often used in physics is the radian. There are 2p radians in any full circle. That is, 2p radians = 360 degrees. (A.4) Thus the right angle in Fig. A.1 measures p/2 radians. In Eqs. A.1a and A.1b, k and w are related to particle momentum and energy by the de Broglie relations k = 2p / l = 2pp / h, (A.5a) w = 2pn = 2pE / h. (A.5b) In Eq. A.1a, cos(kx - wt) is called the Real part of Y_{+}, and sin(kx - wt) is the Imaginary part. (Similarly for the cosine and sine functions in Eq. A.1b.) Given two Ys, say Y_{A} and Y_{B}, Y_{A} and Y_{B} are equal if and only if ReY_{A} equals ReY_{B} and ImY_{A} equals ImY_{B}. The complex conjugate of Y is Y* = ReY - i ImY, where i = (-1)^{1/2}. Since by definition i^{2} = -1, we have YY* = |Y|^{2}, (A.6a) where |Y| = [(ReY)^{2} + (ImY)^{2}]^{1/2}. (A.6b) That is, |Y| is the magnitude of the complex number Y in the complex plane. The sum of complex numbers Y_{A} and Y_{B} is found by adding their Real and Imaginary parts: Y_{A} + Y_{B} = (ReY_{A} + ReY_{B}) + i(ImY_{A} + ImY_{B}). (A.7)
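These complex-number facts are easy to spot-check numerically; the values of Y_{o}, k, w, Y_{A}, and Y_{B} below are arbitrary illustrations:

```python
import math

# The spiral of Eq. A.1a: at every x and t, Y+ has the same magnitude
# Y0; its tip corkscrews around the x-axis without stretching.
Y0, k, w = 2.0, 1.5, 3.0
for x, t in [(0.0, 0.0), (0.7, 0.2), (2.5, 1.1)]:
    Yplus = complex(Y0 * math.cos(k*x - w*t), Y0 * math.sin(k*x - w*t))
    assert abs(abs(Yplus) - Y0) < 1e-12

# Eq. A.6a: Y times its conjugate is the squared magnitude |Y|^2.
YA = 3.0 + 4.0j
YB = -1.0 + 2.0j
assert abs(YA * YA.conjugate() - abs(YA)**2) < 1e-12

# Eq. A.7: complex sums add Real and Imaginary parts separately.
S = YA + YB
assert S.real == YA.real + YB.real and S.imag == YA.imag + YB.imag
```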
Thus it is useful to know how sines and cosines add. A fundamental fact about sines and cosines (easily derived from the Pythagorean Theorem, Eq. A.3) is: sin^{2}(q) + cos^{2}(q) = 1. (A.8) In Eq. A.8, sin^{2}(q) is the same as sin(q)sin(q) (or [sin(q)]^{2}), and q is any angle expressed in radians. Using the identity in Eq. A.8, it is not difficult to derive the following identities: sin^{2}(A) = (1/2)(1 - cos(2A)), (A.9a) cos^{2}(A) = (1/2)(1 + cos(2A)), (A.9b) sin(A + B) = sin(A)cos(B) + cos(A)sin(B), (A.9c)
sin(A - B) = sin(A)cos(B) - cos(A)sin(B), (A.9d)
cos(A + B) = cos(A)cos(B) - sin(A)sin(B), (A.9e)
cos(A - B) = cos(A)cos(B) + sin(A)sin(B). (A.9f)
And, if we let a = A + B, (A.10a) b = A - B, (A.10b) so that A = (1/2)(a + b), (A.10c) B = (1/2)(a - b), (A.10d) then sin(a) + sin(b) = 2 sin((1/2)(a + b)) cos((1/2)(a - b)), (A.11a)
sin(a) - sin(b) = 2 cos((1/2)(a + b)) sin((1/2)(a - b)), (A.11b)
cos(a) + cos(b) = 2 cos((1/2)(a + b)) cos((1/2)(a - b)), (A.11c)
cos(a) - cos(b) = -2 sin((1/2)(a + b)) sin((1/2)(a - b)). (A.11d)
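The sum-to-product identities (Eqs. A.11a through A.11d) can be spot-checked numerically; the sample angles a and b below are arbitrary:

```python
import math

# Numerical spot-check of the sum-to-product identities, Eqs. A.11a-d.
a, b = 1.1, 0.4
A, B = 0.5 * (a + b), 0.5 * (a - b)      # Eqs. A.10c and A.10d

assert abs(math.sin(a) + math.sin(b) - 2*math.sin(A)*math.cos(B)) < 1e-12  # A.11a
assert abs(math.sin(a) - math.sin(b) - 2*math.cos(A)*math.sin(B)) < 1e-12  # A.11b
assert abs(math.cos(a) + math.cos(b) - 2*math.cos(A)*math.cos(B)) < 1e-12  # A.11c
assert abs(math.cos(a) - math.cos(b) + 2*math.sin(A)*math.sin(B)) < 1e-12  # A.11d
```

These are the identities behind the "beat" patterns that arise when two Y waves of nearby wave numbers are superposed, so a quick numerical habit like this one pays off when rederiving them.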
The angles of a triangle always add up to p radians. Thus in Fig. A.2, f = p/2 - q, (A.12a) and cos(q) = sin(p/2 - q), (A.12b) etc. Figure A.2 A Right Triangle In the complex plane any complex number Y can be depicted as a vector with length (or magnitude) |Y| = [(ReY)^{2} + (ImY)^{2}]^{1/2}. Fig. A.3 illustrates. Figure A.3 Representing Y in the Complex Plane Now although Eqs. A.2a and A.2b are the easiest way to define the sine and cosine of an angle, the sine and cosine functions can also be represented as infinite series: sin(q) = q - q^{3}/3! + q^{5}/5! - q^{7}/7! + … (A.13a) cos(q) = 1 - q^{2}/2! + q^{4}/4! - q^{6}/6! + … (A.13b) where 3! = 3 x 2 x 1, etc. The base of the natural logarithms (denoted as "e") is that value which satisfies de^{x}/dx = e^{x}. (A.14) e^{x} can also be represented as an infinite series: e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + … (A.15) Thus e^{iq} = 1 + iq - q^{2}/2! - iq^{3}/3! + q^{4}/4! + … (A.16) and, gathering the Real and Imaginary terms, we have the very convenient fact that e^{iq} = cos(q) + i sin(q). (A.17) Thus any complex number Y can alternately be expressed as Y = |Y| e^{iq}, (A.18) where q is as indicated in Fig. A.3. |Y| is of course the magnitude of Y, and is therefore a real number. In general, multiplying any complex number Y by e^{iq} is the same as rotating Y counterclockwise, through an angle of q, in the complex plane. In normalizing solutions for Y it is often useful to know how sin^{2}(q) and/or cos^{2}(q) integrate: ∫ sin^{2}(q) dq = q/2 - sin(2q)/4 + C, (A.19a) and ∫ cos^{2}(q) dq = q/2 + sin(2q)/4 + C. (A.19b)
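As a final check, one can sum the series of Eq. A.16 numerically and compare it with Eq. A.17, and compare a brute-force quadrature of sin^{2} with the closed form of Eq. A.19a (the sample angle q is arbitrary):

```python
import cmath, math

# Partial sums of the series e^(iq) = 1 + iq - q^2/2! - iq^3/3! + ...
# converge rapidly to cos(q) + i sin(q).
q = 0.8
z = sum((1j * q)**n / math.factorial(n) for n in range(25))
assert abs(z - (math.cos(q) + 1j * math.sin(q))) < 1e-12
assert abs(z - cmath.exp(1j * q)) < 1e-12

# Midpoint-rule quadrature of sin^2 from 0 to q, versus Eq. A.19a.
n = 100000
h = q / n
integral = sum(math.sin((i + 0.5) * h)**2 for i in range(n)) * h
assert abs(integral - (q/2 - math.sin(2*q)/4)) < 1e-8
```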
