
Quantum Mechanics

Quantum Mechanics. Chapter 9. Time-Dependent Perturbations and Radiation.


Presentation Transcript


  1. Quantum Mechanics Chapter 9. Time-Dependent Perturbations and Radiation

  2. In previous chapters you have seen selection rules related to transitions between atomic states. These rules are consistent with all observations of atomic spectra; transitions in which these rules would be violated are never (well, hardly ever) seen when we observe atomic spectra. • And when "forbidden" transitions (those violating a selection rule) are seen, quantum-mechanical calculations can accurately predict how often they occur, relative to possible allowed transitions from the same initial state.

  3. A rigorous calculation of transition probabilities requires that we go beyond the Schroedinger equation and quantize the classical equations for electric and magnetic fields (Maxwell's equations), so that we can deal with events involving single photons. • Such calculations are beyond the scope of this book, so we shall settle for an approximate treatment that gives insight into the reason for selection rules and allows us to gain some understanding of devices such as the laser and the maser.

  4. § 9.1 Transition Rates for Induced Transitions • Transition rates for induced transitions can be calculated quite well by means of time-dependent perturbation theory. • This theory follows the method of Section 8.2, time-independent perturbation theory, in that it assumes that the potential energy of the system contains a small perturbing term and that without this term the Schroedinger equation can be solved exactly.

  5. The difference is that the perturbing term, v(x, y, z, t), is assumed to be applied for a limited time, and the result is that the system may make a transition from one unperturbed state to another. • Therefore, we write the time-dependent Schroedinger equation for the perturbed system (in one dimension for convenience) as • where, as before, H0 is the Hamiltonian (or energy) operator for the unperturbed system, and the equation for any eigenfunction ψl of the unperturbed system is
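The two displayed equations on this slide were lost in the transcript. Assuming the standard formulation of time-dependent perturbation theory that the text describes, and using the slide's own equation numbers, they read:

```latex
% Perturbed (time-dependent) Schroedinger equation, in one dimension:
\[ \bigl[H_0 + v(x,t)\bigr]\,\psi_n' \;=\; i\hbar\,\frac{\partial \psi_n'}{\partial t} \tag{14.1} \]
% Equation satisfied by each unperturbed eigenfunction \psi_l, each of which
% evolves in time with the phase factor e^{-iE_l t/\hbar}:
\[ H_0\,\psi_l \;=\; E_l\,\psi_l \tag{14.2} \]
```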

  6. As in Section 8.3, we now rewrite the perturbed equation [Eq. (14.1)] by expanding the function ψn’ as a linear combination of the unperturbed eigenfunctions ψl, with the important difference that the coefficients in the expansion are, in general, time dependent. • Thus • and Eq. (14.1) therefore becomes • After differentiating the right-hand series term by term, we obtain
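The three displays referred to above are missing; in the standard treatment they are (the transcript does not cite numbers for these intermediate equations, so none are attached here):

```latex
% Expansion of the perturbed wave function, with time-dependent coefficients:
\[ \psi_n' \;=\; \sum_l a_{nl}(t)\,\psi_l \]
% Substituting the expansion into Eq. (14.1):
\[ \sum_l a_{nl}\,\bigl(H_0 + v\bigr)\,\psi_l
   \;=\; i\hbar\,\frac{\partial}{\partial t}\Bigl[\sum_l a_{nl}\,\psi_l\Bigr] \]
% After differentiating the right-hand series term by term:
\[ \sum_l a_{nl}\,\bigl(H_0 + v\bigr)\,\psi_l
   \;=\; i\hbar\Bigl[\sum_l \frac{da_{nl}}{dt}\,\psi_l
        + \sum_l a_{nl}\,\frac{\partial \psi_l}{\partial t}\Bigr] \]
```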

  7. and eliminating the brackets on both sides gives us • We now see from Eq. (14.2) that each term in the first series on the left is equal to the corresponding term in the second series on the right-hand side, so we can eliminate both series, reducing Eq. (14.6) to • We now proceed as in Section 8.2.
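Written out, the two steps described above are (a reconstruction; (14.6) and (14.7) are the numbers the text itself cites):

```latex
% Eliminating the brackets on both sides:
\[ \sum_l a_{nl}\,H_0\psi_l + \sum_l a_{nl}\,v\,\psi_l
   \;=\; i\hbar\sum_l \frac{da_{nl}}{dt}\,\psi_l
        + i\hbar\sum_l a_{nl}\,\frac{\partial\psi_l}{\partial t} \tag{14.6} \]
% By Eq. (14.2), together with the phase factor e^{-iE_l t/\hbar} carried by
% each \psi_l, the first series on the left cancels the second series on the
% right, leaving:
\[ \sum_l a_{nl}\,v\,\psi_l \;=\; i\hbar\sum_l \frac{da_{nl}}{dt}\,\psi_l \tag{14.7} \]
```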

  8. We multiply each side of Eq. (14.7) by a particular function ψm* and then integrate over all values of x (or over all space in the three-dimensional world) to obtain • which we integrate term by term as we did with similar expressions in Section 8.3. Because the wave functions {ψl} are normalized and orthogonal to one another, the only nonzero term on the right-hand side is the one for which l = m, namely iћ(danm/dt). Using the fact that the time dependence of ψm is e^(-iEmt/ћ),

  9. integrating the left-hand side of Eq. (14.8) term by term, and again using the normalization and orthogonality properties of the wave functions, we finally arrive at an exact equation for the time dependence of the coefficient anm: where, as in Section 8.2, we use an abbreviation:
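In standard form, the displayed results of this step, Eqs. (14.8) and (14.9) together with the abbreviation the text mentions, are presumably:

```latex
% Multiplying Eq. (14.7) by \psi_m^* and integrating over all x:
\[ \sum_l a_{nl}\!\int \psi_m^*\,v\,\psi_l\,dx
   \;=\; i\hbar\sum_l \frac{da_{nl}}{dt}\!\int \psi_m^*\,\psi_l\,dx \tag{14.8} \]
% Orthonormality leaves only the l = m term on the right; separating out the
% phase factors e^{-iEt/\hbar} on the left gives the exact equation for a_{nm},
% where v_{ml} is now a matrix element between the spatial eigenfunctions:
\[ i\hbar\,\frac{da_{nm}}{dt} \;=\; \sum_l a_{nl}\,v_{ml}\,e^{i(E_m-E_l)t/\hbar},
   \qquad v_{ml} \equiv \int \psi_m^*\,v(x,t)\,\psi_l\,dx \tag{14.9} \]
```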

  10. Equation (14.9) is still exact, but like Eq. (12.10), it contains too many unknown quantities to be useful as it stands. • Therefore, we again assume that the eigenfunctions of the perturbed system differ only slightly from those of the unperturbed system. • This permits us to make the approximation that all of the coefficients anl are very small, except for ann, which is approximately equal to 1. • If we set ann equal to 1, and all other coefficients anl equal to zero, Eq. (14.9) becomes
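The first-order equation the text calls Eq. (14.10) then has the standard form:

```latex
\[ i\hbar\,\frac{da_{nm}}{dt} \;\approx\; v_{mn}\,e^{i(E_m-E_n)t/\hbar},
   \qquad v_{mn} = \int \psi_m^*\,v(x,t)\,\psi_n\,dx \tag{14.10} \]
```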

  11. If v(x, t) is known for all values of x and t, then it would appear that it is possible to integrate Eq. (14.10) and determine the behavior of the system, with an accuracy that is limited by the size of the neglected coefficients anm. • Comparison of Time-Dependent and Time-Independent Perturbation Theory • We use time-dependent perturbation theory, with a known set of states, to calculate probabilities of transitions between levels. We know the possible states of the system because the time-dependent perturbation is assumed to continue for a limited time interval,

  12. after which the system reverts to one of its unperturbed states. Typically, we consider the following sequence of events: • 1. At time t = 0, the system is in an unperturbed state │ψn〉, an eigenstate of the Schroedinger equation with energy eigenvalue En. • 2. The perturbing potential is then "turned on." For t > 0, the system is then described by the perturbed Schroedinger equation, with a different set of eigenstates │ψn’〉. If the perturbation is small and/or is applied for a very short time, the new state never differs greatly from the state │ψn〉. • 3. The perturbing potential is turned off at time t = t', and the system is again described by the unperturbed Schroedinger equation.

  13. The eigenstate may be the original state │ψn〉, or it may be a different state. • In the latter case, we say that the perturbation has induced a transition to the new state │ψm〉. • The probability that the system will be found in the state │ψm〉 is given by │anm│2, the squared magnitude of the coefficient of the eigenfunction ψm in the expansion of the perturbed wave function ψn’ in the series of unperturbed eigenfunctions. • Example Problem 14.1 A particle is in its ground state (n = 1, kinetic energy E1, potential energy zero) in an infinitely deep one-dimensional square potential well.

  14. A constant perturbing potential V = δ is turned on at time t = 0. Find the probability that the particle will be found in the second excited state (n = 3) at time t = t'. • Solution. • The probability that the particle will be found in the second excited state is |a13|2. The second excited state in this well has kinetic energy E3 = 9E1. • Substituting into Eq. (14.10), and using the fact that a13 = 0 at time t = 0, we have
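The missing displays here follow from Eq. (14.10) with n = 1, m = 3 and the constant perturbation V = δ; presumably:

```latex
% First-order equation for a_{13}, and its integral from 0 to t':
\[ i\hbar\,\frac{da_{13}}{dt} \;=\; v_{31}\,e^{i(E_3-E_1)t/\hbar},
   \qquad v_{31} = \delta\int \psi_3^*\,\psi_1\,dx \]
\[ a_{13}(t') \;=\; \frac{1}{i\hbar}\int_0^{t'} v_{31}\,e^{i(E_3-E_1)t/\hbar}\,dt \]
```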

  15. But v31 = 0, because the functions ψ1 and ψ3 are orthogonal. Thus the probability is zero. • To induce a transition in this potential, the perturbing potential must depend on x. • Dipole Radiation • Let us now apply Eq. (14.10) to atomic radiation, considering an electromagnetic wave as a perturbation that induces a transition between two atomic states. • We begin with radiation whose wavelength is much greater than the diameter of the atoms involved, as is true for visible light.

  16. In this case, we can make the approximation that at a given time the entire atom feels the same field. That is, the field varies in time but not in space. • This is known as the dipole approximation, for reasons that will be clear as we develop the equations. • We start with monochromatic (single frequency) radiation, polarized along the x axis. • Thus the result depends on the x component of the electric field E, or Ex = Eox cos ωt, where Eox is constant. • It is convenient to rewrite this field in complex form as
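The complex form referred to here is presumably the usual one:

```latex
\[ E_x \;=\; E_{0x}\cos\omega t \;=\; \frac{E_{0x}}{2}\bigl(e^{i\omega t} + e^{-i\omega t}\bigr) \]
```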

  17. The perturbing potential v(x, t) is the potential energy of an electron of charge -e (e is not to be confused with 2.71828. . .) in this field, given by • Inserting this expression into Eq. (14.10) gives • where the abbreviation xmn represents the integral shown below. • [We now see the reason for the expression "dipole" radiation.
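A reconstruction of the three missing displays (the perturbing potential, Eq. (14.13), and the matrix element xmn), written with ωnm ≡ (Em − En)/ћ as defined later in Eq. (14.17):

```latex
% Potential energy of the electron (charge -e) in the field E_x:
\[ v(x,t) \;=\; eE_x\,x \;=\; \frac{eE_{0x}\,x}{2}\bigl(e^{i\omega t} + e^{-i\omega t}\bigr) \]
% Inserting this into Eq. (14.10):
\[ \frac{da_{nm}}{dt} \;=\; \frac{eE_{0x}\,x_{mn}}{2i\hbar}
   \Bigl[e^{i(\omega_{nm}+\omega)t} + e^{i(\omega_{nm}-\omega)t}\Bigr] \tag{14.13} \]
% The dipole matrix element:
\[ x_{mn} \;\equiv\; \int \psi_m^*\,x\,\psi_n\,dx \]
```

The diagonal element -e∫ψn* x ψn dx and the transition element -e∫ψm* x ψn dx = -e xmn are the two dipole-moment integrals that the bracketed remark below refers to.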

  18. The dipole moment of an electric charge e at a distance x from the origin is given by ex. • If the electron were in a stable quantum state │ψn〉, the dipole moment would depend on the probability density for the electron in that state, and thus would be given by the integral • When there is a transition, the electron (before it is observed) is in a mixed state, with dipole moment given by the integral • This integral is called the dipole moment between states n and m.] • We now assume that the E field is "turned on" at time t = 0 and "turned off" at time t = t'.

  19. Therefore we must integrate Eq. (14.13) on t between these two limits to find the transition probability from state n to state m, which is given by |anm|2. • From the initial condition anm(0) = 0 unless n = m, we obtain anm(t’): • Rather than attempting to find the complicated general expression for the transition probability |anm(t’)|2, let us examine Eq. (14.14) to gain some insight.
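Integrating Eq. (14.13) from 0 to t' with anm(0) = 0 gives Eq. (14.14), which presumably reads:

```latex
\[ a_{nm}(t') \;=\; \frac{eE_{0x}\,x_{mn}}{2i\hbar}
   \left[\frac{e^{i(\omega_{nm}+\omega)t'} - 1}{i(\omega_{nm}+\omega)}
       + \frac{e^{i(\omega_{nm}-\omega)t'} - 1}{i(\omega_{nm}-\omega)}\right] \tag{14.14} \]
```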

  20. The first denominator is zero when Em - En = -ћω; the second is zero when Em - En = +ћω. • It is reasonable to suppose that we can neglect the first term for frequencies such that Em - En ≈ +ћω. • We can then simplify |anm(t’)|2 to: • or • If we define the frequency ωnm by

  21. ωnm = (Em - En)/ћ (14.17) and the function f(ω) by the expression written out below (Eq. (14.18)), then Eq. (14.16) takes the simpler form given there. • Figure 14.1 shows a graph of f(ω) versus ω - ωnm. The maximum value of f(ω) occurs when ω = ωnm, the frequency at which the photon energy ћω is equal to the difference between the energy levels En and Em. • This should come as no surprise,

  22. but the fact that other frequencies also contribute to transitions appears to violate the law of conservation of energy. • However, when we consider the results of Section 2.4 we find that there is no violation. • The fact that the perturbation exists for a limited time t' makes the frequency uncertain, just as confining a particle in a limited space makes its wavelength uncertain. • If we let t' approach infinity in Eq. (14.18) we see that f(ω) approaches a delta function, becoming zero for all frequencies except ω = ωnm.
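For reference, the simplified transition probability and the line-shape function discussed in the last two paragraphs can be reconstructed as follows; (14.16) and (14.18) are the numbers the text cites, and the last expression is labeled (14.19) here only by position:

```latex
% Keeping only the near-resonant term of Eq. (14.14):
\[ |a_{nm}(t')|^2 \;\approx\; \frac{e^2E_{0x}^2\,|x_{mn}|^2}{\hbar^2}\,
   \frac{\sin^2\!\bigl[(\omega_{nm}-\omega)t'/2\bigr]}{(\omega_{nm}-\omega)^2} \tag{14.16} \]
% The line-shape function plotted in Figure 14.1:
\[ f(\omega) \;\equiv\; \frac{\sin^2\!\bigl[(\omega-\omega_{nm})t'/2\bigr]}
                             {\bigl[(\omega-\omega_{nm})t'/2\bigr]^2} \tag{14.18} \]
% So that Eq. (14.16) becomes:
\[ |a_{nm}(t')|^2 \;\approx\; \frac{e^2E_{0x}^2\,|x_{mn}|^2\,t'^2}{4\hbar^2}\,f(\omega) \tag{14.19} \]
```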

  23. • (For each point on the horizontal axis, the value of ω - ωnm is a multiple of 1/t'; when t' becomes infinite, every point on the horizontal axis represents a value of zero. • Thus when t' is infinite the value of ω - ωnm is zero over the entire curve.) • Uncertainty Relation for Energy and Time • When t' is finite, a Fourier analysis (Section 2.3) of the finite sinusoidal wave would show a distribution of frequencies which is consistent with Eq. (14.18).

  24. Therefore Figure 14.1 agrees with the law of conservation of energy and with the condition that a photon of angular frequency ω has energy ћω. • This figure shows that, for the overwhelming majority of transitions, |ћω - (Em - En)| ≤ 2πћ/t' = h/t' (14.20) • Let us now consider the probable results of a measurement of the energy difference Em - En between two levels in a collection of identical atoms. • We might measure this difference by applying a field of angular frequency ω to the atoms for a time t' and measuring the amount of energy that is absorbed.

  25. • By repeating this procedure at different frequencies, we could plot a graph like Figure 14.1. • But Eq. (14.20) shows that any observed photon energy ћω can differ from the energy difference Em - En by as much as 2πћ/t', or h/t'. • Denoting this difference as the uncertainty ΔE in the measurement, we have, for this special case, ΔE ≈ 2πћ/t' = h/t', or t'ΔE ≈ h (14.21) • The time interval t' can be thought of as the uncertainty in the time of the measurement of the energy.
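As a rough numerical illustration of Eq. (14.21) (the choice t' = 10⁻⁸ s below is only an illustrative atomic time scale, not a value from the slides), the energy spread implied by a perturbation of finite duration is of order h/t':

```python
# Illustrative estimate of the energy spread implied by t' * dE ~ h.
# t' = 1e-8 s is chosen only as a typical atomic time scale (it happens to be
# the 3p hydrogen lifetime quoted later in this chapter).
h_eV_s = 4.136e-15            # Planck constant in eV*s
t_prime = 1e-8                # duration of the perturbation, in seconds
delta_E = h_eV_s / t_prime    # energy spread, roughly 4e-7 eV
print(f"Energy spread dE ~ {delta_E:.1e} eV")
```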

  26. Thus we have an uncertainty relation involving time and energy, just as we have a relation involving position and momentum. • In the general case, the uncertainty relation for energy and time is written ΔEΔt ≥ ћ/2 (14.22) where ΔE is the uncertainty in a measurement of the energy of a system, and Δt is the time interval over which the measurement is made. • This relation, like the parallel relation ΔpxΔx ≥ ћ/2, is based upon the fact

  27. that a wave of finite length must consist of a superposition of waves of different frequencies. • In the case of the energy measurement, the wave is that of a photon of the radiation field that induces the transition between energy levels, but the mathematics governing this wave is identical to that of a matter wave. • Transition Probability for a Continuous Spectrum of Frequencies • In the general case, Eq. (14.14) cannot give the transition probability directly; it must be modified so that it represents a component in a continuous spectrum of frequencies.

  28. When we have a continuous spectrum, there cannot be an amplitude for a single frequency. • Instead, there is an energy density function ρ(ω) such that the integral of ρ(ω)dω over the range from ω1 to ω2 is the energy density of radiation with frequencies between ω1 and ω2. • According to classical electromagnetic theory, the quantity ε0E0x2/2 is the average energy density in the electromagnetic field given by the expression for Ex above. • Thus for radiation in a narrow range of frequencies dω, we have ε0E0x2/2 = ρ(ω)dω (14.23)

  29. and E0x2 can be replaced in Eq. (14.11) by 2ρ(ω)dω/ε0. • Then to find the total transition probability Tnm resulting from the entire spectrum of radiation, we integrate the resulting expression for |anm(t’)|2 over all frequencies, obtaining • We can simplify Eq. (14.24) by assuming that ρ(ω) varies much more slowly than f(ω), and since f(ω) is symmetric, with a maximum at ω = ωnm, we can replace ρ(ω) by the constant value ρ(ωnm), with little loss of accuracy.
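The total transition probability referred to above (Eq. (14.24)) presumably has the form:

```latex
% Total transition probability, after replacing E_{0x}^2 by 2\rho(\omega)d\omega/\varepsilon_0
% in Eq. (14.19) and integrating over the spectrum:
\[ T_{nm} \;=\; \frac{e^2\,|x_{mn}|^2\,t'^2}{2\varepsilon_0\hbar^2}
   \int_0^{\infty}\rho(\omega)\,f(\omega)\,d\omega \tag{14.24} \]
```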

  30. If we remove ρ(ωnm) from the integral and define u = (ωnm - ω)t’/2, Eq. (14.24) becomes • [The reader should verify that Eqs. (14.24) and (14.25) are equivalent, given the substitutions that were made.] • The integral is standard, being equal to π/2, so the transition probability, for radiation that is polarized in the x direction, is
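With the substitution made (the letter u is chosen here; the slide's own symbol did not survive), Eqs. (14.25) and (14.26) presumably read:

```latex
% Eq. (14.25): the integral reduced to a standard dimensionless form
\[ T_{nm} \;=\; \frac{e^2\,|x_{nm}|^2\,t'\,\rho(\omega_{nm})}{\varepsilon_0\hbar^2}
   \int_{-\infty}^{\infty}\frac{\sin^2 u}{u^2}\,du \tag{14.25} \]
% Using \int_0^\infty (\sin^2 u/u^2)\,du = \pi/2, so the full integral is \pi:
\[ T_{nm} \;=\; \frac{\pi e^2\,|x_{nm}|^2}{\varepsilon_0\hbar^2}\,\rho(\omega_{nm})\,t' \tag{14.26} \]
```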

  31. In the general case, when the radiation is randomly polarized, Tnm must include equal contributions from |xnm|2, |ynm|2, and |znm|2, and we have • where we have divided by 3 because the intensity is equally distributed among the three polarization directions. • The factor t' in Eq. (14.27) requires more scrutiny. It is logical that the probability of a transition should increase with time, but it cannot increase indefinitely, because a probability can never be greater than 1.
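The randomly polarized result, Eq. (14.27), then has the standard form:

```latex
\[ T_{nm} \;=\; \frac{\pi e^2}{3\varepsilon_0\hbar^2}
   \bigl(|x_{nm}|^2 + |y_{nm}|^2 + |z_{nm}|^2\bigr)\,\rho(\omega_{nm})\,t' \tag{14.27} \]
```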

  32. Obviously the approximation breaks down at times t' such that Tnm is no longer small relative to 1. • If the radiation is coherent (for example, produced by a laser; see Section 14.3.7), then the perturbation is maintained for times t' that are quite long compared with those of incoherent radiation, such as that emitted by the sun. • Therefore, atoms that are bathed in laser light can be perturbed for such a long time that Eq. (14.27) is no longer valid. (Analysis of such situations falls into the realm of nonlinear optics.) • On the other hand, incoherent radiation consists of brief pulses emitted by individual atoms at random;

  33. for example, the 3p level of the hydrogen atom survives for about 10⁻⁸ second. In such cases, the emitted pulse (one photon) can perturb another hydrogen atom for a time t' of the same order of magnitude. • This time interval is sufficiently small to satisfy the condition Tnm << 1, and in those cases Eq. (14.27) is quite accurate. • After the time t', the perturbation ends, and the hydrogen atom is in its original 1s state or is in the 2p state. The probability that it is in the 2p state is given by Eq. (14.27). • This probability can be tested by simply observing whether the second atom emits a photon in returning to the 1s state.

  34. § 9.2 Spontaneous Transitions • In the previous section we found the probability that a system in one quantum state n will be induced to change to another state m, if it is acted upon by a perturbation such as radiation at the resonant frequency ωnm = |En - Em|/ћ. • But we still need a way to compute the probability of a spontaneous transition: a transition that occurs in the absence of a perturbation.

  35. Fortunately, there is a simple way to attack this problem. Even before quantum mechanics was developed, Einstein was able to derive the rate of spontaneous transitions from basic thermodynamics, given only the induced transition rate. He used the following argument. • Einstein's Derivation • Consider a collection of identical atoms which can exchange energy only by means of radiation. • The collection is in thermal equilibrium inside a cavity whose walls are kept at a constant temperature.

  36. Because the system is in thermal equilibrium, each atom must be emitting and absorbing radiation at the same average rate, if one averages over a sufficiently long time (such as one second). • Define Pnm as the probability of an induced transition of a given atom from state n to state m in a short time interval dt. • This probability must be proportional to the probability pn that the atom is initially in state n, multiplied by the transition probability Tnm for an atom in that state, which for unpolarized dipole radiation is given by Eq. (14.27).

  37. Thus Pnm = Tnm pn (14.28) • Guided by Eq. (14.27), we can now write a general equation for Pnm as Pnm = Anm ρ(ωnm) pn dt (14.29) • which expresses the fact that Tnm is proportional to the radiation density ρ(ωnm), to the time interval dt (denoted by t' in Eq. (14.27)), and to other factors, incorporated into Anm, which depend on matrix elements. • Equation (14.27) can be applied equally to an induced transition from state n to state m, or from state m to state n.

  38. From the symmetry of the equations, we know that Anm = Amn and ωnm = ωmn. Therefore, Pmn = Amn ρ(ωnm) pm dt (14.30) • Equation (14.30) gives the probability of an induced transition from state m to state n, while Eq. (14.29) gives the probability of an induced transition in the other direction, from state n to state m. • The only difference between these probabilities is in the occupation probabilities pn and pm. • These are not equal, because the probability that a state is occupied depends on its energy. • Let us say that state n has the lower energy; that is, En < Em. Then pn > pm, and therefore Pnm > Pmn.

  39. There are more induced transitions from n to m than there are from m to n, simply because there are more atoms in state n to begin with. • But the atoms are in thermal equilibrium. Therefore there must be other transitions, spontaneous ones, from m to n, to make the total probability of a transition from m to n equal to the probability of a transition from n to m. • This means that Pnm = Pmn + Smn (14.31) • where Smn is the spontaneous transition probability, which may be written Smn = Bmn pm dt (14.32)

  40. Notice that, unlike Pmn or Pnm, Smn does not contain the factor ρ(ωnm), because a spontaneous transition, by definition, does not depend on external fields. • Substituting from Eqs. (14.29), (14.30), and (14.32) into (14.31), we have Anm ρ(ωnm) pn = Anm ρ(ωnm) pm + Bmn pm (14.33) • or Bmn = Anm ρ(ωnm) {pn/pm - 1} (14.34) • Remember that Bmn is associated with a spontaneous transition, so it does not really depend on the energy density of the electric field. • However, we have derived this equation by relating Bmn to induced transitions in a cavity;

  41. therefore the energy density in the cavity has appeared in the result. • We can eliminate ρ(ωnm) from the result by using the formula for the energy density in a cavity (see Appendix C for the derivation of this formula): • You can verify that ρ(ω) has the correct dimensions (energy per unit volume per unit angular frequency). Inserting this expression into Eq. (14.34) yields
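The cavity formula is the Planck energy density; inserting it into Eq. (14.34) gives what the next slide calls Eq. (14.36). A reconstruction:

```latex
% Planck energy density per unit angular frequency in a cavity at temperature T:
\[ \rho(\omega) \;=\; \frac{\hbar\,\omega^3}{\pi^2 c^3}\,
   \frac{1}{e^{\hbar\omega/kT} - 1} \]
% Inserted into Eq. (14.34):
\[ B_{mn} \;=\; A_{nm}\,\frac{\hbar\,\omega_{nm}^3}{\pi^2 c^3}\,
   \frac{p_n/p_m - 1}{e^{\hbar\omega_{nm}/kT} - 1} \tag{14.36} \]
```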

  42. To complete the derivation of Bmn we need the ratio of the occupation probabilities, pn/pm. • This ratio is known from Boltzmann statistics (Appendix B and Section 16.1) to be given by • and therefore Eq. (14.36) becomes simply • Using Eqs. (14.27) and (14.29) to find Anm, we find that the spontaneous transition probability in a short time interval dt, from state m to state n, is equal to λdt,

  43. where , the probability per unit time for a transition to occur (also called the decay constant). is given by • A "short" time interval dt is one for which dt << 1. You should verify that  has the proper dimensions (reciprocal time, to make dt dimensionless). • When we speak of decay rates, we must remember that the transition is observed as a discontinuous process; a photon interacts with a measuring instrument as a discrete unit of energy. • Here is the same wave-particle duality that has been discussed before.

  44. The term "measuring instrument" has a very broad meaning; it is not necessarily an artifact of our own making. • For example, roughly two billion years ago in Africa a nuclear chain reaction began spontaneously. • No measuring instrument could count the decays, but the evidence remains at the site for all to see. (If a tree falls where nobody can hear it, does it make a sound? Of course it does; many animals can hear it.) • Now consider the time at which each atom makes a transition. This is determined by the interaction of a photon with the measuring instrument, which could be any kind of matter on which the photon could leave a lasting imprint.

  45. Thus nature makes the measurement without our intervention. • We can make an analog to alpha-particle emission by a radioactive nucleus. (See Section 12.5.) • The alpha particle in a uranium nucleus travels back and forth and has 10²⁰ or more opportunities to escape during each second. • If it does not escape, the atom is unchanged; a billion-year-old ²³⁵U atom is identical to a ²³⁵U atom that was just formed by any means whatsoever (perhaps by alpha decay of a ²³⁹Pu atom). • In a similar way, the oscillating dipole moment of a hydrogen atom in a mixed state,

  46. like that of Eq. (14.13), creates an electromagnetic field that, sooner or later, will transfer energy to another hydrogen atom. • But the energy can only be transferred by a photon; as long as no transfer has taken place, the original hydrogen atom is unchanged, and thus the probability of decay in the next picosecond is not changed. • Energy Dependence of Transition Rates and Decay of Subatomic Particles • The factor ωnm³ in Eq. (14.39) tells us that the decay constant is proportional to the cube of the energy difference between states n and m.

  47. • This is true for any transition that is governed by the electromagnetic force (where photons are involved). • A striking example of this is given by comparing the mean lifetimes of two subatomic systems: the neutral pi meson (pion, π⁰) and positronium (Ps), which is a bound state of a positron and an electron. • In both cases the entire mass of the system disappears and two photons (gamma rays) are emitted. • The total energy of these photons is equal to ћωnm, which in this case is just the original rest energy.

  48. The rest energy of Ps is twice the electron rest energy, or 1.02 MeV; the rest energy of the pion is 135 MeV. Therefore the value of ωnm for the pion is about 130 times its value for Ps. • Since the value of λ is proportional to ω³, we would expect the ratio of the mean lifetime of Ps to the mean lifetime of the pion to be, neglecting other factors, about 130³, or about 2 × 10⁶. • The lifetime of Ps is 1.24 × 10⁻¹⁰ s; that of the π⁰ is 0.83 × 10⁻¹⁶ s. The ratio is about 1.5 × 10⁶. • Exponential Decay Law • Given a collection of N0 identical atoms in the first excited state at time t = 0, we expect to find that N of these atoms will remain unchanged

  49. when they are observed at time t > 0. Given the value of the decay constant λ, let us predict the value of N. • In any time interval dt, the probability of decay to the ground state will be λdt for each atom, so for N atoms the expected number of decays will be Nλdt. • Thus during any time interval dt the change in N will be dN = -Nλdt (14.40) • We can integrate this equation by separating the variables as follows: dN/N = -λdt
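The transcript ends in mid-derivation; completing the separation of variables gives the familiar exponential decay law (a reconstruction of the step the slide is heading toward):

```latex
\[ \int_{N_0}^{N}\frac{dN}{N} \;=\; -\lambda\int_0^{t}dt
   \quad\Longrightarrow\quad \ln\frac{N}{N_0} \;=\; -\lambda t
   \quad\Longrightarrow\quad N \;=\; N_0\,e^{-\lambda t} \]
```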
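As a closing numerical aside, the short script below checks the ω³ scaling argument from the pion/positronium comparison above, using the values quoted there and deliberately ignoring the "other factors" (matrix elements):

```python
# Order-of-magnitude check of lambda ~ omega^3 (Eq. 14.39) using the
# pion / positronium comparison quoted earlier in this chapter.
rest_energy_pion_MeV = 135.0   # rest energy of the neutral pion
rest_energy_ps_MeV = 1.02      # rest energy of positronium (two electron masses)
tau_ps = 1.24e-10              # measured two-photon lifetime of Ps, in seconds
tau_pion = 0.83e-16            # measured lifetime of the neutral pion, in seconds

# Since omega_nm is proportional to the rest energy released, the decay
# constants should scale roughly as the cube of the rest-energy ratio.
predicted_ratio = (rest_energy_pion_MeV / rest_energy_ps_MeV) ** 3
measured_ratio = tau_ps / tau_pion

print(f"predicted lifetime ratio ~ {predicted_ratio:.1e}")   # about 2.3e6
print(f"measured  lifetime ratio ~ {measured_ratio:.1e}")    # about 1.5e6
```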
