
Heat capacity



  1. Heat capacity • The electronic heat capacity Ce can be found by taking the derivative of Equation (19.18), U = (3/5)NεF[1 + (5π²/12)(T/TF)² − …]: Ce = dU/dT = (π²/2)Nk(T/TF) + higher-order terms. • For temperatures that are small compared with the Fermi temperature, we can neglect the second term in the expansion compared with the first and obtain Ce = (π²/2)Nk(T/TF).

  2. Thus the electronic specific heat capacity at room temperature is 2.2 × 10⁻² R. This small value explains why metals have a specific heat capacity of about 3R, the same as for other solids. • It was originally believed that the free electrons should contribute an additional (3/2)R associated with their three translational degrees of freedom. Our last calculation shows that the contribution is negligible. • The energy of the electrons changes only slightly with temperature (dU/dT is small) because only those electrons near the Fermi level can increase their energies as the temperature is raised, and there are precious few of them.
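A quick numerical check in Python (a sketch, using the silver Fermi temperature TF = 65,000 K quoted in the pressure example below):

```python
import math

# Electronic heat capacity relative to R (leading Sommerfeld term):
# Ce/R = (pi^2 / 2) * (T / T_F)
T = 300.0      # room temperature, K
T_F = 65000.0  # Fermi temperature of silver, K

ce_over_R = (math.pi**2 / 2) * (T / T_F)
print(f"Ce/R = {ce_over_R:.2e}")  # ~2.3e-2, matching the value quoted above
```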

  3. At very low temperatures the picture is different. From the Debye theory, Cv is proportional to T³, so the heat capacity of a metal takes the form Cv = AT + BT³, where the first term is the electronic contribution and the second is associated with the crystal lattice. • At sufficiently low temperatures, the AT term can dominate, as the sketch of Figure 19.9 indicates. [Figure 19.9: Sketch of the heat capacity of a metal as a function of temperature, showing the electronic and lattice contributions.]
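A hedged sketch of how A and B are extracted in practice: plotting Cv/T against T² turns Cv = AT + BT³ into a straight line with intercept A and slope B. The coefficient values below are invented purely for illustration:

```python
import numpy as np

# Low-temperature heat capacity: Cv = A*T + B*T^3, so Cv/T = A + B*T^2.
A_true, B_true = 7.0e-4, 5.0e-5   # hypothetical coefficients, for illustration
T = np.linspace(0.5, 4.0, 20)     # temperatures in kelvin
Cv = A_true * T + B_true * T**3   # synthetic "measurements"

# A linear fit of Cv/T versus T^2 recovers the two coefficients.
B_fit, A_fit = np.polyfit(T**2, Cv / T, 1)
print(f"A = {A_fit:.2e}, B = {B_fit:.2e}")
```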

  4. The fermion gas pressure is found from P = −(∂F/∂V)T,N. At T = 0, S = 0, as it must be, so the Helmholtz function F = U − TS reduces to F = U = (3/5)NεF. Since εF ∝ (N/V)^(2/3), differentiating with respect to V gives P = (2/5)(N/V)εF = (2/5)(N/V)kTF.

  5. For silver we found that N/V = 5.9 × 10²⁸ m⁻³ and TF = 65,000 K. Thus P = (2/5)(5.9 × 10²⁸)(1.38 × 10⁻²³)(6.5 × 10⁴) = 2.1 × 10¹⁰ Pa = 2.1 × 10⁵ atm. • Given this tremendous pressure, we can appreciate the role of the surface potential barrier in keeping the electrons from evaporating from the metal.
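The same arithmetic as a short Python sketch:

```python
# Degenerate electron gas pressure for silver: P = (2/5) * (N/V) * k * T_F
n = 5.9e28    # electron density, m^-3
k = 1.38e-23  # Boltzmann constant, J/K
T_F = 6.5e4   # Fermi temperature, K

P = 0.4 * n * k * T_F
print(f"P = {P:.1e} Pa = {P / 1.013e5:.1e} atm")  # ~2.1e10 Pa ~ 2.1e5 atm
```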

  6. 19.5 Applications to White Dwarf Stars • The temperature inside the core of a typical star is on the order of 10⁷ K. The atoms are completely ionized at such a high temperature, which creates a huge electron gas. • The loss of gravitational energy is balanced by an increase in the kinetic energy of the electrons and ions, which prevents the collapse of the star.

  7. Example: The pressure of the electron gas in Sirius B can be calculated with the formula P = (2/5)(N/V)εF, using the following numbers: Mass M = 2.09 × 10³⁰ kg, Radius R = 5.57 × 10⁶ m, Volume V = (4/3)πR³ = 7.23 × 10²⁰ m³.

  8. Assume that nuclear fusion has ceased after all the core hydrogen has been converted to helium. • The number of nucleons = M/mₙ = (2.09 × 10³⁰ kg)/(1.67 × 10⁻²⁷ kg) ≈ 1.25 × 10⁵⁷. • Since the ratio of nucleons to electrons is 2:1 (two nucleons per electron in helium), there are N ≈ 6.3 × 10⁵⁶ electrons, which gives a Fermi energy εF ≈ 0.33 MeV.
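A Python sketch of the electron count and the resulting Fermi energy, using the nonrelativistic formula εF = (ħ²/2m)(3π²N/V)^(2/3) with rounded constants:

```python
import math

M = 2.09e30           # mass of Sirius B, kg
V = 7.23e20           # volume, m^3
m_nucleon = 1.67e-27  # nucleon mass, kg
m_e = 9.11e-31        # electron mass, kg
hbar = 1.055e-34      # reduced Planck constant, J s

N_nucleons = M / m_nucleon  # ~1.25e57
N_e = N_nucleons / 2        # two nucleons per electron in helium
n = N_e / V                 # electron density, m^-3

eps_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
print(f"N_e = {N_e:.2e}, eps_F = {eps_F / 1.602e-13:.2f} MeV")  # ~0.33 MeV
```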

  9. The corresponding Fermi temperature is TF = εF/k ≈ 3.8 × 10⁹ K. Therefore T (= 10⁷ K) is much smaller than TF, i.e., treating the gas as completely degenerate (T ≈ 0) is a valid assumption. Thus P can be calculated as P = (2/5)(N/V)εF ≈ 1.8 × 10²² Pa.
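Continuing the sketch, the pressure follows directly from the numbers above:

```python
# Degenerate-gas pressure: P = (2/5) * n * eps_F, with the Sirius B values
n = 8.7e35       # electron density, m^-3 (from the previous sketch)
eps_F = 5.3e-14  # Fermi energy, J (~0.33 MeV)

P = 0.4 * n * eps_F
print(f"P = {P:.1e} Pa")  # ~1.8e22 Pa
```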

  10. A white dwarf is stable when its total energy U = Ue + Ug is a minimum. • For a completely degenerate electron gas, Ue = (3/5)NεF. • Since εF ∝ V^(−2/3) ∝ 1/R², Ue can be expressed as Ue = A/R², where A is a constant that depends on N and fundamental constants.

  11. For the gravitational energy of a solid sphere, Ug = −(3/5)GM²/R, so with B = (3/5)GM² we have Ug = −B/R. • In summary, U = A/R² − B/R. • To find the minimum of U with respect to R, set dU/dR = −2A/R³ + B/R² = 0, which gives the equilibrium radius R = 2A/B.
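A minimal symbolic check of this minimization, with A and B standing for the collected constants:

```python
import sympy as sp

# Total energy of the white dwarf: U(R) = A/R^2 - B/R
A, B, R = sp.symbols("A B R", positive=True)
U = A / R**2 - B / R

# Setting dU/dR = 0 gives the equilibrium radius.
print(sp.solve(sp.diff(U, R), R))  # [2*A/B]
```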

  12. 19.7a) Calculate the Fermi energy for aluminum, assuming three electrons per aluminum atom.

  13. 19.7b) Show that for aluminum at T = 1000 K, μ differs from εF by less than 0.01%. (The density of aluminum is 2.69 × 10³ kg m⁻³ and its atomic weight is 27.)

  14. 19.7c) Calculate the electronic contribution to the specific heat capacity of aluminum at room temperature and compare it to 3R, using the equation Ce = (π²/2)Nk(T/TF) from the first slide.
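A hedged sketch of parts (a) and (c) in Python, assuming three free electrons per atom and the nonrelativistic Fermi-energy formula; the specific heat below is per mole of conduction electrons (per mole of aluminum it is three times larger, still negligible next to 3R):

```python
import math

rho = 2.69e3      # density of aluminum, kg/m^3
A_w = 27e-3       # atomic weight, kg/mol
N_A = 6.022e23    # Avogadro's number, 1/mol
m_e = 9.11e-31    # electron mass, kg
hbar = 1.055e-34  # reduced Planck constant, J s
k = 1.38e-23      # Boltzmann constant, J/K

n = 3 * (rho / A_w) * N_A  # conduction electrons per m^3
eps_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
T_F = eps_F / k
ce_over_R = (math.pi**2 / 2) * (300.0 / T_F)  # per mole of electrons

print(f"eps_F = {eps_F / 1.602e-19:.1f} eV, T_F = {T_F:.2e} K")
print(f"Ce/R = {ce_over_R:.2e}")  # ~1.1e-2: tiny compared with 3R
```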

  15. 19.13. Consider the collapse of the sun into a white dwarf. For the sun, M = 2 × 10³⁰ kg, R = 7 × 10⁸ m, V = 1.4 × 10²⁷ m³. • (a) Calculate the Fermi energy of the Sun's electrons.

  16. (b) What is the Fermi temperature? (c) What is the average speed of the electrons in the fermion gas (see Problem 19-4)? Compare your answer with the speed of light.
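A sketch of this problem under a loudly flagged assumption: full ionization with one electron per nucleon (the slide does not state the ratio), and ⟨v⟩ = (3/4)vF for a T = 0 Fermi gas, which is presumably the result of Problem 19-4:

```python
import math

M, V = 2e30, 1.4e27   # solar mass (kg) and present volume (m^3)
m_nucleon = 1.67e-27  # kg
m_e = 9.11e-31        # kg
hbar = 1.055e-34      # J s
k = 1.38e-23          # J/K
c = 3.0e8             # speed of light, m/s

n = (M / m_nucleon) / V  # electron density, one electron per nucleon assumed
eps_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
T_F = eps_F / k
v_F = math.sqrt(2 * eps_F / m_e)
v_avg = 0.75 * v_F       # <v> = (3/4) v_F for a degenerate Fermi gas

print(f"eps_F = {eps_F / 1.602e-19:.0f} eV, T_F = {T_F:.1e} K")
print(f"<v> = {v_avg:.1e} m/s = {v_avg / c:.3f} c")  # under 1% of c
```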

  17. Chapter 20 Information Theory

  18. 20.1 Introduction • Statistical thermodynamics provides the tools for calculating entropy. • Entropy is a measure of the degree of randomness or disorder of a system. • Disorder implies a lack of information regarding the exact state of the system: a disordered system is one about which we lack complete information.

  19. 20.2 Uncertainty and Information • Claude Shannon laid down the foundations of information theory. • The theory was further developed by Leon Brillouin. • It was applied to statistical thermodynamics by E. T. Jaynes. • The cornerstone of Shannon's theory is the observation that information is a combination of the certain and the uncertain, of the expected and the unexpected.

  20. The degree of surprise generated by a certain event, one that is already known to have occurred, is zero. • If a less probable event is reported, the information conveyed is greater. • The information should increase as the probability of the event decreases.

  21. For a given experiment, consider a set of possible outcomes whose probabilities are p1, p2, …, pn. • It is possible to find a quantity H(p1, …, pn) that measures the amount of uncertainty represented by the given set of probabilities. • Only three conditions are needed to specify the function H(p1, …, pn) to within a constant factor. They are: 1. H is a continuous function of the pi. 2. If all the pi are equal, so that pi = 1/n, then H(1/n, …, 1/n) is a monotonically increasing function of n. 3. If the possible outcomes of a particular experiment depend on the possible outcomes of n subsidiary experiments, then H is the sum of the uncertainties of the subsidiary experiments.

  22. The above discussion leads to g(R) + g(S) = g(RS), which suggests that the function g( ) must be a logarithm. • In general, the function can be written as g(x) = A ln(x) + C, where A and C are constants. • From the earlier transformation g(x) = x f(1/x), one gets (1/p) f(p) = A ln(p) + C, where p = 1/n (n is the total number of events). • Therefore f(p) = A·p·ln(p) + C·p. • Given that the uncertainty H must be zero when the probability is 1, the constant C must equal zero. • Thus f(p) = A·p·ln(p). • Since p is less than 1, ln(p) is negative, and because the uncertainty must be positive the constant A is inherently negative.

  23. Following conventional notation, we write f(p) = −K·p·ln(p), where K is a positive constant. • The uncertainty quantity is H(p1, p2, …, pn) = Σ f(pi). • Thus H(p1, p2, …, pn) = Σ −K·pi·ln(pi) = −K Σ pi·ln(pi). • Example: H(1/2, 1/3, 1/6) = −K·[1/2·ln(1/2) + 1/3·ln(1/3) + 1/6·ln(1/6)] = −K·(−0.347 − 0.366 − 0.299) = 1.01K. From the decomposed procedure, H(1/2, 1/2) + 1/2·H(2/3, 1/3) = −K·[1/2·ln(1/2) + 1/2·ln(1/2)] − 1/2·K·[2/3·ln(2/3) + 1/3·ln(1/3)] = −K·(−0.347 − 0.347) − (K/2)·(−0.270 − 0.366) = 1.01K. • For equally probable events, pi = 1/n and H = K·ln(n).
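The same numbers in a few lines of Python:

```python
import math

def H(probs, K=1.0):
    # Shannon uncertainty H = -K * sum(p * ln p), skipping zero terms
    return -K * sum(p * math.log(p) for p in probs if p > 0)

print(H([1/2, 1/3, 1/6]))                    # ~1.011
print(H([1/2, 1/2]) + 0.5 * H([2/3, 1/3]))   # same value, ~1.011
```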

  24. In the binary case there are two possible outcomes with probabilities p1 and p2, where p1 + p2 = 1, so H = −K·[p1·ln(p1) + p2·ln(p2)]. • To determine H when p1 is 0 or 1, one needs L'Hôpital's rule: lim[u(x)/v(x)] as x approaches 0 equals lim[u′(x)/v′(x)]. • Writing p1·ln(p1) = ln(p1)/(1/p1), as p1 approaches 0 the limit is lim[(1/p1)/(−1/p1²)] = lim(−p1) = 0. • The uncertainty is therefore 0 when either p1 or p2 is zero. • For what value of p1 does H reach its maximum? Differentiate H = −K·[p1·ln(p1) + (1 − p1)·ln(1 − p1)] with respect to p1 and set the derivative to zero: dH/dp1 = −K·[ln(p1) + 1 − ln(1 − p1) − 1] = −K·ln[p1/(1 − p1)] = 0, which leads to p1 = 1/2.
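A brute-force check that the binary uncertainty peaks at p1 = 1/2:

```python
import math

def H2(p, K=1.0):
    # Binary uncertainty H = -K*[p*ln p + (1-p)*ln(1-p)], with 0*ln 0 = 0
    return -K * sum(q * math.log(q) for q in (p, 1 - p) if q > 0)

best = max((H2(i / 1000), i / 1000) for i in range(1001))
print(best)  # (0.693..., 0.5): maximum H = K*ln 2 at p1 = 1/2
```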

  25. 20.3 Unit of Information • Choosing 2 as the base of the logarithm and taking K = 1, the uncertainty of a binary event with equal probabilities is H = log2(2) = 1. • We call this unit of information a bit. • For a decimal digit, H = log2(10) = 3.32, so a decimal digit contains about 3 1/3 bits of information.
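As a one-line check of the decimal-digit figure:

```python
import math
print(math.log2(10))  # 3.3219...: bits of information in one decimal digit
```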

  26. Linguistics • A more refined analysis works in terms of component syllables. One can test what is significant in a syllable of speech by swapping syllables and seeing whether meaning or tense is changed or lost. The accompanying table (not reproduced in this transcript) gives some examples of the application of this statistical approach to works of literature.

  27. Linguistics • The interesting results that arise from such studies include: (a) English has the lowest entropy of any major language, and (b) Shakespeare's work has the lowest entropy of any author studied. • These ideas are now progressing beyond the scientific level and are impinging on new ideas of criticism. Here, as in biology, the thermodynamic notions can be helpful, though they must be applied with caution because concepts such as 'quality' are purely subjective and cannot be measured.
