Essential Graduate Physics - Classical Mechanics, Classical Electrodynamics, Quantum Mechanics, Statistical Mechanics [1-4]
 9780750313988, 9780750314015, 9780750314053, 9780750314084, 9780750314114, 9780750314145, 9780750314166, 9780750314206


Stony Brook University
Academic Commons

Essential Graduate Physics

Department of Physics and Astronomy

2013

References, Appendices & All Parts Merged
Konstantin Likharev, SUNY Stony Brook, [email protected]

Follow this and additional works at: https://commons.library.stonybrook.edu/egp Part of the Physics Commons

Recommended Citation Likharev, Konstantin, "References, Appendices & All Parts Merged" (2013). Essential Graduate Physics. 6. https://commons.library.stonybrook.edu/egp/6

This Book is brought to you for free and open access by the Department of Physics and Astronomy at Academic Commons. It has been accepted for inclusion in Essential Graduate Physics by an authorized administrator of Academic Commons. For more information, please contact [email protected], [email protected].

Konstantin K. Likharev

Essential Graduate Physics Lecture Notes and Problems

Beta version

Essential Graduate Physics Lecture Notes and Problems Beta version, December 2013 with later problem additions and error corrections

© Copyright Konstantin K. Likharev

Open access to this material is provided online, at two mirror sites: http://commons.library.stonybrook.edu/egp/ and https://sites.google.com/site/likharevegp/

under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

– see http://creativecommons.org/licenses/by-nc-sa/4.0/

The above copyright and license do not cover the figures adapted from published books and journal papers (with copyright holders listed in figure captions) and publicly accessible Web sites (with URLs cited in figure captions). Using these materials beyond the “fair use”1 boundaries requires obtaining permissions from the respective copyright holders.

1 To the best of the author’s knowledge, the fair use clause of the U.S. copyright law (see, e.g., http://www.copyright.gov/fls/fl102.html) allows viewing, browsing, and/or downloading copyrighted materials for temporary, non-commercial, personal purposes, without requesting the copyright holder’s permission.

Front Matter

ii

Essential Graduate Physics

K. Likharev

Preface

This is a series of lecture notes and problems on “Essential Graduate Physics”, consisting of the following four parts:

  CM: Classical Mechanics (for a 1-semester course),
  EM: Classical Electrodynamics (2 semesters),
  QM: Quantum Mechanics (2 semesters), and
  SM: Statistical Mechanics (1 semester).

The parts share a teaching style, structure, and (with a few exceptions) notation, and are interlinked by extensive cross-referencing. I believe that due to this unity, the notes may be used for teaching these courses not only in the (preferred) sequence shown above but in almost any order – or in parallel.

Each part is a two-component package consisting of:

  (i) Lecture Notes chapter texts,2 with a list of exercise problems at the end of each chapter, and
  (ii) Exercise and Test Problems with Model Solutions files.

The series also includes this front matter, two brief reference appendices, MA: Selected Mathematical Formulas (16 pp.) and UCA: Selected Units and Constants (4 pp.), and a list of references (2 pp.)

The series is a by-product of the so-called core physics courses I taught at Stony Brook University from 1991 to 2013. Reportedly, most physics departments require their graduate students to either take a set of similar courses or pass comprehensive exams based on an approximately similar body of knowledge. This is why I hope that my notes may be useful for both instructors and students of such courses, as well as for individual learners.

The motivation for composing the lecture notes (which had to be typeset because of my horrible handwriting) and their distribution to Stony Brook students was my desperation to find textbooks I could actually use for teaching.
First of all, the textbooks I could find, including the most influential Theoretical Physics series by Landau and Lifshitz, did not match my class audiences, which included experiment-oriented students, some PhD candidates from other departments, some college graduates with substandard undergraduate backgrounds, and a few advanced undergraduates. Second, given the rigid time restrictions imposed on the core physics courses, most available textbooks are way too long, and using them would mean hopping from one topic to another, picking up a chapter here and a section there, at a high risk of losing the necessary background material and the logical connections between course components – and students’ interest with them. On the other hand, many textbooks lack even brief discussions of several traditional and modern topics that I believe are necessary parts of every professional physicist’s education.3,4

2 The texts are saved as separate .pdf files of each chapter, optimized for two-page viewing and double-sided printing; merged files for each part and the series as a whole, convenient for search purposes, are also provided.

3 To list just a few: statics and dynamics of elastic and fluid continua, basic notions of physical kinetics, turbulence and deterministic chaos, physics of reversible and quantum computation, energy relaxation and dephasing of open quantum systems, the van der Pol method (a.k.a. the Rotating-Wave Approximation, RWA) in classical and quantum mechanics, physics of electrons and holes in semiconductors, the weak-potential and tight-binding approximations in the energy band theory, optical fiber electrodynamics, macroscopic quantum effects in Bose-Einstein condensates, Bloch oscillations and Landau-Zener tunneling, cavity QED, and the Density Functional Theory (DFT). All these topics are discussed, if only concisely, in these notes.

4 Recently, several high-quality graduate-level teaching materials became available online, including M. Fowler’s Graduate Quantum Mechanics Lectures (http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/home.html), R. Fitzpatrick’s text on Classical Electromagnetism (farside.ph.utexas.edu/teaching/jk1/Electromagnetism.pdf), B. Simons’ lecture notes on Advanced Quantum Mechanics (www.tcm.phy.cam.ac.uk/~bds10/aqp.html), and D. Tong’s lecture notes on several topics (www.damtp.cam.ac.uk/user/tong/teaching.html).

The main goal of my courses was to make students familiar with the basic notions and ideas of physics (hence the series’ title), and my main effort was to organize the material in a logical sequence the students could readily follow and enjoy, at each new step understanding why exactly they need to swallow the next knowledge pill. As a side benefit of such a minimalistic goal, I believe that my texts may be used by advanced undergraduate physics students as well. Moreover, I hope that selected parts of the series may be useful for graduate students of other disciplines, including astronomy, chemistry, mechanical engineering, electrical, computer and electronic engineering, and materials science.

At least since Confucius and Sophocles, i.e. for the past 2,500 years, teachers have known that students can master a new concept or method only if they have seen its application to at least a few particular situations. This is why in my notes, the range of theoretical physics methods is limited to the approaches that are indeed necessary for the solution of the problems I had time to discuss, and the introduction of every new technique is always accompanied by an application example or two.

Additional exercise problems are listed at the end of each chapter of the lecture notes; they may be used for homework assignments. Individual readers are strongly encouraged to solve as many of these problems as possible.5 Detailed model solutions of the exercise problems (some with additional expansion of the lecture material), and several shorter problems suitable for tests (also with model solutions), are gathered in six separate files – one per semester. These files are available to both university instructors and individual readers – free of charge, but in return for a signed commitment to avoid unlimited distribution of the solutions – see p. vii below.

For instructors, these files are available not only in Adobe Systems’ Portable Document Format (*.pdf) but also in the Microsoft Office 1997-2003 format (*.doc), free of macros, so that the problem assignments and solutions may be readily grouped, edited, etc., before their distribution to students, using virtually any version of Microsoft Word or independent software tools – e.g., the public-domain OpenOffice.org.

I know that my texts are far from perfect. In particular, some sacrifices made in the topic selection, always very subjective, were extremely painful. (Most regretfully, I could not find time for even a brief introduction to general relativity.6) Moreover, it is almost certain that despite all my efforts and the great help from SBU students and teaching assistants, not all typos/errors have been weeded out. This is why all remarks (however candid) and suggestions from readers would be highly appreciated; they may be sent to [email protected]. All significant contributions will be gratefully acknowledged in future editions of the series.

5 The problems that require either longer calculations or more creative approaches (or both) are marked by asterisks.

6 For an introduction to that subject, I can recommend either its review by S. Carroll, Spacetime and Geometry, Addison-Wesley, 2003, or the longer text by A. Zee, Einstein Gravity in a Nutshell, Princeton U. Press, 2013.


Disclaimer

Since these materials are available free of charge, it is hard to imagine somebody blaming their author for deceiving “customers” for his commercial gain. Still, I would like to go a little bit beyond the usual litigation-avoiding claims,7 and offer a word of caution to potential readers, to preempt their possible later disappointment.

This is NOT a course of theoretical physics – at least in the contemporary sense of the term

Though much of the included material is similar to that in textbooks on “theoretical physics” (most notably in the famous series by L. Landau and E. Lifshitz), this lecture note series differs from them by its focus on the basic concepts and ideas of physics, their relation to experimental data, and the most important applications – rather than on sophisticated theoretical techniques. Indeed, the set of theoretical methods discussed in the notes is limited to the minimum necessary for a quantitative understanding of the key notions of physics and for solving a few (or rather about a thousand :-) core problems. Moreover, because of the notes’ shortness, I have not been able to cover some key fields of theoretical physics, most notably general relativity and quantum field theory – beyond some introductory elements of quantum electrodynamics in QM Chapter 9. If you want to work in modern theoretical physics, you need to know much more than what this series teaches!

Moreover, this is NOT a textbook – at least not a usual one

A usual textbook tries (though most commonly fails) to cover virtually all aspects of the addressed field. As a result, it is typically way too long to be fully read and understood by students during the time allocated for the corresponding course, so instructors are forced to pick out selected chapters and sections, frequently losing the narrative’s logical lines. In contrast, these notes are much shorter (about 200 pages per semester), enabling their thorough reading – perhaps with just a few later sections dropped, depending on the reader’s interests. I have tried to mitigate the losses due to this minimalistic approach by providing quite a few solved problems and extensive further reading recommendations on the topics I had no time to cover. The reader is highly encouraged to use these sources (and/or the corresponding chapters of more detailed textbooks) on any topics of their special interest.

Then, what these notes ARE, and why you may like to use them – I think

By tradition, graduate physics education consists of two main components: research experience and advanced physics courses. Unfortunately, the latter component is currently under pressure in many physics departments, apparently for two reasons. On one hand, the average knowledge level of the students entering graduate school is not improving, so bringing them up to the level of contemporary research becomes increasingly difficult. On the other hand, the research itself is becoming more fragmented, so the students frequently do not feel an immediate need for a broad physics knowledge base for their PhD project success. Some thesis advisors, trying to maximize the time they can use their students as a cheap laboratory workforce, do not help.

I believe that this trend toward the reduction of broad physics education in graduate school is irresponsible. Experience shows that during their future research career, a typical current student will change their research field several times. Starting from scratch in a new field is hard – terribly hard at an advanced age (believe me :-). However, physics is fortunate to have a stable hard core of knowledge, which many other sciences lack. With this knowledge, students will always feel at home in physics, while without it, they may not even be able to understand the research literature in the new field, and would risk being reduced to auxiliary work roles – if any at all.

I have seen the main objective of my Stony Brook courses as giving an introduction to this core of physics, at the same time trying to convey my own enchantment with the unparalleled beauty of the concepts and ideas of this science, and the remarkable logic of their fusion into a wonderful single construct. Let me hope that these notes relay not only the knowledge as such but also at least a part of this enchantment.

7 Yes, Virginia, these notes represent only my personal opinions, not necessarily those of the Department of Physics and Astronomy of Stony Brook University, the SBU at large, the SUNY system as a whole, the Empire State of New York, the federal agencies and private companies that funded my group’s research, etc. No dear, I cannot be held responsible for any harm, either bodily or mental, that their reading may (?) cause.

Acknowledgments

I am extremely grateful to my faculty colleagues and other readers who commented on certain sections of the notes; here is their list (in alphabetical order):8 A. Abanov, P. Allen, D. Averin, S. Berkovich, P.-T. de Boer, M. Fernandez-Serra, R. F. Hernandez, P. Johnson, T. Konstantinova, A. Korotkov, V. Semenov, F. Sheldon, S. Sridharan, E. Tikhonov, O. Tikhonova, X. Wang, T.-C. Wei. (Obviously, they are not responsible for the remaining deficiencies.)

The Department of Physics and Astronomy of Stony Brook University was very responsive to my requests for a certain time ordering of my teaching assignments, which was beneficial for note writing and editing. The department, and the university as a whole, also provided a very friendly general environment for my work there for almost three decades.

A large part of my scientific background and research experience reflected in these materials came from my education (and then work) in the Department of Physics of Moscow State University.

And last but not least, I would like to thank my wife Lioudmila for several good bits of advice on the aesthetic aspects of note typesetting, and more importantly for all her love, care, and patience – without them, this writing project would have been impossible.

Konstantin K. Likharev
https://you.stonybrook.edu/likharev/

8 I am very sorry that I have not kept proper records from the beginning of my lectures at Stony Brook, so I cannot list all the numerous students and TAs who kindly attracted my attention to typos in earlier versions of these notes. Needless to say, I am very grateful to them all as well.


Problem Solution Request Templates

Requests should be sent to either [email protected] or [email protected], in either of the following forms:
- an email from a valid university address, or
- a scanned copy of a signed letter – as an email attachment.

Approximate contents:

A. Request from a Prospective Instructor

Dear Dr. Likharev,
I plan to use your lecture notes and problems of the Essential Graduate Physics series, part(s) ____, in my course ____ during ____ in the ____. I would appreciate your sending me the file(s) Exercise and Test Problems with Model Solutions of that part(s) of the series, in the ____ format(s).
I will avoid unlimited distribution of the solutions, in particular their posting on externally searchable websites. If I distribute the solutions among my students, I will ask them to adhere to the same restraint. I will let you know of any significant typos/deficiencies I may find.
Sincerely, ____

B. Request from an Individual Learner

Dear Dr. Likharev,
I plan to use your lecture notes and problems of the Essential Graduate Physics series, part(s) ____, for my personal education. I would appreciate your sending me the file(s) Exercise and Test Problems with Model Solutions of that part(s) of the series.
I will not share the material with anyone, and will not use it for passing courses that are officially based on your series. I will let you know of any significant typos/deficiencies I may find.
Sincerely, ____


Notation

Abbreviations
  Eq.    any formula (e.g., equality)
  Fig.   figure
  Sec.   section
  c.c.   complex conjugate
  h.c.   Hermitian conjugate

Fonts
  F, F     scalar variables9
  F, F     vector variables
  F̂, F̂     scalar operators
  F̂, F̂     vector operators
  F        matrix
  Fjj′     matrix element

Symbols
  ˙ (overdot)   time differentiation operator (d/dt)
  ∇             spatial differentiation vector (del)
  ≈             approximately equal to
  ~             of the same order as
  ∝             proportional to
  ≡             equal to by definition (or evidently)
  ·             scalar (“dot-”) product
  ×             vector (“cross-”) product10
  ‾ (overbar)   time averaging
  ⟨ ⟩           statistical averaging
  [ , ]         commutator
  { , }         anticommutator
  n             unit vector

Parts of the series
  CM: Classical Mechanics
  EM: Classical Electrodynamics
  QM: Quantum Mechanics
  SM: Statistical Mechanics

Appendices
  MA: Selected Mathematical Formulas
  UCA: Selected Units and Constants

Prime signs
The prime signs (′, ″, etc.) are used to distinguish similar variables or indices (such as j and j′ in the matrix element above), rather than to denote derivatives.

Italics
In the text, italic fonts are used for emphasis – in particular, of new terms at their first usage.

Frames
The most general and/or important formulas are highlighted with blue frames and short titles on the margins.

Numbering
Chapter numbers are dropped in all references to formulas, figures, footnotes, and problems within the same chapter.

9 The same letter, typeset in different fonts, typically denotes different variables.

10 On a few occasions, the cross sign is used to emphasize the usual multiplication of scalars.


General Table of Contents

CM: Classical Mechanics                                            Pages   Exercise Problems
  Table of Contents                                                    4     –
  Chapter 1. Review of Fundamentals                                   14    14
  Chapter 2. Lagrangian Analytical Mechanics                          14    11
  Chapter 3. A Few Simple Problems                                    22    27
  Chapter 4. Rigid Body Motion                                        32    36
  Chapter 5. Oscillations                                             38    22
  Chapter 6. From Oscillations to Waves                               30    26
  Chapter 7. Deformations and Elasticity                              38    23
  Chapter 8. Fluid Mechanics                                          30    25
  Chapter 9. Deterministic Chaos                                      14     5
  Chapter 10. A Bit More of Analytical Mechanics                      16    10
  CM TOTAL:                                                          252   199
  Additional file (available upon request):
  Exercise and Test Problems with Model Solutions                    338   199 + 47 = 246 problems

EM: Classical Electrodynamics                                      Pages   Exercise Problems
  Table of Contents                                                    4     –
  Chapter 1. Electric Charge Interaction                              20    20
  Chapter 2. Charges and Conductors                                   68    47
  Chapter 3. Dipoles and Dielectrics                                  28    30
  Chapter 4. DC Currents                                              16    15
  Chapter 5. Magnetism                                                42    29
  Chapter 6. Electromagnetism                                         38    31
  Chapter 7. Electromagnetic Wave Propagation                         70    43
  Chapter 8. Radiation, Scattering, Interference, and Diffraction     38    28
  Chapter 9. Special Relativity                                       56    42
  Chapter 10. Radiation by Relativistic Charges                       40    15
  EM TOTAL:                                                          420   300
  Additional file (available upon request):
  Exercise and Test Problems with Model Solutions                    464   300 + 50 = 350 problems

QM: Quantum Mechanics                                              Pages   Exercise Problems
  Table of Contents                                                    4     –
  Chapter 1. Introduction                                             26    18
  Chapter 2. 1D Wave Mechanics                                        76    47
  Chapter 3. Higher Dimensionality Effects                            64    49
  Chapter 4. Bra-ket Formalism                                        52    36
  Chapter 5. Some Exactly Solvable Problems                           48    55
  Chapter 6. Perturbative Approaches                                  36    31
  Chapter 7. Open Quantum Systems                                     54    17
  Chapter 8. Multiparticle Systems                                    52    35
  Chapter 9. Introduction to Relativistic Quantum Mechanics           36    22
  Chapter 10. Making Sense of Quantum Mechanics                       16     1
  QM TOTAL:                                                          464   311
  Additional file (available upon request):
  Exercise and Test Problems with Model Solutions                    574   311 + 69 = 380 problems

SM: Statistical Mechanics                                          Pages   Exercise Problems
  Table of Contents                                                    4     –
  Chapter 1. Review of Thermodynamics                                 24    18
  Chapter 2. Principles of Physical Statistics                        44    36
  Chapter 3. Ideal and Not-So-Ideal Gases                             34    30
  Chapter 4. Phase Transitions                                        36    24
  Chapter 5. Fluctuations                                             44    30
  Chapter 6. Elements of Kinetics                                     38    18
  SM TOTAL:                                                          224   156
  Additional file (available upon request):
  Exercise and Test Problems with Model Solutions                    272   156 + 26 = 182 problems

Appendices                                                         Pages
  MA: Selected Mathematical Formulas                                  16
  UCA: Selected Units and Constants                                    4
  References (a partial list of books used at work on the series)      2

Konstantin K. Likharev

Essential Graduate Physics Lecture Notes and Problems Beta version Open online access at http://commons.library.stonybrook.edu/egp/ and https://sites.google.com/site/likharevegp/

Part CM: Classical Mechanics Last corrections: June 15, 2023

A version of this material was published in 2017 under the title

Classical Mechanics: Lecture notes
IOPP, Essential Advanced Physics – Volume 1, ISBN 978-0-7503-1398-8,

with model solutions of the exercise problems published in 2018 under the title

Classical Mechanics: Problems with solutions
IOPP, Essential Advanced Physics – Volume 2, ISBN 978-0-7503-1401-5.

However, by now, this online version of the lecture notes, as well as the problem solutions available from the author upon request, have been corrected much more thoroughly.

About the author: https://you.stonybrook.edu/likharev/

© K. Likharev

Essential Graduate Physics

CM: Classical Mechanics

Table of Contents

Chapter 1. Review of Fundamentals (14 pp.)
  1.0. Terminology: Mechanics and dynamics
  1.1. Kinematics: Basic notions
  1.2. Dynamics: Newton laws
  1.3. Conservation laws
  1.4. Potential energy and equilibrium
  1.5. OK, can we go home now?
  1.6. Self-test problems (14)

Chapter 2. Lagrangian Analytical Mechanics (14 pp.)
  2.1. Lagrange equation
  2.2. Three simple examples
  2.3. Hamiltonian function and energy
  2.4. Other conservation laws
  2.5. Exercise problems (11)

Chapter 3. A Few Simple Problems (22 pp.)
  3.1. One-dimensional and 1D-reducible systems
  3.2. Equilibrium and stability
  3.3. Hamiltonian 1D systems
  3.4. Planetary problems
  3.5. Elastic scattering
  3.6. Exercise problems (27)

Chapter 4. Rigid Body Motion (32 pp.)
  4.1. Translation and rotation
  4.2. Inertia tensor
  4.3. Fixed-axis rotation
  4.4. Free rotation
  4.5. Torque-induced precession
  4.6. Non-inertial reference frames
  4.7. Exercise problems (36)

Chapter 5. Oscillations (38 pp.)
  5.1. Free and forced oscillations
  5.2. Weakly nonlinear oscillations
  5.3. Reduced equations
  5.4. Self-oscillations and phase locking
  5.5. Parametric excitation
  5.6. Fixed point classification
  5.7. Numerical approaches
  5.8. Higher-harmonic and subharmonic oscillations
  5.9. Relaxation oscillations
  5.10. Exercise problems (22)



Chapter 6. From Oscillations to Waves (30 pp.)
  6.1. Two coupled oscillators
  6.2. N coupled oscillators
  6.3. 1D waves
  6.4. Acoustic waves
  6.5. Standing waves
  6.6. Wave decay and attenuation
  6.7. Nonlinear and parametric effects
  6.8. Exercise problems (26)

Chapter 7. Deformations and Elasticity (38 pp.)
  7.1. Strain
  7.2. Stress
  7.3. Hooke’s law
  7.4. Equilibrium
  7.5. Rod bending
  7.6. Rod torsion
  7.7. 3D acoustic waves
  7.8. Elastic waves in thin rods
  7.9. Exercise problems (23)

Chapter 8. Fluid Mechanics (30 pp.)
  8.1. Hydrostatics
  8.2. Surface tension effects
  8.3. Kinematics
  8.4. Dynamics: Ideal fluids
  8.5. Dynamics: Viscous fluids
  8.6. Turbulence
  8.7. Exercise problems (25)

Chapter 9. Deterministic Chaos (14 pp.)
  9.1. Chaos in maps
  9.2. Chaos in dynamic systems
  9.3. Chaos in Hamiltonian systems
  9.4. Chaos and turbulence
  9.5. Exercise problems (5)

Chapter 10. A Bit More of Analytical Mechanics (16 pp.)
  10.1. Hamilton equations
  10.2. Adiabatic invariance
  10.3. The Hamilton principle
  10.4. The Hamilton-Jacobi equation
  10.5. Exercise problems (10)

***

Additional file (available from the author upon request):
Exercise and Test Problems with Model Solutions (199 + 47 = 246 problems; 338 pp.)


***

Author’s Remarks

This course mostly follows the well-established traditions of teaching classical mechanics at the graduate level. Its most distinguishing feature is the substantial attention to the mechanics of physical continua, including the discussions of 1D waves in Chapter 6, deformations and elasticity (including 3D waves) in Chapter 7, and fluid dynamics in Chapter 8. Its other not-quite-standard feature is that the introduction to analytical mechanics, starting with the Lagrangian formalism in Chapter 2, is based on the experiment-based Newton’s laws rather than on general concepts such as the Hamilton principle, which is discussed only at the end of the course (Sec. 10.3). I feel that this route better emphasizes the experimental roots of physics, and the secondary nature of any general principles – regardless of their aesthetic and heuristic value.



Chapter 1. Review of Fundamentals

After a brief discussion of the title and contents of the course, this introductory chapter reviews the basic notions and facts of non-relativistic classical mechanics that are supposed to be known to the reader from their undergraduate studies.1 For this reason, the discussion is very short.

1.0. Terminology: Mechanics and dynamics

A fairer title for this course would be Classical Mechanics and Dynamics, because the notions of mechanics and dynamics, though much intertwined, are still somewhat different. The term mechanics, in its narrow sense, means the derivation of the equations of motion of point-like particles and their systems (including solids and fluids), the solution of these equations, and the interpretation of the results. Dynamics is a more ambiguous term; it may mean, in particular:

(i) the part of physics that deals with motion (in contrast to statics);
(ii) the part of physics that deals with the reasons for motion (in contrast to kinematics);
(iii) the part of mechanics that focuses on its two last tasks, i.e. the solution of the equations of motion and the discussion of the results.2

Because of this ambiguity, after some hesitation, I have opted to use the traditional name Classical Mechanics, with the word Mechanics understood in its broad sense that includes (similarly to Quantum Mechanics and Statistical Mechanics) studies of the dynamics of some non-mechanical systems as well.

1.1. Kinematics: Basic notions

The basic notions of kinematics may be defined in various ways, and some mathematicians pay much attention to alternative systems of axioms and the relations between them. In physics, we typically stick to less rigorous ways (in order to proceed faster to solving particular problems) and stop debating any definition as soon as “everybody in the room” agrees that we are all speaking about the same thing – at least in the context in which the notions are being discussed.
Let me hope that the following notions used in classical mechanics do satisfy this criterion in our “room”:

1 The reader is advised to perform (perhaps after reading this chapter as a reminder) a self-check by solving a few of the problems listed in Sec. 1.6. If the results are not satisfactory, it may make sense to start with some remedial reading. For that, I could recommend, e.g., J. Marion and S. Thornton, Classical Dynamics of Particles and Systems, 5th ed., Saunders, 2003; and D. Morin, Introduction to Classical Mechanics, Cambridge U., 2008.

2 The reader may have noticed that the last definition of dynamics is suspiciously close to the part of mathematics devoted to differential equation analysis; what is the difference? An important bit of philosophy: physics may be defined as an art (and a bit of science :-) of describing Mother Nature by mathematical means; hence in many cases the approaches of a mathematician and a physicist to a problem are very similar. The main difference between them is that physicists try to express the results of their analyses in terms of the properties of the systems under study, rather than the functions describing them, and as a result develop a sort of intuition (“gut feeling”) about how other similar systems may behave, even if their exact equations of motion are somewhat different – or not known at all. The intuition so developed has enormous heuristic power, and most discoveries in physics have been made through gut-feeling-based insights rather than by plugging one formula into another one.


(i) All the Euclidean geometry notions, including the point, the straight line, the plane, etc.3

(ii) Reference frames: platforms for the observation and mathematical description of physical phenomena. A reference frame includes a coordinate system used for measuring the point’s position (namely, its radius vector r that connects the coordinate origin to the point – see Fig. 1) and a clock that measures time t. A coordinate system may be understood as a certain method of expressing the radius vector r of a point as a set of its scalar coordinates. The most important of such systems (but by no means the only one) are the Cartesian (orthogonal, linear) coordinates4 rj of a point, in which its radius vector may be represented as the following sum:

    \mathbf{r} = \sum_{j=1}^{3} \mathbf{n}_j r_j ,    (1.1)    [Cartesian coordinates]

where n1, n2, and n3 are unit vectors directed along the coordinate axes – see Fig. 1.5

[Figure: a point with its radius vector r drawn from the origin 0, with unit vectors n1, n2, n3 along the axes and coordinates r1 ≡ x, r2 ≡ y, r3 ≡ z.]
Fig. 1.1. Cartesian coordinates of a point.

(iii) The absolute ("Newtonian") space/time,⁶ which does not depend on the matter distribution. The space is assumed to have the Euclidean metric, which may be expressed as the following relation between the length r of any radius vector r and its Cartesian coordinates:

Euclidean metric:
$$ r^2 \equiv |\mathbf{r}|^2 = \sum_{j=1}^{3} r_j^2, \tag{1.2} $$

while time t is assumed to run similarly in all reference frames. These assumptions are critically revised in the relativity theory (which, in this series, is discussed only starting from EM Chapter 9.)
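Relations (1) and (2) are easy to sanity-check numerically; the short sketch below (in Python, with arbitrary illustrative coordinates of my own choosing) assembles a radius vector from its Cartesian components and verifies the Euclidean metric:

```python
import numpy as np

# Right-handed Cartesian unit vectors n1, n2, n3 (rows of the identity matrix)
n = np.eye(3)
r_j = np.array([1.0, 2.0, 2.0])   # illustrative coordinates r1 = x, r2 = y, r3 = z

# Eq. (1.1): r = sum over j of n_j * r_j
r = sum(r_j[j] * n[j] for j in range(3))

# Eq. (1.2): Euclidean metric, r^2 = sum over j of r_j^2
assert np.isclose(r @ r, np.sum(r_j**2))
print(np.linalg.norm(r))  # length of r; here sqrt(1 + 4 + 4) = 3.0
```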

³ All these notions are of course abstractions: simplified models of the real objects existing in Nature. But please always remember that any quantitative statement made in physics (e.g., a formula) may be strictly valid only for an approximate model of a physical system. (The reader should not be disheartened too much by this fact: experiments show that many models make extremely precise predictions of the behavior of the real systems.)
⁴ In this series, the Cartesian coordinates (introduced in 1637 by René Descartes, a.k.a. Cartesius) are denoted either as {r1, r2, r3} or {x, y, z}, depending on convenience in each particular case. Note that axis numbering is important for operations like the vector ("cross") product; the "correct" (meaning generally accepted) numbering order is such that the rotation n1 → n2 → n3 → n1… looks counterclockwise if watched from a point with all rj > 0 – like the one shown in Fig. 1.
⁵ Note that the representation (1) is also possible for locally-orthogonal but curvilinear (for example, polar/cylindrical and spherical) coordinates, which will be extensively used in this series. However, such coordinates are not Cartesian, and for them some of the relations given below are invalid – see, e.g., MA Sec. 10.
⁶ These notions were formally introduced by Sir Isaac Newton in his main work, the three-volume Philosophiae Naturalis Principia Mathematica published in 1686-1687, but are rooted in earlier ideas by Galileo Galilei, published in 1632.


(iv) The (instant) velocity of the point,

Velocity:
$$ \mathbf{v}(t) \equiv \frac{d\mathbf{r}}{dt} \equiv \dot{\mathbf{r}}, \tag{1.3} $$

and its acceleration:

Acceleration:
$$ \mathbf{a}(t) \equiv \frac{d\mathbf{v}}{dt} \equiv \dot{\mathbf{v}} \equiv \ddot{\mathbf{r}}. \tag{1.4} $$

(v) Transfer between reference frames. The above definitions of vectors r, v, and a depend on the chosen reference frame (are "reference-frame-specific"), and we frequently need to relate those vectors as observed in different frames. Within Euclidean geometry, the relation between the radius vectors in two frames with the corresponding axes parallel at the moment of interest (Fig. 2) is very simple:

Radius vector's transformation:
$$ \mathbf{r}\big|_{\text{in } 0'} = \mathbf{r}\big|_{\text{in } 0} + \mathbf{r}_0\big|_{\text{in } 0'}. \tag{1.5} $$

[Fig. 1.2. Transfer between two reference frames: the radius vectors r|in 0 and r|in 0′ of the same point, and the radius vector r0|in 0′ of the origin of frame 0 as observed in frame 0′.]

If the frames move versus each other by translation only (no mutual rotation!), similar relations are valid for the velocities and accelerations as well:

$$ \mathbf{v}\big|_{\text{in } 0'} = \mathbf{v}\big|_{\text{in } 0} + \mathbf{v}_0\big|_{\text{in } 0'}, \tag{1.6} $$

$$ \mathbf{a}\big|_{\text{in } 0'} = \mathbf{a}\big|_{\text{in } 0} + \mathbf{a}_0\big|_{\text{in } 0'}. \tag{1.7} $$

Note that in the case of mutual rotation of the reference frames, the transfer laws for velocities and accelerations are more complex than those given by Eqs. (6) and (7). Indeed, in this case, notions like v0|in 0′ are not well defined: different points of an imaginary rigid body connected to frame 0 may have different velocities when observed in frame 0′. It will be more natural for me to discuss these more general relations at the end of Chapter 4, devoted to rigid body motion.

(vi) A particle (or "point particle"): a localized physical object whose size is negligible, and whose shape is irrelevant to the given problem. Note that the last qualification is extremely important. For example, the size and shape of a spaceship are not too important for the discussion of its orbital motion but are paramount when its landing procedures are being developed. Since classical mechanics neglects the quantum mechanical uncertainties,⁷ in it, the position of a particle at any particular instant t may be identified with a single geometrical point, i.e. with a single radius vector r(t). The formal final goal of classical mechanics is finding the laws of motion r(t) of all particles participating in the given problem.

⁷ This approximation is legitimate when the product of the coordinate and momentum scales of the particle motion is much larger than Planck's constant ℏ ~ 10⁻³⁴ J·s. More detailed conditions of the classical mechanics' applicability depend on a particular system – see, e.g., the QM part of this series.


1.2. Dynamics: Newton's laws

Generally, the classical dynamics is fully described (in addition to the kinematic relations discussed above) by three Newton's laws. In contrast to the impression some textbooks on theoretical physics try to create, these laws are experimental in nature and cannot be derived from purely theoretical arguments. I am confident that the reader of these notes is already familiar with Newton's laws,⁸ in some formulation. Let me note only that in some formulations, the 1st Newton's law looks just like a particular case of the 2nd law – when the net force acting on a particle equals zero. To avoid this duplication, the 1st law may be formulated as the following postulate:

1st Newton's law: There exists at least one reference frame, called inertial, in which any free particle (i.e. a particle fully isolated from the rest of the Universe) moves with v = const, i.e. with a = 0.

Note that according to Eq. (7), this postulate immediately means that there is also an infinite number of inertial reference frames – because all frames 0′ moving without rotation or acceleration relative to the postulated inertial frame 0 (i.e. having a0|in 0′ = 0) are also inertial. On the other hand, the 2nd and 3rd Newton's laws may be postulated together in the following elegant way. Each particle, say number k, may be characterized by a scalar constant (called mass mk), such that at any interaction of N particles (isolated from the rest of the Universe), in any inertial system,

Total momentum and its conservation:
$$ \mathbf{P} \equiv \sum_{k=1}^{N} \mathbf{p}_k \equiv \sum_{k=1}^{N} m_k \mathbf{v}_k = \text{const}. \tag{1.8} $$

(Each component of this sum,

Particle's momentum:
$$ \mathbf{p}_k \equiv m_k \mathbf{v}_k, \tag{1.9} $$

is called the mechanical momentum⁹ of the corresponding particle, while the sum P, the total momentum of the system.)

Let us apply this postulate to just two interacting particles. Differentiating Eq. (8), written for this case, over time, we get

$$ \dot{\mathbf{p}}_1 = -\dot{\mathbf{p}}_2. \tag{1.10} $$

Let us give the derivative ṗ1 (which is a vector) the name of the force F exerted on particle 1. In our current case, when the only possible source of the force is particle 2, it may be denoted as F12: ṗ1 ≡ F12. Similarly, F21 ≡ ṗ2, so Eq. (10) becomes the 3rd Newton's law:

3rd Newton's law:
$$ \mathbf{F}_{12} = -\mathbf{F}_{21}. \tag{1.11} $$

Plugging Eq. (1.9) into these force definitions, and differentiating the products mkvk, taking into account that the particle masses are constants,¹⁰ we get that, for k and k′ taking any of the values 1, 2,

⁸ Due to the genius of Sir Isaac, these laws were formulated in the same Principia (1687), well ahead of the physics of his time.
⁹ The more extended term linear momentum is typically used only in cases when there is a chance of confusion with the angular momentum of the same particle/system – see below. The present-day definition of linear momentum and the term itself belong to John Wallis (1670), but the concept may be traced back to more vague notions of several previous scientists – all the way back to at least a 570 AD work by John Philoponus.


mk v k  mk a k  Fkk ' ,

where k'  k . .

(1.12)
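These postulates are easy to watch in action numerically: in the brief sketch below (the pair-force law and all parameters are my own illustrative choices, not taken from the text), two particles interact via equal and opposite forces, Eq. (11), and the total momentum (8) stays constant throughout the integration:

```python
import numpy as np

# Two particles interacting via a linear ("spring") pair force; any central
# pair force obeying Eq. (1.11), F21 = -F12, would do for this check.
m1, m2, k = 1.0, 3.0, 2.0
r1, r2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([0.0, 0.5]), np.array([0.0, -0.3])

P0 = m1 * v1 + m2 * v2            # total momentum, Eq. (1.8)
dt = 1e-4
for _ in range(10_000):
    F12 = -k * (r1 - r2)          # force exerted on particle 1 by particle 2
    v1 = v1 + F12 / m1 * dt       # 2nd law, Eq. (1.12), for particle 1
    v2 = v2 - F12 / m2 * dt       # 3rd law: F21 = -F12
    r1 = r1 + v1 * dt
    r2 = r2 + v2 * dt

P = m1 * v1 + m2 * v2
print(P0, P)  # the two vectors coincide to rounding accuracy
```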

Now, returning to the general case of several interacting particles, and making an additional (but very natural) assumption that all partial forces Fkk′ acting on particle k add up as vectors, we may generalize Eq. (12) into the 2nd Newton's law:

2nd Newton's law:
$$ m_k \mathbf{a}_k \equiv \dot{\mathbf{p}}_k = \sum_{k' \neq k} \mathbf{F}_{kk'} \equiv \mathbf{F}_k, \tag{1.13} $$

that allows a clear interpretation of the mass as a measure of a particle's inertia. As a matter of principle, if the dependence of all pair forces Fkk′ on particle positions (and generally on time as well) is known, Eq. (13) augmented with the kinematic relations (3) and (4) allows calculation of the laws of motion rk(t) of all particles of the system. For example, for one particle the 2nd law (13) gives an ordinary differential equation of the second order,

$$ m\ddot{\mathbf{r}} = \mathbf{F}(\mathbf{r}, t), \tag{1.14} $$

which may be integrated – either analytically or numerically.
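As a minimal numerical illustration (the force law, step size, and initial conditions below are mine, not the text's), Eq. (14) can be integrated with any standard ODE scheme; the velocity-Verlet method applied to a uniform gravity force reproduces the familiar parabolic trajectory:

```python
import numpy as np

def solve_newton(force, m, r0, v0, dt, n_steps):
    """Velocity-Verlet integration of m * d2r/dt2 = F(r, t), Eq. (1.14)."""
    r, v, t = np.asarray(r0, float), np.asarray(v0, float), 0.0
    a = force(r, t) / m
    for _ in range(n_steps):
        r = r + v * dt + 0.5 * a * dt**2
        a_new = force(r, t + dt) / m
        v = v + 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
    return r, v

# Illustration: a projectile in uniform gravity, F = m*g with g along -z
g = np.array([0.0, 0.0, -9.8])
r, v = solve_newton(lambda r, t: 1.0 * g, 1.0, [0, 0, 0], [1.0, 0, 2.0], 1e-3, 1000)
# Analytically, after t = 1 s: x = 1, z = 2 - 9.8/2 = -2.9
print(r)
```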


A typical rigid body consists of a huge number, N >> 1, of particles (~10²³ of them if we think about atoms in a 1-cm-scale body). If the only way to analyze its motion were to write Newton's laws for each of the particles, the situation would be completely hopeless. Fortunately, the number of constraints imposed on its motion is almost similarly huge. (At negligible deformations of the body, the distances between each pair of its particles should be constant.) As a result, the number of actual degrees of freedom of such a body is small (at negligible deformations, just six – see Sec. 4.1), so with the kind help from analytical mechanics, the motion of the body may be, in many important cases, analyzed even without numerical calculations.

One more important motivation for analytical mechanics is given by the dynamics of "non-mechanical" systems, for example, of the electromagnetic field – possibly interacting with charged particles, conducting bodies, etc. In many such systems, the easiest (and sometimes the only practicable) way to find the equations of motion is to derive them from either the Lagrangian or Hamiltonian function of the system. Moreover, the Hamiltonian formulation of analytical mechanics (to be reviewed in Chapter 10 below) offers a direct pathway to deriving quantum-mechanical Hamiltonian operators of various systems, which are necessary for the analysis of their quantum properties.

1.6. Self-test problems

1.1. A bicycle, ridden with velocity v on wet pavement, has no mudguards on its wheels. How far behind should the following biker ride to avoid being splashed over? Neglect the air resistance effects.

engines, though it had been used in European windmills at least since the early 1600s.) Just as a curiosity: the now-ubiquitous term cybernetics was coined by Norbert Wiener in 1948 from the word "governor" (or rather from its Ancient-Greek original κυβερνήτης) exactly in this meaning, because the centrifugal governor had been the first well-studied control device.


1.2. Two round disks of radius R are firmly connected with a coaxial cylinder of a smaller radius r, and a thread is wound on the resulting spool. The spool is placed on a horizontal surface, and the thread's end is being pulled out at angle φ to the horizontal – see the figure on the right. Assuming that the spool does not slip on the surface, in what direction would it roll?

1.3.* Calculate the equilibrium shape of a flexible heavy rope of length l, with a constant mass μ per unit length, if it is hung in a uniform gravity field between two points separated by a horizontal distance d – see the figure on the right.

1.4. A uniform, long, thin bar is placed horizontally on two similar round cylinders rotating toward each other with the same angular velocity ω and displaced by distance d – see the figure on the right. Calculate the laws of relatively slow horizontal motion of the bar within the plane of the drawing, for both possible directions of cylinder rotation, assuming that the friction force between the slipping surfaces of the bar and each cylinder obeys the simple Coulomb approximation²⁷ |F| = μN, where N is the normal pressure force between them, and μ is a constant (velocity-independent) coefficient. Formulate the condition of validity of your result.

1.5. A small block slides, without friction, down a smooth slide that ends with a round loop of radius R – see the figure on the right. What smallest initial height h allows the block to make its way around the loop without dropping from the slide, if it is launched with negligible initial velocity?

1.6. A satellite of mass m is being launched from height H over the surface of a spherical planet with radius R and mass M >> m – see the figure on the right. Find the range of initial velocities v0 (normal to the radius) providing closed orbits above the planet's surface.

1.7. Prove that the thin-uniform-disk model of a galaxy describes small sinusoidal ("harmonic") oscillations of stars inside it, along the direction normal to the disk, and calculate the frequency of these oscillations in terms of Newton's gravitational constant G and the average density ρ of the disk's matter.

²⁷ It was suggested in 1785 by the same Charles-Augustin de Coulomb who discovered the famous Coulomb law of electrostatics, and hence pioneered the whole quantitative science of electricity – see EM Ch. 1.


1.8. Derive differential equations of motion for small oscillations of two similar pendula coupled with a spring (see the figure on the right), within their common vertical plane. Assume that at the vertical position of both pendula, the spring is not stretched (ΔL = 0), and that its elastic force obeys F = κΔL.

1.9. One of the popular futuristic concepts of travel is digging a straight railway tunnel through the Earth and letting a train go through it, without initial velocity – driven only by gravity. Calculate the train's travel time through such a tunnel, assuming that the Earth's density ρ is constant, and neglecting the friction and planet-rotation effects.

1.10. A small bead of mass m may slide, without friction, along a light string stretched with force T >> mg between two points separated by a horizontal distance 2d – see the figure on the right. Calculate the frequency of horizontal oscillations of the bead about its equilibrium position.

1.11. For a rocket accelerating due to its working jet motor (and hence spending the jet fuel), calculate the relation between its velocity and the remaining mass.

Hint: For the sake of simplicity, consider the 1D motion.

1.12. Prove the following virial theorem:²⁸ for a set of N particles performing a periodic motion,

$$ \overline{T} = -\frac{1}{2} \sum_{k=1}^{N} \overline{\mathbf{F}_k \cdot \mathbf{r}_k}, $$

where the top bar means averaging over time – in this case over the motion period. What does the virial theorem say about:

(i) a 1D motion of a particle in the confining potential²⁹ U(x) = ax²ˢ, with a > 0 and s > 0, and
(ii) an orbital motion of a particle in the central potential U(r) = –C/r?

Hint: Explore the time derivative of the following scalar function of time:

$$ G(t) \equiv \sum_{k=1}^{N} \mathbf{p}_k \cdot \mathbf{r}_k. $$
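As a quick numerical illustration of the theorem's statement (not a substitute for the requested proof; all parameters below are my own arbitrary choices), one can compare the time averages of T and of –(1/2) F·x for a 1D oscillator with U = ax², i.e. the s = 1 case of part (i):

```python
import numpy as np

# 1D oscillator: U(x) = a*x^2, F = -dU/dx = -2*a*x; the exact motion is sinusoidal
m, a, x0 = 1.0, 1.0, 0.7
omega = np.sqrt(2 * a / m)                       # oscillation frequency for U = a*x^2
t = np.linspace(0.0, 2 * np.pi / omega, 100_001)  # one full period, densely sampled
x = x0 * np.cos(omega * t)
v = -x0 * omega * np.sin(omega * t)

T_bar = np.mean(0.5 * m * v**2)                  # time-averaged kinetic energy
virial_bar = np.mean(-0.5 * (-2 * a * x) * x)    # -(1/2) * <F * x>
print(T_bar, virial_bar)  # the two averages nearly coincide, as the theorem states
```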

1.13. As will be discussed in Chapter 8, if a body moves through a fluid with a sufficiently high velocity v, the fluid's drag force is approximately proportional to v². Use this approximation (introduced by Sir Isaac Newton himself) to find the velocity as a function of time during the body's vertical fall in the air near the Earth's surface.

1.14. A particle of mass m, moving with velocity u, collides head-on with a particle of mass M, initially at rest. Calculate the velocities of both particles after the collision, if the initial energy of the system is barely sufficient for an increase of its internal energy by ΔE.

²⁸ It was first stated by Rudolf Clausius in 1870.
²⁹ Here and below I am following the (regretful) custom of using the single word "potential" for the potential energy of the particle – just for brevity. This custom is also common in quantum mechanics, but in electrodynamics, these two notions should be clearly distinguished – as they are in the EM part of this series.


Chapter 2. Lagrangian Analytical Mechanics

The goal of this chapter is to describe the Lagrangian formalism of analytical mechanics, which is extremely useful for obtaining the differential equations of motion (and sometimes their first integrals) not only for mechanical systems with holonomic constraints but also for some other dynamic systems.

2.1. Lagrange equations

In many cases, the constraints imposed on the 3D motion of a system of N particles may be described by N vector (i.e. 3N scalar) algebraic equations

$$ \mathbf{r}_k = \mathbf{r}_k(q_1, q_2, \ldots, q_j, \ldots, q_J, t), \quad \text{with } 1 \leq k \leq N, \tag{2.1} $$

where qj are certain generalized coordinates that (together with constraints) completely define the system position. Their number J ≤ 3N is called the number of the actual degrees of freedom of the system. The constraints that allow such a description are called holonomic.¹ For example, for the problem already mentioned in Section 1.5, namely the bead sliding along a rotating ring (Fig. 1), J = 1, because with the constraints imposed by the ring, the bead's position is uniquely determined by just one generalized coordinate – for example, its polar angle θ.

[Fig. 2.1. A bead on a ring of radius R, rotating about its vertical diameter with angular velocity ω, as an example of a system with just one degree of freedom (J = 1): the bead's position on the ring is fully specified by the polar angle θ, with gravity mg directed downward.]

Indeed, selecting the reference frame as shown in Fig. 1 and using the well-known formulas for the spherical coordinates,² we see that in this case, Eq. (1) has the form

$$ \mathbf{r} \equiv \{x, y, z\} = \{R\sin\theta\cos\varphi,\; R\sin\theta\sin\varphi,\; -R\cos\theta\}, \quad \text{where } \varphi = \omega t + \text{const}, \tag{2.2} $$

with the last constant depending on the exact selection of the axes x and y and the time origin. Since the angle φ, in this case, is a fixed function of time, and R is a fixed constant, the particle's position in space

¹ Possibly, the simplest counter-example of a non-holonomic constraint is a set of inequalities describing the hard walls confining the motion of particles in a closed volume. Non-holonomic constraints are better dealt with by other methods, e.g., by imposing proper boundary conditions on the (otherwise unconstrained) motion.
² See, e.g., MA Eq. (10.7).

at any instant t is completely determined by the value of its only generalized coordinate θ. (Note that its dimensionality is different from that of Cartesian coordinates!)

Now returning to the general case of J degrees of freedom, let us consider a set of small variations (alternatively called "virtual displacements") δqj allowed by the constraints. Virtual displacements differ from the actual small displacements (described by differentials dqj proportional to the time variation dt) in that δqj describes not the system's motion as such, but rather its possible variation – see Fig. 2.

[Fig. 2.2. Actual displacement dqj vs. the virtual one (i.e. variation) δqj: the actual motion qj(t) advances by dqj over the interval dt, while δqj connects it, at a fixed time, to a possible alternative motion.]

Generally, operations with variations are the subject of a special field of mathematics, the calculus of variations.³ However, the only math background necessary for our current purposes is the understanding that operations with variations are similar to those with the usual differentials, though we need to watch carefully what each variable is a function of. For example, if we consider the variation of the radius vectors (1), at a fixed time t, as functions of independent variations δqj, we may use the usual formula for the differentiation of a function of several arguments:⁴

$$ \delta\mathbf{r}_k = \sum_j \frac{\partial \mathbf{r}_k}{\partial q_j}\,\delta q_j. \tag{2.3} $$

Now let us break the force acting upon the kth particle into two parts: the frictionless, constraining part Nk of the reaction force and the remaining part Fk – including the forces from other sources and possibly the frictional part of the reaction force. Then the 2nd Newton's law for the kth particle of the system may be rewritten as

$$ m_k \dot{\mathbf{v}}_k = \mathbf{F}_k + \mathbf{N}_k. \tag{2.4} $$

Since any variation of the motion has to be allowed by the constraints, its 3N-dimensional vector with N 3D-vector components δrk has to be perpendicular to the 3N-dimensional vector of the constraining forces, also having N 3D-vector components Nk. (For example, for the problem shown in Fig. 1, the virtual displacement vector δrk may be directed only along the ring, while the constraining force N exerted by the ring has to be perpendicular to that direction.) This condition may be expressed as

³ For a concise introduction to the field see, e.g., either I. Gelfand and S. Fomin, Calculus of Variations, Dover, 2000, or L. Elsgolc, Calculus of Variations, Dover, 2007. An even shorter review may be found in Chapter 17 of Arfken and Weber – see MA Sec. 16. For a more detailed discussion, using many examples from physics, see R. Weinstock, Calculus of Variations, Dover, 2007.
⁴ See, e.g., MA Eq. (4.2). Also, in all formulas of this section, summations over j are from 1 to J, while those over the particle number k are from 1 to N, so for the sake of brevity, these limits are not explicitly specified.

$$ \sum_k \mathbf{N}_k \cdot \delta\mathbf{r}_k = 0, \tag{2.5} $$

where the scalar product of 3N-dimensional vectors is defined exactly like that of 3D vectors, i.e. as the sum of the products of the corresponding components of the operands. The substitution of Eq. (4) into Eq. (5) results in the so-called D'Alembert principle:⁵

D'Alembert principle:
$$ \sum_k \left(m_k \dot{\mathbf{v}}_k - \mathbf{F}_k\right) \cdot \delta\mathbf{r}_k = 0. \tag{2.6} $$

Plugging Eq. (3) into Eq. (6), we get

$$ \sum_j \left\{ \sum_k m_k \dot{\mathbf{v}}_k \cdot \frac{\partial \mathbf{r}_k}{\partial q_j} - \mathcal{F}_j \right\} \delta q_j = 0, \tag{2.7} $$

where the scalars 𝓕j, called the generalized forces, are defined as follows:⁶

Generalized force:
$$ \mathcal{F}_j \equiv \sum_k \mathbf{F}_k \cdot \frac{\partial \mathbf{r}_k}{\partial q_j}. \tag{2.8} $$

 m v k

k

k



rk  Fj  0 ; q j

(2.9)

what remains is just to recast them in a more convenient form. First, using the differentiation by parts to calculate the following time derivative: r d  vk  k q j dt 

    v k  rk  v k  d  rk  q j dt  q j 

 ,  

(2.10)

we may notice that the first term on the right-hand side is exactly the scalar product in the first term of Eq. (9). Second, let us use another key fact of the calculus of variations (which is, essentially, evident from Fig. 3): the differentiation of a variable over time and over the generalized coordinate variation (at a fixed time) are interchangeable operations. As a result, in the second term on the right-hand side of Eq. (10), we may write d  rk    drk  v k (2.11) .    dt  q j  q j  dt  q j

5

It was spelled out in a 1743 work by Jean le Rond d’Alembert, though the core of this result has been traced to an earlier work by Jacob (Jean) Bernoulli (1667 – 1748) – not to be confused with his son Daniel Bernoulli (17001782) who is credited, in particular, for the Bernoulli equation for ideal fluids, to be discussed in Sec. 8.4 below. 6 Note that since the dimensionality of generalized coordinates may be arbitrary, that of generalized forces may also differ from the newton. Chapter 2

Page 3 of 14

Essential Graduate Physics

CM: Classical Mechanics

 (df )  d (f )

f  f

f

f

df

Fig. 2.3. The variation of the differential (of any smooth function f) is equal to the differential of its variation.

dt

t

Finally, let us differentiate Eq. (1) over time: vk 

drk r r   k q j  k . dt t j q j

(2.12)

This equation shows that particle velocities vk may be considered to be linear functions of the generalized velocities q j considered as independent variables, with proportionality coefficients v k rk .  q j q j

(2.13)

With the account of Eqs. (10), (11), and (13), Eq. (9) turns into v v d mk v k  k   mk v k  k  F j  0 .  q j q j dt k k

(2.14)

This result may be further simplified by making, for the total kinetic energy of the system,

T  k

mk 2 1 v k   mk v k  v k , 2 2 k

(2.15)

the same commitment as for vk, i.e. considering T a function of not only the generalized coordinates qj and time t but also of the generalized velocities q i – as variables independent of qj and t. Then we may calculate the partial derivatives of T as v T   mk v k  k , q j q j k

v T   mk v k  k , q j q j k

(2.16)

and notice that they are exactly the two sums participating in Eq. (14). As a result, we get a system of J Lagrange equations,7 d T T  F j  0, for j  1, 2,..., J .  dt q j q j

(2.17)

Their big advantage over the initial Newton’s-law equations (4) is that the Lagrange equations do not include the constraining forces Nk, and thus there are only J of them – typically much fewer than 3N.

7

They were derived in 1788 by Joseph-Louis Lagrange, who pioneered the whole field of analytical mechanics – not to mention his key contributions to the number theory and celestial mechanics.

Chapter 2

Page 4 of 14

General Lagrange equations

Essential Graduate Physics

CM: Classical Mechanics

This is as far as we can go for arbitrary forces. However, if all the forces may be expressed in the form similar to, but somewhat more general than Eq. (1.22), Fk = –kU(r1, r2,…, rN, t), where U is the effective potential energy of the system,8 and k denotes the spatial differentiation over coordinates of the kth particle, we may recast Eq. (8) into a simpler form:

F j   Fk  k

 U x k U y k U z i rk       q j y k q j z i q j k  x k q j

    U .  q j 

(2.18)

Since we assume that U depends only on particle coordinates (and possibly time), but not velocities: U / q j  0, with the substitution of Eq. (18), the Lagrange equation (17) may be represented in the socalled canonical form:

d L L   0, dt q j q j

Canonical Lagrange equations

(2.19a)

where L is the Lagrangian function (sometimes called just the “Lagrangian”), defined as Lagrangian function

L  T U .

(2.19b)

(It is crucial to distinguish this function from the mechanical energy (1.26), E = T + U.) Note also that according to Eq. (2.18), for a system under the effect of an additional generalized external force Fj(t) we have to use, in all these relations, not the internal potential energy U(int) of the system, but its Gibbs potential energy U  U(int) – Fjqj – see the discussion in Sec. 1.4. Using the Lagrangian approach in practice, the reader should always remember, first, that each system has only one Lagrange function (19b), but is described by J 1 Lagrange equations (19a), with j taking values 1, 2,…, J, and second, that differentiating the function L, we have to consider the generalized velocities as its independent arguments, ignoring the fact they are actually the time derivatives of the generalized coordinates. 2.2. Three simple examples As the first, simplest example, consider a particle constrained to move along one axis (say, x):

T

m 2 x , 2

U  U ( x, t ).

(2.20)

In this case, it is natural to consider x as the (only) generalized coordinate, and x as the generalized velocity, so m (2.21) L  T  U  x 2  U ( x, t ). 2 Considering x and x as independent variables, we get L / x  mx , and L / x  U / x , so Eq. (19) (the only Lagrange equation in this case of the single degree of freedom!) yields

8

Note that due to the possible time dependence of U, Eq. (17) does not mean that the forces Fk have to be conservative – see the next section for more discussion. With this understanding, I will still use for function U the convenient name of “potential energy”.

Chapter 2

Page 5 of 14

Essential Graduate Physics

CM: Classical Mechanics

d mx     U   0, dt  x 

(2.22)

evidently the same result as the x-component of the 2nd Newton’s law with Fx = –U/x. This example is a good sanity check, but it also shows that the Lagrange formalism does not provide too much advantage in this particular case. Such an advantage is, however, evident in our testbed problem – see Fig. 1. Indeed, taking the polar angle  for the (only) generalized coordinate, we see that in this case, the kinetic energy depends not only on the generalized velocity but also on the generalized coordinate:9





m 2 2 R    2 sin 2  , U  mgz  const  mgR cos   const, 2 m L  T  U  R 2  2   2 sin 2   mgR cos   const. 2

T



(2.23)



Here it is especially important to remember that at substantiating the Lagrange equation,  and  have to be treated as independent arguments of L, so L  mR 2,  

L  mR 2 2 sin  cos   mgR sin  , 

(2.24)

giving us the following equation of motion:



 



d mR 2  mR 2 2 sin  cos   mgR sin   0. dt

(2.25)

As a sanity check, at  = 0, Eq. (25) is reduced to the equation (1.18) of the usual pendulum: 1/ 2

g   Ω 2 sin   0, where Ω    . (2.26) R We will explore Eq. (25) in more detail later, but please note how simple its derivation was – in comparison with writing the 3D Newton’s law and then excluding the reaction force. Next, though the Lagrangian formalism was derived from Newton’s law for mechanical systems, the resulting equations (19) are applicable to other dynamic systems, especially those for which the kinetic and potential energies may be readily expressed via some generalized coordinates. As the simplest example, consider the well-known connection of a capacitor with capacitance C to an inductive coil with self-inductance L 10 (Electrical engineers frequently call it the LC tank circuit.) I

 Q

V

 C

L Fig. 2.4. LC tank circuit.

9 The above expression for T  (m / 2)( x 2  y 2  z 2 ) may be readily obtained either by the formal differentiation

of Eq. (2) over time, or just by noticing that the velocity vector has two perpendicular components: one (of magnitude R ) along the ring, and another one (of magnitude   R sin  ) normal to the ring’s plane. 10 A fancy font is used here to avoid any chance of confusion between the inductance and the Lagrange function. Chapter 2

Page 6 of 14

Essential Graduate Physics

CM: Classical Mechanics

As the reader (hopefully :-) knows from their undergraduate studies, at relatively low frequencies we may use the so-called lumped-circuit approximation, in which the total energy of this system is the sum of two components, the electric energy Ee localized inside the capacitor, and the magnetic energy Em localized inside the inductance coil: L I2 Q2 Ee  Em  , . (2.27) 2C 2 Since the electric current I through the coil and the electric charge Q on the capacitor are related by the charge continuity equation dQ/dt = I (evident from Fig. 4), it is natural to declare Q the generalized coordinate of the system, and the current, its generalized velocity. With this choice, the electrostatic energy Ee (Q) may be treated as the potential energy U of the system, and the magnetic energy Em(I), as its kinetic energy T. With this attribution, we get T E m   L I  L Q , q I

U E e Q   , q Q C

T E m   0, q Q

(2.28)

so the Lagrange equation (19) becomes





d Q L Q      0, dt  C

  1 Q  0 . i.e. Q

LC

(2.29)

Note, however, that the above choice of the generalized coordinate and velocity is not unique. Instead, one can use, as the generalized coordinate, the magnetic flux  through the inductive coil, related to the common voltage V across the circuit (Fig. 4) by Faraday’s induction law V = –d/dt. With this choice, (-V) becomes the generalized velocity, Em = 2/2L should be understood as the potential energy, and Ee = CV2/2 treated as the kinetic energy. For this choice, the resulting Lagrange equation of motion is equivalent to Eq. (29). If both parameters of the circuit, L and C, are constant in time, Eq. (29) describes sinusoidal oscillations with the frequency

0 

1 . LC 1 / 2

(2.30)

This is of course a well-known result, which may be derived in a more standard way – by equating the voltage drops across the capacitor (V = Q/C) and the inductor (V = –LdI/dt  –Ld2Q/dt2). However, the Lagrangian approach is much more convenient for more complex systems – for example, for the general description of the electromagnetic field and its interaction with charged particles.11 2.3. Hamiltonian function and energy The canonical form (19) of the Lagrange equation has been derived using Eq. (18), which is formally similar to Eq. (1.22) for a potential force. Does this mean that the system described by Eq. (19) always conserves energy? Not necessarily, because the “potential energy” U that participates in Eq. (18), may depend not only on the generalized coordinates but on time as well. Let us start the analysis of this issue with the introduction of two new (and very important!) notions: the generalized momentum corresponding to each generalized coordinate qj, 11

See, e.g., EM Secs. 9.7 and 9.8.

Chapter 2

Page 7 of 14

Essential Graduate Physics

CM: Classical Mechanics

$$p_j \equiv \frac{\partial L}{\partial \dot q_j}, \tag{2.31}$$

[Generalized momentum]

and the Hamiltonian function¹²

$$H \equiv \sum_j \frac{\partial L}{\partial \dot q_j}\,\dot q_j - L = \sum_j p_j \dot q_j - L. \tag{2.32}$$

[Hamiltonian function: definition]

To see whether the Hamiltonian function is conserved during the motion, let us differentiate both sides of its definition (32) over time:

$$\frac{dH}{dt} = \sum_j\left[\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q_j}\right)\dot q_j + \frac{\partial L}{\partial \dot q_j}\ddot q_j\right] - \frac{dL}{dt}. \tag{2.33}$$

If we want to make use of the Lagrange equation (19), the last derivative has to be calculated considering L as a function of the independent arguments q_j, q̇_j, and t, so

$$\frac{dL}{dt} = \sum_j\left(\frac{\partial L}{\partial q_j}\dot q_j + \frac{\partial L}{\partial \dot q_j}\ddot q_j\right) + \frac{\partial L}{\partial t}, \tag{2.34}$$

where the last term is the derivative of L as an explicit function of time. We see that the last term in the square brackets of Eq. (33) immediately cancels with the last term in the parentheses of Eq. (34). Moreover, using the Lagrange equation (19a) for the first term in the square brackets of Eq. (33), we see that it cancels with the first term in the parentheses of Eq. (34). As a result, we arrive at a very simple and important result:

$$\frac{dH}{dt} = -\frac{\partial L}{\partial t}. \tag{2.35}$$

[Hamiltonian function: time evolution]

The most important corollary of this formula is that if the Lagrangian function does not depend on time explicitly (∂L/∂t = 0), the Hamiltonian function is an integral of motion:

$$H = \text{const}. \tag{2.36}$$

Let us see how this works, using the first two examples discussed in the previous section. For a 1D particle, the definition (31) of the generalized momentum yields

$$p_x = \frac{\partial L}{\partial v} = mv, \tag{2.37}$$

so it coincides with the usual linear momentum – or rather with its x-component. According to Eq. (32), the Hamiltonian function for this case (with just one degree of freedom) is

$$H = p_x v - L = p_x\frac{p_x}{m} - \left[\frac{m}{2}\left(\frac{p_x}{m}\right)^2 - U\right] = \frac{p_x^2}{2m} + U, \tag{2.38}$$

¹² It is named after Sir William Rowan Hamilton, who developed his approach to analytical mechanics in 1833, on the basis of the Lagrangian mechanics. This function is sometimes called just the "Hamiltonian", but it is advisable to use the full term "Hamiltonian function" in classical mechanics, to distinguish it from the Hamiltonian operator used in quantum mechanics, whose abbreviation to "Hamiltonian" is extremely common. (The relation of these two notions will be discussed in Sec. 10.1 below.)


i.e. coincides with the particle's mechanical energy E = T + U. Since the Lagrangian does not depend on time explicitly, both H and E are conserved.

However, it is not always that simple! Indeed, let us return again to our testbed problem (Fig. 1). In this case, the generalized momentum corresponding to the generalized coordinate θ is

$$p_\theta = \frac{\partial L}{\partial \dot\theta} = mR^2\dot\theta, \tag{2.39}$$

and Eq. (32) yields:

$$H = p_\theta\dot\theta - L = mR^2\dot\theta^2 - \left[\frac{m}{2}R^2\left(\dot\theta^2 + \Omega^2\sin^2\theta\right) + mgR\cos\theta + \text{const}\right] = \frac{m}{2}R^2\left(\dot\theta^2 - \Omega^2\sin^2\theta\right) - mgR\cos\theta + \text{const}. \tag{2.40}$$

This means that (as soon as Ω ≠ 0), the Hamiltonian function differs from the mechanical energy

$$E \equiv T + U = \frac{m}{2}R^2\left(\dot\theta^2 + \Omega^2\sin^2\theta\right) - mgR\cos\theta + \text{const}. \tag{2.41}$$

The difference, E − H = mR²Ω²sin²θ (besides an inconsequential constant), may change at the bead's motion along the ring, so although H is an integral of motion (since ∂L/∂t = 0), the energy E is generally not conserved.

In this context, let us find out when these two functions, E and H, do coincide. In mathematics, there is a notion of a homogeneous function f(x₁, x₂, …) of degree λ, defined in the following way: for an arbitrary constant a,

$$f(ax_1, ax_2, \ldots) = a^\lambda f(x_1, x_2, \ldots). \tag{2.42}$$

Such functions obey the following Euler theorem:¹³

$$\sum_j \frac{\partial f}{\partial x_j}\,x_j = \lambda f, \tag{2.43}$$

which may be simply proved by differentiating both parts of Eq. (42) over a and then setting this parameter to the particular value a = 1. Now, consider the case when the kinetic energy is a quadratic form of all generalized velocities q̇_j:

$$T = \sum_{j,j'} t_{jj'}(q_1, q_2, \ldots, t)\,\dot q_j \dot q_{j'}, \tag{2.44}$$

with no other terms. It is evident that such T satisfies the definition (42) of a homogeneous function of the velocities with λ = 2,¹⁴ so the Euler theorem (43) gives

$$\sum_j \frac{\partial T}{\partial \dot q_j}\,\dot q_j = 2T. \tag{2.45}$$

¹³ This is just one of many theorems bearing the name of their author – the genius mathematician Leonhard Euler (1707-1783).
¹⁴ Such functions are called quadratic-homogeneous.


But since U is independent of the generalized velocities, ∂L/∂q̇_j = ∂T/∂q̇_j, and the left-hand side of Eq. (45) is exactly the first term in the definition (32) of the Hamiltonian function, so in this case

$$H = 2T - L = 2T - (T - U) = T + U \equiv E. \tag{2.46}$$

So, for a system with a kinetic energy of the type (44), for example, a free particle with T considered as a function of its Cartesian velocities,

$$T = \frac{m}{2}\left(v_x^2 + v_y^2 + v_z^2\right), \tag{2.47}$$

the notions of the Hamiltonian function and mechanical energy are identical. Indeed, some textbooks, very regrettably, do not distinguish these notions at all! However, as we have seen from our bead-on-the-rotating-ring example, these variables do not always coincide. For that problem, the kinetic energy, in addition to the term proportional to θ̇², has another, velocity-independent term – see the first of Eqs. (23) – and hence is not a quadratic-homogeneous function of the angular velocity, giving E ≠ H. Thus, Eq. (36) expresses a new conservation law, generally different from that of mechanical energy conservation.
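To make the distinction between H and E tangible, here is a minimal numerical experiment (an illustration added to these notes; all parameter values are arbitrary): integrating the bead-on-the-rotating-ring dynamics shows that H of Eq. (40) stays constant while E of Eq. (41) varies along the motion.

```python
import math

# Bead on a ring rotating at fixed Omega: H of Eq. (40) is conserved,
# the mechanical energy E of Eq. (41) is not. Arbitrary parameter values.
m, R, g, Omega = 1.0, 1.0, 9.8, 5.0
theta, theta_dot = 0.5, 0.0
dt = 1.0e-5

def accel(th):   # the testbed equation of motion (cf. Eq. (2.25))
    return (Omega**2 * math.cos(th) - g / R) * math.sin(th)

def H(th, thd):  # Eq. (40), dropping the inconsequential constant
    return 0.5 * m * R**2 * (thd**2 - Omega**2 * math.sin(th)**2) - m * g * R * math.cos(th)

def E(th, thd):  # Eq. (41), dropping the same constant
    return 0.5 * m * R**2 * (thd**2 + Omega**2 * math.sin(th)**2) - m * g * R * math.cos(th)

H0, E0 = H(theta, theta_dot), E(theta, theta_dot)
H_drift = E_span = 0.0
for _ in range(200_000):                 # two seconds of motion
    theta_dot += accel(theta) * dt       # semi-implicit Euler step
    theta += theta_dot * dt
    H_drift = max(H_drift, abs(H(theta, theta_dot) - H0))
    E_span = max(E_span, abs(E(theta, theta_dot) - E0))

print(H_drift, E_span)   # H_drift stays tiny, E_span is of order of the energies
```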

2.4. Other conservation laws

Looking at the Lagrange equation (19), we immediately see that if L ≡ T − U is independent of some generalized coordinate q_j, ∂L/∂q_j = 0,¹⁵ then the corresponding generalized momentum is an integral of motion:¹⁶

$$p_j = \frac{\partial L}{\partial \dot q_j} = \text{const}. \tag{2.48}$$

For example, for a 1D particle with the Lagrangian (21), the momentum p_x is conserved if the potential energy is constant (and hence the x-component of force is zero) – of course. As a less obvious example, let us consider a 2D motion of a particle in the field of central forces. If we use polar coordinates r and φ in the role of generalized coordinates, then the Lagrangian function¹⁷

$$L = T - U = \frac{m}{2}\left(\dot r^2 + r^2\dot\varphi^2\right) - U(r) \tag{2.49}$$

is independent of φ, and hence the corresponding generalized momentum,

$$p_\varphi = \frac{\partial L}{\partial \dot\varphi} = mr^2\dot\varphi, \tag{2.50}$$

¹⁵ Such coordinates are frequently called cyclic, because in some cases (like φ in Eq. (49) below) they represent periodic coordinates such as angles. However, this terminology is somewhat misleading, because some "cyclic" coordinates (e.g., x in our first example) have nothing to do with rotation.
¹⁶ This fact may be considered a particular case of a more general mathematical statement called the Noether theorem – named after its author, Emmy Nöther, sometimes called the greatest woman mathematician who ever lived. Unfortunately, because of time/space restrictions, for its discussion I have to refer the interested reader elsewhere – for example to Sec. 13.7 in H. Goldstein et al., Classical Mechanics, 3rd ed., Addison Wesley, 2002.
¹⁷ Note that here ṙ² is the square of the scalar derivative ṙ, rather than the square of the vector dr/dt = v.


is conserved. This is just a particular (2D) case of the angular momentum conservation – see Eq. (1.24). Indeed, for the 2D motion within the [x, y] plane, the angular momentum vector,

$$\mathbf{L} \equiv \mathbf{r}\times\mathbf{p} = \begin{vmatrix} \mathbf{n}_x & \mathbf{n}_y & \mathbf{n}_z \\ x & y & z \\ m\dot x & m\dot y & m\dot z \end{vmatrix}, \tag{2.51}$$

has only one component different from zero, namely the component normal to the motion plane:

$$L_z = x(m\dot y) - y(m\dot x). \tag{2.52}$$

Differentiating the well-known relations between the polar and Cartesian coordinates,

$$x = r\cos\varphi, \qquad y = r\sin\varphi, \tag{2.53}$$

over time, and plugging the result into Eq. (52), we see that

$$L_z = mr^2\dot\varphi \equiv p_\varphi. \tag{2.54}$$
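The conservation law (50)-(54) is easy to watch numerically. The sketch below (an added illustration; an attractive inverse-square force and all parameter values are arbitrary choices) integrates a 2D central-force motion and monitors L_z = m(xẏ − yẋ):

```python
import math

# For motion in any central field, L_z = m*(x*vy - y*vx) = m*r**2*dphi/dt
# must stay constant; an inverse-square attraction is used as an example.
m, k = 1.0, 1.0
x, y, vx, vy = 1.0, 0.0, 0.0, 0.8     # initial conditions giving a bound orbit
dt = 1.0e-5

def L_z(x, y, vx, vy):
    return m * (x * vy - y * vx)

Lz0 = L_z(x, y, vx, vy)
drift = 0.0
for _ in range(100_000):
    r3 = (x * x + y * y) ** 1.5
    vx += -k * x / (m * r3) * dt      # the "kick" is parallel to r,
    vy += -k * y / (m * r3) * dt      # so it leaves L_z intact
    x += vx * dt                      # the "drift" is parallel to v, likewise
    y += vy * dt
    drift = max(drift, abs(L_z(x, y, vx, vy) - Lz0))

print(Lz0, drift)   # the drift stays at the round-off level
```

The semi-implicit Euler scheme used here conserves L_z exactly for central forces (up to floating-point noise), which is why the drift is so small.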

Thus the Lagrangian formalism provides a powerful way of searching for non-evident integrals of motion. On the other hand, if such a conserved quantity is obvious or known a priori, it is helpful for the selection of the most appropriate generalized coordinates, giving the simplest Lagrange equations. For example, in the last problem, if we knew in advance that p_φ had to be conserved, this could provide sufficient motivation for using the angle φ as one of the generalized coordinates.

2.5. Exercise problems

In each of Problems 2.1-2.11, for the given system:
(i) introduce a convenient set of generalized coordinates q_j,
(ii) write down the Lagrangian L as a function of q_j, q̇_j, and (if appropriate) time,
(iii) write down the Lagrange equation(s) of motion,
(iv) calculate the Hamiltonian function H; find out whether it is conserved,
(v) calculate the mechanical energy E; is E = H?; is the energy conserved?
(vi) any other evident integrals of motion?

2.1. A double pendulum – see the figure on the right. Consider only the motion within the vertical plane containing the suspension point.

2.2. A stretchable pendulum (i.e. a massive particle hung on an elastic cord that exerts force F = −κ(l − l₀), where κ and l₀ are positive constants), also confined to the vertical plane – see the figure on the right.

2.3. A fixed-length pendulum hanging from a horizontal support whose motion law x₀(t) is fixed. (No vertical plane constraint here.)

2.4. A pendulum of mass m, hung on another point mass m′ that may slide, without friction, along a straight horizontal rail – see the figure on the right. The motion is confined to the vertical plane that contains the rail.

2.5. A point-mass pendulum of length l, attached to the rim of a disk of radius R, which is rotated in a vertical plane with a constant angular velocity ω – see the figure on the right. (Consider only the motion within the disk's plane.)

2.6. A bead of mass m, sliding without friction along a light string stretched by a fixed force T between two horizontally displaced points – see the figure on the right. Here, in contrast to the similar Problem 1.10, the string's tension T may be comparable with the bead's weight mg, and the motion is not restricted to the vertical plane.

2.7. A bead of mass m, sliding without friction along a light string of a fixed length 2l, that is hung between two points displaced horizontally by distance 2d < 2l – see the figure on the right. As in the previous problem, the motion is not restricted to the vertical plane.

2.8. A block of mass m that can slide, without friction, along the inclined plane surface of a heavy wedge with mass m′. The wedge is free to move, also without friction, along a horizontal surface – see the figure on the right. (Both motions are within the vertical plane containing the steepest slope line.)

2.9. The two-pendula system that was the subject of Problem 1.8 – see the figure on the right.

2.10. A system of two similar, inductively-coupled LC circuits – see the figure on the right.

2.11.* A small Josephson junction – the system consisting of two superconductors (S) weakly coupled by Cooper-pair tunneling through a thin insulating layer (I) that separates them – see the figure on the right.

Hints:

(i) At not very high frequencies (whose quantum ℏω is lower than the binding energy 2Δ of the Cooper pairs), the Josephson effect in a sufficiently small junction may be described by the following coupling energy:

$$U(\varphi) = -E_J\cos\varphi + \text{const},$$

where the constant E_J describes the coupling strength, while the variable φ (called the Josephson phase difference) is connected to the voltage V across the junction by the famous frequency-to-voltage relation

$$\frac{d\varphi}{dt} = \frac{2e}{\hbar}V,$$

where e ≈ 1.602×10⁻¹⁹ C is the fundamental electric charge and ℏ ≈ 1.054×10⁻³⁴ J·s is the Planck constant.¹⁸
(ii) The junction (as any system of two close conductors) has a substantial electric capacitance C.

¹⁸ More discussion of the Josephson effect and the physical sense of the variable φ may be found, for example, in EM Sec. 6.5 and QM Secs. 1.6 and 2.8 of this series, but the given problem may be solved without that additional information.


Chapter 3. A Few Simple Problems

The objective of this chapter is to solve a few simple but very important particle dynamics problems that may be reduced to 1D motion. They notably include the famous "planetary" problem of two particles interacting via a spherically-symmetric potential, and the classical particle scattering problem. In the process of solution, several methods that will be very essential for the analysis of more complex systems are also discussed.

3.1. One-dimensional and 1D-reducible systems

If a particle is confined to motion along a straight line (say, axis x), its position is completely determined by this coordinate. In this case, as we already know, the particle's Lagrangian function is given by Eq. (2.21):

$$L = T(\dot x) - U(x, t), \qquad T(\dot x) = \frac{m}{2}\dot x^2, \tag{3.1}$$

so the Lagrange equation of motion given by Eq. (2.22),

$$m\ddot x = -\frac{\partial U(x, t)}{\partial x}, \tag{3.2}$$

is just the x-component of the 2nd Newton's law.

It is convenient to discuss the dynamics of such really-1D systems as a part of a more general class of effectively-1D systems. This is a system whose position, due to either holonomic constraints and/or conservation laws, is also fully determined by one generalized coordinate q, and whose Lagrangian may be represented in a form similar to Eq. (1):

$$L = T_{\rm ef}(\dot q) - U_{\rm ef}(q, t), \qquad T_{\rm ef} = \frac{m_{\rm ef}}{2}\dot q^2, \tag{3.3}$$

[Effectively-1D system]

where m_ef is some constant which may be considered as the effective mass of the system, and the function U_ef, its effective potential energy. In this case, the Lagrange equation (2.19), describing the system's dynamics, has a form similar to Eq. (2):

$$m_{\rm ef}\ddot q = -\frac{\partial U_{\rm ef}(q, t)}{\partial q}. \tag{3.4}$$

As an example, let us return to our testbed system shown in Fig. 2.1. We have already seen that for this system, having one degree of freedom, the genuine kinetic energy T, expressed by the first of Eqs. (2.23), is not a quadratically-homogeneous function of the generalized velocity. However, the system's Lagrangian function (2.23) still may be represented in the form (3),

$$L = \frac{m}{2}R^2\dot\theta^2 + \left[\frac{m}{2}R^2\Omega^2\sin^2\theta + mgR\cos\theta + \text{const}\right] \equiv T_{\rm ef} - U_{\rm ef}, \tag{3.5}$$

provided that we take

$$T_{\rm ef} = \frac{m}{2}R^2\dot\theta^2, \qquad U_{\rm ef} = -\frac{m}{2}R^2\Omega^2\sin^2\theta - mgR\cos\theta + \text{const}. \tag{3.6}$$

© K. Likharev

In this new partitioning of the function L, which is legitimate because U_ef depends only on the generalized coordinate θ, but not on the corresponding generalized velocity, T_ef includes only a part of the genuine kinetic energy T of the bead, while U_ef includes not only its real potential energy U in the gravity field but also an additional term related to the ring rotation. (As we will see in Sec. 4.6, this term may be interpreted as the effective potential energy due to the inertial centrifugal "force" arising at the problem's solution in the non-inertial reference frame rotating with the ring.)

Returning to the general case of effectively-1D systems with Lagrangians of the type (3), let us calculate their Hamiltonian function, using its definition (2.32):

$$H = \frac{\partial L}{\partial \dot q}\,\dot q - L = m_{\rm ef}\dot q^2 - (T_{\rm ef} - U_{\rm ef}) = T_{\rm ef} + U_{\rm ef}. \tag{3.7}$$

So, H is expressed via T_ef and U_ef exactly as the energy E is expressed via the genuine T and U.

3.2. Equilibrium and stability

Autonomous systems are defined as dynamic systems whose equations of motion do not depend on time explicitly. For the effectively-1D (and in particular the really-1D) systems obeying Eq. (4), this means that their function U_ef, and hence the Lagrangian function (3), should not depend on time explicitly. According to Eq. (2.35), in such systems the Hamiltonian function (7), i.e. the sum T_ef + U_ef, is an integral of motion. However, be careful! Generally, this conclusion is not valid for the genuine mechanical energy E of such a system; for example, as we already know from Sec. 2.2, for our testbed problem, with the generalized coordinate q = θ (Fig. 2.1), E is not conserved.

According to Eq. (4), an autonomous system, at appropriate initial conditions, may stay in equilibrium at one or several stationary (alternatively called fixed) points q_n, corresponding to either a minimum or a maximum of the effective potential energy (see Fig. 1):

$$\frac{dU_{\rm ef}}{dq}(q_n) = 0. \tag{3.8}$$

[Fixed-point condition]

Fig. 3.1. An example of the effective potential energy profile near stable (q₀, q₂) and unstable (q₁) fixed points, and its quadratic approximation (10) near point q₀.

In order to explore the stability of such fixed points, let us analyze the dynamics of small deviations

$$\tilde q(t) \equiv q(t) - q_n \tag{3.9}$$

from one of such points. For that, let us expand the function U_ef(q) in the Taylor series at q_n:

$$U_{\rm ef}(q) = U_{\rm ef}(q_n) + \frac{dU_{\rm ef}}{dq}(q_n)\,\tilde q + \frac{1}{2}\frac{d^2U_{\rm ef}}{dq^2}(q_n)\,\tilde q^2 + \ldots. \tag{3.10}$$

The first term on the right-hand side, U_ef(q_n), is an arbitrary constant and does not affect motion. The next term, linear in the deviation q̃, equals zero – see the fixed point's definition (8). Hence the fixed point's stability is determined by the next term, quadratic in q̃, more exactly by its coefficient,

$$\kappa_{\rm ef} \equiv \frac{d^2U_{\rm ef}}{dq^2}(q_n), \tag{3.11}$$

which is frequently called the effective spring constant. Indeed, neglecting the higher terms of the Taylor expansion (10),¹ we see that Eq. (4) takes the familiar form:

$$m_{\rm ef}\ddot{\tilde q} + \kappa_{\rm ef}\tilde q = 0. \tag{3.12}$$

I am confident that the reader of these notes knows everything about this equation, but since we will soon run into similar but more complex equations, let us review the formal procedure of its solution. From the mathematical standpoint, Eq. (12) is an ordinary linear differential equation of the second order, with constant coefficients. The general theory of such equations tells us that its general solution (for any initial conditions) may be represented as

$$\tilde q(t) = c_+ e^{\lambda_+ t} + c_- e^{\lambda_- t}, \tag{3.13}$$

where the constants c_± are determined by initial conditions, while the so-called characteristic exponents λ_± are completely defined by the equation itself. To calculate these exponents, it is sufficient to plug just one partial solution, e^{λt}, into the equation. In our simple case (12), this yields the following characteristic equation:

$$m_{\rm ef}\lambda^2 + \kappa_{\rm ef} = 0. \tag{3.14}$$

If the ratio κ_ef/m_ef is positive, i.e. the fixed point corresponds to a minimum of potential energy (e.g., see points q₀ and q₂ in Fig. 1), the characteristic equation yields

$$\lambda_\pm = \pm i\omega_0, \qquad \text{with } \omega_0 \equiv \left(\frac{\kappa_{\rm ef}}{m_{\rm ef}}\right)^{1/2}, \tag{3.15}$$

(where i is the imaginary unit, i² = −1), so Eq. (13) describes harmonic (sinusoidal) oscillations of the system,²

$$\tilde q(t) = c_+ e^{+i\omega_0 t} + c_- e^{-i\omega_0 t} = c_c \cos\omega_0 t + c_s \sin\omega_0 t, \tag{3.16}$$

¹ Those terms may be important only in very special cases when κ_ef is exactly zero, i.e. when a fixed point is also an inflection point of the function U_ef(q).
² The reader should not be scared of the first form of (16), i.e. of the representation of a real variable (the deviation from equilibrium) via a sum of two complex functions. Indeed, any real initial conditions give c_−* = c_+, so the sum is real for any t. An even simpler way to deal with such complex representations of real functions will be discussed in the beginning of Chapter 5, and then used throughout this series.


with the frequency ω₀, about the fixed point – which is thereby stable.³ On the other hand, at the potential energy maximum (κ_ef < 0, e.g., at point q₁ in Fig. 1), we get

$$\lambda_\pm = \pm\lambda, \quad \text{where } \lambda \equiv \left(-\frac{\kappa_{\rm ef}}{m_{\rm ef}}\right)^{1/2}, \quad \text{so that } \tilde q(t) = c_+ e^{+\lambda t} + c_- e^{-\lambda t}. \tag{3.17}$$

Since the solution has an exponentially growing part,⁴ the fixed point is unstable.

Note that the quadratic expansion of the function U_ef(q), given by the truncation of Eq. (10) to the three displayed terms, is equivalent to a linear Taylor expansion of the effective force:

$$F_{\rm ef} \equiv -\frac{dU_{\rm ef}}{dq} \approx -\kappa_{\rm ef}\tilde q, \tag{3.18}$$

immediately resulting in the linear equation (12). Hence, to analyze the stability of a fixed point q_n, it is sufficient to linearize the equation of motion with respect to small deviations from the point, and study possible solutions of the resulting linear equation. This linearization procedure is typically simpler to carry out than the quadratic expansion (10).

As an example, let us return to our testbed problem (Fig. 2.1) whose function U_ef we already know – see the second of Eqs. (6). With it, the equation of motion (4) becomes

$$mR^2\ddot\theta = -\frac{dU_{\rm ef}}{d\theta} = mR^2\left(\Omega^2\cos\theta - \omega^2\right)\sin\theta, \qquad \text{i.e. } \ddot\theta = \left(\Omega^2\cos\theta - \omega^2\right)\sin\theta, \tag{3.19}$$

where ω ≡ (g/R)^{1/2} is the frequency of small oscillations of the system at Ω = 0 – see Eq. (2.26).⁵ From Eq. (8), we see that on any 2π-long segment of the angle θ,⁶ the system may have four fixed points; for example, on the half-open segment (−π, +π] these points are

$$\theta_0 = 0, \qquad \theta_1 = \pi, \qquad \theta_{2,3} = \pm\cos^{-1}\frac{\omega^2}{\Omega^2}. \tag{3.20}$$

The last two fixed points, corresponding to the bead shifted to either side of the rotating ring, exist only if the angular velocity Ω of the rotation exceeds ω. (In the limit of very fast rotation, Ω ≫ ω, Eq. (20) yields θ₂,₃ → ±π/2, i.e. the stationary positions approach the horizontal diameter of the ring – in accordance with our physical intuition.)

To analyze the fixed point stability, we may again use Eq. (9), in the form θ = θ_n + θ̃, plug it into Eq. (19), and Taylor-expand both trigonometric functions of θ up to the term linear in θ̃:

$$\ddot{\tilde\theta} = \left[\Omega^2\left(\cos\theta_n - \tilde\theta\sin\theta_n\right) - \omega^2\right]\left(\sin\theta_n + \tilde\theta\cos\theta_n\right). \tag{3.21}$$

³ This particular type of stability, when the deviation from the equilibrium oscillates with a constant amplitude, neither growing nor decreasing in time, is called either the orbital, or "neutral", or "indifferent" stability.
⁴ Mathematically, the growing part vanishes at some special (exact) initial conditions which give c₊ = 0. However, the futility of this argument for real physical systems should be obvious to anybody who has ever tried to balance a pencil on its sharp point.
⁵ Note that Eq. (19) coincides with Eq. (2.25). This is a good sanity check illustrating that the procedure (5)-(6) of moving a term from the potential to the kinetic energy within the Lagrangian function is indeed legitimate.
⁶ For this particular problem, the values of θ that differ by a multiple of 2π are physically equivalent.

Generally, this equation may be linearized further by purging its right-hand side of the term proportional to θ̃²; however, in this simple case, Eq. (21) is already convenient for analysis. In particular, for the fixed point θ₀ = 0 (corresponding to the bead's position at the bottom of the ring), we have cos θ₀ = 1 and sin θ₀ = 0, so Eq. (21) is reduced to a linear differential equation

$$\ddot{\tilde\theta} = -\left(\omega^2 - \Omega^2\right)\tilde\theta, \tag{3.22}$$

whose characteristic equation is similar to Eq. (14) and yields

$$\lambda^2 = \Omega^2 - \omega^2, \qquad \text{for } \theta = \theta_0. \tag{3.23a}$$

This result shows that if Ω² < ω², both roots λ are imaginary, so this fixed point is orbitally stable. However, if the rotation speed is increased so that ω² < Ω², the roots become real: λ = ±(Ω² − ω²)^{1/2}, with one of them positive, so the fixed point becomes unstable beyond this threshold, i.e. as soon as the fixed points θ₂,₃ exist. Absolutely similar calculations for the other fixed points yield

$$\lambda^2 = \begin{cases} \Omega^2 + \omega^2 > 0, & \text{for } \theta = \theta_1, \\ -\left(\Omega^2 - \omega^4/\Omega^2\right), & \text{for } \theta = \theta_{2,3}. \end{cases} \tag{3.23b}$$
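The fixed-point predictions of Eqs. (20) and (23) may be cross-checked numerically: the sign of the curvature U_ef″(θ_n) decides stability, and λ² = −U_ef″/(mR²). The sketch below (an added illustration; the parameter values are arbitrary, chosen so that Ω > ω and the side points exist) evaluates this curvature by finite differences:

```python
import math

# Curvature of U_ef (the second of Eqs. (6)) at the fixed points (20):
# positive curvature = orbitally stable point, negative = unstable.
m, g, R, Omega = 1.0, 9.8, 1.0, 5.0
omega2 = g / R                               # omega**2 of Eq. (2.26)

def U_ef(th):
    return -0.5 * m * R**2 * Omega**2 * math.sin(th)**2 - m * g * R * math.cos(th)

def curvature(th, h=1.0e-4):                 # d2U_ef/dtheta2, central differences
    return (U_ef(th + h) - 2.0 * U_ef(th) + U_ef(th - h)) / h**2

theta_side = math.acos(omega2 / Omega**2)    # theta_2 of Eq. (20)
print(curvature(0.0), curvature(math.pi), curvature(theta_side))
# at Omega > omega: bottom (theta_0) and top (theta_1) are unstable,
# the side points theta_2,3 are stable
```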

These results show that the fixed point θ₁ (the bead at the top of the ring) is always unstable – just as we could foresee, while the side fixed points θ₂,₃ are orbitally stable as soon as they exist – at ω² < Ω². Thus, our fixed-point analysis may be summarized very simply: an increase of the ring rotation speed Ω beyond a certain threshold value, equal to ω given by Eq. (2.26), causes the bead to move to one of the ring sides, oscillating about one of the fixed points θ₂,₃. Together with the rotation about the vertical axis, this motion yields quite a complex (generally, open) spatial trajectory as observed from a lab frame, so it is fascinating that we could analyze it quantitatively in such a simple way.

Later in this course, we will repeatedly use the linearization of the equations of motion for the analysis of the stability of more complex dynamic systems, including those with energy dissipation.

3.3. Hamiltonian 1D systems

Autonomous systems that are described by time-independent Lagrangians are frequently called Hamiltonian ones, because their Hamiltonian function H (again, not necessarily equal to the genuine mechanical energy E!) is conserved. In our current 1D case, described by Eq. (3),

$$H = \frac{m_{\rm ef}}{2}\dot q^2 + U_{\rm ef}(q) = \text{const}. \tag{3.24}$$

From the mathematical standpoint, this conservation law is just the first integral of motion. Solving Eq. (24) for q̇, we get the first-order differential equation,

$$\frac{dq}{dt} = \pm\left\{\frac{2}{m_{\rm ef}}\left[H - U_{\rm ef}(q)\right]\right\}^{1/2}, \qquad \text{i.e. } \pm\left(\frac{m_{\rm ef}}{2}\right)^{1/2}\frac{dq}{\left[H - U_{\rm ef}(q)\right]^{1/2}} = dt, \tag{3.25}$$

which may be readily integrated:


$$\pm\left(\frac{m_{\rm ef}}{2}\right)^{1/2}\int_{q(t_0)}^{q(t)}\frac{dq'}{\left[H - U_{\rm ef}(q')\right]^{1/2}} = t - t_0. \tag{3.26}$$

Since the constant H (as well as the proper sign before the integral – see below) is fixed by initial conditions, Eq. (26) gives the reciprocal form, t = t(q), of the desired law of system motion, q(t). Of course, for any particular problem the integral in Eq. (26) still has to be worked out, either analytically or numerically, but even the latter procedure is typically much easier than the numerical integration of the initial, second-order differential equation of motion, because at the addition of many values (to which any numerical integration is reduced⁷) the rounding errors are effectively averaged out.

Moreover, Eq. (25) also allows a general classification of 1D system motion. Indeed:

(i) If H > U_ef(q) in the whole range of our interest, the effective kinetic energy T_ef (3) is always positive. Hence the derivative dq/dt cannot change its sign, so this effective velocity retains the sign it had initially. This is an unbound motion in one direction (Fig. 2a).

Fig. 3.2. Graphical representations of Eq. (25) for three different cases: (a) an unbound motion, with the velocity sign conserved, (b) a reflection from a "classical turning point", accompanied by the velocity sign change, and (c) bound, periodic motion between two turning points – schematically. (d) The effective potential energy (6) of the bead on the rotating ring (Fig. 2.1) for a particular case with ω² < Ω².

(ii) Now let the particle approach a classical turning point A where H = U_ef(q) – see Fig. 2b.⁸ According to Eq. (25), at that point the particle velocity vanishes, while its acceleration, according to Eq. (4), is still finite. This means that the particle's velocity changes its sign at this point, i.e. it is reflected from it.

See, e.g., MA Eqs. (5.2) and (5.3). This terminology comes from quantum mechanics, which shows that a particle (or rather its wavefunction) actually can, to a certain extent, penetrate “classically forbidden” regions where H < Uef(q). 8

Chapter 3

Page 6 of 22

Essential Graduate Physics

CM: Classical Mechanics

(iii) If, after the reflection from some point A, the particle runs into another classical turning point B (Fig. 2c), the reflection process is repeated again and again, so the particle is bound to a periodic motion between two turning points. The last case of periodic oscillations presents a large conceptual and practical interest, and the whole Chapter 5 will be devoted to a detailed analysis of this phenomenon and numerous associated effects. Here I will only note that for an autonomous Hamiltonian system described by Eq. (4), Eq. (26) immediately enables the calculation of the oscillation period: m  T  2  ef   2 

Oscillation period

1/ 2 A

dq , 1/ 2 ef ( q )]

 [H  U B

(3.27)

where the additional front factor 2 accounts for two time intervals: of the motion from B to A and back – see Fig. 2c. Indeed, according to Eq. (25), at each classically allowed point q, the velocity’s magnitude is the same, so these time intervals are equal to each other. (Note that the dependence of points A and B on H is not necessarily continuous. For example, for our testbed problem, whose effective potential energy is plotted in Fig. 2d for a particular value of  > , a gradual increase of H leads to a sudden jump, at H = H1, of the point B to a new position B’, corresponding to a sudden switch from oscillations about one fixed point 2,3 to oscillations about two adjacent fixed points – before the beginning of a persistent rotation around the ring at H > H2.)

Now let us consider a particular, but a very important limit of Eq. (27). As Fig. 2c shows, if H is reduced to approach Umin, the periodic oscillations take place at the very bottom of this potential well, about a stable fixed point q0. Hence, if the potential energy profile is smooth enough, we may limit the Taylor expansion (10) to the displayed quadratic term. Plugging it into Eq. (27), and using the mirror symmetry of this particular problem about the fixed point q0, we get m  T  4  ef   2 

dq~

1/ 2 A

 H  U 0

min

  ef q~ 2 / 2



1/ 2



4

0

1

I,

with I   0

d

1   

2 1/ 2

,

(3.28)

where   q~ / A , with A  (2/ef)1/2(H – Umin)1/2 being the classical turning point, i.e. the oscillation amplitude, and 0 the frequency given by Eq. (15). Taking into account that the elementary integral I in that equation equals /2,9 we finally get 2 T  , (3.29)

0

as it should be for the harmonic oscillations (16). Note that the oscillation period does not depend on the oscillation amplitude A, i.e. on the difference (H – Umin) – while it is sufficiently small. 3.4. Planetary problems Leaving a more detailed study of oscillations for Chapter 5, let us now discuss the so-called planetary systems10 whose description, somewhat surprisingly, may be also reduced to an effectively 1D introducing a new variable  as   sin , we get d = cos  d = (1–2)1/2 d, so that the function under the integral is just d, and its limits are  = 0 and  = /2. 9 Indeed,

Chapter 3

Page 7 of 22

Essential Graduate Physics

CM: Classical Mechanics

problem. Indeed, consider two particles that interact via a conservative central force F21 = –F12 = nrF(r), where r and nr are, respectively, the magnitude and the direction of the distance vector r  r1 – r2 connecting the two particles (Fig. 3). m1

F12

r1 r2

r  r1 - r2 F21 Fig. 3.3. Vectors in the planetary problem.

m2

0

Generally, two particles moving without constraints in 3D space, have 3 + 3 = 6 degrees of freedom, which may be described, e.g., by their Cartesian coordinates {x1, y1, z1, x2, y2, z2} However, for this particular form of interaction, the following series of tricks allows the number of essential degrees of freedom to be reduced to just one. First, the conservative force of particle interaction may be described by a time-independent potential energy U(r), such that F(r) = –U(r)/r.11 Hence the Lagrangian function of the system is L  T  U r  

m1 2 m2 2 r1  r2  U (r ). 2 2

(3.30)

Let us perform the transfer from the initial six scalar coordinates of the particles to the following six generalized coordinates: three Cartesian components of the distance vector r  r1 – r2 ,

(3.31)

and three scalar components of the following vector: R

m1r1  m2 r2 , with M  m1  m2 , M

(3.32)

Center of mass

which defines the position of the center of mass of the system, with the total mass M. Solving the system of two linear equations (31) and (32) for r1 and r2, we get r1  R 

m2 r, M

r2  R 

m1 r. M

(3.33)

Plugging these relations into Eq. (30), we see that it is reduced it to M 2 m 2 R  r  U (r ), 2 2

(3.34)

m1 m2 1 1 1 . , so that   M m m1 m2

(3.35)

L where m is the so-called reduced mass: m 10 This

name is very conditional, because this group of problems includes, for example, charged particle scattering (see Sec. 3.7 below). 11 See, e.g., MA Eq. (10.8) with / = / = 0. Chapter 3

Page 8 of 22

Reduced mass

Essential Graduate Physics

CM: Classical Mechanics

Note that according to Eq. (35), the reduced mass is lower than that of the lightest component of the two-body system. If one of m₁,₂ is much less than its counterpart (as it is in most star–planet or planet–satellite systems), then with good precision m ≈ min[m₁, m₂].

Since the Lagrangian function (34) depends only on Ṙ rather than on R itself, according to our discussion in Sec. 2.4, all Cartesian components of R are cyclic coordinates, and the corresponding generalized momenta are conserved:

$$ P_j \equiv \frac{\partial L}{\partial \dot{R}_j} = M\dot{R}_j = \text{const}, \qquad j = 1, 2, 3. \tag{3.36} $$
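The coordinate transformation (31)-(33) and the reduced mass (35) can be spot-checked numerically: for arbitrary velocities, the kinetic energy computed directly from Eq. (30) must equal the sum of the center-of-mass and relative terms of Eq. (34). A minimal plain-Python sketch (the masses and velocities used in it are arbitrary illustrative numbers, not from the text):

```python
def split_kinetic_energy(m1, m2, v1, v2):
    # Decompose the two-particle kinetic energy of Eq. (30) into the
    # center-of-mass and relative terms of Eq. (34).
    M = m1 + m2
    m = m1 * m2 / M                                  # reduced mass, Eq. (35)
    V = [(m1*a + m2*b)/M for a, b in zip(v1, v2)]    # dR/dt, from Eq. (32)
    u = [a - b for a, b in zip(v1, v2)]              # dr/dt, from Eq. (31)
    T_direct = 0.5*m1*sum(a*a for a in v1) + 0.5*m2*sum(b*b for b in v2)
    T_split  = 0.5*M*sum(a*a for a in V) + 0.5*m*sum(a*a for a in u)
    return T_direct, T_split
```

The two energies agree to rounding error for any inputs, and the reduced mass always comes out below the smaller of the two masses, as noted above.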

Physically, this is just the conservation law for the full momentum P ≡ MṘ of our system, due to the absence of external forces. Actually, in the axiomatics used in Sec. 1.3 this law is postulated – see Eq. (1.10) – but now we may attribute the momentum P to a certain geometric point, with the center-of-mass radius vector R. In particular, since according to Eq. (36) the center moves with a constant velocity in the inertial reference frame used to write Eq. (30), we may consider a new inertial frame with the origin at point R. In this new frame, R ≡ 0, so the vector r (and hence the scalar r) remain the same as in the old frame (because the frame transfer vector adds equally to r₁ and r₂, and cancels in r = r₁ – r₂), and the Lagrangian (34) is now reduced to

$$ L = \frac{m}{2}\dot{\mathbf{r}}^2 - U(r). \tag{3.37} $$

Thus our initial problem has been reduced to just three degrees of freedom – three scalar components of the vector r. In other words, Eq. (37) shows that the dynamics of the vector r of our initial, two-particle system is identical to that of the radius vector of a single particle with the effective mass m, moving in the central potential field U(r).

Two more degrees of freedom may be excluded from the planetary problem by noticing that according to Eq. (1.35), the angular momentum L = r×p of our effective single particle of mass m is also conserved, both in magnitude and direction. Since the direction of L is, by its definition, perpendicular to both r and v = p/m, this means that the particle’s motion is confined to the plane whose orientation is determined by the initial directions of the vectors r and v. Hence we can completely describe the particle’s position by just two coordinates in that plane, for example by the distance r to the origin, and the polar angle φ. In these coordinates, Eq. (37) takes the form identical to Eq. (2.49):

$$ L = \frac{m}{2}\left(\dot{r}^2 + r^2\dot{\varphi}^2\right) - U(r). \tag{3.38} $$

Moreover, the latter coordinate, the polar angle φ, may be also eliminated by using the conservation of the angular momentum’s magnitude, in the form of Eq. (2.50):¹²

$$ L_z = mr^2\dot{\varphi} = \text{const}. \tag{3.39} $$

A direct corollary of this conservation is the so-called 2nd Kepler’s law:¹³ the radius vector r sweeps equal areas A in equal time periods. Indeed, in the linear approximation in dt, the area swept by the radius vector during a small time interval equals dA = r²dφ/2, so Eq. (39) yields

$$ \frac{dA}{dt} = \frac{L_z}{2m} = \text{const}. \tag{3.40} $$

The elimination of φ with the help of Eq. (39) reduces the radial motion to an effectively 1D problem in the effective potential

$$ U_{\rm ef}(r) \equiv U(r) + \frac{L_z^2}{2mr^2}, \tag{3.44} $$

whose analysis is especially important for the attractive Coulomb potential

$$ U(r) = -\frac{\alpha}{r}, \qquad \text{with } \alpha > 0. \tag{3.49} $$

Depending on the energy E, two cases are possible:

(i) If E > 0, there is only one classical turning point where E = U_ef, so the distance r either grows with time from the very beginning or (if the initial value of ṙ was negative) first decreases and then, after the reflection from the increasing potential U_ef, starts to grow indefinitely. The latter case, of course, describes the scattering of the effective particle by the attractive center.¹⁷

(ii) On the opposite, if the energy is within the range

$$ U_{\rm ef}(r_0) \le E < 0, \tag{3.50} $$

the system moves periodically between two classical turning points r_min and r_max – see Fig. 5. These oscillations of the distance r correspond to the bound orbital motion of our effective particle about the attracting center.

Let us start with the discussion of the bound motion, with the energy within the range (50). If the energy has its minimal possible value,

$$ E = U_{\rm ef}(r_0) \equiv \min\left[U_{\rm ef}(r)\right], \tag{3.51} $$

the distance cannot change, r = r₀ = const, so the particle’s orbit is circular, with the radius r₀ satisfying the condition dU_ef/dr = 0. Using Eq. (44), we see that this condition for r₀ may be written as

$$ \frac{L_z^2}{m r_0^3} = \left.\frac{dU}{dr}\right|_{r = r_0}. \tag{3.52} $$

Since at circular motion, the velocity v is perpendicular to the radius vector r, L_z is just mr₀v, so the left-hand side of Eq. (52) equals mv²/r₀, while its right-hand side is just the magnitude of the attractive force; hence this equality expresses the well-known 2nd Newton’s law for circular motion. Plugging this result into Eq. (47), we get a linear law of the angle’s change, φ = ωt + const, with the angular velocity

$$ \omega = \frac{L_z}{m r_0^2} = \frac{v}{r_0}, \tag{3.53} $$

and hence the rotation period T ≡ 2π/ω obeys the elementary relation

¹⁷ In the opposite case when the interaction is repulsive, U(r) > 0, the addition of the positive angular energy term only increases the trend, and the scattering scenario is the only one possible.


$$ \mathcal{T} = \frac{2\pi r_0}{v}. \tag{3.54} $$
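Eqs. (52)-(54) may be verified by direct numerical integration of the equation of motion. The sketch below (plain Python; the values α = m = 1 and r₀ = 2 are illustrative assumptions) launches the effective particle in the attractive Coulomb field U = –α/r with the circular-orbit speed v = (α/mr₀)^{1/2} following from Eq. (52), integrates over exactly one predicted period T = 2πr₀/v using the velocity-Verlet scheme, and records the range of r along the way:

```python
import math

def circular_orbit_check(alpha=1.0, m=1.0, r0=2.0, steps=200_000):
    # Eq. (52) for U = -alpha/r gives v = sqrt(alpha/(m*r0)); Eq. (54)
    # then predicts the period T = 2*pi*r0/v. Integrate Newton's law over
    # one predicted period and see whether the orbit stays circular.
    v = math.sqrt(alpha/(m*r0))
    T_pred = 2*math.pi*r0/v
    dt = T_pred/steps
    x, y, vx, vy = r0, 0.0, 0.0, v
    rmin = rmax = r0
    for _ in range(steps):                  # velocity-Verlet integration
        r3 = (x*x + y*y)**1.5
        ax, ay = -alpha*x/(m*r3), -alpha*y/(m*r3)
        vx += 0.5*dt*ax; vy += 0.5*dt*ay
        x += dt*vx; y += dt*vy
        r3 = (x*x + y*y)**1.5
        ax, ay = -alpha*x/(m*r3), -alpha*y/(m*r3)
        vx += 0.5*dt*ax; vy += 0.5*dt*ay
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax, x, y, T_pred
```

Within the integration accuracy, r never deviates from r₀, and after the time T the particle is back at its starting point.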

Now let the energy be above its minimum value (but still negative). Using Eq. (46) just as in Sec. 3, we see that the distance r oscillates with the period

$$ \mathcal{T}_r = 2\left(\frac{m}{2}\right)^{1/2} \int_{r_{\min}}^{r_{\max}} \frac{dr}{\left[E - U(r) - L_z^2/2mr^2\right]^{1/2}}. \tag{3.55} $$

This period is not necessarily equal to another period, T_φ, that corresponds to a 2π-change of the angle φ. Indeed, according to Eq. (48), the change of the angle φ between two sequential points of the nearest approach,

$$ \Delta\varphi = 2\,\frac{L_z}{(2m)^{1/2}} \int_{r_{\min}}^{r_{\max}} \frac{dr}{r^2\left[E - U(r) - L_z^2/2mr^2\right]^{1/2}}, \tag{3.56} $$

is generally different from 2π. Hence, the general trajectory of the bound motion has a spiral shape – see, e.g., the illustration in Fig. 6.
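The difference between Δφ and 2π may be seen quantitatively by evaluating the integral (56) numerically. The sketch below (plain Python; the choice m = 1 and the model potential U(r) = –α/r + β/r², with a small β > 0 mimicking a non-Coulomb perturbation, are illustrative assumptions) removes the turning-point singularities by the substitution r = r_min + (r_max – r_min) sin²ψ:

```python
import math

def apsidal_angle(E, Lz, alpha, beta, m=1.0, N=20000):
    # Delta-phi of Eq. (56) for U(r) = -alpha/r + beta/r**2, bound motion
    # (E < 0), by midpoint quadrature in psi, where
    # r = rmin + (rmax - rmin)*sin(psi)**2 removes the sqrt singularities.
    c = Lz*Lz/(2*m) + beta                 # coefficient of the 1/r^2 term
    disc = math.sqrt(alpha*alpha + 4*E*c)  # turning points: E r^2 + alpha r - c = 0
    rmin, rmax = (-alpha + disc)/(2*E), (-alpha - disc)/(2*E)
    h = (math.pi/2)/N
    total = 0.0
    for k in range(N):
        psi = (k + 0.5)*h
        s, co = math.sin(psi), math.cos(psi)
        r = rmin + (rmax - rmin)*s*s
        drdpsi = 2*(rmax - rmin)*s*co
        total += drdpsi/(r*r*math.sqrt(E + alpha/r - c/(r*r)))*h
    return 2*(Lz/math.sqrt(2*m))*total
```

For β = 0 (the pure Coulomb case) the quadrature returns 2π, i.e. a closed orbit; for β ≠ 0 it matches the value 2πL_z/(L_z² + 2mβ)^{1/2} that follows from Eq. (56) after absorbing the β-term into an effective angular momentum.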

Fig. 3.6. A typical open orbit of a particle moving in a non-Coulomb central field.

The situation is special, however, for a very important particular case, namely that of the Coulomb potential described by Eq. (49).¹⁸ Indeed, plugging this potential into Eq. (48), we get

$$ \varphi = \frac{L_z}{(2m)^{1/2}} \int \frac{dr}{r^2\left[E + \alpha/r - L_z^2/2mr^2\right]^{1/2}}. \tag{3.57} $$

This is a table integral,¹⁹ giving

$$ \varphi = \cos^{-1}\frac{L_z^2/m\alpha r - 1}{\left(1 + 2EL_z^2/m\alpha^2\right)^{1/2}} + \text{const}. \tag{3.58} $$

¹⁸ For the power-law interaction, U ∝ rⁿ, the orbits are closed curves only if n = –1 (the Coulomb potential) or if n = +2 (the 3D harmonic oscillator) – the so-called Bertrand theorem, proved by J. Bertrand only in 1873.
¹⁹ See, e.g., MA Eq. (6.3a).


Hence the reciprocal function, r(φ), is 2π-periodic:

$$ r = \frac{p}{1 + e\cos(\varphi + \text{const})}, \tag{3.59} $$

so at E < 0, the orbit is a closed line (an ellipse) characterized by the following parameters:²⁰

$$ p \equiv \frac{L_z^2}{m\alpha}, \qquad e \equiv \left(1 + \frac{2EL_z^2}{m\alpha^2}\right)^{1/2}. \tag{3.60} $$
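A quick consistency check of Eqs. (59)-(60): the perihelion and aphelion distances p/(1 ± e) must coincide with the turning points of the radial motion, i.e. with the roots of E = –α/r + L_z²/2mr². A minimal sketch (α = m = 1, and the values of E and L_z used in the test, are illustrative assumptions):

```python
import math

def orbit_parameters(E, Lz, alpha=1.0, m=1.0):
    # p and e of Eq. (60), from the integrals of motion (E < 0).
    p = Lz*Lz/(m*alpha)
    e = math.sqrt(1 + 2*E*Lz*Lz/(m*alpha*alpha))
    return p, e

def turning_points(E, Lz, alpha=1.0, m=1.0):
    # Roots of E = -alpha/r + Lz**2/(2*m*r**2): the radial turning points.
    disc = math.sqrt(alpha*alpha + 2*E*Lz*Lz/m)
    return (-alpha + disc)/(2*E), (-alpha - disc)/(2*E)
```

The agreement is exact (to rounding error), since both pairs of numbers solve the same quadratic equation.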

The physical meaning of these parameters is very simple. Indeed, the general Eq. (52), in the Coulomb potential for which dU/dr = α/r², shows that p is just the circular orbit radius²¹ for the given L_z: r₀ = L_z²/mα ≡ p, so

$$ \min\left[U_{\rm ef}(r)\right] = U_{\rm ef}(r_0) = -\frac{m\alpha^2}{2L_z^2}. \tag{3.61} $$

Using this equality together with the second of Eqs. (60), we see that the parameter e (called the eccentricity) may be represented just as

$$ e = \left\{1 - \frac{E}{\min\left[U_{\rm ef}(r)\right]}\right\}^{1/2}. \tag{3.62} $$

Analytical geometry tells us that Eq. (59), with e < 1, is one of the canonical representations of an ellipse, with one of its two focuses located at the origin. The fact that planets have such trajectories is known as the 1st Kepler’s law. Figure 7 shows the relations between the dimensions of the ellipse and the parameters p and e.²²

Fig. 3.7. Ellipse, and its special points and dimensions: the major semi-axis a and minor semi-axis b, the center, the focus (one of the two) at the origin, the perihelion at distance p/(1 + e), and the aphelion at distance p/(1 – e); x ≡ r cos φ, y ≡ r sin φ.

In particular, the major semi-axis a and the minor semi-axis b are simply related to p and e, and hence, via Eqs. (60), to the motion integrals E and L_z:

$$ a = \frac{p}{1 - e^2} = \frac{\alpha}{2|E|}, \qquad b = \frac{p}{\left(1 - e^2\right)^{1/2}} = \frac{L_z}{\left(2m|E|\right)^{1/2}}. \tag{3.63} $$

²⁰ Let me hope that the difference between the parameter p and the particle momentum’s magnitude is absolutely clear from the context, so using the same (traditional) notation for both notions cannot lead to confusion.
²¹ Mathematicians prefer a more solemn terminology: the parameter 2p is called the latus rectum of the ellipse.
²² In this figure, the constant participating in Eqs. (58)-(59) is assumed to be zero. A different choice of the constant corresponds just to a different origin of φ, i.e. to a constant turn of the ellipse about the origin.


As was mentioned above, at E → min[U_ef(r)] the orbit is almost circular, with r(φ) ≈ r₀ ≡ p. On the contrary, as E is increased to approach zero (its maximum value for a closed orbit), e → 1, so the aphelion point r_max = p/(1 – e) tends to infinity, i.e. the orbit becomes extremely extended – see the magenta lines in Fig. 8.

Fig. 3.8. (a) Zoom-in and (b) zoom-out on the Coulomb-field trajectories corresponding to the same parameter p (i.e., the same L_z), but different values of the eccentricity parameter e, i.e. of the energy E – see Eq. (60): ellipses (e < 1, red lines), a parabola (e = 1, magenta line), and hyperbolas (e > 1, blue lines). Note that the transition from closed to open trajectories at e = 1 is dramatic only at very large distances, r >> p.

The above relations enable, in particular, a ready calculation of the rotation period T ≡ T_r = T_φ. (In the case of a closed trajectory, T_r and T_φ coincide.) Indeed, it is well known that the ellipse’s area A = πab. But according to the 2nd Kepler’s law (40), dA/dt = L_z/2m = const. Hence

$$ \mathcal{T} = \frac{A}{dA/dt} = \frac{\pi ab}{L_z/2m}. \tag{3.64a} $$

Using Eqs. (60) and (63), this important result may be represented in several other forms:

$$ \mathcal{T} = \frac{\pi p^2}{\left(1 - e^2\right)^{3/2}\left(L_z/2m\right)} = \pi\alpha\left(\frac{m}{2|E|^3}\right)^{1/2} = 2\pi a^{3/2}\left(\frac{m}{\alpha}\right)^{1/2}. \tag{3.64b} $$

Since for the Newtonian gravity (1.15), α = Gm₁m₂ = GmM, at m₁ << m₂ the last form of Eq. (64b) yields T ≈ 2πa^{3/2}/(Gm₂)^{1/2}, i.e. the squares of the rotation periods of different planets orbiting the same star scale as the cubes of the major semi-axes of their orbits – the 3rd Kepler’s law.

3.5. Elastic scattering

If E > 0, the motion is unbound for any realistic interaction potential. In this case, the two most important parameters of the particle trajectory are the impact parameter b and the scattering angle θ (Fig. 9), and the main task of the theory is to find the relation between them in the given potential U(r).

Fig. 3.9. The main geometric parameters of the scattering problem: the impact parameter b, the nearest-approach distance r_min, the angle φ₀ of each trajectory half, and the scattering angle θ = |π – 2φ₀|.

For that, it is convenient to note that b is related to the two conserved quantities, the particle’s energy²³ E and its angular momentum L_z, in a simple way. Indeed, at r >> b, the definition L = r×(mv) yields L_z = bmv_∞, where v_∞ = (2E/m)^{1/2} is the initial (and hence the final) speed of the particle, so

$$ L_z = b\left(2mE\right)^{1/2}. \tag{3.66} $$

Hence the angular contribution to the effective potential (44) may be represented as

$$ \frac{L_z^2}{2mr^2} = E\,\frac{b^2}{r^2}. \tag{3.67} $$

Next, according to Eq. (48), the trajectory sections going from infinity to the nearest-approach point (r = r_min) and from that point back to infinity have to be mirror-similar, and hence correspond to equal angle changes φ₀ – see Fig. 9. Hence we may apply the general Eq. (48) to just one of the sections, say [r_min, ∞), to find the scattering angle:

²³ The energy conservation law is frequently emphasized by calling such a process elastic scattering.


$$ \theta = \left|\pi - 2\varphi_0\right|, \qquad \varphi_0 = \frac{L_z}{(2m)^{1/2}} \int_{r_{\min}}^{\infty} \frac{dr}{r^2\left[E - U(r) - L_z^2/2mr^2\right]^{1/2}} = \int_{r_{\min}}^{\infty} \frac{b\,dr}{r^2\left[1 - U(r)/E - b^2/r^2\right]^{1/2}}. \tag{3.68} $$

In particular, for the Coulomb potential (49), now with an arbitrary sign of α, we can use the same table integral as in the previous section to get²⁴

$$ \theta = \left|\pi - 2\cos^{-1}\frac{\alpha/2Eb}{\left[1 + \left(\alpha/2Eb\right)^2\right]^{1/2}}\right|. \tag{3.69a} $$

This result may be more conveniently rewritten as

$$ \tan\frac{\theta}{2} = \frac{|\alpha|}{2Eb}. \tag{3.69b} $$
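Relation (69b) can be verified by evaluating φ₀ of Eq. (68) numerically. The sketch below (plain Python; the repulsive case U = +α/r, and the values of α, E, and b used in the test, are illustrative assumptions) substitutes u ≡ 1/r = u₂ sin ψ, where 1/u₂ = r_min, so that the integrable singularity at r_min is removed:

```python
import math

def scattering_angle(E, b, alpha=1.0, N=20000):
    # phi_0 of Eq. (68) for the repulsive Coulomb potential U = alpha/r,
    # in the variable u = 1/r; u2 is the positive root of
    # 1 - alpha*u/E - (b*u)**2 = 0, i.e. the inverse nearest-approach distance.
    u2 = (-alpha/E + math.sqrt((alpha/E)**2 + 4*b*b))/(2*b*b)
    h = (math.pi/2)/N
    phi0 = 0.0
    for k in range(N):
        psi = (k + 0.5)*h
        u = u2*math.sin(psi)
        radicand = 1 - alpha*u/E - (b*u)**2
        phi0 += b*u2*math.cos(psi)/math.sqrt(radicand)*h
    return abs(math.pi - 2*phi0)
```

The quadrature reproduces θ = 2 tan⁻¹(α/2Eb) for any impact parameter, confirming Eq. (69b).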

Very clearly, the scattering angle’s magnitude increases with the potential strength α, and decreases as either the particle energy or the impact parameter (or both) are increased.

The general result (68) and the Coulomb-specific relations (69) represent a formally complete solution of the scattering problem. However, in a typical experiment on elementary particle scattering, the impact parameter b of a single particle is unknown. In this case, our results may be used to obtain the statistics of the scattering angle θ, in particular, the so-called differential cross-section²⁵

$$ \frac{d\sigma}{d\Omega} \equiv \frac{1}{n}\frac{dN}{d\Omega}, \tag{3.70} $$

where n is the average number of the incident particles per unit area, and dN is the average number of the particles scattered into a small solid angle interval dΩ. For a uniform beam of initial particles, dσ/dΩ may be calculated by counting the average number of incident particles that have the impact parameters within a small range db:

$$ dN = n\,2\pi b\,db. \tag{3.71} $$

Scattered by a spherically-symmetric center, which provides an axially-symmetric scattering pattern, these particles are scattered into the corresponding small solid angle interval dΩ = 2π sin θ |dθ|. Plugging these two equalities into Eq. (70), we get the following general geometric relation:

$$ \frac{d\sigma}{d\Omega} = \frac{b}{\sin\theta}\left|\frac{db}{d\theta}\right|. \tag{3.72} $$

In particular, for the Coulomb potential (49), a straightforward differentiation of Eq. (69) yields the so-called Rutherford scattering formula (reportedly derived first by R. H. Fowler):

$$ \frac{d\sigma}{d\Omega} = \left(\frac{\alpha}{4E}\right)^2 \frac{1}{\sin^4(\theta/2)}. \tag{3.73} $$
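The chain from Eq. (69b) to Eq. (73) through the general relation (72) is also easy to check numerically, by differentiating b(θ) = (α/2E) cot(θ/2) with a central finite difference (the values of E and α below are illustrative assumptions):

```python
import math

def dsigma_domega(theta, E=0.5, alpha=1.0, h=1e-6):
    # b(theta) inverted from Eq. (69b), then the general relation (72)
    # with db/dtheta taken as a central finite difference.
    b = lambda th: (alpha/(2*E))/math.tan(th/2)
    dbdtheta = (b(theta + h) - b(theta - h))/(2*h)
    return b(theta)/math.sin(theta)*abs(dbdtheta)

def rutherford(theta, E=0.5, alpha=1.0):
    # The closed form of Eq. (73).
    return (alpha/(4*E))**2 / math.sin(theta/2)**4
```

The two functions agree at every angle, to the accuracy of the finite difference.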

²⁴ Alternatively, this result may be recovered directly from the first form of Eq. (65), with the eccentricity e expressed via the same dimensionless parameter (2Eb/α): e = [1 + (2Eb/α)²]^{1/2} > 1.
²⁵ This terminology stems from the fact that an integral (74) of dσ/dΩ over the full solid angle, called the total cross-section σ, has the dimension of an area: σ = N/n, where N is the total number of scattered particles.


This result shows very strong scattering to small angles – so strong that the integral expressing the total cross-section,

$$ \sigma \equiv \oint_{4\pi} \frac{d\sigma}{d\Omega}\,d\Omega, \tag{3.74} $$

diverges at θ → 0²⁶ – and very weak backscattering (to angles θ ≈ π). It was historically extremely significant: in the early 1910s, its good agreement with α-particle scattering experiments carried out by Ernest Rutherford’s group gave a strong justification for his introduction of the planetary model of atoms, with electrons moving around very small nuclei – just as planets move around stars.

Note that elementary particle scattering is frequently accompanied by electromagnetic radiation and/or other processes leading to the loss of the initial mechanical energy of the system. Such inelastic scattering may give significantly different results. (In particular, the capture of an incoming particle becomes possible even for a Coulomb attracting center.) Also, quantum-mechanical effects may be important at the scattering of light particles with relatively low energies,²⁷ so the above results should be used with caution.

3.6. Exercise problems

3.1. For the system considered in Problem 2.6 (a bead sliding along a string with fixed tension T, see the figure on the right), analyze small oscillations of the bead near the equilibrium.

3.2. For the system considered in Problem 2.7 (a bead sliding along a string of a fixed length 2l, see the figure on the right), analyze small oscillations near the equilibrium.

3.3. A bead is allowed to slide, without friction, along an inverted cycloid in a vertical plane – see the figure on the right. Calculate the frequency of its free oscillations as a function of their amplitude.

3.4. Illustrate the changes of the fixed point set of our testbed system (Fig. 2.1), which was analyzed at the end of Sec. 3.2 of the lecture notes, on the so-called phase plane [θ, θ̇].

²⁶ This divergence, which persists at the quantum-mechanical treatment of the problem (see, e.g., QM Chapter 3), is due to particles with very large values of b, and disappears if, for example, any non-zero concentration of the scattering centers is taken into account.
²⁷ Their discussion may be found in QM Secs. 3.3 and 3.8.

3.5. For a 1D particle of mass m, placed into the potential U(q) = αq²ⁿ (where α > 0 and n is a positive integer), calculate the functional dependence of the particle’s oscillation period T on its energy E. Explore the limit n → ∞.

3.6. Two small masses m₁ and m₂ << m₁ may slide, without friction, over a horizontal surface. They are connected with a spring with an equilibrium length l and an elastic constant κ, and at t < 0 are at rest. At t = 0, the first of these masses receives a very short kick with impulse P ≡ ∫F(t)dt in the direction normal to the spring line. Calculate the largest and smallest magnitudes of its velocity at t > 0.

3.7. Explain why the term mr²φ̇²/2, recast in accordance with Eq. (42), cannot be merged with U(r) in Eq. (38), to form an effective 1D potential energy U(r) – L_z²/2mr², with the second term’s sign opposite to that given by Eq. (44). We have done an apparently similar thing for our testbed bead-on-rotating-ring problem at the very end of Sec. 1 – see Eq. (6); why cannot the same trick work for the planetary problem? Besides a formal explanation, discuss the physics behind this difference.

3.8. A system consisting of two equal masses m on a light rod of a fixed length l (frequently called a dumbbell) can slide without friction along a vertical ring of radius R, rotated about its vertical diameter with a constant angular velocity ω – see the figure on the right. Derive the condition of stability of the lower horizontal position of the dumbbell.


3.9. Analyze the dynamics of the so-called spherical pendulum – a point mass hung, in a uniform gravity field g, on a light cord of length l, with no confinement of its motion to a vertical plane. In particular:
(i) find the integrals of motion and reduce the problem to a 1D one,
(ii) calculate the time period of the possible circular motion around the vertical axis, and
(iii) explore small deviations from the circular motion. (Are the pendulum orbits closed?)²⁸

3.10. If our planet Earth were suddenly stopped on its orbit around the Sun, how long would it take it to fall on our star? Solve this problem using two different approaches, neglecting the Earth’s orbit eccentricity and the Sun’s size.

3.11. The orbits of Mars and Earth around the Sun may be well approximated as circles,²⁹ with a radii ratio of 3/2. Use this fact, and the Earth’s year duration, to calculate the time of travel to Mars spending the least energy on the spacecraft’s launch. Neglect the planets’ sizes and the effects of their own gravitational fields.

3.12. Derive first-order and second-order differential equations for the reciprocal distance u ≡ 1/r as a function of φ, describing the trajectory of a particle’s motion in a central potential U(r). Spell out the latter equation for the particular case of the Coulomb potential (49) and discuss the result.

²⁸ Solving this problem is a very good preparation for the analysis of the symmetric top’s rotation in Sec. 4.5.
²⁹ Indeed, their eccentricities are close to, respectively, 0.093 and 0.0167.


3.13. For the motion of a particle in the Coulomb attractive field (U(r) = –α/r, with α > 0), calculate and sketch the so-called hodograph³⁰ – the trajectory followed by the head of the velocity vector v, provided that its tail is kept at the origin.

3.14. Prove that for an arbitrary motion of a particle of mass m in the Coulomb field U = –α/r, the vector A ≡ p×L – mα n_r is conserved.³¹ After that:
(i) spell out the scalar product r·A, and use the result for an alternative derivation of Eq. (59), and for a geometric interpretation of the vector A;
(ii) spell out (A – p×L)², and use the result for an alternative derivation of the hodograph diagram discussed in the previous problem.

3.15. For motion in the following central potential:

$$ U(r) = -\frac{\alpha}{r} + \frac{\beta}{r^2}, $$

(i) find the orbit r(φ), for positive α and β, and all possible ranges of energy E;
(ii) prove that in the limit β → 0, and for energy E < 0, the orbit may be represented as a slowly rotating ellipse;
(iii) express the angular velocity of this slow orbit rotation via the parameters α and β, the particle’s mass m, its energy E, and the angular momentum L_z.

3.16. A star system contains a much smaller planet and an even much smaller amount of dust. Assuming that the attractive gravitational potential of the dust is spherically symmetric and proportional to the square of the distance from the star,³² calculate the slow precession it gives to a basically circular orbit of the planet.

3.17. A particle is moving in the field of an attractive central force, with potential

$$ U(r) = -\frac{\alpha}{r^n}, \qquad \text{where } \alpha n > 0. $$

For what values of n are the circular orbits stable?

3.18. Determine the condition for a particle of mass m, moving under the effect of a central attractive force

$$ \mathbf{F} = -\alpha\,\frac{\mathbf{r}}{r^3}\exp\left\{-\frac{r}{R}\right\}, $$

where α and R are positive constants, to have a stable circular orbit.

³⁰ The use of this notion for the characterization of motion may be traced back at least to an 1846 treatise by W. Hamilton. Nowadays, it is most often used in applied fluid mechanics, in particular meteorology.
³¹ This fact, first proved in 1710 by Jacob Hermann, was repeatedly rediscovered during the next two centuries. As a result, the most common name of A is, rather unfairly, the Runge-Lenz vector.
³² As may be readily shown from the gravitation version of the Gauss law (see, e.g., the model solution of Problem 1.7), this approximation is exact if the dust density is constant between the star and the planet.


3.19. A particle of mass m, with an angular momentum L_z, moves in the field of an attractive central force with a distance-independent magnitude F. If the particle’s energy E is slightly higher than the value E_min corresponding to the circular orbit of the particle, what is the time period of its radial oscillations? Compare the period with that of the circular orbit at E = E_min.

3.20. A particle may move without friction, in the uniform gravity field g = –g n_z, over an axially-symmetric surface that is described, in cylindrical coordinates {ρ, φ, z}, by a smooth function Z(ρ) – see the figure on the right. Derive the condition of stability of circular orbits of the particle around the symmetry axis z, with respect to small perturbations. For the cases when the condition is fulfilled, find out whether the weakly perturbed orbits are open or closed. Spell out your results for the following particular cases:
(i) a conical surface with Z = αρ,
(ii) a paraboloid with Z = αρ²/2, and
(iii) a spherical surface with Z² + ρ² = R², for ρ < R.

3.21. The gravitational potential (i.e. the gravitational energy of a unit probe mass) of our Milky Way galaxy, averaged over interstellar distances, is reasonably well approximated by the following axially-symmetric function:

$$ \phi(r, z) = \frac{V^2}{2}\ln\left(r^2 + \lambda z^2\right), $$

where r is the distance from the galaxy’s symmetry axis and z is the distance from its central plane, while V and λ > 0 are constants.³³ Prove that circular orbits of stars in this gravity field are stable, and calculate the frequencies of their small oscillations near such orbits, in the r- and z-directions.

3.22. For particle scattering by a repulsive Coulomb field, calculate the minimum approach distance r_min and the velocity v_min at that point, and analyze their dependence on the impact parameter b (see Fig. 9) and the initial velocity v_∞ of the particle.

3.23. A particle is launched from afar, with an impact parameter b, toward an attracting center giving

$$ U(r) = -\frac{\alpha}{r^n}, \qquad \text{with } n > 2 \text{ and } \alpha > 0. $$

(i) For the case when the initial kinetic energy E of the particle is barely sufficient for escaping its capture by the attracting center, express the minimum approach distance via b and n.
(ii) Calculate the capture’s total cross-section and explore its limit at n → 2.

3.24. A small body with an initial velocity v_∞ approaches an atmosphere-free planet of mass M and radius R.
(i) Find the condition on the impact parameter b for the body to hit the planet’s surface.
(ii) If the body barely avoids the collision, what is its scattering angle?

³³ Just for the reader’s reference, these constants are close to, respectively, 2.2×10⁵ m/s and 6.


3.25. Calculate the differential and total cross-sections of the classical elastic scattering of small particles by a hard sphere of radius R.

3.26. The most famous³⁴ confirmation of Einstein’s general relativity theory has come from the observation, by A. Eddington and his associates, of starlight’s deflection by the Sun, during the May 1919 solar eclipse. Considering light photons as classical particles propagating with the speed of light, v₀ ≈ c ≈ 3.00×10⁸ m/s, and using the astronomic data for the Sun’s mass, M_S ≈ 1.99×10³⁰ kg, and radius, R_S ≈ 6.96×10⁸ m, calculate the non-relativistic mechanics’ prediction for the angular deflection of the light rays grazing the Sun’s surface.

3.27. Generalize the expression for the small angle of scattering, obtained in the solution of the previous problem, to a spherically-symmetric but otherwise arbitrary potential U(r). Use the result to calculate the differential cross-section of small-angle scattering by the potential U = C/rⁿ, with integer n > 0.

Hint: You may like to use the following table integral:

$$ \int_0^1 \frac{\xi^n\,d\xi}{\left(1 - \xi^2\right)^{1/2}} = \frac{\pi^{1/2}}{2}\,\frac{\Gamma(n/2 + 1/2)}{\Gamma(n/2 + 1)}. $$

³⁴ It was not the first confirmation, though. The first one came four years earlier from Albert Einstein himself, who showed that his theory may qualitatively explain the difference between the rate of Mercury orbit’s precession, known from earlier observations, and the non-relativistic theory of that effect.


Chapter 4. Rigid Body Motion

This chapter discusses the motion of rigid bodies, with a heavy focus on its most nontrivial part: rotation. Some byproducts of this analysis enable a discussion, at the end of the chapter, of the motion of point particles as observed from non-inertial reference frames.

4.1. Translation and rotation

It is natural to start a discussion of many-particle systems from a (relatively :-) simple limit when the changes of distances r_kk’ ≡ |r_k – r_k’| between the particles are negligibly small. Such an abstraction is called the (absolutely) rigid body; it is a reasonable approximation in many practical problems, including the motion of solid samples. In other words, this model neglects deformations – which will be the subject of the next chapters. The rigid-body approximation reduces the number of degrees of freedom of the system of N particles from 3N to just six – for example, three Cartesian coordinates of one point (say, 0), and three angles of the system’s rotation about three mutually perpendicular axes passing through this point – see Fig. 1.¹

Fig. 4.1. Deriving Eq. (8): the “lab” reference frame, and the “moving” frame with unit vectors n₁, n₂, n₃ firmly bound to the body; A is an arbitrary vector, and ω is the angular velocity.

As it follows from the discussion in Secs. 1.1-1.3, any purely translational motion of a rigid body, at which the velocity vectors v of all points are equal, is not more complex than that of a point particle. Indeed, according to Eqs. (1.8) and (1.30), in an inertial reference frame, such a body moves exactly as a point particle under the effect of the net external force F^(ext). However, the rotation is a bit more tricky.

Let us start by showing that an arbitrary elementary displacement of a rigid body may always be considered as a sum of a translational motion and of what is called a pure rotation. For that, consider a “moving” reference frame {n₁, n₂, n₃}, firmly bound to the body, and an arbitrary vector A (Fig. 1). The vector may be represented by its Cartesian components A_j in that moving frame:

$$ \mathbf{A} = \sum_{j=1}^{3} A_j \mathbf{n}_j. \tag{4.1} $$

¹ An alternative way to arrive at the same number six is to consider three points of the body, which uniquely define its position. If movable independently, the points would have nine degrees of freedom, but since the three distances r_kk’ between them are now fixed, the resulting three constraints reduce the number of degrees of freedom to six.

© K. Likharev


Let us calculate the time derivative of this vector as observed from a different (“lab”) frame, taking into account that if the body rotates relative to this frame, the directions of the unit vectors n_j, as seen from the lab frame, change in time. Hence, in each product contributing to the sum (1), we have to differentiate both operands:

$$ \left.\frac{d\mathbf{A}}{dt}\right|_{\rm in\ lab} = \sum_{j=1}^{3}\frac{dA_j}{dt}\,\mathbf{n}_j + \sum_{j=1}^{3} A_j\,\frac{d\mathbf{n}_j}{dt}. \tag{4.2} $$

On the right-hand side of this equality, the first sum obviously describes the change of the vector A as observed from the moving frame. In the second sum, each of the infinitesimal vectors dn_j may be represented by its Cartesian components:

$$ d\mathbf{n}_j = \sum_{j'=1}^{3} d\varphi_{jj'}\,\mathbf{n}_{j'}, \tag{4.3} $$

where dφ_jj’ are some dimensionless scalar coefficients. To find out more about them, let us scalar-multiply each side of Eq. (3) by an arbitrary unit vector n_j”, and take into account the obvious orthonormality condition:

$$ \mathbf{n}_{j'}\cdot\mathbf{n}_{j''} = \delta_{j'j''}, \tag{4.4} $$

where δ_j’j” is the Kronecker delta symbol.² As a result, we get

$$ d\mathbf{n}_j\cdot\mathbf{n}_{j''} = d\varphi_{jj''}. \tag{4.5} $$
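Relations (3)-(5) are easy to make concrete numerically. The sketch below (plain Python; the rotation axis and the small angle 10⁻⁵ are arbitrary illustrative choices) rotates the frame vectors n_j by a small but finite angle using Rodrigues’ rotation formula, and assembles the matrix dφ_jj’ = dn_j · n_j’ of Eq. (5). To first order in the angle, the matrix comes out antisymmetric, and dn_j coincides with dφ × n_j, exactly the properties derived next in Eqs. (6)-(7):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dphi_matrix(axis=(0.3, -0.2, 0.9), mag=1e-5):
    # Rotate the unit vectors n_j by the small angle `mag` about `axis`
    # (Rodrigues' formula), and return the matrix dphi_jj' = dn_j . n_j'
    # of Eq. (5), the increments dn_j, and the vector d(phi).
    n = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    norm = math.sqrt(sum(c*c for c in axis))
    k = tuple(c/norm for c in axis)              # unit rotation axis
    dphi = tuple(mag*c for c in k)               # the vector d(phi)
    c, s = math.cos(mag), math.sin(mag)
    def rotate(v):
        kv = cross(k, v)
        kdv = sum(ki*vi for ki, vi in zip(k, v))
        return tuple(v[i]*c + kv[i]*s + k[i]*kdv*(1 - c) for i in range(3))
    dn = [tuple(r - o for r, o in zip(rotate(v), v)) for v in n]
    mat = [[sum(x*y for x, y in zip(dn[j], n[jp])) for jp in range(3)]
           for j in range(3)]
    return mat, dn, dphi
```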

Now let us use Eq. (5) to calculate the first differential of Eq. (4):

$$ d\mathbf{n}_{j'}\cdot\mathbf{n}_{j''} + \mathbf{n}_{j'}\cdot d\mathbf{n}_{j''} = d\varphi_{j'j''} + d\varphi_{j''j'} = 0; \qquad \text{in particular, } 2\,d\mathbf{n}_j\cdot\mathbf{n}_j = 2\,d\varphi_{jj} = 0. \tag{4.6} $$

These relations, valid for any choice of indices j, j’, and j” of the set {1, 2, 3}, show that the matrix with elements dφ_jj’ is antisymmetric with respect to the swap of its indices; this means that there are not nine but just three non-zero independent coefficients dφ_jj’, all with j ≠ j’. Hence it is natural to renumber them in a simpler way: dφ_jj’ = –dφ_j’j ≡ dφ_j”, where the indices j, j’, and j” follow in the “correct” order – either {1,2,3}, or {2,3,1}, or {3,1,2}. It is straightforward to verify (either just by a component-by-component comparison or by using the Levi-Civita permutation symbol³) that in this new notation, Eq. (3) may be represented just as a vector product:

$$ d\mathbf{n}_j = d\boldsymbol{\varphi}\times\mathbf{n}_j, \tag{4.7} $$

where dφ is the infinitesimal vector defined by its Cartesian components dφ_j in the rotating reference frame {n₁, n₂, n₃}. This relation is the basis of all rotation kinematics. Using it, Eq. (2) may be rewritten as

$$ \left.\frac{d\mathbf{A}}{dt}\right|_{\rm in\ lab} = \left.\frac{d\mathbf{A}}{dt}\right|_{\rm in\ mov} + \sum_{j=1}^{3} A_j\,\frac{d\boldsymbol{\varphi}}{dt}\times\mathbf{n}_j = \left.\frac{d\mathbf{A}}{dt}\right|_{\rm in\ mov} + \boldsymbol{\omega}\times\mathbf{A}, \qquad \text{where } \boldsymbol{\omega} \equiv \frac{d\boldsymbol{\varphi}}{dt}. \tag{4.8} $$
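Eq. (8) may be cross-checked numerically for the simplest case of a frame rotating about the fixed z-axis (the rate ω = 0.7 and the components of A below are arbitrary illustrative values). Here A has constant components in the moving frame, so that dA/dt|mov = 0 and Eq. (8) predicts dA/dt|lab = ω × A; the left-hand side is evaluated by a central finite difference:

```python
import math

def lab_frame_A(t, omega=0.7, A_mov=(0.2, -0.5, 0.8)):
    # Lab-frame components of a vector A fixed in a frame rotating
    # about the z-axis at angular velocity omega.
    c, s = math.cos(omega*t), math.sin(omega*t)
    ax, ay, az = A_mov
    return (c*ax - s*ay, s*ax + c*ay, az)

def check_eq_4_8(t=1.3, omega=0.7, h=1e-6):
    # Compare the finite-difference dA/dt|lab with omega x A of Eq. (8).
    A = lab_frame_A(t, omega)
    dA = tuple((p - q)/(2*h) for p, q in zip(lab_frame_A(t + h, omega),
                                             lab_frame_A(t - h, omega)))
    wxA = (-omega*A[1], omega*A[0], 0.0)     # (0, 0, omega) x A
    return dA, wxA
```

The two vectors coincide to the accuracy of the finite difference, at any instant t.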

To reveal the physical sense of the vector ω, let us apply Eq. (8) to the particular case when A is the radius vector r of a point of the body, and the lab frame is selected in a special way: its origin has the

² See, e.g., MA Eq. (13.1).
³ See, e.g., MA Eq. (13.2). Using this symbol, we may write dφ_jj’ = –dφ_j’j ≡ ε_jj’j” dφ_j” for any choice of j, j’, and j”.


same position and moves with the same velocity as that of the moving frame, in the particular instant under consideration. In this case, the first term on the right-hand side of Eq. (8) is zero, and we get

$$\left.\frac{d\mathbf{r}}{dt}\right|_{\text{in special lab frame}} = \boldsymbol{\omega}\times\mathbf{r}, \qquad (4.9)$$

where the vector r itself is the same in both frames. According to the vector product definition, the particle velocity described by this formula has a direction perpendicular to the vectors ω and r (Fig. 2), and magnitude ωr sinθ, where θ is the angle between these two vectors. As Fig. 2 shows, the last expression may be rewritten as ωρ, where ρ = r sinθ is the distance from the line that is parallel to the vector ω and passes through point 0. This is of course just the pure rotation about that line (called the instantaneous axis of rotation), with the angular velocity ω. According to Eqs. (3) and (8), the angular velocity vector ω is defined by the time evolution of the moving frame alone, so it is the same for all points r, i.e. for the rigid body as a whole. Note that nothing in our calculations forbids not only the magnitude but also the direction of the vector ω, and thus of the instantaneous axis of rotation, to change in time; hence the name.

Fig. 4.2. The instantaneous axis and the angular velocity of rotation.
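The key relations (8) and (9) are easy to cross-check numerically. The short sketch below (in Python with NumPy; the particular values of ω and r0 are my own arbitrary choices, not from the text) rotates a body-fixed vector with the finite-rotation matrix given by Rodrigues' formula and compares the numerically computed lab-frame derivative with ω × r:

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Finite rotation about a unit axis by a given angle (Rodrigues' formula)."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

omega = np.array([0.3, -1.1, 0.7])        # a constant angular velocity (arbitrary)
w = np.linalg.norm(omega)
r0 = np.array([1.0, 2.0, -0.5])           # a body-fixed vector at t = 0

t, dt = 0.8, 1e-6
r_now = rotation_matrix(omega / w, w * t) @ r0           # r(t), as seen in the lab frame
r_next = rotation_matrix(omega / w, w * (t + dt)) @ r0   # r(t + dt)
v_numerical = (r_next - r_now) / dt       # lab-frame derivative, computed directly
v_eq9 = np.cross(omega, r_now)            # Eq. (9): dr/dt = ω × r

print(np.allclose(v_numerical, v_eq9, atol=1e-5))  # True
```

The agreement holds for any choice of ω and r0, illustrating that Eq. (9) is indeed independent of the body point selected.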

Now let us generalize our result a step further, considering two reference frames that do not rotate versus each other: one ("lab") frame arbitrary, and another one selected in the special way described above, so that Eq. (9) is valid in it. Since the relative motion of these two reference frames is purely translational, we can use the simple velocity addition rule given by Eq. (1.6) to write

Body point's velocity:
$$\left.\mathbf{v}\right|_{\text{in lab}} = \left.\mathbf{v}_0\right|_{\text{in lab}} + \left.\mathbf{v}\right|_{\text{in special lab frame}} = \left.\mathbf{v}_0\right|_{\text{in lab}} + \boldsymbol{\omega}\times\mathbf{r}, \qquad (4.10)$$

where r is the radius vector of the point, as measured in the body-bound ("moving") frame 0.

4.2. Inertia tensor

Since the dynamics of each point of a rigid body is strongly constrained by the conditions rkk' = const, this is one of the most important fields of application of the Lagrangian formalism discussed in Chapter 2. For using this approach, the first thing we need to calculate is the kinetic energy of the body in an inertial reference frame. Since it is just the sum of the kinetic energies (1.19) of all its points, we can use Eq. (10) to write:⁴

$$T = \sum\frac{m}{2}v^2 = \sum\frac{m}{2}\left(\mathbf{v}_0 + \boldsymbol{\omega}\times\mathbf{r}\right)^2 = \sum\frac{m}{2}v_0^2 + \sum m\,\mathbf{v}_0\cdot(\boldsymbol{\omega}\times\mathbf{r}) + \sum\frac{m}{2}(\boldsymbol{\omega}\times\mathbf{r})^2. \qquad (4.11)$$

⁴ Actually, all symbols for particle masses, coordinates, and velocities should carry the particle's index, over which the summation is carried out. However, in this section, for notation simplicity, this index is just implied.

Let us apply to the right-hand side of Eq. (11) two general vector analysis formulas listed in the Math Appendix: the so-called operand rotation rule MA Eq. (7.6) to the second term, and MA Eq. (7.7b) to the third term. The result is

$$T = \sum\frac{m}{2}v_0^2 + \sum m\,\mathbf{r}\cdot(\mathbf{v}_0\times\boldsymbol{\omega}) + \sum\frac{m}{2}\left[\omega^2 r^2 - (\boldsymbol{\omega}\cdot\mathbf{r})^2\right]. \qquad (4.12)$$

This expression may be further simplified by making a specific choice of the point 0 (from which the radius vectors r of all particles are measured), namely by using for this point the center of mass of the body. As was already mentioned in Sec. 3.4 for the two-point case, the radius vector R of this point is defined as

$$M\mathbf{R} = \sum m\,\mathbf{r}, \quad\text{with } M \equiv \sum m, \qquad (4.13)$$

so M is the total mass of the body. In the reference frame centered at this point, we have R = 0, so that the second sum in Eq. (12) vanishes, and the kinetic energy is a sum of just two terms:

$$T = T_{\text{tran}} + T_{\text{rot}}, \qquad T_{\text{tran}} = \frac{M}{2}V^2, \qquad T_{\text{rot}} = \sum\frac{m}{2}\left[\omega^2 r^2 - (\boldsymbol{\omega}\cdot\mathbf{r})^2\right], \qquad (4.14)$$

where V ≡ dR/dt is the center-of-mass velocity in our inertial reference frame, and all particle positions r are measured in the center-of-mass frame. Since the angular velocity vector ω is common for all points of a rigid body, it is more convenient to rewrite the rotational part of the energy in a form in which the summation over the components of this vector is separated from the summation over the points of the body:

Kinetic energy of rotation:
$$T_{\text{rot}} = \frac{1}{2}\sum_{j,j'=1}^{3} I_{jj'}\,\omega_j\,\omega_{j'}, \qquad (4.15)$$

where the 3×3 matrix with elements

Inertia tensor:
$$I_{jj'} \equiv \sum m\left(r^2\delta_{jj'} - r_j r_{j'}\right) \qquad (4.16)$$

represents, in the selected reference frame, the inertia tensor of the body.⁵

Actually, the term "tensor" for the construct described by this matrix has to be justified, because in physics it implies a certain reference-frame-independent notion, whose matrix elements have to obey certain rules at the transfer between reference frames. To show that the matrix (16) indeed describes such a notion, let us calculate another key quantity, the total angular momentum L of the same body.⁶ Summing up the angular momenta of each particle, defined by Eq. (1.31), and then using Eq. (10) again, in our inertial reference frame we get

$$\mathbf{L} \equiv \sum\mathbf{r}\times\mathbf{p} = \sum m\,\mathbf{r}\times\mathbf{v} = \sum m\,\mathbf{r}\times\left(\mathbf{v}_0 + \boldsymbol{\omega}\times\mathbf{r}\right) = \sum m\,\mathbf{r}\times\mathbf{v}_0 + \sum m\,\mathbf{r}\times(\boldsymbol{\omega}\times\mathbf{r}). \qquad (4.17)$$

⁵ While the ABCs of the rotational dynamics were developed by Leonhard Euler in 1765, an introduction of the inertia tensor's formalism had to wait very long – until the invention of the tensor analysis by Tullio Levi-Civita and Gregorio Ricci-Curbastro in 1900 – soon popularized by its use in Einstein's general relativity.
⁶ Hopefully, there is very little chance of confusing the angular momentum L (a vector) and its Cartesian components Lj (scalars with an index) on one hand, and the Lagrangian function L (a scalar without an index) on the other hand.

We see that the momentum may be represented as a sum of two terms. The first one,

L 0   m r  v 0  MR  v 0 ,

(4.18)

describes the possible rotation of the center of mass about the inertial frame’s origin. This term vanishes if the moving reference frame’s origin 0 is positioned at the center of mass (where R = 0). In this case, we are left with only the second term, which describes a pure rotation of the body about its center of mass: (4.19) L  L rot   mr  ω  r  . Using one more vector algebra formula, the “bac minis cab” rule,7 we may rewrite this expression as





L   m ωr 2  r r  ω  .

(4.20)

Let us spell out an arbitrary Cartesian component of this vector: 3 3   L j   m  j r 2  r j  r j'  j'    m  j' r 2 jj'  r j r j' . j' 1 j' 1  





(4.21)

By changing the summation order and comparing the result with Eq. (16), the angular momentum may be conveniently expressed via the same matrix elements Ijj' as the rotational kinetic energy:

Angular momentum:
$$L_j = \sum_{j'=1}^{3} I_{jj'}\,\omega_{j'}. \qquad (4.22)$$

Since L and  are both legitimate vectors (meaning that they describe physical vectors independent of the reference frame choice), the matrix of elements Ijj’ that relates them is a legitimate tensor. This fact, and the symmetry of the tensor (Ijj’ = Ij’j), evident from its definition (16), allow the tensor to be further simplified. In particular, mathematics tells us that by a certain choice of the coordinate axes’ orientations, any symmetric tensor may be reduced to a diagonal form I jj '  I j  jj' , where in our case Principal moments of inertia







(4.23)



I j   m r 2  r j2   m r j'2  r j"2   m 2j ,

(4.24)

j being the distance of the particle from the jth axis, i.e. the length of the perpendicular dropped from the point to that axis. The axes of such a special coordinate system are called the principal axes, while the diagonal elements Ij given by Eq. (24), the principal moments of inertia of the body. In such a special reference frame, Eqs. (15) and (22) are reduced to very simple forms: 3

Ij

j 1

2

Trot  

Trot and L in principal-axes frame

 2j ,

L j  I j j .

(4.25) (4.26)

Both of these results are reminiscent of the corresponding relations for the translational motion, Ttran = MV²/2 and P = MV, with the angular velocity ω replacing the linear velocity V, and the tensor of inertia playing the role of the scalar mass M. However, let me emphasize that even in the specially selected reference frame, with its axes pointing in the principal directions, the analogy is incomplete, and rotation is generally more complex than translation, because the measures of inertia, Ij, are generally different for each principal axis. Let me illustrate the last fact on a simple but instructive system of three similar massive particles fixed in the vertices of an equilateral triangle (Fig. 3).

⁷ See, e.g., MA Eq. (7.5).

Fig. 4.3. Principal moments of inertia: a simple case study.

Due to the symmetry of the configuration, one of the principal axes has to pass through the center of mass 0 and be normal to the plane of the triangle. For the corresponding principal moment of inertia, Eq. (24) readily yields I3 = 3mρ². If we want to express this result in terms of the triangle's side a, we may notice that due to the system's symmetry, the angle marked in Fig. 3 equals π/6, and from the shaded right triangle, a/2 = ρ cos(π/6) ≡ ρ√3/2, giving ρ = a/√3, so, finally, I3 = ma².

Let me use this simple case to illustrate the following general axis shift theorem, which may be rather useful – especially for more complex systems. For that, let us relate the inertia tensor elements Ijj' and I'jj', calculated in two reference frames – one with its origin at the center of mass 0, and another one (0') translated by a certain vector d (Fig. 4a), so for an arbitrary point, r' = r + d. Plugging this relation into Eq. (16), we get

$$I'_{jj'} = \sum m\left[(\mathbf{r}+\mathbf{d})^2\delta_{jj'} - (r_j + d_j)(r_{j'} + d_{j'})\right] = \sum m\left[\left(r^2 + 2\,\mathbf{r}\cdot\mathbf{d} + d^2\right)\delta_{jj'} - \left(r_j r_{j'} + r_j d_{j'} + r_{j'} d_j + d_j d_{j'}\right)\right]. \qquad (4.27)$$

Since in the center-of-mass frame, all sums Σmrj equal zero, we may use Eq. (16) to finally obtain

$$I'_{jj'} = I_{jj'} + M\left(\delta_{jj'}\,d^2 - d_j d_{j'}\right). \qquad (4.28)$$

In particular, this equation shows that if the shift vector d is perpendicular to one (say, the jth) of the principal axes (Fig. 4b), i.e. dj = 0, then Eq. (28) is reduced to a very simple formula:

Principal axis' shift:
$$I'_j = I_j + Md^2. \qquad (4.29)$$

Fig. 4.4. (a) A general coordinate frame's shift from the center of mass, and (b) a shift perpendicular to one of the principal axes.
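The triangle of Fig. 3 and the axis shift theorem (28)-(29) lend themselves to a direct numerical check. The sketch below (Python with NumPy; the unit values of m and a are my own arbitrary choices) builds the inertia tensor (16) for the three masses, finds the principal moments by diagonalization per Eq. (23), and then verifies Eq. (29) for a parallel axis shifted to one of the vertices:

```python
import numpy as np

m, a = 1.0, 1.0                                   # particle mass and triangle side (arbitrary units)
rho = a / np.sqrt(3)                              # distance from the center of mass to each vertex
angles = np.pi/2 + 2*np.pi*np.arange(3)/3         # vertex directions, 120 degrees apart
r = rho * np.stack([np.cos(angles), np.sin(angles), np.zeros(3)], axis=1)

# inertia tensor, Eq. (16), and its principal moments via diagonalization, Eq. (23):
I = sum(m * ((ri @ ri) * np.eye(3) - np.outer(ri, ri)) for ri in r)
principal = np.sort(np.linalg.eigvalsh(I))
print(np.allclose(principal, [0.5*m*a**2, 0.5*m*a**2, m*a**2]))   # True: cf. Eq. (30) below, and I3 = m a^2

# axis shift, Eq. (29): a parallel axis through one vertex, still normal to the plane
I3_shifted = sum(m * np.dot(ri - r[0], ri - r[0]) for ri in r)    # Eq. (35)-type sum about the new axis
print(np.isclose(I3_shifted, I[2, 2] + (3*m) * rho**2))           # True: I'3 = I3 + M d^2 = 2 m a^2
```

Note that the in-plane principal moments come out equal, anticipating the symmetric-top property derived analytically in Eq. (30).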

Now returning to the particular system shown in Fig. 3, let us perform such a shift to the new ("primed") axis passing through the location of one of the particles, still perpendicular to their common plane. Then the contribution of that particular mass to the primed moment of inertia vanishes, and I'3 = 2ma². Now, returning to the center of mass and applying Eq. (29), we get I3 = I'3 – Mρ² = 2ma² – (3m)(a/√3)² = ma², i.e. the same result as above.

The symmetry situation inside the triangle's plane is somewhat less obvious, so let us start by calculating the moments of inertia for the axes shown vertical and horizontal in Fig. 3. From Eq. (24), we readily get:

$$I_1 = 2mh^2 + m\rho^2 = 2m\left(\frac{a}{2\sqrt{3}}\right)^2 + m\left(\frac{a}{\sqrt{3}}\right)^2 = \frac{ma^2}{2}, \qquad I_2 = 2m\left(\frac{a}{2}\right)^2 = \frac{ma^2}{2}, \qquad (4.30)$$

where h is the distance between the center of mass and any side of the triangle: h = ρ sin(π/6) = ρ/2 = a/2√3. We see that I1 = I2, and mathematics tells us that in this case, any in-plane axis (passing through the center-of-mass 0) may be considered as principal, and has the same moment of inertia. A rigid body with this property, I1 = I2 ≠ I3, is called the symmetric top. (The last direction is called the main principal axis of the system.)

Despite the symmetric top's name, the situation may be even more symmetric in the so-called spherical tops, i.e. highly symmetric systems whose principal moments of inertia are all equal,

Spherical top: definition:
$$I_1 = I_2 = I_3 = I. \qquad (4.31)$$

Mathematics says that in this case, the moment of inertia for rotation about any axis (but still passing through the center of mass) is equal to the same I. Hence Eqs. (25) and (26) are further simplified for any direction of the vector ω:

Spherical top: properties:
$$T_{\text{rot}} = \frac{I}{2}\,\omega^2, \qquad \mathbf{L} = I\boldsymbol{\omega}, \qquad (4.32)$$

thus making the analogy of rotation and translation complete. (As will be discussed in the next section, this analogy is also complete if the rotation axis is fixed by external constraints.) Evident examples of a spherical top are a uniform sphere and a uniform spherical shell; its less obvious example is a uniform cube – with masses either concentrated in the vertices, or uniformly spread over the faces, or uniformly distributed over the volume. Again, in this case any axis passing through the center of mass is a principal one and has the same principal moment of inertia. For a sphere, this is natural; for a cube, it is rather surprising – but may be confirmed by a direct calculation.

4.3. Fixed-axis rotation

Now we are well equipped for a discussion of the rigid body's rotational dynamics. The general equation of this dynamics is given by Eq. (1.38), which is valid for the dynamics of any system of particles – either rigidly connected or not:

$$\dot{\mathbf{L}} = \boldsymbol{\tau}, \qquad (4.33)$$

where τ is the net torque of external forces. Let us start exploring this equation from the simplest case, when the axis of rotation, i.e. the direction of the vector ω, is fixed by some external constraints. Directing


the z-axis along this vector, we have ωx = ωy = 0. According to Eq. (22), in this case the z-component of the angular momentum is

$$L_z = I_{zz}\,\omega_z, \qquad (4.34)$$

where Izz, though not necessarily one of the principal moments of inertia, still may be calculated using Eq. (24):

$$I_{zz} = \sum m\,\rho_z^2 = \sum m\left(x^2 + y^2\right), \qquad (4.35)$$

with ρz being the distance of each particle from the rotation axis z. According to Eq. (15), in this case the rotational kinetic energy is just

$$T_{\text{rot}} = \frac{I_{zz}}{2}\,\omega_z^2. \qquad (4.36)$$

Moreover, it is straightforward to show that if the rotation axis is fixed, Eqs. (34)-(36) are valid even if the axis does not pass through the center of mass – provided that the distances ρz are now measured from that axis. (The proof is left for the reader's exercise.) As a result, we may not care about other components of the vector L,⁸ and use just one component of Eq. (33),

$$\dot{L}_z = \tau_z, \qquad (4.37)$$

because it, when combined with Eq. (34), completely determines the dynamics of rotation:

$$I_{zz}\,\dot{\omega}_z = \tau_z, \qquad\text{i.e. } I_{zz}\,\ddot{\varphi}_z = \tau_z, \qquad (4.38)$$

where φz is the angle of rotation about the axis, so ωz = φ̇z. The scalar relations (34), (36), and (38), describing rotation about a fixed axis, are completely similar to the corresponding formulas of 1D motion of a single particle, with ωz corresponding to the usual ("linear") velocity, the angular momentum component Lz – to the linear momentum, and Izz – to the particle's mass. The resulting motion about the axis is also frequently similar to that of a single particle. As a simple example, let us consider what is called the physical (or "compound") pendulum (Fig. 5) – a rigid body free to rotate about a fixed horizontal axis that does not pass through the center of mass 0, in a uniform gravity field g.

Fig. 4.5. Physical pendulum: a rigid body with a fixed (horizontal) rotation axis 0' that does not pass through the center of mass 0. (The plane of the drawing is normal to that axis.)
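To have a concrete cross-check of the results derived below for this system, one may discretize a simple body numerically. The sketch below (Python with NumPy; the uniform-rod pendulum and all parameter values are my own illustration, not from the text) models a thin uniform rod of length L pivoted at one end, computes I' from the fixed-axis sum (35), and checks that the small-oscillation frequency Ω = (Mgl/I')^(1/2) obtained below in Eq. (41) reduces to the known closed form (3g/2L)^(1/2):

```python
import numpy as np

g, L_rod, M = 9.81, 1.0, 1.0                  # gravity, rod length, rod mass (arbitrary)
N = 100_000                                   # number of mass elements in the discretization
s = (np.arange(N) + 0.5) * L_rod / N          # distances of the elements from the pivot
dm = M / N                                    # mass of each element (uniform rod)

I_prime = np.sum(dm * s**2)                   # Eq. (35)-type sum about the pivot at the rod's end
l = L_rod / 2                                 # pivot-to-center-of-mass distance
Omega = np.sqrt(M * g * l / I_prime)          # small-oscillation frequency, Eq. (41) below

print(np.isclose(I_prime, M * L_rod**2 / 3))          # True: I' = M L^2 / 3
print(np.isclose(Omega, np.sqrt(1.5 * g / L_rod)))    # True: Omega = (3g/2L)^(1/2)
print(np.isclose(I_prime / (M * l), 2 * L_rod / 3))   # True: l_ef = 2L/3, longer than l = L/2
```

The last line illustrates the "effective length" introduced in Eq. (41): for an extended body, lef exceeds the pivot-to-center-of-mass distance l.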

⁸ Note that according to Eq. (22), other Cartesian components of the angular momentum, Lx and Ly, may be different from zero, and even evolve in time. The corresponding torques τx and τy, which obey Eq. (33), are automatically provided by the external forces that keep the rotation axis fixed.

Let us drop the perpendicular from point 0 to the rotation axis, and call the oppositely directed vector l – see the dashed arrow in Fig. 5. Then the torque (relative to the rotation axis 0') of the forces keeping the axis fixed is zero, and the only contribution to the net torque is due to gravity alone:

$$\left.\boldsymbol{\tau}\right|_{\text{in 0'}} = \sum\left.\mathbf{r}\right|_{\text{in 0'}}\times\mathbf{F} = \sum\left(\mathbf{l} + \left.\mathbf{r}\right|_{\text{in 0}}\right)\times m\mathbf{g} = \sum m\,\mathbf{l}\times\mathbf{g} + \sum m\left.\mathbf{r}\right|_{\text{in 0}}\times\mathbf{g} = M\mathbf{l}\times\mathbf{g}. \qquad (4.39)$$

(The last step used the facts that point 0 is the center of mass, so the second term on the right-hand side equals zero, and that the vectors l and g are the same for all particles of the body.) This result shows that the torque is directed along the rotation axis, and its (only) component τz is equal to –Mgl sinφ, where φ is the angle between the vectors l and g, i.e. the angular deviation of the pendulum from the position of equilibrium – see Fig. 5 again. As a result, Eq. (38) takes the form

$$I'\ddot{\varphi} = -Mgl\sin\varphi, \qquad (4.40)$$

where I' is the moment of inertia for rotation about the axis 0' rather than about the center of mass. This equation is identical to Eq. (1.18) for the point-mass (sometimes called "mathematical") pendulum, with the small-oscillation frequency

Physical pendulum: frequency:
$$\Omega = \left(\frac{Mgl}{I'}\right)^{1/2} \equiv \left(\frac{g}{l_{\text{ef}}}\right)^{1/2}, \quad\text{with } l_{\text{ef}} \equiv \frac{I'}{Ml}. \qquad (4.41)$$

As a sanity check, in the simplest case when the linear size of the body is much smaller than the suspension length l, Eq. (35) yields I' = Ml², i.e. lef = l, and Eq. (41) reduces to the well-familiar formula Ω = (g/l)^(1/2) for the point-mass pendulum.

Now let us discuss the situations when a rigid body not only rotates but also moves as a whole. As was mentioned in the introductory chapter, the total linear momentum of the body,

$$\mathbf{P} \equiv \sum m\mathbf{v} = \sum m\,\dot{\mathbf{r}} = \frac{d}{dt}\sum m\,\mathbf{r}, \qquad (4.42)$$

satisfies the 2nd Newton's law in the form (1.30). Using the definition (13) of the center of mass, the momentum may be represented as

$$\mathbf{P} = M\dot{\mathbf{R}} \equiv M\mathbf{V}, \qquad (4.43)$$

so Eq. (1.30) may be rewritten as

C.o.m.: law of motion:
$$M\dot{\mathbf{V}} = \mathbf{F}, \qquad (4.44)$$

where F is the vector sum of all external forces. This equation shows that the center of mass of the body moves exactly like a point particle of mass M, under the effect of the net force F. In many cases, this fact makes the translational dynamics of a rigid body absolutely similar to that of a point particle.

The situation becomes more complex if some of the forces contributing to the vector sum F depend on the rotation of the same body, i.e. if its rotational and translational motions are coupled. Analysis of such coupled motion is rather straightforward if the direction of the rotation axis does not change in time, and hence Eqs. (34)-(36) are still valid. Possibly the simplest example is a round cylinder (say, a wheel) rolling on a surface without slippage (Fig. 6). Here the no-slippage condition may be represented as the requirement for the net velocity of the particular wheel's point A that touches the surface to equal zero – in the reference frame bound to the surface. For the simplest case of a plane

surface (Fig. 6a), this condition may be spelled out using Eq. (10), giving the following relation between the angular velocity ω of the wheel and the linear velocity V of its center:

$$V + \omega r = 0. \qquad (4.45)$$

Fig. 4.6. Round cylinder rolling over (a) a plane surface and (b) a concave surface.

Such kinematic relations are essentially holonomic constraints, which reduce the number of degrees of freedom of the system. For example, without the no-slippage condition (45), the wheel on a plane surface has to be considered as a system with two degrees of freedom, making its total kinetic energy (14) a function of two independent generalized velocities, say V and ω:

$$T = T_{\text{tran}} + T_{\text{rot}} = \frac{M}{2}V^2 + \frac{I}{2}\,\omega^2. \qquad (4.46)$$

Using Eq. (45), we may eliminate, for example, the linear velocity and reduce Eq. (46) to

$$T = \frac{Mr^2 + I}{2}\,\omega^2 \equiv \frac{I_{\text{ef}}}{2}\,\omega^2, \quad\text{where } I_{\text{ef}} \equiv I + Mr^2. \qquad (4.47)$$

This result may be interpreted as the kinetic energy of pure rotation of the wheel about the instantaneous rotation axis A, with Ief being the moment of inertia about that axis, satisfying Eq. (29).

Kinematic relations are not always as simple as Eq. (45). For example, if a wheel is rolling on a concave surface (Fig. 6b), we need to relate the angular velocity of the wheel's rotation about its axis 0' (say, ω) and that (say, Ω) of its axis' rotation about the center 0 of curvature of the surface. A popular error here is to write Ω = –(r/R)ω [WRONG!]. A prudent way to derive the correct relation is to note that Eq. (45) holds for this situation as well, and on the other hand, the same linear velocity of the wheel's center may be expressed as V = Ω(R – r). Combining these formulas, we get the correct relation

$$\Omega = -\frac{r}{R-r}\,\omega. \qquad (4.48)$$
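The sign-sensitive result (48) can be double-checked by verifying that the material point of the wheel currently at the contact has zero net velocity. In the sketch below (Python with NumPy; the z-axis is taken normal to the plane of Fig. 6b, and all parameter values are my own arbitrary choices), the wheel's own angular velocity ω is chosen according to Eq. (48), and Eq. (10) is then applied to the contact point:

```python
import numpy as np

R, r = 1.0, 0.3                              # radii of the concave surface and of the wheel
Omega = 0.7                                  # angular velocity of the wheel's center about 0 (chosen)
omega = -Omega * (R - r) / r                 # Eq. (48), solved for the wheel's own angular velocity

theta = 0.4                                  # current angular position of the wheel's center
n = np.array([np.cos(theta), np.sin(theta), 0.0])  # unit vector from 0 toward the wheel's center
center = (R - r) * n                         # position of the wheel's center
contact = R * n                              # the contact point lies on the surface, at radius R

Om_vec = np.array([0.0, 0.0, Omega])
om_vec = np.array([0.0, 0.0, omega])
V = np.cross(Om_vec, center)                 # velocity of the wheel's center
v_contact = V + np.cross(om_vec, contact - center)  # Eq. (10) for the contact point
print(np.allclose(v_contact, 0))             # True: the no-slippage condition holds
```

Replacing Eq. (48) with the "popular error" ω = –(R/r)Ω of the naive form makes the printed check fail, which is the point of the exercise.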

Another famous example of the relation between translational and rotational motion is given by the “sliding-ladder” problem (Fig. 7). Let us analyze it for the simplest case of negligible friction, and the ladder’s thickness being small in comparison with its length l.

Fig. 4.7. The sliding-ladder problem.

To use the Lagrangian formalism, we may write the kinetic energy of the ladder as the sum (14) of its translational and rotational parts:

$$T = \frac{M}{2}\left(\dot{X}^2 + \dot{Y}^2\right) + \frac{I}{2}\,\dot{\alpha}^2, \qquad (4.49)$$

where X and Y are the Cartesian coordinates of its center of mass in an inertial reference frame, and I is the moment of inertia for rotation about the z-axis passing through the center of mass. (For the uniformly distributed mass, an elementary integration of Eq. (35) yields I = Ml²/12.) In the reference frame with the center in the corner 0, both X and Y may be simply expressed via the angle α:

$$X = \frac{l}{2}\cos\alpha, \qquad Y = \frac{l}{2}\sin\alpha. \qquad (4.50)$$

(The easiest way to obtain these relations is to notice that the dashed line in Fig. 7 has length l/2, and the same slope α as the ladder.) Plugging these expressions into Eq. (49), we get

$$T = \frac{I_{\text{ef}}}{2}\,\dot{\alpha}^2, \qquad I_{\text{ef}} \equiv I + M\left(\frac{l}{2}\right)^2 = \frac{1}{3}Ml^2. \qquad (4.51)$$

Since the potential energy of the ladder in the gravity field may be also expressed via the same angle,

$$U = MgY = Mg\,\frac{l}{2}\sin\alpha, \qquad (4.52)$$

α may be conveniently used as the (only) generalized coordinate of the system. Even without writing the Lagrange equation of motion for that coordinate, we may notice that since the Lagrangian function L ≡ T – U does not depend on time explicitly, and the kinetic energy (51) is a quadratic-homogeneous function of the generalized velocity α̇, the full mechanical energy,

$$E \equiv T + U = \frac{I_{\text{ef}}}{2}\,\dot{\alpha}^2 + Mg\,\frac{l}{2}\sin\alpha \equiv \frac{Mgl}{2}\left(\frac{l}{3g}\,\dot{\alpha}^2 + \sin\alpha\right), \qquad (4.53)$$

is conserved, giving us the first integral of motion. Moreover, Eq. (53) shows that the system's energy (and hence dynamics) is identical to that of a physical pendulum, with an unstable fixed point α1 = π/2, a stable fixed point at α2 = –π/2, and frequency

$$\Omega = \left(\frac{3g}{2l}\right)^{1/2} \qquad (4.54)$$

of small oscillations near the latter point. (Of course, this fixed point cannot be reached in the simple geometry shown in Fig. 7, where the ladder's fall on the floor would change its equations of motion. Moreover, even before that, the left end of the ladder may detach from the wall. The analysis of this issue is left for the reader's exercise.)

4.4. Free rotation

Now let us proceed to more complex situations when the rotation axis is not fixed. A good illustration of the complexity arising in this case comes from the case of a rigid body left alone, i.e. not subjected to external forces and hence with its potential energy U constant. Since in this case, according


to Eq. (44), the center of mass (as observed from any inertial reference frame) moves with a constant velocity, we can always use a convenient inertial reference frame with the origin at that point. From the point of view of such a frame, the body's motion is a pure rotation, and Ttran = 0. Hence, the system's Lagrangian function is just its rotational energy (15), which is, first, a quadratic-homogeneous function of the components ωj (which may be taken for generalized velocities), and, second, does not depend on time explicitly. As we know from Chapter 2, in this case the mechanical energy, here equal to Trot alone, is conserved. According to Eq. (15), for the principal-axes components of the vector ω, this means

Rotational energy's conservation:
$$T_{\text{rot}} = \sum_{j=1}^{3}\frac{I_j}{2}\,\omega_j^2 = \text{const}. \qquad (4.55)$$

Next, as Eq. (33) shows, in the absence of external forces, the angular momentum L of the body is conserved as well. However, though we can certainly use Eq. (26) to represent this fact as

Angular momentum's conservation:
$$\mathbf{L} = \sum_{j=1}^{3}I_j\,\omega_j\,\mathbf{n}_j = \text{const}, \qquad (4.56)$$

where nj are the principal axes, this does not mean that all the components ωj are constant, because the principal axes are fixed relative to the rigid body, and hence may rotate with it.

Before exploring these complications, let us briefly mention two conceptually easy, but practically very important cases. The first is a spherical top (I1 = I2 = I3 = I). In this case, Eqs. (55) and (56) imply that all components of the vector ω = L/I, i.e. both the magnitude and the direction of the angular velocity, are conserved, for any initial spin. In other words, the body conserves its rotation speed and axis direction, as measured in an inertial frame. The most obvious example is a spherical planet. For example, our Mother Earth, rotating about its axis with angular velocity ω = 2π/(1 day) ≈ 7.3×10⁻⁵ s⁻¹, keeps its axis at a nearly constant angle of 23°27′ to the ecliptic pole, i.e. to the axis normal to the plane of its motion around the Sun. (In Sec. 6 below, we will discuss some very slow motions of this axis, due to gravity effects.)

Spherical tops are also used in the most accurate gyroscopes, usually with gas-jet or magnetic suspension in vacuum. If done carefully, such systems may have spectacular stability. For example, the gyroscope system of the Gravity Probe B satellite experiment, flown in 2004-2005, was based on quartz spheres – round with a precision of about 10 nm and covered with superconducting thin films (which enabled their magnetic suspension and monitoring). The whole system was stable enough to measure the so-called geodetic effect in general relativity (essentially, the space curving by the Earth's mass), resulting in the axis' precession by only 6.6 arc seconds per year, i.e. with an angular velocity of just ~10⁻¹¹ s⁻¹, with the experimental results agreeing with theory with a record ~0.3% accuracy.⁹

The second simple case is that of the symmetric top (I1 = I2 ≠ I3) with the initial vector L aligned with the main principal axis. In this case, ω = L/I3 = const, so the rotation axis is conserved.¹⁰ Such tops, typically in the shape of a flywheel (a heavy, flat rotor), and supported by gimbal systems (also called "Cardan suspensions") that allow for virtually torque-free rotation about three mutually perpendicular

⁹ Still, the main goal of this rather expensive (~$750M) project, an accurate measurement of a more subtle relativistic effect, the so-called frame-dragging drift (also called "the Schiff precession"), predicted to be about 0.04 arc seconds per year, has not been achieved.
¹⁰ This is also true for an asymmetric top, i.e. an arbitrary body (with, say, I1 < I2 < I3), but in this case the alignment of the vector L with the axis n2, corresponding to the intermediate moment of inertia, is unstable.

axes,11 are broadly used in more common gyroscopes. Invented by Léon Foucault in the 1850s and made practical later by H. Anschütz-Kaempfe, such gyroscopes have become core parts of automatic guidance systems, for example, in ships, airplanes, missiles, etc. Even if its support wobbles and/or drifts, the suspended gyroscope sustains its direction relative to an inertial reference frame.12 However, in the general case with no such special initial alignment, the dynamics of symmetric tops is more complicated. In this case, the vector L is still conserved, including its direction, but the vector  is not. Indeed, let us direct the n2-axis normally to the common plane of the vector L and the current instantaneous direction n3 of the main principal axis (in Fig. 8 below, the plane of the drawing); then, in that particular instant, L2 = 0. Now let us recall that in a symmetric top, the axis n2 is a principal one. According to Eq. (26) with j = 2, the corresponding component 2 has to be equal to L2/I2, so it is equal to zero. This means that in the particular instant we are considering, the vector  lies in this plane (the common plane of vectors L and n3) as well – see Fig. 8a.

Fig. 4.8. Free rotation of a symmetric top: (a) the general configuration of vectors, and (b) calculating the free precession frequency.

Now consider some point located on the main principal axis n3, and hence on the plane [n3, L]. Since ω is the instantaneous axis of rotation, according to Eq. (9), the instantaneous velocity v = ω×r of the point is directed normally to that plane. This is true for each point of the main axis (besides only one, with r = 0, i.e. the center of mass, which does not move), so the axis as a whole has to move normally to the common plane of the vectors L, ω, and n3, while still passing through point 0. Since this conclusion is valid for any moment of time, it means that the vectors ω and n3 rotate about the space-fixed vector L together, with some angular velocity ωpre, at each moment staying within one plane. This effect is called the free (or "torque-free", or "regular") precession, and has to be clearly distinguished from the completely different effect of the torque-induced precession, which will be discussed in the next section.

To calculate ωpre, let us represent the instant vector ω as a sum of not its Cartesian components (as in Fig. 8a), but rather of two non-orthogonal vectors directed along n3 and L (Fig. 8b):

$$\boldsymbol{\omega} = \omega_{\text{rot}}\,\mathbf{n}_3 + \omega_{\text{pre}}\,\mathbf{n}_L, \qquad \mathbf{n}_L \equiv \frac{\mathbf{L}}{L}. \qquad (4.57)$$

¹¹ See, for example, a nice animation available online at http://en.wikipedia.org/wiki/Gimbal.
¹² Currently, optical gyroscopes are becoming more popular for all but the most precise applications. Much more compact but also much less accurate gyroscopes used, for example, in smartphones and tablet computers, are based on the effect of rotation on 2D mechanical oscillators (whose analysis is left for the reader's exercise), and are implemented as micro-electro-mechanical systems (MEMS) – see, e.g., Chapter 22 in V. Kaajakari, Practical MEMS, Small Gear Publishing, 2009.

Fig. 8b shows that ωrot has the meaning of the angular velocity of rotation of the body about its main principal axis, while ωpre is the angular velocity of rotation of that axis about the constant direction of the vector L, i.e. is exactly the frequency of precession that we are trying to find. Now ωpre may be readily calculated from the comparison of the two panels of Fig. 8, by noticing that the same angle θ between the vectors L and n3 participates in two relations:

$$\sin\theta = \frac{L_1}{L} = \frac{\omega_1}{\omega_{\text{pre}}}. \qquad (4.58)$$

Since the n1-axis is a principal one, we may use Eq. (26) for j = 1, i.e. L1 = I1ω1, to eliminate ω1 from Eq. (58), and get a very simple formula

Free precession: lab frame:
$$\omega_{\text{pre}} = \frac{L}{I_1}. \qquad (4.59)$$

This result shows that the precession frequency is constant and independent of the alignment of the vector L with the main principal axis n3, while its amplitude (characterized by the angle θ) does depend on the initial alignment, and vanishes if L is parallel to n3.¹³ Note also that if all the principal moments of inertia are of the same order, ωpre is of the same order as the total angular speed ω ≡ |ω| of the rotation.

Now let us briefly discuss the free precession in the general case of an "asymmetric top", i.e. a body with arbitrary I1 ≠ I2 ≠ I3. In this case, the effect is more complex because here not only the direction but also the magnitude of the instantaneous angular velocity ω may evolve in time. If we are only interested in the relation between the instantaneous values of ωj and Lj, i.e. the "trajectories" of the vectors ω and L as observed from the reference frame {n1, n2, n3} of the principal axes of the body, rather than in the explicit law of their time evolution, they may be found directly from the conservation laws. (Let me emphasize again that the vector L, being constant in an inertial reference frame, generally evolves in the frame rotating with the body.) Indeed, Eq. (55) may be understood as the equation of an ellipsoid in the Cartesian coordinates {ω1, ω2, ω3}, so for a free body, the vector ω has to stay on the surface of that ellipsoid.¹⁴ On the other hand, since the reference frame's rotation preserves the length of any vector, the magnitude (but not the direction!) of the vector L is also an integral of motion in the moving frame, and we can write

3

j 1

j 1

L2   L2j   I 2j  2j  const .

(4.60)

Hence the trajectory of the vector ω follows the closed curve formed by the intersection of two ellipsoids, (55) and (60) – the so-called Poinsot construction. It is evident that this trajectory is generally "taco-edge-shaped", i.e. more complex than a planar circle, but never very complex either.15 The same argument may be repeated for the vector L, for which the first form of Eq. (60) describes a sphere, and Eq. (55), another ellipsoid:

$$T_{\rm rot} = \sum_{j=1}^{3} \frac{1}{2I_j} L_j^2 = \text{const}. \tag{4.61}$$

13 For our Earth, the free precession's amplitude is so small (corresponding to sub-10-m linear deviations of the symmetry axis from the vector L at the surface) that this effect is of the same order as other, more irregular motions of the axis, resulting from turbulent fluid flow effects in the planet's interior and its atmosphere.
14 It is frequently called the Poinsot ellipsoid, named after Louis Poinsot (1777-1859), who made several important contributions to rigid body mechanics.
15 Curiously, the "wobbling" motion along such trajectories was observed not only for macroscopic rigid bodies but also for heavy atomic nuclei – see, e.g., N. Sensharma et al., Phys. Rev. Lett. 124, 052501 (2020).

On the other hand, if we are interested in the trajectory of the vector ω as observed from an inertial frame (in which the vector L stays still), we may note that the general relation (15) for the same rotational energy T_rot may also be rewritten as

$$T_{\rm rot} = \frac{1}{2}\sum_{j=1}^{3}\omega_j \sum_{j'=1}^{3} I_{jj'}\,\omega_{j'}. \tag{4.62}$$

But according to Eq. (22), the second sum on the right-hand side is nothing more than L_j, so

$$T_{\rm rot} = \frac{1}{2}\sum_{j=1}^{3}\omega_j L_j = \frac{1}{2}\,\boldsymbol\omega\cdot\mathbf L. \tag{4.63}$$
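In numerical work, the equality of the quadratic form (62) and the dot-product form (63) makes a convenient sanity check for any inertia-tensor routine; a minimal sketch (the tensor and angular-velocity values below are arbitrary):

```python
# Numerical check of Eq. (4.63): Trot = (1/2) sum_j w_j L_j.
# The symmetric inertia tensor I and the angular velocity w are arbitrary.
I = [[2.0, 0.3, 0.1],
     [0.3, 3.0, 0.2],
     [0.1, 0.2, 4.0]]
w = [0.5, -1.0, 2.0]

# L_j = sum_j' I_{jj'} w_j' -- cf. Eqs. (22) and (26):
L = [sum(I[j][k] * w[k] for k in range(3)) for j in range(3)]

# Trot from the double sum (4.62), and from (1/2) w . L -- Eq. (4.63):
T_quad = 0.5 * sum(w[j] * I[j][k] * w[k] for j in range(3) for k in range(3))
T_dot = 0.5 * sum(w[j] * L[j] for j in range(3))
assert abs(T_quad - T_dot) < 1e-12
```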

This equation shows that for a free body (T_rot = const, L = const), even if the vector ω changes in time, its endpoint should stay on a plane normal to the angular momentum L. We have already seen this for the particular case of a symmetric top (Fig. 8b), but for an asymmetric top, the trajectory of the endpoint may not be circular.

If we are interested not only in the trajectory of the vector ω but also in the law of its evolution in time, it may be calculated using the general Eq. (33) expressed in the principal components ω_j. For that, we have to recall that Eq. (33) is only valid in an inertial reference frame, while the frame {n₁, n₂, n₃} may rotate with the body and hence is generally not inertial. We may handle this problem by applying, to the vector L, the general kinematic relation (8):

$$\left.\frac{d\mathbf L}{dt}\right|_{\rm in\ lab} = \left.\frac{d\mathbf L}{dt}\right|_{\rm in\ mov} + \boldsymbol\omega\times\mathbf L. \tag{4.64}$$

Combining it with Eq. (33), in the moving frame we get

$$\frac{d\mathbf L}{dt} + \boldsymbol\omega\times\mathbf L = \boldsymbol\tau, \tag{4.65}$$

where τ is the external torque. In particular, for the principal-axis components L_j, related to the components ω_j by Eq. (26), the vector equation (65) is reduced to a set of three scalar Euler equations

$$I_j\,\dot\omega_j + (I_{j''} - I_{j'})\,\omega_{j'}\omega_{j''} = \tau_j, \tag{4.66}$$

where the set of indices {j, j′, j″} has to follow the usual "right" order – e.g., {1, 2, 3}, etc.16 In order to get a feeling of how the Euler equations work, let us return to the particular case of a free symmetric top (τ₁ = τ₂ = τ₃ = 0, I₁ = I₂ ≠ I₃). In this case, I₁ – I₂ = 0, so Eq. (66) with j = 3 yields ω₃ = const, while the equations for j = 1 and j = 2 take the following simple form:

$$\dot\omega_1 = -\Omega_{\rm pre}\,\omega_2, \qquad \dot\omega_2 = +\Omega_{\rm pre}\,\omega_1, \tag{4.67}$$

where Ω_pre is a constant determined by both the system parameters and the initial conditions:

$$\Omega_{\rm pre} = \omega_3\,\frac{I_3 - I_1}{I_1}. \tag{4.68}$$

16 These equations are of course valid in the simplest case of the fixed rotation axis as well. For example, if ω = ω n_z, i.e. ω_x = ω_y = 0, Eq. (66) is reduced to Eq. (38).
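The solution described by Eqs. (67)–(68) may be cross-checked by direct numerical integration of the free Euler equations (66); a minimal sketch, with arbitrarily chosen moments of inertia I₁ = I₂ = 1, I₃ = 2 and initial condition ω = (0.3, 0, 1):

```python
import math

# Direct integration of the free Euler equations (66) for a symmetric top.
# The moments of inertia and the initial condition are chosen arbitrarily:
I1, I2, I3 = 1.0, 1.0, 2.0
w = [0.3, 0.0, 1.0]                       # initial (w1, w2, w3), rad/s

def rhs(w):
    w1, w2, w3 = w
    # I_j dw_j/dt + (I_j'' - I_j') w_j' w_j'' = 0 -- Eq. (4.66) with tau = 0
    return [-(I3 - I2) * w2 * w3 / I1,
            -(I1 - I3) * w3 * w1 / I2,
            -(I2 - I1) * w1 * w2 / I3]

def rk4_step(w, h):
    k1 = rhs(w)
    k2 = rhs([w[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = rhs([w[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = rhs([w[i] + h * k3[i] for i in range(3)])
    return [w[i] + h * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6 for i in range(3)]

h, n = 1e-3, 10_000                       # time step and step count (t = 10)
for _ in range(n):
    w = rk4_step(w, h)

# w3 and the transverse amplitude (w1^2 + w2^2)^(1/2) are conserved:
assert abs(w[2] - 1.0) < 1e-9
assert abs(math.hypot(w[0], w[1]) - 0.3) < 1e-9

# The phase of (w1, w2) advances at Omega_pre = w3 (I3 - I1) / I1 = 1 rad/s
# -- Eq. (4.68); after t = 10 the expected phase is 10 rad (mod 2 pi):
phi = math.atan2(w[1], w[0])
expected = (10.0 + math.pi) % (2 * math.pi) - math.pi
assert abs(phi - expected) < 1e-6
```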

The system of two equations (67) has a sinusoidal solution with frequency Ω_pre, and describes a uniform rotation of the vector ω, with that frequency, about the main axis n₃. This is just another representation of the free precession analyzed above, but this time as observed from the rotating body. Evidently, Ω_pre is substantially different from the frequency ω_pre (59) of the precession as observed from the lab frame; for example, Ω_pre vanishes for the spherical top (with I₁ = I₂ = I₃), while ω_pre, in this case, is equal to the rotation frequency.17

Unfortunately, for the rotation of an asymmetric top (i.e., an arbitrary rigid body) the Euler equations (66) are substantially nonlinear even in the absence of external torque, and may be solved analytically only in just a few cases. One of them is a proof of the already mentioned fact: the free top's rotation about one of its principal axes is stable if the corresponding principal moment of inertia is either the largest or the smallest one of the three. (This proof is easy, and is left for the reader's exercise.)

4.5. Torque-induced precession

The dynamics of rotation becomes even more complex in the presence of external forces. Let us consider the most counter-intuitive effect of torque-induced precession, for the simplest case of an axially-symmetric body (which is a particular case of the symmetric top, I₁ = I₂ ≠ I₃), supported at some point A of its symmetry axis that does not coincide with the center of mass 0 – see Fig. 9.

Fig. 4.9. Symmetric top in the gravity field: (a) a side view of the system, and (b) the top view of the evolution of the horizontal component of the angular momentum vector.

The uniform gravity field g creates bulk-distributed forces that, as we know from the analysis of the physical pendulum in Sec. 3, are equivalent to a single force Mg applied in the center of mass – in Fig. 9, point 0. The torque of this force relative to the support point A is

$$\boldsymbol\tau = \left.\mathbf r_0\right|_{\rm in\ A}\times M\mathbf g = Ml\,\mathbf n_3\times\mathbf g. \tag{4.69}$$

Hence the general equation (33) of the angular momentum evolution (valid in any inertial frame, for example the one with its origin at point A) becomes

$$\dot{\mathbf L} = Ml\,\mathbf n_3\times\mathbf g. \tag{4.70}$$

17 For our Earth with its equatorial bulge (see Sec. 6 below), the ratio (I₃ – I₁)/I₁ is ~1/300, so that 2π/Ω_pre is about 10 months. However, due to the fluid flow effects mentioned above, the observed precession is not very regular.

Despite the apparent simplicity of this (exact!) equation, its analysis is straightforward only in the limit when the top is spinning about its symmetry axis n₃ with a very high angular velocity ω_rot. In this case, we may neglect the contribution to L due to a relatively small precession velocity ω_pre (still to be calculated), and use Eq. (26) to write

$$\mathbf L = I_3\,\boldsymbol\omega = I_3\,\omega_{\rm rot}\,\mathbf n_3. \tag{4.71}$$

Then Eq. (70) shows that the vector dL/dt is perpendicular to both n₃ (and hence L) and g, i.e. lies within the horizontal plane and is perpendicular to the horizontal component L_xy of the vector L – see Fig. 9b. Since, according to Eq. (70), the magnitude of this vector is constant, |dL/dt| = Mgl sin θ, the vector L (and hence the body's main axis) rotates about the vertical axis with the following angular velocity:

$$\omega_{\rm pre} = \frac{|\dot{\mathbf L}|}{|\mathbf L_{xy}|} = \frac{Mgl\sin\theta}{L\sin\theta} = \frac{Mgl}{L} = \frac{Mgl}{I_3\,\omega_{\rm rot}}. \tag{4.72}$$
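For a feel of the numbers, Eq. (72) is easy to evaluate; a minimal sketch with illustrative (not experimentally motivated) parameters of a small gyroscope:

```python
# Eq. (4.72), omega_pre = M g l / (I3 omega_rot), for a small gyroscope.
# All numbers are illustrative: a thin disk of mass M and radius R,
# supported at distance l from its center of mass, spinning at omega_rot.
M, R, l, g = 0.1, 0.05, 0.1, 9.8          # kg, m, m, m/s^2
I3 = 0.5 * M * R**2                       # disk's moment of inertia about n3
omega_rot = 200.0                         # rad/s

omega_pre = M * g * l / (I3 * omega_rot)  # Eq. (4.72)
assert abs(omega_pre - 3.92) < 1e-9       # rad/s, i.e. ~0.6 rev/s

# The fast-rotation condition behind Eq. (71) indeed holds here:
assert omega_pre < omega_rot / 10
```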

Thus, rather counter-intuitively, the fast-rotating top does not follow the external, vertical force and, in addition to fast spinning about the symmetry axis n₃, performs a revolution, called the torque-induced precession, about the vertical axis.18 Note that, similarly to the free-precession frequency (59), the torque-induced precession frequency (72) does not depend on the initial (and sustained) angle θ. However, the torque-induced precession frequency is inversely (rather than directly) proportional to ω_rot. This fact makes the above simple theory valid in many practical cases. Indeed, Eq. (71) is quantitatively valid if the contribution of the precession into L is relatively small: I₁ω_pre << I₃ω_rot.

…

$$\tau_y \equiv \int_S (\mathbf r\times d\mathbf F)_y = -\int_S x\,\sigma_{zz}\,d^2r = \frac{E}{R}\int_S x^2\,d^2r = \frac{EI_y}{R}, \tag{7.69}$$

27 Strictly speaking, that dashed line is the intersection of the neutral surface (the continuous set of such neutral lines for all cross-sections of the rod) with the plane of the drawing.
28 Indeed, for (dx/dz)² << 1, … >> a.
29 In particular, this is the reason why the usual electric wires are made not of a solid copper core, but rather of a twisted set of thinner sub-wires, which may slip relative to each other, increasing the wire flexibility.

Still, since the deviations of the rod from its unstrained shape may accumulate along its length, Eq. (73) may be used for calculations of large "global" deviations of the rod from equilibrium, on a length scale much larger than a. To describe such deformations, Eq. (73) has to be complemented by conditions of the balance of the bending forces and torques. Unfortunately, a general analysis of such deformations requires a bit more differential geometry than I have time for, so I will only discuss this procedure for the simplest case of relatively small transverse deviations q ≡ q_x of an initially horizontal rod from its straight shape, whose direction will be used for the z-axis (Fig. 9a), by some forces, possibly including bulk-distributed forces f = n_x f_x(z). (Again, the simplest example is a uniform gravity field, for which f_x = −ρg = const.) Note that in the forthcoming discussion the reference frame will be global, i.e. common for the whole rod, rather than local (pertaining to each cross-section) as it was in the previous analysis – cf. Fig. 8.

First of all, we may write a static relation for the total vertical force F = n_x F_x(z) exerted on the part of the rod to the left of the considered cross-section, located at point z. The differential form of this relation expresses the balance of vertical forces exerted on a small fragment dz of the rod (Fig. 9a), necessary for the absence of its linear acceleration: F_x(z + dz) − F_x(z) + f_x(z)A dz = 0, giving

Fig. 7.9. A global picture of rod bending: (a) the forces acting on a small fragment of a rod, and (b) two bending problem examples, each with two typical but different boundary conditions.

$$\frac{dF_x}{dz} = -f_x A, \tag{7.74}$$

where A is the cross-section's area. Note that this vertical component of the internal forces has been neglected in our derivation of Eq. (73), and hence our final results will be valid only if the ratio F_x/A is much smaller than the magnitude of σ_zz described by Eq. (67). However, in reality, these are exactly the forces that create the torque τ = n_y τ_y that in turn causes the bending, and thus have to be taken into account in the analysis of the global picture. Such an account may be made by writing the balance of the components of the elementary torque exerted on the same rod fragment of length dz, necessary for the absence of its angular acceleration: dτ_y + F_x dz = 0, so

$$\frac{d\tau_y}{dz} = -F_x. \tag{7.75}$$

These two equations should be complemented by two geometric relations. The first of them is dφ_y/dz = 1/R, which has already been discussed above. We may immediately combine it with the basic result (73) of our local analysis, getting:

$$\frac{d\varphi_y}{dz} = \frac{\tau_y}{EI_y}. \tag{7.76}$$

The final equation is the geometric relation evident from Fig. 9a:

$$\frac{dq_x}{dz} = \varphi_y, \tag{7.77}$$

which (as all expressions of our simple analysis) is only valid for small bending angles, φ_y << 1.

… so the gravity effects may be neglected. Much more important is another ratio, the Reynolds number (74), which may be rewritten as

vl v 2 / l  ,  v / l 2

(8.77)

and hence is a measure of the relative importance of the fluid particle's inertia in comparison with the viscosity effects.50 So again, it is natural that for a sphere, the role of the vorticity-creating term (v·∇)v

48 For substantially compressible fluids (e.g., gases), the most important additional dimensionless parameter is the Mach number M ≡ v/v_l, where v_l = (K/ρ)¹ᐟ² is the velocity of the longitudinal sound – which is, as we already know from Chapter 7, the only wave mode possible in an infinite fluid. Especially significant for practice are supersonic effects (including the shock wave in the form of the famous Mach cone with half-angle θ_M = sin⁻¹(1/M)) that arise at M > 1. For a more thorough discussion of these issues, I have to refer the reader to more specialized texts – either Chapter IX of the Landau-Lifshitz volume cited above or Chapter 15 in I. Cohen and P. Kundu, Fluid Mechanics, 4th ed., Academic Press, 2007 – which is generally a good book on the subject.
49 Named after William Froude (1810-1879), one of the pioneers of applied hydrodynamics.
50 Note that the "dynamic" viscosity η participates in this number (and many other problems of fluid dynamics) only in the combination η/ρ, which has thereby deserved the special name of kinematic viscosity.

becomes noticeable already at Re ~ 1 – see Fig. 15.

What is very counter-intuitive is the onset of turbulence in systems where the laminar (turbulence-free) flow is formally an exact solution to the Navier-Stokes equation for any Re. For example, at Re > Re_t ≈ 2,100 (with l ≡ 2R and v ≡ v_max) the laminar flow in a round pipe, described by Eq. (60), becomes unstable, and the resulting turbulence decreases the fluid discharge Q in comparison with the Poiseuille law (62). Even more strikingly, the critical value of Re is rather insensitive to the pipe wall roughness and does not diverge even in the limit of perfectly smooth walls. Since Re >> 1 in many real-life situations, turbulence is very important for practice. (Indeed, the values of η and ρ for water listed in Table 1 imply that even for a few-meter-sized object, such as a human body or a small boat, Re > 1,000 at any speed above just ~1 mm/s.) However, despite nearly a century of intensive research, there is no general, quantitative analytical theory of this phenomenon, and most results are still obtained either by rather approximate analytical treatments, or by the numerical solution of the Navier-Stokes equations using the approaches discussed in the previous section, or in experiments (e.g., on scaled models51 in wind tunnels). A rare exception is the relatively recent theoretical result by S. Orszag (1971) for the turbulence threshold in a flow of an incompressible fluid through a gap of thickness t between two parallel plane walls (see Fig. 10): Re_t ≈ 5,772 (for l ≡ t/2 and v ≡ v_max). However, even for this simplest geometry, the analytical theory still cannot predict the turbulence patterns at Re > Re_t.

Only certain general, semi-quantitative features of turbulence may be understood from simple arguments. For example, Fig. 15 shows that within a very broad range of Reynolds numbers, from ~10² to ~3×10⁵, the drag coefficient C_d of a thin round disk perpendicular to the incident flow is very close to 1.1, and that of a sphere is not too far away. The approximate equality C_d ≈ 1, meaning that the drag force F is close to ρv₀²A/2, may be understood (in the picture where the object is moved by an external force F with the velocity v₀ through a fluid that was initially at rest) as the equality of the force-delivered power Fv₀ and the fluid's kinetic energy (ρv₀²/2)V created in the volume V = v₀A swept in unit time. This relation would be exact if the object gave its velocity v₀ to each and every fluid particle its cross-section runs into, for example by dragging all such particles behind itself. In reality, much of this kinetic energy goes into vortices, where the particle velocity may differ from v₀, so the equality C_d ≈ 1 is only approximate.

Another important general effect is that at very high values of Re, fluid flow at the leading surface of solid objects forms a thin, highly turbulent boundary layer that matches the zero relative velocity of the fluid at the surface with its substantial velocity in the outer region, which is almost free of turbulence and, in many cases, of other viscosity effects. This fact, clearly visible in Fig. 16, enables semi-quantitative analyses of several effects, for example, the so-called Magnus lift force52 F_l exerted (on top of the usual drag force F_d) on rotating objects, and directed across the fluid flow – see Fig. 17. An even more important application of this concept is an approximate analysis of the forces exerted on non-rotating airfoils (such as aircraft wings) with special cross-sections forming sharp angles at their back ends. Such a shape minimizes the airfoil's contacts with the vortex street it creates in its

51 The crucial condition of correct modeling is the equality of the Reynolds numbers (74) (and if relevant, also of the Froude numbers and/or the Mach numbers) of the object of interest and of its model.
52 Named after G. Magnus, who studied this effect in detail in 1852, though it had been described much earlier (in 1672) by I. Newton, and by B. Robins after him (in 1742).

wake, and allows the thin boundary layer to extend over virtually all of its surface, enhancing the lift force.

Fig. 8.17. The Magnus effect.
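The order-of-magnitude estimates quoted above are easy to reproduce numerically; a minimal sketch (the kinematic viscosity of water, ν ≡ η/ρ ≈ 10⁻⁶ m²/s, corresponds to the values of Table 1; the geometry numbers are illustrative):

```python
# Order-of-magnitude estimates for water; nu = eta/rho ~ 1e-6 m^2/s
# (cf. Table 1). The geometry numbers below are illustrative only.
rho = 1000.0      # density of water, kg/m^3
nu = 1.0e-6       # kinematic viscosity of water, m^2/s

def reynolds(v, l):
    # Re = rho*v*l/eta = v*l/nu -- Eqs. (74) and (77).
    return v * l / nu

# A meter-sized body moving at ~1 mm/s already has Re ~ 10^3:
assert abs(reynolds(1e-3, 1.0) - 1000.0) < 1e-9

# High-Re drag estimate F = Cd * rho * v0^2 * A / 2, with Cd ~ 1:
def drag(v0, A, Cd=1.1):
    return Cd * rho * v0**2 * A / 2

F = drag(10.0, 1.0)  # a 10 m/s flow against a 1 m^2 cross-section
assert abs(F - 55000.0) < 1e-6  # ~55 kN
```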

Unfortunately, due to the time/space restrictions, for a more detailed discussion of these results I have to refer the reader to more specialized literature,53 and will conclude this chapter with a brief discussion of just one issue: can turbulence be explained by a single mechanism? (In other words, can it be reduced, at least on a semi-quantitative level, to a set of simpler phenomena that are commonly considered "well understood"?) Apparently the answer is no,54 though nonlinear dynamics of simpler systems may provide some useful insights.

In the middle of the last century, the most popular qualitative explanation of turbulence had been the formation of an "energy cascade" that would transfer the energy from the regular fluid flow to a hierarchy of vortices of various sizes.55 With our background, it is easier to retell that story in the time-domain language (with the velocity v serving as the conversion factor), using the fact that in a rotating vortex, each Cartesian component of a particle's radius vector oscillates in time, so to some extent the vortex plays the role of an oscillatory motion mode. Let us consider the passage of a solid body between two, initially close, small parts of the fluid. The body pushes them apart, but after its passage, these partial volumes are free to return to their initial positions. However, the dominance of inertia effects at motion with Re >> 1 means that the volumes continue to oscillate for a while about those equilibrium positions. (Since elementary volumes of an incompressible fluid cannot merge, these oscillations take the form of rotating vortices – see Fig. 16 again.) Now, from Sec. 5.8 we know that intensive oscillations in a system with the quadratic nonlinearity, in this case provided by the convective term (v·∇)v, are equivalent, for small perturbations, to the oscillation of the system's parameters at the corresponding frequency. On the other hand, as was briefly discussed in Sec. 6.7, in a system with two oscillatory degrees of freedom, a periodic parameter change with frequency ω_p may lead to the non-degenerate parametric excitation ("down-conversion") of oscillations with frequencies ω₁,₂ satisfying the relation ω₁ + ω₂ = ω_p. Moreover, the spectrum of oscillations in such a system also has higher combinational frequencies such as (ω_p + ω₁), thus pushing the oscillation energy up the frequency scale ("up-conversion"). In the presence of other oscillatory modes, these oscillations may in turn produce, via the same nonlinearity, even higher frequencies, etc. In a fluid, the spectrum of these "oscillatory modes" (actually, vortex

53 See, e.g., P. Davidson, Turbulence, Oxford U. Press, 2004.
54 The following famous quote is attributed to Werner Heisenberg on his deathbed: "When I meet God, I will ask him two questions: Why relativity? And why turbulence? I think he will have an answer for the first question." Though probably inaccurate, this story reflects rather well the frustration of the fundamental physics community, renowned for its reductionist mentality, with the enormous complexity of phenomena that obey simple (e.g., the Navier-Stokes) equations, i.e., from their point of view, do not describe any new physics.
55 This picture was suggested in 1922 by Lewis F. Richardson.

structures) is essentially continuous, so the above arguments make very plausible a sequential transfer of the energy from the moving body to a broad range of oscillatory modes – whose frequency spectrum is limited from above by the energy dissipation due to the fluid's viscosity. When excited, these modes interact (in particular, mutually phase-lock) via the system's nonlinearity, creating the complex motion we call turbulence. Though not having much quantitative predictive power, such handwaving explanations, which are essentially based on the excitation of a large number of effective degrees of freedom, had been dominating the turbulence reviews until the mid-1960s. At that point, the discovery (or rather rediscovery) of quasi-random motions in classical dynamic systems with just a few degrees of freedom altered the discussion substantially. Since this phenomenon, called the deterministic chaos, extends well beyond fluid dynamics, I will devote to it a separate (albeit short) next chapter, and in its end will briefly return to the discussion of turbulence.

8.7. Exercise problems

8.1. For a mirror-symmetric but otherwise arbitrary shape of a ship's hull, derive an explicit expression for the height of its metacenter M – see Fig. 3. Spell out this expression for a rectangular hull.

8.2. Find the stationary shape of the open surface of an incompressible, heavy fluid in a container rotated about its vertical axis with a constant angular velocity ω – see the figure on the right.

8.3. In the first order in the so-called flattening f ≡ (R_e – R_p)/R_p, …

… h) waves on the surface of a broad, plane layer of thickness h, of an ideal, incompressible fluid, and use it to calculate the longest standing wave modes and frequencies in a layer covering a spherical planet of radius R >> h.
Hint: The second task requires some familiarity with the basic properties of spherical harmonics.

8.15. Calculate the velocity distribution and the dispersion relation of the waves propagating along the horizontal interface of two ideal, incompressible fluids of different densities.

8.16. Derive Eq. (50) for the capillary waves ("ripples").

8.17. Use the finite-difference approximation for the Laplace operator, with the mesh step h = a/4, to find the maximum velocity and total mass flow Q of an incompressible viscous fluid through a long tube with a square-shaped cross-section of side a. Compare the results with those described in Sec. 5 for the same problem with the mesh step h = a/2, and for a pipe with a circular cross-section of the same area.

8.18. A layer, of thickness h, of a heavy, viscous, incompressible fluid flows down a long and wide inclined plane, under its own weight – see the figure on the right. Find the stationary velocity distribution profile v(x), and the total fluid discharge (per unit width).

8.19. An external force moves two coaxial round disks of radius R, with an incompressible viscous fluid in the gap between them, toward each other with a constant velocity u. Calculate the applied force in the limit when the gap's thickness t is already much smaller than R.

8.20. Calculate the drag torque exerted on a unit length of a solid round cylinder of radius R that rotates about its axis with angular velocity Ω, inside an incompressible fluid with viscosity η, kept static far from the cylinder.

8.21. Solve a similar problem for a sphere of radius R, rotating about one of its principal axes.

8.22. Calculate the tangential force (per unit area) exerted by an incompressible fluid, with density ρ and viscosity η, on a broad solid plane placed over its surface and forced to oscillate, along the surface, with amplitude a and frequency ω.

8.23. A massive barge, with a flat bottom of area A, floats in shallow water, with clearance h << …


Fig. 9.2. The fixed points and chaotic regions of the logistic map. Adapted, under the CC0 1.0 Universal Public Domain Dedication, from the original by Jordan Pierce, available at http://en.wikipedia.org/wiki/Logistic_map. (A very nice live simulation of the map is also available on this website.)
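The data behind a diagram like Fig. 2 may be generated in a few lines; a minimal sketch that settles onto the attractor of the map (2) and counts its distinct points (the sampled r values are arbitrary):

```python
def attractor(r, q0=0.3, n_settle=4000, n_sample=64):
    # Iterate the map (2), q_{n+1} = r q_n (1 - q_n), past the transient,
    # then collect the distinct points of the limiting set.
    q = q0
    for _ in range(n_settle):
        q = r * q * (1.0 - q)
    pts = set()
    for _ in range(n_sample):
        q = r * q * (1.0 - q)
        pts.add(round(q, 8))
    return sorted(pts)

assert len(attractor(2.5)) == 1   # stable fixed point q(1) = 1 - 1/r
assert len(attractor(3.2)) == 2   # 2-point limit cycle (r1 < r < r2)
assert len(attractor(3.5)) == 4   # 4-point cycle after the next bifurcation
```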

q ( 2)

r However, at r > r1 = 3, the fixed point q(1) also becomes unstable. To prove that, let us take q n  q (1)  q~n , assume that the deviation q~n from the fixed point q(1) is small, and linearize the map (2) in q~ – just as we repeatedly did for differential equations earlier in this course. The result is n

$$\tilde q_{n+1} = \left.\frac{df}{dq}\right|_{q=q^{(1)}} \tilde q_n = r\left(1 - 2q^{(1)}\right)\tilde q_n = (2 - r)\,\tilde q_n. \tag{9.4}$$

It shows that at 0 < 2 – r < 1, i.e. at 1 < r < 2, the deviations q̃_n decrease monotonically. At –1 < 2 – r < 0, i.e. in the range 2 < r < 3, the deviations' sign alternates, but their magnitude still decreases – as in a stable focus, see Sec. 5.6. However, at 2 – r < –1, i.e. at r > r₁ ≡ 3, the deviations grow in magnitude, while still changing their sign at each step. Since Eq. (2) has no other fixed points, this means that at n → ∞, the values q_n do not converge to one point; rather, within the range r₁ < r < r₂, they approach a limit cycle of alternation of two points, q₊⁽²⁾ and q₋⁽²⁾, which satisfy the following system of algebraic equations:

$$q_+^{(2)} = f\!\left(q_-^{(2)}\right), \qquad q_-^{(2)} = f\!\left(q_+^{(2)}\right). \tag{9.5}$$

These points are also plotted in Fig. 2, as functions of the parameter r. What has happened at the point r₁ = 3 is called the period-doubling bifurcation. The story repeats at r = r₂ ≡ 1 + √6 ≈ 3.45, where the system goes from the 2-point limit cycle to a 4-point cycle, then at r = r₃ ≈ 3.54, where the limit cycle begins to consist of 8 alternating points, etc. Most remarkably, the period-doubling bifurcation points r_n, at which the number of points in the limit cycle doubles from 2ⁿ⁻¹ points to 2ⁿ points, become closer and closer to each other. Numerical simulations show that at n → ∞, these points obey the following asymptotic behavior:

$$r_n = r_\infty - \frac{C}{\delta^n}, \qquad \text{where } r_\infty = 3.5699...,\ \ \delta = 4.6692.... \tag{9.6}$$
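The first steps of this sequence are easy to verify numerically; a minimal sketch that finds the 2-point cycle (5) by iteration and checks its stability multiplier, which reaches –1 at r₂ = 1 + √6 (the test value r = 3.3 is arbitrary):

```python
import math

def f(q, r):
    # The logistic map (2): q_{n+1} = r q_n (1 - q_n).
    return r * q * (1.0 - q)

def two_cycle(r, q0=0.3, n_settle=2000):
    # Settle onto the attractor, then read off two alternating points.
    q = q0
    for _ in range(n_settle):
        q = f(q, r)
    return q, f(q, r)

r = 3.3                                   # inside the window r1 < r < r2
qa, qb = two_cycle(r)
assert abs(f(qa, r) - qb) < 1e-12         # the pair satisfies Eq. (9.5)
assert abs(f(qb, r) - qa) < 1e-12

# Stability multiplier of the 2-cycle over one full period (chain rule):
# mu = f'(qa) f'(qb), with f'(q) = r (1 - 2q); it crosses -1 at r2 = 1 + 6**0.5.
mu = r * (1 - 2 * qa) * r * (1 - 2 * qb)
assert abs(mu) < 1.0                      # the 2-cycle is still stable at r = 3.3
assert abs((1 + math.sqrt(6)) - 3.449) < 1e-3
```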

The parameter δ is called the Feigenbaum constant; for other maps, and some dynamic systems (see the next section), period-doubling sequences follow a similar law, but with different values of δ. More important for us, however, is what happens at r > r_∞. Numerous numerical experiments, repeated with increasing precision,4 have confirmed that here the system is disordered, with no reproducible limit cycle, though (as Fig. 2 shows) at r ≈ r_∞, all sequential values q_n are still confined to a few narrow regions.5 However, as the parameter r is increased well beyond r_∞, these regions broaden and merge. This is the so-called deep chaos, with no apparent order at all.6

The most important feature of chaos (in this and any other system) is the exponential divergence of trajectories. For a 1D map, this means that even if the initial conditions q₁ in two implementations of the map differ by a very small amount Δq₁, the difference Δq_n between the corresponding sequences q_n grows, on average, exponentially with n. Such exponents may be used to characterize chaos. Indeed, an evident generalization of the linearized Eq. (4) to an arbitrary point q_n is

$$\Delta q_{n+1} = e_n\,\Delta q_n, \qquad e_n \equiv \left.\frac{df}{dq}\right|_{q=q_n}. \tag{9.7}$$

4 The reader should remember that just like the usual ("nature") experiments, numerical experiments also have limited accuracy, due to unavoidable rounding errors.
5 The geometry of these regions is essentially fractal, i.e. has a dimensionality intermediate between 0 (which any final set of geometric points would have) and 1 (pertinent to a 1D continuum). An extensive discussion of fractal geometries and their relation to the deterministic chaos may be found, e.g., in the book by B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman, 1983.
6 This does not mean that chaos' depth is always a monotonic function of r. As Fig. 2 shows, within certain intervals of this parameter, the chaotic behavior suddenly disappears, being replaced, typically, with a few-point limit cycle, just to resume on the other side of the interval. Sometimes (but not always!) the "route to chaos" on the borders of these intervals follows the same Feigenbaum sequence of period-doubling bifurcations.

Let us assume that Δq₁ is so small that the N first values q_n are relatively close to each other. Then using Eq. (7) iteratively for these steps, we get

$$\Delta q_N = \Delta q_1 \prod_{n=1}^{N} e_n, \qquad \text{so that} \qquad \ln\left|\frac{\Delta q_N}{\Delta q_1}\right| = \sum_{n=1}^{N} \ln|e_n|. \tag{9.8}$$

Numerical experiments show that in most chaotic regimes, at N → ∞ such a sum fluctuates about an average, which grows as λN, with the parameter

$$\lambda \equiv \lim_{|\Delta q_1|\to 0}\ \lim_{N\to\infty}\ \frac{1}{N}\sum_{n=1}^{N} \ln|e_n|, \tag{9.9}$$

being independent of the initial conditions. The bottom panel of Fig. 3 shows λ, called the Lyapunov exponent,7 as a function of the parameter r for the logistic map (2). (Its top panel shows the same pattern as Fig. 2, which is reproduced here just for the sake of comparison.)


Fig. 9.3. The Lyapunov exponent for the logistic map. Adapted, with permission, from the monograph by Schuster and Just (cited below). © Wiley-VCH Verlag GmbH & Co. KGaA.
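The definition (9) is straightforward to implement for the logistic map; a minimal sketch (at r = 4, the exponent is known analytically to equal ln 2):

```python
import math

def lyapunov(r, q0=0.3, n_settle=1000, n_avg=100_000):
    # Eq. (9.9) for the logistic map f(q) = r q (1 - q), with
    # f'(q) = r (1 - 2 q):  lambda ~ (1/N) sum_n ln |f'(q_n)|.
    q = q0
    for _ in range(n_settle):             # drop the transient
        q = r * q * (1.0 - q)
    s = 0.0
    for _ in range(n_avg):
        s += math.log(abs(r * (1.0 - 2.0 * q)))
        q = r * q * (1.0 - q)
    return s / n_avg

assert lyapunov(2.5) < 0                  # stable fixed point
assert lyapunov(3.2) < 0                  # stable 2-point limit cycle
lam = lyapunov(4.0)                       # deep chaos
assert 0.6 < lam < 0.8                    # analytically, ln 2 = 0.693... at r = 4
```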

Note that at r < r,  is negative, indicating the sequence’s stability, besides the points r1, r2, … where  would become positive if the limit cycle changes (bifurcations) had not brought it back into the negative territory. However, at r > r,  becomes positive, returning to negative values only in limited intervals of stable limit cycles. It is evident that in numerical experiments (which dominate the studies of deterministic chaos) the Lyapunov exponent may be used as a good measure of the chaos’ depth.8 7

After Alexandr Mikhailovich Lyapunov (1857-1918), famous for his studies of the stability of dynamic systems. N-dimensional maps that relate N-dimensional vectors rather than scalars, may be characterized by N Lyapunov exponents rather than one. For chaotic behavior, it is sufficient for just one of them to become positive. For such systems, another measure of chaos, the Kolmogorov entropy, may be more relevant. This measure and its relation with the Lyapunov exponents are discussed, for example, in SM Sec. 2.2.

8

Chapter 9

Page 4 of 14

Lyapunov exponent

Essential Graduate Physics

CM: Classical Mechanics

Despite the abundance of results published for particular maps,9 and several interesting observations (like the already discussed existence of the Feigenbaum bifurcation sequences), to the best of my knowledge, nobody can yet predict the patterns like those shown in Figs. 2 and 3 from just studying the mapping rule itself, i.e. without carrying out actual numerical experiments. Unfortunately, the understanding of deterministic chaos in other systems is not much better.

9.2. Chaos in dynamic systems

Proceeding to the discussion of chaos in dynamic systems, it is more natural, with our background, to illustrate this discussion not with the Lorenz equations, but with the system of equations describing a dissipative pendulum driven by a sinusoidal external force, which was repeatedly discussed in Chapter 5. Introducing two new variables, the normalized momentum p ≡ (dq/dt)/ω₀ and the external force's full phase ψ ≡ ωt, we may rewrite Eq. (5.42), describing the pendulum, in a form similar to Eq. (1), i.e. as a system of three first-order ordinary differential equations:

\dot{q} = \omega_0 p, \qquad \dot{p} = -\omega_0 \sin q - 2\delta p + (f_0/\omega_0)\cos\psi, \qquad \dot{\psi} = \omega.    (9.10)

Figure 4 shows several results of a numerical solution of Eq. (10).10 In all cases, the parameters δ, ω₀, and f₀ are fixed, while the external frequency ω is gradually changed. For the case shown on the top two panels, the system still tends to a stable periodic solution, with very low contents of higher harmonics. If the external force's frequency is reduced by just a few percent, the 3rd subharmonic may be excited. (This effect has already been discussed in Sec. 5.8 – see, e.g., Fig. 5.15.) The next row shows that just a small further reduction of the frequency ω leads to a new tripling of the period, i.e. the generation of a complex waveform with the 9th subharmonic. Finally (see the bottom panels of Fig. 4), even a minor further change of ω leads to oscillations without any visible period, i.e. to chaos.

In order to trace this transition, a direct inspection of the oscillation waveforms q(t) is not very convenient, and trajectories on the phase plane [q, p] also become messy if plotted for many periods of the external frequency. In situations like this, the Poincaré (or "stroboscopic") plane, already discussed in Sec. 5.6, is much more useful. As a reminder, this is essentially just the phase plane [q, p], but with the points highlighted only once a period, e.g., at ψ = 2πn, with n = 1, 2, … On this plane, periodic oscillations of frequency ω are represented just as one fixed point – see, e.g., the top panel in the right column of Fig. 4. The 3rd subharmonic generation, shown on the next panel, means the oscillation period's tripling, and is represented as the splitting of the fixed point into three. It is evident that this transition is similar to the period-doubling bifurcation in the logistic map, besides the fact (already discussed in Sec. 5.8) that in systems with an antisymmetric nonlinearity, such as the pendulum (10), the 3rd subharmonic is easier to excite. From this point, the 9th subharmonic generation (shown on the 3rd panel of Fig. 4), i.e. one more splitting of the points on the Poincaré plane, may be understood as one more step on the Feigenbaum-like route to chaos – see the bottom panel of that figure.

9 See, e.g., Chapters 2-4 in H. Schuster and W. Just, Deterministic Chaos, 4th ed., Wiley-VCH, 2005, or Chapters 8-9 in J. Thompson and H. Stewart, Nonlinear Dynamics and Chaos, 2nd ed., Wiley, 2002.

10 In the actual simulation, a small term …

…is being changed slowly, without exerting any additional direct force. Calculate the oscillation energy E as a function of m.

10.9. A stiff ball is bouncing vertically from the floor of an elevator whose upward acceleration changes very slowly. Neglecting the energy dissipation, calculate how much the bounce height h changes during the acceleration's increase from 0 to g. Is your result valid for an equal but abrupt increase of the elevator's acceleration?

10.10.* A 1D particle of a constant mass m moves in a time-dependent potential U(q, t) = mω²(t)q²/2, where ω(t) is a slow function of time, with |ω̇| << ω². Develop the approximate method for the solution of the corresponding equation of motion, similar to the WKB approximation used in quantum mechanics.24 Use the approximation to confirm the conservation of the action variable (40) for this system.

Hint: You may like to look for the solution to the equation of motion in the form

q(t) = \exp\{\lambda(t) + i\varphi(t)\},

where λ and φ are some real functions of time, and then make proper approximations in the resulting equations for these functions.
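The adiabatic invariance that Problem 10.10 asks to prove — the constancy of I = E/ω under a slow frequency variation — is easy to corroborate numerically before developing the analytics. The sketch below integrates q̈ = −ω²(t)q with a hand-rolled Runge-Kutta stepper, for an arbitrary illustrative slow ramp ω(t) (not part of the problem's assignment):

```python
import math

def omega(t, w0=1.0, eps=2e-3):
    """Slowly varying frequency: an arbitrary illustrative linear ramp."""
    return w0 * (1.0 + eps * t)

def rk4_step(t, q, p, h):
    """One 4th-order Runge-Kutta step for q' = p, p' = -omega(t)^2 * q."""
    def f(t, q, p):
        return p, -omega(t) ** 2 * q
    k1q, k1p = f(t, q, p)
    k2q, k2p = f(t + h / 2, q + h / 2 * k1q, p + h / 2 * k1p)
    k3q, k3p = f(t + h / 2, q + h / 2 * k2q, p + h / 2 * k2p)
    k4q, k4p = f(t + h, q + h * k3q, p + h * k3p)
    return (q + h / 6 * (k1q + 2 * k2q + 2 * k3q + k4q),
            p + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p))

def action(t, q, p):
    """I = E/omega, with E the oscillation energy per unit mass."""
    E = 0.5 * (p ** 2 + omega(t) ** 2 * q ** 2)
    return E / omega(t)

t, q, p, h = 0.0, 1.0, 0.0, 0.01
I0 = action(t, q, p)
while t < 500.0:            # omega doubles over this interval
    q, p = rk4_step(t, q, p, h)
    t += h
print(I0, action(t, q, p))  # nearly equal, although E itself has doubled
```

The energy E grows in proportion to ω(t), while their ratio, proportional to the action variable, stays constant to within a fraction of a percent.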

24 See, e.g., QM Sec. 2.4.


Konstantin K. Likharev

Essential Graduate Physics
Lecture Notes and Problems
Beta version
Open online access at http://commons.library.stonybrook.edu/egp/ and https://sites.google.com/site/likharevegp/

Part EM: Classical Electrodynamics
Last corrections: October 4, 2023

A version of this material was published in 2018 under the title

Classical Electrodynamics: Lecture notes IOPP, Essential Advanced Physics – Volume 3, ISBN 978-0-7503-1405-3, with the model solutions of the exercise problems published under the title

Classical Electrodynamics: Problems with solutions IOPP, Essential Advanced Physics – Volume 4, ISBN 978-0-7503-1408-4

However, by now, this online version of the lecture notes, and the problem solutions available from the author, have been better corrected

About the author: https://you.stonybrook.edu/likharev/

© K. Likharev


Table of Contents

Chapter 1. Electric Charge Interaction (20 pp.)
  1.1. The Coulomb law
  1.2. The Gauss law
  1.3. Scalar potential and electric field energy
  1.4. Exercise problems (20)

Chapter 2. Charges and Conductors (68 pp.)
  2.1. Polarization and screening
  2.2. Capacitance
  2.3. The simplest boundary problems
  2.4. Using other orthogonal coordinates
  2.5. Variable separation – Cartesian coordinates
  2.6. Variable separation – polar coordinates
  2.7. Variable separation – cylindrical coordinates
  2.8. Variable separation – spherical coordinates
  2.9. Charge images
  2.10. Green’s functions
  2.11. Numerical approach
  2.12. Exercise problems (47)

Chapter 3. Dipoles and Dielectrics (28 pp.)
  3.1. Electric dipole
  3.2. Dipole media
  3.3. Polarization of dielectrics
  3.4. Electrostatics of linear dielectrics
  3.5. Electric field energy in a dielectric
  3.6. Exercise problems (30)

Chapter 4. DC Currents (16 pp.)
  4.1. Continuity equation and the Kirchhoff laws
  4.2. The Ohm law
  4.3. Boundary problems
  4.4. Energy dissipation
  4.5. Exercise problems (15)

Chapter 5. Magnetism (42 pp.)
  5.1. Magnetic interaction of currents
  5.2. Vector potential and the Ampère law
  5.3. Magnetic flux, energy, and inductance
  5.4. Magnetic dipole moment, and magnetic dipole media
  5.5. Magnetic materials
  5.6. Systems with magnetic materials
  5.7. Exercise problems (29)


Chapter 6. Electromagnetism (38 pp.)
  6.1. Electromagnetic induction
  6.2. Magnetic energy revisited
  6.3. Quasistatic approximation, and the skin effect
  6.4. Electrodynamics of superconductivity, and the gauge invariance
  6.5. Electrodynamics of macroscopic quantum phenomena
  6.6. Inductors, transformers, and ac Kirchhoff laws
  6.7. Displacement currents
  6.8. Finally, the full Maxwell equation system
  6.9. Exercise problems (31)

Chapter 7. Electromagnetic Wave Propagation (70 pp.)
  7.1. Plane waves
  7.2. Attenuation and dispersion
  7.3. Reflection
  7.4. Refraction
  7.5. Transmission lines: TEM waves
  7.6. Waveguides: H and E waves
  7.7. Dielectric waveguides, optical fibers, and paraxial beams
  7.8. Resonance cavities
  7.9. Energy loss effects
  7.10. Exercise problems (43)

Chapter 8. Radiation, Scattering, Interference, and Diffraction (38 pp.)
  8.1. Retarded potentials
  8.2. Electric dipole radiation
  8.3. Wave scattering
  8.4. Interference and diffraction
  8.5. The Huygens principle
  8.6. Fresnel and Fraunhofer diffraction patterns
  8.7. Geometrical optics placeholder
  8.8. Fraunhofer diffraction from more complex scatterers
  8.9. Magnetic dipole and electric quadrupole radiation
  8.10. Exercise problems (28)

Chapter 9. Special Relativity (56 pp.)
  9.1. Einstein postulates and the Lorentz transform
  9.2. Relativistic kinematic effects
  9.3. 4-vectors, momentum, mass, and energy
  9.4. More on 4-vectors and 4-tensors
  9.5. The Maxwell equations in the 4-form
  9.6. Relativistic particles in electric and magnetic fields
  9.7. Analytical mechanics of charged particles
  9.8. Analytical mechanics of electromagnetic field
  9.9. Exercise problems (42)


Chapter 10. Radiation by Relativistic Charges (40 pp.)
  10.1. Liénard-Wiechert potentials
  10.2. Radiation power
  10.3. Synchrotron radiation
  10.4. Bremsstrahlung
  10.5. Coulomb losses
  10.6. Density effects and the Cherenkov radiation
  10.7. Radiation’s back-action
  10.8. Exercise problems (15)

***

Additional file (available from the author upon request):
Exercise and Test Problems with Model Solutions (300 + 50 = 350 problems; 464 pp.)

***

Author’s Remarks

Generally, the structure of this classical electrodynamics course is quite traditional. Namely, in order to address the most important subjects of the field, which involve not only charged point particles but also conducting, dielectric, and magnetic media, the electromagnetic interactions are discussed in parallel with simple models of the electric and magnetic properties of materials. Also following tradition, I use this part of my series (notably Chapter 2) as a convenient platform for the discussion of various methods of the solution of partial differential equations, including the use of the most important systems of curvilinear orthogonal coordinates and special functions. One more traditional part of classical electrodynamics is an introduction to special relativity (in Chapter 9), because although this topic includes a substantial classical mechanics component, it is the electrodynamics that makes the relativistic approach necessary.


Chapter 1. Electric Charge Interaction

This chapter reviews the basics of electrostatics – the description of interactions between stationary (or relatively slowly moving) electric charges. Much of this material should be known to the reader from their undergraduate studies;1 because of that, the explanations are very brief.

1.1. The Coulomb law

A quantitative discussion of classical electrodynamics, starting from the electrostatics, requires common agreement on the meaning of the following notions:2

- electric charges qk, as revealed, most explicitly, by observation of electrostatic interaction between the charged particles;
- point charges – the charged particles so small that their position in space, for the given problem, may be completely described (in the given reference frame) by their radius-vectors rk; and
- electric charge conservation – the fact that the algebraic sum of all charges qk inside any closed volume is conserved unless the charged particles cross the volume’s border.

I will assume that these notions are well known to the reader. Using them, the Coulomb law3 for the interaction of two stationary point charges may be formulated as follows:

Coulomb law:
\mathbf{F}_{kk'} = \kappa q_k q_{k'} \frac{\mathbf{r}_k - \mathbf{r}_{k'}}{|\mathbf{r}_k - \mathbf{r}_{k'}|^3} \equiv \kappa \frac{q_k q_{k'}}{R_{kk'}^2}\,\mathbf{n}_{kk'}, \quad \text{with } \mathbf{R}_{kk'} \equiv \mathbf{r}_k - \mathbf{r}_{k'},\ \ \mathbf{n}_{kk'} \equiv \frac{\mathbf{R}_{kk'}}{R_{kk'}},\ \ R_{kk'} \equiv |\mathbf{R}_{kk'}|,    (1.1)

where F_{kk'} denotes the electrostatic (Coulomb) force exerted on the charge number k by the charge number k', separated from it by the distance R_{kk'} – see Fig. 1.

Fig. 1.1. Coulomb force directions (for the case qkqk’ > 0).

1 For remedial reading, I can recommend, for example, D. Griffiths, Introduction to Electrodynamics, 4th ed., Pearson, 2015.
2 On top of the more general notions of the classical Newtonian space, point particles, and forces, as used in classical mechanics – see, e.g., CM Sec. 1.1.
3 Formulated in 1785 by Charles-Augustin de Coulomb, on the basis of his earlier experiments, in turn rooted in prior studies of electrostatic phenomena, with notable contributions by William Gilbert, Otto von Guericke, Charles François de Cisternay Du Fay, Benjamin Franklin, and Henry Cavendish.

© K. Likharev


I am confident that this law is very familiar to the reader, but a few comments may still be due:

(i) Flipping the indices k and k’, we see that Eq. (1) complies with the 3rd Newton law: the reciprocal force is equal in magnitude but opposite in direction: F_{k'k} = –F_{kk'}.

(ii) Since the vector R_{kk'} ≡ r_k – r_{k'}, by its definition, is directed from point r_{k'} toward point r_k (Fig. 1), Eq. (1) correctly describes the experimental fact that charges of the same sign (i.e. with q_k q_{k'} > 0) repel, while those with opposite signs (q_k q_{k'} < 0) attract each other.

(iii) In some textbooks, the Coulomb law (1) is given with the qualifier “in free space” or “in vacuum”. However, actually, Eq. (1) remains valid even in the presence of any other charges – for example, of internal charges in a quasi-continuous medium that may surround the two charges (number k and k’) under consideration. The confusion stems from the fact, to be discussed in detail in Chapter 3 below, that in some cases it is convenient to formally represent the effect of the other charges as an effective (rather than actual!) modification of the Coulomb law.

(iv) The constant κ in Eq. (1) depends on the system of units we use. In the Gaussian units, κ is set to 1, for the price of introducing a special unit of charge (the statcoulomb) that would make experimental data compatible with Eq. (1) if the force F_{kk'} is measured in the Gaussian units (dynes). On the other hand, in the International System (“SI”) of units, the charge’s unit is one coulomb (abbreviated C), and κ is different from 1:

κ in SI units:
\kappa_{\text{SI}} = \frac{1}{4\pi\varepsilon_0},    (1.2)

where ε₀ ≈ 8.854×10⁻¹² SI units (F/m) is called the electric constant.4

Unfortunately, the continuing struggle between zealous proponents of these two systems of units bears all not-so-nice features of a religious war, with a similarly slim chance for any side to win it in any foreseeable future. In my humble view, each of these systems has its advantages and handicaps (to be noted on several occasions below), and every educated physicist should have no problem with using any of them. Following the insistent recommendations of international scientific unions, I am using the SI units throughout my series. However, for the readers’ convenience, in this course (where the difference between the Gaussian and SI systems is especially significant) I will write the most important formulas with the constant (2) clearly displayed – for example, the combination of Eqs. (1) and (2) as

\mathbf{F}_{kk'} = \frac{1}{4\pi\varepsilon_0} q_k q_{k'} \frac{\mathbf{r}_k - \mathbf{r}_{k'}}{|\mathbf{r}_k - \mathbf{r}_{k'}|^3},    (1.3)

so the transfer to the Gaussian units may be performed just by the formal replacement 4πε₀ → 1. (In the rare cases when the transfer is not obvious, I will duplicate formulas in the Gaussian units.)

4 Since 2018, one coulomb is defined, in the “legal” metrology, as a certain exactly fixed number of the fundamental electric charges e, and the “legal” SI value of ε₀ is no longer exactly equal to 10⁷/4πc² (where c is the speed of light) as it was before, but remains extremely close to that fraction, with a relative difference of the order of 10⁻¹⁰ – see appendix UCA: Selected Units and Constants. In this series, this minute difference is ignored.

Besides Eq. (3), another key experimental law of electrostatics is the linear superposition principle: the electrostatic forces exerted on some point charge (say, qk) by other charges add up as vectors, forming the net force

\mathbf{F}_k = \sum_{k' \neq k} \mathbf{F}_{kk'},    (1.4)

where the summation is extended over all charges but qk, and the partial force F_{kk'} is described by Eq. (3). The fact that the sum is restricted to k' ≠ k means that a point charge, in statics, does not interact with itself. This fact may look obvious from Eq. (3), whose right-hand side diverges at r_k → r_{k'}, but becomes less evident (though still true) in quantum mechanics – where the charge of even an elementary particle is effectively spread around some volume, together with the particle’s wavefunction.5

Now we may combine Eqs. (3) and (4) to get the following expression for the net force F acting on a probe charge q located at point r:

\mathbf{F}(\mathbf{r}) = q \frac{1}{4\pi\varepsilon_0} \sum_{\mathbf{r}_{k'} \neq \mathbf{r}} q_{k'} \frac{\mathbf{r} - \mathbf{r}_{k'}}{|\mathbf{r} - \mathbf{r}_{k'}|^3}.    (1.5)

This equality implies that it makes sense to introduce the notion of the electric field (as an entity independent of q), whose distribution in space is characterized by the following vector:

Electric field: definition
\mathbf{E}(\mathbf{r}) \equiv \frac{\mathbf{F}(\mathbf{r})}{q},    (1.6)

formally called the electric field strength – but much more frequently, just the “electric field”. In these terms, Eq. (5) becomes

\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{\mathbf{r}_{k'} \neq \mathbf{r}} q_{k'} \frac{\mathbf{r} - \mathbf{r}_{k'}}{|\mathbf{r} - \mathbf{r}_{k'}|^3}.    (1.7)

Just convenient in electrostatics, the notion of the field becomes virtually unavoidable for the description of time-dependent phenomena (such as electromagnetic waves, see Chapter 7 and on), where the electromagnetic field shows up as a specific form of matter, different from the usual “material” particles – even though quantum electrodynamics (to be reviewed in QM Chapter 9) offers their joint description.

Many real-world problems involve multiple point charges located so closely that it is possible to approximate them with a continuous charge distribution. Indeed, let us consider a group of many (dN >> 1) close charges, located at points r_{k'}, all within an elementary volume d³r'. For relatively distant field observation points, with |r – r_{k'}| >> dr', the geometrical factor in the corresponding terms of Eq. (7) is essentially the same. As a result, these charges may be treated as a single elementary charge dQ(r'). Since at dN >> 1, this elementary charge is proportional to the elementary volume d³r', we can define the local 3D charge density ρ(r') by the following relation:

\rho(\mathbf{r}')\, d^3r' \equiv dQ(\mathbf{r}') \equiv \sum_{\mathbf{r}_{k'} \in d^3r'} q_{k'},    (1.8)

and rewrite Eq. (7) as an integral (over the volume containing all essential charges):

Electric field of a continuous charge:
\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \int \rho(\mathbf{r}') \frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^3}\, d^3r'.    (1.9)
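Eqs. (7) and (9) are also good starting points for simple numerical experiments. For instance, the following sketch (a purely illustrative helper, with hypothetical parameter values) evaluates the sum (7) for an arbitrary list of point charges, in the SI units:

```python
import math

def coulomb_field(r, charges, eps0=8.854e-12):
    """Electric field E(r) of a set of point charges, per Eq. (1.7):
    E = (1/4*pi*eps0) * sum_k q_k (r - r_k)/|r - r_k|^3.
    `charges` is a list of (q, (x, y, z)) tuples; SI units throughout."""
    E = [0.0, 0.0, 0.0]
    for q, rk in charges:
        d = [r[i] - rk[i] for i in range(3)]       # r - r_k
        dist = math.sqrt(sum(c * c for c in d))
        pref = q / (4.0 * math.pi * eps0 * dist ** 3)
        for i in range(3):
            E[i] += pref * d[i]
    return E

# Example: a dipole, +q and -q at z = +/- 0.5 m; at a point on the x-axis,
# the transverse components cancel and the field points along -z.
q = 1e-9
E = coulomb_field((2.0, 0.0, 0.0), [(q, (0, 0, 0.5)), (-q, (0, 0, -0.5))])
```

For a single charge, the function reproduces the familiar magnitude q/4πε₀r² directed along r.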

5 Note that some widely used approximations, e.g., the density functional theory (DFT) of multiparticle systems, essentially violate this law, thus limiting their accuracy and applicability – see, e.g., QM Sec. 8.4.


Note that for a continuous, smooth charge density ρ(r'), the integral in Eq. (9) does not diverge at R ≡ |r – r'| → 0, because in this limit, the fraction under the integral increases as R⁻², i.e. slower than the decrease of the elementary volume d³r', proportional to R³.

Let me emphasize the dual use of Eq. (9). In the case when ρ(r) is a continuous function representing the average charge defined by Eq. (8), Eq. (9) is not valid at distances |r – r_{k'}| of the order of the distance between the adjacent point charges, i.e. does not describe rapid variations of the electric field at these distances. Such an approximate, smoothly changing field E(r) is called macroscopic; we will repeatedly return to this notion in the following chapters. On the other hand, Eq. (9) may be also used for the description of the exact (frequently called microscopic) field of discrete point charges, by employing the notion of Dirac’s delta function, which is the mathematical description of a very sharp function equal to zero everywhere but one point, and still having a finite integral (equal to 1).6 Indeed, in this formalism, a set of point charges q_{k'} located in points r_{k'} may be represented by the pseudo-continuous density

\rho(\mathbf{r}') = \sum_{k'} q_{k'}\, \delta(\mathbf{r}' - \mathbf{r}_{k'}).    (1.10)

Plugging this expression into Eq. (9), we return to its exact, discrete version (7). In this sense, Eq. (9) is exact, and we may use it as the general expression for the electric field.

1.2. The Gauss law

Due to the extension of Eq. (9) to point (“discrete”) charges, it may seem that we do not need anything besides it for solving any problem of electrostatics. In practice, however, this is not quite true – first of all, because the direct use of Eq. (9) frequently leads to complex calculations. Indeed, let us try to solve a problem that is conceptually very simple: find the electric field induced by a spherically-symmetric charge distribution with density ρ(r') – see Fig. 2.

Fig. 1.2. One of the simplest problems of electrostatics: the electric field produced by a spherically-symmetric charge distribution.

We may immediately use the problem’s symmetry to argue that the electric field should be also spherically-symmetric, with only one component in the spherical coordinates: E(r)= E(r)nr, where nr  r/r is the unit vector in the direction of the field observation point r. Taking this direction for the polar

6 See, e.g., MA Sec. 14. The 2D (areal) charge density σ and the 1D (linear) density λ may be defined absolutely similarly to the 3D (volumic) density ρ: dQ = σd²r, dQ = λdr. Note that the approximations in which either σ ≠ 0 or λ ≠ 0 imply that ρ is formally infinite at the charge location; for example, the model in which a plane z = 0 is charged with areal density σ ≠ 0 means that ρ = σδ(z), where δ(z) is Dirac’s delta function.


axis of a spherical coordinate system, we can use the evident axial symmetry of the system to reduce Eq. (9) to

E = \frac{1}{4\pi\varepsilon_0} 2\pi \int_0^\infty r'^2 dr' \int_0^\pi \sin\theta'\, d\theta'\, \frac{\rho(r')}{R^2}\cos\theta,    (1.11)

where θ, θ', and R are the geometrical parameters marked in Fig. 2. Since θ and R may be readily expressed via r' and θ', using the auxiliary parameters a and h,

\cos\theta = \frac{r - a}{R}, \qquad R^2 = h^2 + (r - r'\cos\theta')^2, \qquad \text{where } a \equiv r'\cos\theta',\ \ h \equiv r'\sin\theta',    (1.12)

Eq. (11) may be eventually reduced to an explicit integral over r' and θ', and worked out analytically, but that would require some effort.

For other problems, the integral (9) may be much more complicated, defying an analytical solution. One could argue that with the present-day abundance of computers and numerical algorithm libraries, one can always resort to numerical integration. This argument may be enhanced by the fact that numerical integration is based on the replacement of the required integral by a discrete sum, and the summation is much more robust to the (unavoidable) rounding errors than the finite-difference schemes typical for the numerical solution of differential equations. These arguments, however, are only partly justified, since in many cases the numerical approach runs into a problem sometimes called the curse of dimensionality – the exponential dependence of the number of needed calculations on the number of independent parameters of the problem.7 Thus, despite the proliferation of numerical methods in physics, analytical results have an everlasting value, and we should try to get them whenever we can.

For our current problem of finding the electric field generated by a fixed set of electric charges, large help may come from the so-called Gauss law. To derive it, let us consider a single point charge q inside a smooth closed surface S (Fig. 3), and calculate the product E_n d²r, where d²r is an elementary area of the surface (which may be well approximated with a plane fragment of that area), and E_n ≡ E·n is the component of the electric field at that point, normal to the plane.
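As an aside, it is easy to see what the numerical-integration option amounts to for Eq. (11). The sketch below (an arbitrary illustration: a uniform sphere with R₀ = 1 and ρ₀ = 1, in units with 4πε₀ = 1) applies the simplest midpoint rule to the 2D integral; for an observation point outside the sphere, the result is numerically indistinguishable from the field of a point charge Q placed in the sphere's center — the fact to be obtained, in a much simpler way, from the Gauss law below.

```python
import math

def field_direct(r, rho0=1.0, R0=1.0, n=400):
    """Brute-force evaluation of Eq. (1.11) for a uniform sphere of radius R0
    (units with 4*pi*eps0 = 1), using a midpoint rule over r' and theta'."""
    E = 0.0
    dr = R0 / n                      # rho(r') = 0 beyond R0, so we stop there
    dth = math.pi / n
    for i in range(n):
        rp = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            a = rp * math.cos(th)    # auxiliary parameters of Eq. (1.12)
            h = rp * math.sin(th)
            R2 = h * h + (r - a) ** 2
            cos_theta = (r - a) / math.sqrt(R2)
            E += (2.0 * math.pi * rho0 * rp ** 2 * math.sin(th)
                  * cos_theta / R2) * dr * dth
    return E

r = 2.0
Q = (4.0 / 3.0) * math.pi            # total charge for rho0 = R0 = 1
print(field_direct(r), Q / r ** 2)   # the two values nearly coincide
```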

Fig. 1.3. Deriving the Gauss law: a point charge q (a) inside the volume V, and (b) outside of that volume.

7 For a more detailed discussion of this problem, see, e.g., CM Sec. 5.8.


This component may be calculated as E cosθ, where θ is the angle between the vector E and the unit vector n normal to the surface. Now let us notice that the product cosθ d²r is nothing more than the area d²r' of the projection of d²r onto the plane normal to the vector r connecting the charge q with the considered point of the surface (Fig. 3), because the angle between the elementary areas d²r' and d²r is also equal to θ. Using the Coulomb law for E, we get

E_n d^2r = E\cos\theta\, d^2r = \frac{1}{4\pi\varepsilon_0} \frac{q}{r^2}\, d^2r'.    (1.13)

But the ratio d²r'/r² is nothing more than the elementary solid angle dΩ under which the areas d²r' and d²r are seen from the charge point, so E_n d²r may be represented just as a product of dΩ by a constant (q/4πε₀). Summing these products over the whole surface, we get

\oint_S E_n d^2r = \frac{q}{4\pi\varepsilon_0} \oint_S d\Omega = \frac{q}{\varepsilon_0},    (1.14)

since the full solid angle equals 4π. (The integral on the left-hand side of this relation is called the flux of electric field through the surface S.)

Relation (14) expresses the Gauss law for one point charge. However, it is only valid if the charge is located inside the volume V limited by the surface S. To find the flux created by a charge located outside of this volume, we still can use Eq. (13), but have to be careful with the signs of the elementary contributions E_n d²r. Let us use the common convention to direct the unit vector n out of the closed volume we are considering (the so-called outer normal), so the elementary product E_n d²r = (E·n)d²r, and hence the elementary solid angle dΩ, is positive if the vector E is pointing out of the volume (like in the example shown in Fig. 3a and at the upper-right area in Fig. 3b), and negative in the opposite case (for example, at the lower-left area in Fig. 3b). As the latter panel shows, if the charge is located outside of the volume, for each positive contribution dΩ there is always an equal and opposite contribution to the integral. As a result, at the integration over the solid angle, the positive and negative contributions cancel exactly, so

\oint_S E_n d^2r = 0.    (1.15)
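Both statements (14) and (15) are easy to verify by a direct numerical surface integration — for example (a toy sketch in units with 4πε₀ = 1, so the expected flux values are 4πq and 0), over the six faces of a cube:

```python
import math

def flux_through_cube(q, r0, half=1.0, n=100):
    """Flux of a point charge's field through the surface of the cube
    [-half, half]^3, by midpoint summation over all six faces
    (units with 4*pi*eps0 = 1, so E = q * rel / |rel|^3)."""
    d = 2.0 * half / n
    total = 0.0
    for axis in range(3):
        for s in (-half, half):                  # the two faces along `axis`
            for i in range(n):
                for j in range(n):
                    u = -half + (i + 0.5) * d
                    v = -half + (j + 0.5) * d
                    p = [0.0, 0.0, 0.0]          # midpoint of a surface cell
                    p[axis] = s
                    p[(axis + 1) % 3] = u
                    p[(axis + 2) % 3] = v
                    rel = [p[k] - r0[k] for k in range(3)]
                    dist = math.sqrt(sum(c * c for c in rel))
                    # E_n = E . (outer normal); the normal is -e_axis at s < 0
                    En = (q / dist ** 3) * rel[axis] * (1 if s > 0 else -1)
                    total += En * d * d
    return total

print(flux_through_cube(1.0, (0.0, 0.0, 0.0)))  # charge inside:  ~ 4*pi
print(flux_through_cube(1.0, (3.0, 0.0, 0.0)))  # charge outside: ~ 0
```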

The real power of the Gauss law is revealed by its generalization to the case of several, especially many charges. Since the calculation of flux is a linear operation, the linear superposition principle (4) means that the flux created by several charges is equal to the (algebraic) sum of individual fluxes from each charge, for which either Eq. (14) or Eq. (15) is valid, depending on whether the charge is in or out of the volume. As the result, for the total flux we get:

Gauss law:
\oint_S E_n d^2r = \frac{Q_V}{\varepsilon_0} \equiv \frac{1}{\varepsilon_0} \sum_{\mathbf{r}_j \in V} q_j \equiv \frac{1}{\varepsilon_0} \int_V \rho(\mathbf{r}')\, d^3r',    (1.16)

where Q_V is the net charge inside volume V. This is the full version of the Gauss law.8

In order to appreciate the problem-solving power of the law, let us revisit the problem shown in Fig. 2, i.e. the field of a spherical charge distribution. Due to its symmetry, which had already been

8 The law is named after the famed Carl Gauss (1777-1855), even though it was first formulated earlier (in 1773) by Joseph-Louis Lagrange, who was also the father-founder of analytical mechanics – see, e.g., CM Chapter 2.


discussed above, if we apply Eq. (16) to a sphere of a certain radius r, the electric field has to be normal to the sphere at each of its points (i.e. E_n = E), and its magnitude has to be the same at all points: E_n = E(r). As a result, the flux calculation is elementary:

\oint E_n d^2r = 4\pi r^2 E(r).    (1.17)

Now applying the Gauss law (16), we get:

4\pi r^2 E(r) = \frac{1}{\varepsilon_0} \int_{r' \le r} \rho(r')\, d^3r' = \frac{4\pi}{\varepsilon_0} \int_0^r \rho(r')\, r'^2 dr',    (1.18)

so, finally,

E(r) = \frac{1}{4\pi\varepsilon_0} \frac{Q_r}{r^2},    (1.19)

where Q_r is the full charge inside the sphere of radius r:

Q_r \equiv \int_{r' \le r} \rho(r')\, d^3r' = 4\pi \int_0^r \rho(r')\, r'^2 dr'.    (1.20)

In particular, this formula shows that the field outside of a sphere of a finite radius R is exactly the same as if all its charge Q = Q(R) were concentrated in the sphere’s center. (Note that this important result is only valid for a spherically-symmetric charge distribution.) For the field inside the sphere, finding the electric field still requires the explicit integration (20), but this 1D integral is much simpler than the 2D integral (11), and in some important cases may be readily worked out analytically. For example, if the charge Q is uniformly distributed inside a sphere of radius R,

\rho(r') = \rho = \frac{Q}{V} = \frac{Q}{(4\pi/3)R^3},    (1.21)

then the integration is elementary:

E(r) = \frac{\rho}{\varepsilon_0 r^2} \int_0^r r'^2 dr' = \frac{\rho r}{3\varepsilon_0} = \frac{1}{4\pi\varepsilon_0} \frac{Qr}{R^3}.    (1.22)

We see that in this case, the field is growing linearly from the center to the sphere’s surface, and only at r > R starts to decrease in agreement with Eq. (19) with constant Q(r) = Q. Note also that the electric field is continuous for all r (including r = R) – as it is for all systems with a finite volumic density.

In order to underline the importance of the last condition, let us consider one more elementary but very important example of the Gauss law’s application. Let a thin plane sheet (Fig. 4) be charged uniformly, with a finite areal density σ = const. In this case, it is fruitful to use the Gauss volume in the form of a planar “pillbox” of thickness 2z (where z is the Cartesian coordinate perpendicular to the plane) and a certain area A – see the dashed lines in Fig. 4. Due to the symmetry of the problem, it is evident that the electric field should be: (i) directed along the z-axis, (ii) constant on each of the upper and bottom sides of the pillbox, (iii) equal and opposite on these sides, and (iv) parallel to the side surfaces of the box. As a result, the full electric field flux through the pillbox’s surface is just 2AE(z), so the Gauss law (16) yields 2AE(z) = Q_A/ε₀ ≡ σA/ε₀, and we get a very simple but important formula

E(z) = \frac{\sigma}{2\varepsilon_0} = \text{const}.    (1.23)
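The distance independence expressed by Eq. (23) may also be traced back to the direct superposition integral (9): cutting the sheet into thin charged rings and summing their contributions gives the same σ/2ε₀ for any |z| much smaller than the sheet's size. A minimal sketch (in units with ε₀ = 1, and with a finite cutoff radius L standing in for the infinite sheet):

```python
import math

def plane_field(z, sigma=1.0, L=1000.0, n=100_000):
    """E_z above the center of a uniformly charged disk of large radius L,
    by a midpoint rule in the polar radius s (units with eps0 = 1).
    A ring of radius s and width ds contributes
        dE_z = sigma * s * z / (2 * (s^2 + z^2)^(3/2)) * ds.
    For L >> z this approaches the infinite-plane value sigma/2."""
    ds = L / n
    E = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        E += sigma * s * z / (2.0 * (s * s + z * z) ** 1.5) * ds
    return E

print(plane_field(0.1), plane_field(1.0))  # both close to sigma/2 = 0.5
```

The small residual difference between the two values is just the finite-size (cutoff) correction, which vanishes as z/L → 0.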


Fig. 1.4. The electric field of a charged plane.

Notice that, somewhat counter-intuitively, the field magnitude does not depend on the distance from the charged plane. From the point of view of the Coulomb law (5), this result may be explained as follows: the farther the observation point from the plane, the weaker the effect of each elementary charge, dQ = σd²r, but the more such elementary charges give contributions to the z-component of vector E, because they are “seen” from the observation point at relatively small angles to the z-axis. Note also that though the magnitude E ≡ |E| of this electric field is constant, its component E_n normal to the plane (for our coordinate choice, E_z) changes its sign at the plane, experiencing a discontinuity (jump) equal to

\Delta E_z \equiv E_z\big|_{z \to +0} - E_z\big|_{z \to -0} = \frac{\sigma}{\varepsilon_0}.    (1.24)

This jump disappears if the surface is not charged. Returning for a split second to our charged sphere problem (Fig. 2), solving it we have considered the volumic charge density ρ to be finite everywhere, including the sphere’s surface, so on it σ = 0, and the electric field should be continuous – as it is.

Admittedly, the integral form (16) of the Gauss law is immediately useful only for highly symmetrical geometries, such as in the two problems discussed above. However, it may be recast into an alternative, differential form whose field of useful applications is much wider. This form may be obtained from Eq. (16) using the divergence theorem of the vector algebra, which is valid for any space-differentiable vector, in particular E, and for the volume V limited by any closed surface S:9

\oint_S E_n d^2r = \int_V (\nabla \cdot \mathbf{E})\, d^3r,    (1.25)

where ∇ is the del (or “nabla”) operator of spatial differentiation.10 Combining Eq. (25) with the Gauss law (16), we get

\int_V \left( \nabla \cdot \mathbf{E} - \frac{\rho}{\varepsilon_0} \right) d^3r = 0.    (1.26)

For a given spatial distribution of electric charge (and hence of its electric field), this equation should be valid for any choice of the volume V. This can hold only if the function under the integral vanishes at each point, i.e. if11

9 See, e.g., MA Eq. (12.2). Note also that the scalar product under the volumic integral in Eq. (25) is nothing else than the divergence of the vector E – see, e.g., MA Eq. (8.4), hence the theorem’s name.
10 See, e.g., MA Secs. 8-10.
11 In the Gaussian units, just as in the initial Eq. (6), ε₀ has to be replaced with 1/4π, so the Maxwell equation (27) looks like ∇·E = 4πρ, while Eq. (28) stays the same.


Inhomogeneous Maxwell equation for E:
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.    (1.27)

Note that in sharp contrast with the integral form (16), Eq. (27) is local: it relates the electric field’s divergence to the charge density at the same point. This equation, being the differential form of the Gauss law, is frequently called one of the famed Maxwell equations12 – to be discussed again and again later in this course.

In the mathematical terminology, Eq. (27) is inhomogeneous, because it has a right-hand side independent (at least explicitly) of the field E that it describes. Another, homogeneous Maxwell equation's "embryo" (this one valid for the stationary case only!) may be obtained by noticing that the curl of the point charge's field, and hence that of any system of charges, equals zero:13

Homogeneous Maxwell equation for E:

∇×E = 0.    (1.28)

(We will arrive at two other Maxwell equations, for the magnetic field, in Chapter 5, and then generalize all the equations to their full, time-dependent form at the end of Chapter 6. However, Eq. (27) will stay the same.)

Just to get a better gut feeling of Eq. (27), let us apply it to the same example of a uniformly charged sphere (Fig. 2). Vector algebra tells us that the divergence of a spherically symmetric vector function E(r) = E(r)n_r may be simply expressed in spherical coordinates:14 ∇·E = [d(r²E)/dr]/r². As a result, Eq. (27) yields a linear ordinary differential equation for the scalar function E(r):

(1/r²) d(r²E)/dr = { ρ/ε₀, for r ≤ R;  0, for r ≥ R },    (1.29)

which may be readily integrated on each of these segments:

E(r) = (1/r²) × { (1/ε₀) ∫ ρ r² dr = ρr³/3ε₀ + c₁, for r ≤ R;  c₂, for r ≥ R }.    (1.30)

To determine the integration constant c₁, we can use the following boundary condition: E(0) = 0. (It follows from the problem's spherical symmetry: in the center of the sphere, the electric field has to vanish, because otherwise, where would it be directed?) This requirement gives c₁ = 0. The second constant, c₂, may be found from the continuity condition E(R – 0) = E(R + 0), which has already been discussed above, giving c₂ = ρR³/3ε₀ ≡ Q/4πε₀. As a result, we arrive at our previous results (19) and (22). We can see that in this particular, highly symmetric case, using the differential form of the Gauss law is a bit more complex than its integral form. (For our second example, shown in Fig. 4, it would be even less natural.) However, Eq. (27) and its generalizations are more convenient for asymmetric charge

12 Named after the genius of classical electrodynamics and statistical physics, James Clerk Maxwell (1831-1879).
13 This follows, for example, from the direct application of MA Eq. (10.11) to any spherically-symmetric vector function of type f(r) = f(r)n_r (in particular, to the electric field of a point charge placed at the origin), giving f_θ = f_φ = 0 and ∂f_r/∂θ = ∂f_r/∂φ = 0, so all components of the vector ∇×f vanish. Since nothing prevents us from placing the reference frame's origin at the point charge's location, this result remains valid for any position of the charge.
14 See, e.g., MA Eq. (10.10) for the particular case ∂/∂θ = ∂/∂φ = 0.


distributions, and are invaluable in the cases where the distribution ρ(r) is not known a priori and has to be found in a self-consistent way. (We will start discussing such cases in the next chapter.)

1.3. Scalar potential and electric field energy

One more help for solving problems of electrostatics (and electrodynamics as a whole) may be obtained from the notion of the electrostatic potential, which is just the electrostatic potential energy U of a probe point charge q placed into the field in question, normalized by its charge:
Electrostatic potential:

φ ≡ U/q.    (1.31)

As we know from classical mechanics,15 the notion of U (and hence φ) makes the most sense for the case of potential forces – for example, those depending just on the particle's position. Eqs. (6) and (9) show that stationary electric fields fall into this category. For such a field, the potential energy may be defined as a scalar function U(r) that allows the force to be calculated as its gradient (with the opposite sign):

F = −∇U.    (1.32)

Dividing both sides of this equation by the probe charge, and using Eqs. (6) and (31), we get16

Electrostatic field as a gradient:

E = −∇φ.    (1.33)

To calculate the scalar potential, let us start from the simplest case of a single point charge q placed at the origin. For it, Eq. (7) takes the simple form

E = (1/4πε₀) q n_r/r² = (1/4πε₀) q r/r³.    (1.34)

It is straightforward to verify that the last fraction in the last form of Eq. (34) is equal to −∇(1/r).17 Hence, according to the definition (33), for this particular case
φ = (1/4πε₀) q/r.    (1.35)
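The pair of results (34)-(35) and the defining relation (33) can be tied together numerically; the short sketch below (an added check, with an assumed charge value) compares −∇φ of the potential (35), estimated by central differences, with the Coulomb field (34).

```python
# Numerical check that the potential (35) reproduces the field (34) via
# E = -grad(phi), Eq. (33). The charge value is a hypothetical test number.
import math

EPS0 = 8.8541878128e-12
Q = 1e-9  # assumed charge, C

def phi(x, y, z):
    return Q / (4 * math.pi * EPS0 * math.sqrt(x * x + y * y + z * z))

def minus_grad_phi(x, y, z, h=1e-7):
    gx = (phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h)
    gy = (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h)
    gz = (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h)
    return (-gx, -gy, -gz)

def coulomb(x, y, z):
    r3 = (x * x + y * y + z * z) ** 1.5
    c = Q / (4 * math.pi * EPS0 * r3)
    return (c * x, c * y, c * z)

num = minus_grad_phi(0.2, -0.1, 0.3)
exact = coulomb(0.2, -0.1, 0.3)
print(max(abs(a - b) for a, b in zip(num, exact)))  # small mismatch only
```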

(In the Gaussian units, this result is spectacularly simple: φ = q/r.) Note that we could add an arbitrary constant to this potential (and indeed to any other distribution of φ discussed below) without changing the field, but it is convenient to define the potential energy so it would approach zero at infinity.

In order to justify the introduction and the forthcoming exploration of U and φ, let me demonstrate (I hope, unnecessarily :-) how useful the notions are, on a very simple example. Let two similar charges q be launched from afar, with the same initial speed v₀ […]

[…] R₁ + R₂.

1.20. Calculate the electrostatic energy U of a (generally, thick) spherical shell, with charge Q uniformly distributed through its volume – see the figure on the right. Analyze and interpret the dependence of U on the inner cavity's radius R₁, at fixed Q and R₂.

(Figure for Problem 1.20: a thick spherical shell with total charge Q, inner radius R₁, and outer radius R₂.)

28 This is only the simplest one of several reciprocity theorems in electromagnetism – see, e.g., Sec. 6.8 below.
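Problem 1.20 above can also be explored numerically. The sketch below (added here for illustration; all parameter values are assumed, not taken from the text) computes the field energy U = (ε₀/2)∫E² d³r of the thick shell, with E(r) obtained from the Gauss law, and shows how U varies with the inner radius R₁ at fixed Q and R₂.

```python
# Numerical exploration of Problem 1.20 (illustrative sketch; Q and R2 are
# hypothetical test values). The field of the uniformly charged thick shell:
#   E = 0 for r < R1;  E = Q (r^3 - R1^3)/(R2^3 - R1^3) / (4 pi eps0 r^2)
#   inside the shell; and the point-charge field Q/(4 pi eps0 r^2) outside.
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q, R2 = 1e-9, 0.1        # fixed total charge (C) and outer radius (m)

def U(R1, N=20000, rmax_factor=200):
    """Midpoint-rule estimate of (eps0/2) * Int E^2 * 4 pi r^2 dr."""
    vol3 = R2**3 - R1**3          # sets the (uniform) charge density
    rmax = rmax_factor * R2       # truncation radius of the exterior integral
    total = 0.0
    for i in range(N):
        r = (i + 0.5) * rmax / N
        if r <= R1:
            E = 0.0
        elif r <= R2:
            E = Q * (r**3 - R1**3) / vol3 / (4 * math.pi * EPS0 * r**2)
        else:
            E = Q / (4 * math.pi * EPS0 * r**2)
        total += (EPS0 / 2) * E**2 * 4 * math.pi * r**2 * (rmax / N)
    return total

for R1 in (0.0, 0.05, 0.099):
    print(R1, U(R1))
# U decreases as R1 grows toward R2: spreading the same charge into a thinner
# shell at larger radii pushes the elementary charges farther apart.
```

For R₁ = 0 the result approaches the solid-ball value 3Q²/20πε₀R₂, and for R₁ → R₂ the thin-shell value Q²/8πε₀R₂, so the numbers bracket the expected limits.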


Chapter 2. Charges and Conductors

This chapter starts our discussion of the very common situations when the electric charge distribution in space is not known a priori but should be calculated, in a self-consistent way, together with the electric field it creates. The simplest situations of this kind involve conductors, and lead to the so-called boundary problems, in which the partial differential equations describing the field distribution have to be solved with appropriate boundary conditions. Such problems are also typical for other parts of electrodynamics (and indeed for other fields of physics as well), so following tradition, I will use this chapter's material as a playground for a discussion of various methods of boundary problem solution, and of the special functions most frequently encountered on that way.

2.1. Polarization and screening

The basic principles of electrostatics outlined in Chapter 1 present the conceptually full solution of the problem of finding the electrostatic field (and hence Coulomb forces) induced by electric charges distributed over space with some density ρ(r). However, in most practical situations, this function is not known but should be found self-consistently with the field. For example, if a sample of relatively dense material is placed into an external electric field, it is typically polarized, i.e. acquires some local charges of its own, which contribute to the total electric field E(r) inside, and even outside it – see Fig. 1a.

Fig. 2.1. Two typical electrostatic situations involving conductors: (a) polarization by an external field, and (b) re-distribution of conductor's own charge over its surface – schematically. Here and below, the red and blue points denote charges of opposite signs.

The full solution of such problems should satisfy not only the fundamental Eq. (1.7) but also the so-called constitutive relations between the macroscopic variables describing the sample's material. Due to the atomic character of real materials, such relations may be very involved. In this part of my series, I will have time to address these relations, for various materials, only rather superficially,1 focusing on their simple approximations. Fortunately, in most practical cases such approximations work very well. In particular, for the polarization of good conductors, a very reasonable approximation is given by the so-called macroscopic model, in which the free charges in the conductor are treated as a charged continuum that is free to move under the effect of the force F = qE exerted by the macroscopic electric field E, i.e. the field averaged over space on the atomic scale – see also the discussion at the end

1 A more detailed discussion of the electrostatic field screening may be found, e.g., in SM Sec. 6.4. (Alternatively, see either Sec. 13.5 of J. Hook and H. Hall, Solid State Physics, 2nd ed., Wiley, 1991; or Chapter 17 of N. Ashcroft and N. Mermin, Solid State Physics, Brooks Cole, 1976.)

© K. Likharev


of Sec. 1.1. In electrostatics (which excludes the case of dc currents, to be discussed in Chapter 4 below), there should be no such motion, so everywhere inside the conductor the macroscopic electric field should vanish:

E = 0.    (2.1a)

This is the electric field screening2 effect, meaning, in particular, that conductors' polarization in an external electric field has the extreme form shown (rather schematically) in Fig. 1a, with the field of the induced surface charges completely compensating the external field in the conductor's bulk. Note that Eq. (1a) may be rewritten in another, frequently more convenient form:

φ = const,    (2.1b)

where φ is the macroscopic electrostatic potential, related to the macroscopic field by Eq. (1.33).3 (If a problem includes several unconnected conductors, the constant in Eq. (1b) may be specific for each of them.)

Now let us examine what we can say about the electric field in free space just outside a conductor, within the same macroscopic model. At close proximity, any smooth surface (in our current case, that of a conductor) looks planar. Let us integrate Eq. (1.28) over a narrow (d → 0) […]

[…] e > 0, so that the electron's charge equals (–e).
7 See, e.g., SM Sec. 3.1.

n_e(r) = n exp{−U(r)/k_BT},    (2.4)

where T is the absolute temperature in kelvins (K), and k_B ≈ 1.38×10⁻²³ J/K is the Boltzmann constant. As a result, the net charge density is

ρ(r) = en[1 − exp{eφ(r)/k_BT}].    (2.5)

The penetrating electric field polarizes the atoms as well. As will be discussed in the next chapter, such polarization results in the reduction of the electric field by a material-specific dimensionless factor κ (larger, but typically not too much larger than 1), called the dielectric constant. As a result, the Poisson equation (1.41) takes the so-called Poisson-Boltzmann form,8

d²φ/dz² = −ρ/κε₀ = (en/κε₀)[exp{eφ/k_BT} − 1],    (2.6)

where we have taken advantage of the 1D geometry of the system to simplify the Laplace operator, with the z-axis normal to the surface. Even with this simplification, Eq. (6) is a nonlinear differential equation allowing an analytical but rather bulky solution. Since our current goal is just to estimate the field penetration depth λ, let us simplify the equation further by considering the low-field limit: eφ ~ eEλ […]

[…] E_F >> k_BT, called the Fermi energy. In these conditions, the screening of a relatively low electric field may be described by replacing Eq. (5) with

ρ = −e(n_e − n) ≈ e g(E_F) U = −e² g(E_F) φ,    (2.9)

where g(E) is the density of quantum states (per unit volume per unit energy) at the electron's energy E. At the Fermi surface, the density is of the order of n/E_F.11 As a result, we again get the second of Eqs. (7), but with a different characteristic scale λ, defined by the following relation (the Thomas-Fermi screening length):

λ²_TF = ε₀/e²g(E_F) ~ ε₀E_F/e²n,    (2.10)

and called the Thomas-Fermi screening length. Since for most good metals, n is of the order of 10²⁹ m⁻³, and E_F is of the order of 10 eV, Eq. (10) typically gives λ_TF close to a few a₀, and makes the Thomas-Fermi screening theory valid at least semi-quantitatively.

To summarize, the electric field penetration into good conductors is limited to a depth λ ranging from a fraction of a nanometer to a few nanometers, so for problems with a characteristic linear size much larger than that scale, the macroscopic model (1) gives very good accuracy, and we will use it in the rest of this chapter. However, the reader should remember that in many situations involving semiconductors, as well as in some nanoscale experiments with metals, the electric field penetration should be taken into account.

Another important condition of the macroscopic model's validity is imposed on the electric field's magnitude, which is especially significant for semiconductors. Indeed, as Eq. (6) shows, Eq. (7) is only valid if eφ […]

[…] (l >> a), the capacitance diverges, but very weakly. Such logarithmic divergence may be cut by any minuscule additional effect, for example by the finite length l of the system. This fact yields the following very useful estimate of the self-capacitance of a single round wire of radius a:

C ≈ 2πε₀l/ln(l/a),  for l >> a.    (2.50)

On the other hand, if the gap d between the conductors is very narrow: d ≡ b – a […]

[…] >> a, it coincides with Eq. (17) for the self-capacitance of the inner conductor. On the other hand, if the gap d between two conductors is narrow, d ≡ b – a […]

[…] >> 1 (i.e. r >> R). In the opposite limit, the surface of constant α = 0 describes our thin disk of radius R, with the coordinate θ describing the distance ρ ≡ (x² + y²)^{1/2} = R sin θ of its point from the z-axis. It is almost evident (and easy to prove) that the curvilinear coordinates (59) are also orthogonal; the Laplace operator in them is:

∇² = 1/[R²(cosh²α − sin²θ)] × {(1/cosh α) ∂/∂α (cosh α ∂/∂α) + (1/sin θ) ∂/∂θ (sin θ ∂/∂θ)} + 1/(R² cosh²α sin²θ) × ∂²/∂φ².    (2.60)

Though this expression may look a bit intimidating, let us notice that since in our current problem the boundary conditions depend only on α:25

24 For the solution of some problems, it is convenient to use Eqs. (59) with –∞ < α < +∞ and 0 ≤ θ ≤ π/2.
25 I have called the disk's potential V, to distinguish it from the potential φ at an arbitrary point of space.

φ|_{α=0} = V,   φ|_{α→∞} = 0,    (2.61)

there is every reason to assume that the electrostatic potential in all space is a function of α alone; in other words, that all ellipsoids α = const are the equipotential surfaces. Indeed, acting on such a function φ(α) by the Laplace operator (60), we see that the two last terms in the square brackets vanish, and the Laplace equation (35) is reduced to a simple ordinary differential equation

(d/dα)(cosh α dφ/dα) = 0.    (2.62)

Integrating it twice, just as we did in the three previous problems, we get

φ(α) = c₁ ∫ dα/cosh α + c₂.    (2.63)

This integral may be readily worked out using the substitution ξ ≡ sinh α (which gives dξ ≡ cosh α dα, i.e. dα = dξ/cosh α, and cosh²α = 1 + sinh²α ≡ 1 + ξ²):

φ(α) = c₁ ∫₀^{sinh α} dξ/(1 + ξ²) + c₂ = c₁ tan⁻¹(sinh α) + c₂.    (2.64)

The integration constants c₁,₂ may be simply found from the boundary conditions (61), and we arrive at the following final expression for the electrostatic potential:

φ(α) = V[1 − (2/π) tan⁻¹(sinh α)] ≡ (2V/π) tan⁻¹(1/sinh α).    (2.65)
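Before moving on, the result (65) can be cross-checked numerically. The sketch below (an added illustration, with an arbitrary test value of V) verifies that it obeys the boundary conditions (61) and that the "flux" cosh α · dφ/dα, whose α-derivative appears in Eq. (62), is indeed constant.

```python
# Finite-difference check (not part of the original derivation) that Eq. (65)
# solves the ODE (62) and meets the boundary conditions (61).
import math

V = 1.0  # the disk's potential (arbitrary test value)

def phi(a):
    return V * (1 - (2 / math.pi) * math.atan(math.sinh(a)))

# Boundary conditions (61): phi(0) = V, and phi -> 0 as alpha -> infinity:
assert abs(phi(0.0) - V) < 1e-12
assert abs(phi(20.0)) < 1e-8

# ODE (62): the combination cosh(alpha) * d(phi)/d(alpha) must be constant,
# so its numerical derivative should vanish at any alpha:
def flux(a, h=1e-5):
    return math.cosh(a) * (phi(a + h) - phi(a - h)) / (2 * h)

for a in (0.3, 1.0, 2.5):
    d = (flux(a + 1e-4) - flux(a - 1e-4)) / 2e-4
    assert abs(d) < 1e-4
print("Eq. (65) passes the ODE and boundary-condition checks")
```

(Analytically, cosh α · dφ/dα = −2V/π at every α, which is exactly the first integration constant c₁ of Eq. (63) up to sign conventions.)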

This solution satisfies both the Laplace equation and the boundary conditions. Mathematicians tell us that the solution of any boundary problem of the type (35) is unique, so we do not need to look any further. Now we may use Eq. (3) to find the surface density of electric charge, but in the case of a thin disk, it is more natural to add up such densities on its top and bottom surfaces at the same distance ρ = (x² + y²)^{1/2} from the disk's center. The densities are evidently equal, due to the problem's symmetry about the plane z = 0, so the total density is σ = 2ε₀E_n|_{z=+0}. According to Eq. (65), and the last of Eqs. (59), the electric field on the upper surface is

E_n|_{z=+0} = −(∂φ/∂z)|_{z=+0} = −[∂φ/∂α / ∂(R sinh α cos θ)/∂α]|_{α=0} = (2/π) V/(R cos θ) = (2/π) V/(R² − ρ²)^{1/2},    (2.66)

and we see that the charge is distributed over the disk very nonuniformly:

σ = (4/π) ε₀V/(R² − ρ²)^{1/2},    (2.67)

with a singularity at the disk edge. Below we will see that such singularities are very typical for sharp edges of conductors. Fortunately, in our current case the divergence is integrable, giving a finite disk charge:

Q = ∫_{disk surface} σ d²ρ = ∫₀^R σ(ρ) 2πρ dρ = 4ε₀V ∫₀^R 2ρ dρ/(R² − ρ²)^{1/2} = 4ε₀VR ∫₀¹ dξ/(1 − ξ)^{1/2} = 8ε₀RV.    (2.68)
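The convergence of the charge integral (68), despite the edge singularity of σ, is easy to confirm numerically; here is a minimal sketch (assumed test values of R and V), using the substitution ρ = R sin t that removes the inverse-square-root singularity before applying a simple midpoint rule.

```python
# Numerical cross-check (illustration only; R and V are hypothetical) of the
# disk-charge integral (68): Q should converge to 8 * eps0 * R * V.
import math

EPS0 = 8.8541878128e-12
R, V = 0.05, 10.0  # disk radius (m) and potential (V), assumed values

def sigma(rho):
    """Surface charge density of Eq. (67)."""
    return (4 / math.pi) * EPS0 * V / math.sqrt(R * R - rho * rho)

# With rho = R sin(t), sqrt(R^2 - rho^2) = R cos(t), so the integrand
# sigma * 2*pi*rho * drho becomes the perfectly smooth 8*eps0*V*R*sin(t)*dt:
N = 200_000
Q = 0.0
for i in range(N):
    t = (i + 0.5) * (math.pi / 2) / N
    rho = R * math.sin(t)
    drho = R * math.cos(t) * (math.pi / 2) / N
    Q += sigma(rho) * 2 * math.pi * rho * drho

print(Q / (8 * EPS0 * R * V))  # ratio close to 1
```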

Thus, for the disk's self-capacitance we get a very simple result,

C = 8ε₀R ≡ (2/π) × 4πε₀R,    (2.69)

a factor of π/2 ≈ 1.57 lower than that for the conducting sphere of the same radius, but still complying with the general estimate (18).

Can we always find such a "good" system of orthogonal coordinates? Unfortunately, the answer is no, even for highly symmetric geometries. This is why the practical value of this approach is limited, and other, more general methods of boundary problem solution are clearly needed. Before proceeding to their discussion, however, let me note that in the case of 2D problems (i.e. cylindrical geometries26), the orthogonal coordinate method gets much help from the following conformal mapping approach.

Let us consider a pair of Cartesian coordinates {x, y} of the cylinder's cross-section plane as a complex variable z ≡ x + iy,27 where i is the imaginary unit (i² = –1), and let w(z) = u + iv be an analytic complex function of z.28 For our current purposes, the most important property of an analytic function is that its real and imaginary parts obey the following Cauchy-Riemann relations:29

∂u/∂x = ∂v/∂y,   ∂v/∂x = −∂u/∂y.    (2.70)

For example, for the function

w = z² = (x + iy)² = (x² − y²) + 2ixy,    (2.71)

whose real and imaginary parts are

u ≡ Re w = x² − y²,   v ≡ Im w = 2xy,    (2.72)

we immediately see that ∂u/∂x = 2x = ∂v/∂y, and ∂v/∂x = 2y = −∂u/∂y, in accordance with Eq. (70). Let us differentiate the first of Eqs. (70) over x again, then change the order of differentiation, and after that use the latter of those equations:

∂²u/∂x² = ∂/∂x (∂u/∂x) = ∂/∂x (∂v/∂y) = ∂/∂y (∂v/∂x) = −∂/∂y (∂u/∂y) = −∂²u/∂y²,    (2.73)
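The chain of equalities (70)-(73) for the sample function w = z² can be confirmed directly; the following small sketch (added here, not in the original text) checks the Cauchy-Riemann relations and the harmonicity of both u and v by finite differences at an arbitrary point.

```python
# Numerical illustration of Eqs. (70)-(73) for w = z^2: its real and imaginary
# parts obey the Cauchy-Riemann relations and the 2D Laplace equation.
def u(x, y): return x * x - y * y   # Re(z^2), Eq. (72)
def v(x, y): return 2 * x * y       # Im(z^2), Eq. (72)

def d(f, x, y, axis, h=1e-6):
    """Central-difference partial derivative along 'x' or 'y'."""
    if axis == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, -1.3  # arbitrary test point
# Eq. (70): du/dx = dv/dy and dv/dx = -du/dy
assert abs(d(u, x0, y0, 'x') - d(v, x0, y0, 'y')) < 1e-8
assert abs(d(v, x0, y0, 'x') + d(u, x0, y0, 'y')) < 1e-8

# Eq. (73): u_xx + u_yy = 0, and the same for v (5-point Laplacian stencil):
def lap(f, x, y, h=1e-4):
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

assert abs(lap(u, x0, y0)) < 1e-5 and abs(lap(v, x0, y0)) < 1e-5
print("w = z^2 obeys Cauchy-Riemann and the 2D Laplace equation")
```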

26 Let me remind the reader that the term cylindrical describes any surface formed by a translation, along a straight line, of an arbitrary curve, and is hence more general than the usual circular cylinder. (In this terminology, for example, a prism is also a cylinder of a particular type, formed by a translation of a polygon.)
27 The complex variable z should not be confused with the (real) 3rd spatial coordinate z! We are considering 2D problems now, with the potential independent of z.
28 An analytic (or "holomorphic") function may be defined as one that may be expanded into the Taylor series in its complex argument, i.e. is infinitely differentiable at the given point. (Almost all "regular" functions, such as zⁿ, z^{1/n}, exp z, ln z, etc., and their linear combinations, are analytic at all z, maybe besides certain special points.) If the reader needs to brush up on their background on this subject, I can recommend a popular textbook by M. Spiegel et al., Complex Variables, 2nd ed., McGraw-Hill, 2009.
29 These relations may be used, in particular, to prove the Cauchy integral formula – see, e.g., MA Eq. (15.1).

and similarly for v. This means that the sum of second-order partial derivatives of each of the real functions u(x, y) and v(x, y) is zero, i.e. that both functions obey the 2D Laplace equation. This mathematical fact opens a nice way of solving problems of electrostatics for (relatively simple) 2D geometries. Imagine that for a particular boundary problem we have found a function w(z) for which either u(x, y) or v(x, y) is constant on all electrode surfaces. Then all lines of constant u (or v) represent equipotential surfaces, i.e. the problem of the potential distribution has been essentially solved.

As a simple example, let us consider a problem important for practice: the quadrupole electrostatic lens – a system of four cylindrical electrodes with hyperbolic cross-sections, whose boundaries are described by the following relations:

x² − y² = { +a², for the left and right electrodes;  −a², for the top and bottom electrodes },    (2.74)

voltage-biased as shown in Fig. 9a.

Fig. 2.9. (a) The quadrupole electrostatic lens' cross-section and (b) its conformal mapping.

Comparing these relations with Eqs. (72), we see that each electrode surface corresponds to a constant value of the real part u(x, y) of the function given by Eq. (71): u = ±a². Moreover, the potentials of both surfaces with u = +a² are equal to +V/2, while those with u = –a² are equal to –V/2. Hence we may conjecture that the electrostatic potential at each point is a function of u alone; moreover, a simple linear function,

φ = c₁u + c₂ = c₁(x² − y²) + c₂,    (2.75)

is a valid (and hence the unique) solution of our boundary problem. Indeed, it does satisfy the Laplace equation, while the constants c₁,₂ may be readily selected in a way to satisfy all the boundary conditions shown in Fig. 9a:

φ = (V/2)(x² − y²)/a²,    (2.76)

so the boundary problem has been solved. According to Eq. (76), all equipotential surfaces are hyperbolic cylinders, similar to those of the electrode surfaces. What remains is to find the electric field at an arbitrary point inside the system:

E_x = −∂φ/∂x = −V x/a²,   E_y = −∂φ/∂y = V y/a².    (2.77)
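The conjectured solution (76) is easy to validate directly; the sketch below (an added check with hypothetical values of V and a) confirms that it is harmonic and takes the required values ±V/2 on the hyperbolic electrode surfaces (74).

```python
# Validation sketch (hypothetical parameters) for the quadrupole-lens solution
# (76): it obeys the 2D Laplace equation and the boundary conditions (74).
import math, random

V, a = 100.0, 0.01  # assumed bias voltage (V) and electrode scale (m)

def phi(x, y):
    return (V / 2) * (x * x - y * y) / (a * a)

# On the right electrode, x^2 - y^2 = +a^2, so phi must equal +V/2:
y = 0.004
x = math.sqrt(a * a + y * y)
assert abs(phi(x, y) - V / 2) < 1e-9
# ...and on the top electrode, x^2 - y^2 = -a^2, phi must equal -V/2:
x = 0.004
y = math.sqrt(a * a + x * x)
assert abs(phi(x, y) + V / 2) < 1e-9

# phi satisfies the Laplace equation at arbitrary interior points:
random.seed(1)
h = 1e-4
for _ in range(5):
    x, y = random.uniform(-a, a), random.uniform(-a, a)
    lap = (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
           - 4 * phi(x, y)) / (h * h)
    assert abs(lap) < 1e-3
print("Eq. (76) passes the Laplace and boundary-condition checks")
```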


These formulas show, in particular, that if charged particles (e.g., electrons in an electron-optics system) are launched to fly ballistically through such a lens, along the z-axis, they experience a force pushing them toward the symmetry axis and proportional to the particle's deviation from the axis (and thus equivalent in action to an optical lens with a positive refraction power) in one direction, and a force pushing them out (negative refractive power) in the perpendicular direction. One can show that letting the particles fly through several such lenses, with alternating voltage polarities, in series, enables beam focusing.30

Hence, we have reduced the 2D Laplace boundary problem to that of finding the proper analytic function w(z). This task may also be understood as that of finding a conformal map, i.e. a correspondence between components of any point pair, {x, y} and {u, v}, residing, respectively, on the initial Cartesian plane z and the plane w of the new variables. For example, Eq. (71) maps the real electrode configuration onto a plane capacitor of infinite area (Fig. 9b), and the simplicity of Eq. (75) is due to the fact that for the latter system the equipotential surfaces are just parallel planes u = const.

For more complex geometries, the suitable analytic function w(z) may be hard to find. However, for conductors with piece-linear cross-section boundaries, substantial help may be obtained from the following Schwarz-Christoffel integral

w(z) = const × ∫ dz/[(z − x₁)^{k₁} (z − x₂)^{k₂} … (z − x_{N−1})^{k_{N−1}}],    (2.78)

which provides a conformal mapping of the interior of an arbitrary N-sided polygon on the plane w = u + iv, onto the upper half-plane (y > 0) of the plane z = x + iy. In Eq. (78), x_j (j = 1, 2, …, N – 1) are the points of the y = 0 axis (i.e., of the boundary of the mapped region on the plane z) to which the corresponding polygon vertices are mapped, while k_j are the exterior angles at the polygon vertices, measured in the units of π, with –1 ≤ k_j ≤ +1 – see Fig. 10.31 Of the points x_j, two may be selected arbitrarily (because their effects may be compensated by the multiplicative constant in Eq. (78), and the additive constant of integration), while all the others have to be adjusted to provide the correct mapping.

Fig. 2.10. The Schwarz-Christoffel mapping of a polygon's interior onto the upper half-plane.

30 See, e.g., the textbook by P. Grivet, Electron Optics, 2nd ed., Pergamon, 1972.
31 The integral (78) includes only (N – 1) rather than N poles because a polygon's shape is fully determined by (N – 1) positions w_j of its vertices and (N – 1) angles k_j. In particular, since the algebraic sum of all external angles of a polygon equals 2π, the last angle parameter k_j = k_N is uniquely determined by the set of the previous ones.


In the general case, the complex integral (78) may be hard to tackle. However, in some important cases, in particular those with right angles (k_j = ½) and/or with some points w_j at infinity, the integrals may be readily worked out, giving explicit analytical expressions for the mapping functions w(z). For example, let us consider a semi-infinite strip defined by the restrictions –1 ≤ u ≤ +1 and 0 ≤ v, on the w-plane – see the left panel of Fig. 11.

Fig. 2.11. A semi-infinite strip mapped onto the upper half-plane.

The strip may be considered as a triangle, with one vertex at the infinitely distant vertical point w₃ = 0 + i∞. Let us map the polygon onto the upper half of the plane z, shown on the right panel of Fig. 11, with the vertex w₁ = –1 + i0 mapped onto the point z₁ = –1 + i0, and the vertex w₂ = +1 + i0 mapped onto the point z₂ = +1 + i0. Since the external angles at both these vertices are equal to +π/2, and hence k₁ = k₂ = +½, Eq. (78) is reduced to

w(z) = const × ∫ dz/[(z + 1)^{1/2}(z − 1)^{1/2}] = const × ∫ dz/(z² − 1)^{1/2} = const × i ∫ dz/(1 − z²)^{1/2}.    (2.79)

This complex integral may be worked out, just as for real z, with the substitution z = sin ζ, giving

w(z) = const' × ∫^{sin⁻¹ z} dζ = c₁ sin⁻¹ z + c₂.    (2.80)

Determining the constants c₁,₂ from the required mapping, i.e. from the conditions w(−1 + i0) = −1 + i0 and w(+1 + i0) = +1 + i0 (see the arrows in Fig. 11), we finally get32

w(z) = (2/π) sin⁻¹ z,   i.e. z = sin(πw/2).    (2.81a)
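The mapping (81a) can be exercised numerically; the following sketch (an added illustration) samples random points of the upper half-plane and checks that w = (2/π) sin⁻¹z lands inside the strip of Fig. 11 and is inverted by z = sin(πw/2). (The principal branch of the complex arcsine, as provided by Python's `cmath.asin`, is assumed here.)

```python
# Numerical sanity check of the conformal mapping (81a): the upper half-plane
# of z goes to the strip -1 <= u <= +1, 0 <= v, and the map is invertible.
import cmath, math, random

random.seed(3)
for _ in range(100):
    z = complex(random.uniform(-3, 3), random.uniform(1e-3, 3))
    w = (2 / math.pi) * cmath.asin(z)        # principal branch of arcsin
    assert -1 <= w.real <= 1 and w.imag >= 0  # inside the strip of Fig. 11
    assert abs(cmath.sin(math.pi * w / 2) - z) < 1e-9  # inverse map, Eq. (81a)
print("w = (2/pi) asin(z) maps the upper half-plane onto the strip")
```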

Using the well-known expression for the sine of a complex argument,33 we may rewrite this elegant result in either of the following two forms for the real and imaginary components of z and w:

u = (2/π) sin⁻¹ { 2x / ( [(x + 1)² + y²]^{1/2} + [(x − 1)² + y²]^{1/2} ) },
v = (2/π) cosh⁻¹ { ( [(x + 1)² + y²]^{1/2} + [(x − 1)² + y²]^{1/2} ) / 2 },

32 Note that this function differs only by a linear transformation of variables from the function z = c cosh w, which is the canonical form of the definition of the so-called elliptic (not ellipsoidal!) orthogonal coordinates.
33 See, e.g., MA Eq. (3.5).


x  sin

u 2

cosh

v 2

,

y  cos

u 2

sinh

v 2

.

(2.81b)

It is amazing how perfectly the last formula manages to keep y ≥ 0 at the different borders of our w-region (Fig. 11): at its side borders (u = ±1, 0 ≤ v < ∞), this is performed by the first multiplier, while at the bottom border (–1 ≤ u ≤ +1, v = 0), the equality is enforced by the second multiplier.

This mapping may be used to solve several electrostatics problems with the geometry shown in Fig. 11a; probably the most surprising of them is the following one. A straight gap of width 2t is cut in a very thin conducting plane, and voltage V is applied between the resulting half-planes – see the bold straight lines in Fig. 12.

Fig. 2.12. The equipotential surfaces of the electric field between two thin conducting semi-planes (or rather their cross-sections by the plane z = const).

Selecting a Cartesian coordinate system with the z-axis directed along the cut, the y-axis normal to the plane, and the origin in the middle of the cut (Fig. 12), we can write the boundary conditions of this Laplace problem as

φ = { +V/2, for x ≥ +t, y = 0;  −V/2, for x ≤ −t, y = 0. }    (2.82)

(Due to the problem's symmetry, we may expect that in the middle of the gap, i.e. at –t < x < +t and y = 0, the electric field is parallel to the plane and hence ∂φ/∂y = 0.) The comparison of Figs. 11 and 12 shows that if we normalize our coordinates {x, y} to t, Eqs. (81) provide the conformal mapping of our system on the plane z to a plane capacitor on the plane w, with the voltage V between two conducting planes located at u = ±1. Since we already know that in that case φ = (V/2)u, we may immediately use the first of Eqs. (81b) to write the final solution of the problem:34

φ = (V/2) u = (V/π) sin⁻¹ { 2x / ( [(x + t)² + y²]^{1/2} + [(x − t)² + y²]^{1/2} ) }.    (2.83)

The thin lines in Fig. 12 show the corresponding equipotential surfaces;35 it is evident that the electric field concentrates at the gap edges, just as it did at the edge of the thin disk (Fig. 8). Let me

34 This result may be also obtained by the Green's function method, to be discussed in Sec. 10 below.
35 Another graphical representation of the electric field distribution, by field lines, is less convenient. (It is more useful for the magnetic field, which may be represented by a scalar potential only in particular cases, so there is no surprise that the field lines were introduced only by Michael Faraday in the 1830s.) As a reminder, the field…


leave the remaining calculation of the surface charge distribution and the mutual capacitance between the half-planes (per unit length of the system in the z-direction) for the reader's exercise.

2.5. Variable separation – Cartesian coordinates

The general approach of the methods discussed in the last two sections was to satisfy the Laplace equation with a function of a single variable that also satisfies the boundary conditions. Unfortunately, in many cases this cannot be done – at least, using reasonably simple functions. In such cases, a very powerful method called the variable separation36 may work, typically producing "semi-analytical" results in the form of series (infinite sums) of either elementary or well-studied special functions. Its main idea is to look for the solution of the boundary problem (35) as the sum of partial solutions,

φ = Σ_k c_k φ_k,    (2.84)

where each function φ_k satisfies the Laplace equation, and then select the set of coefficients c_k to satisfy the boundary conditions. More specifically, in the variable separation method, the partial solutions φ_k are looked for in the form of a product of functions, each depending on just one spatial coordinate.

Let us discuss this approach on the classical example of a rectangular box with conducting walls (Fig. 13), with the same potential (that I will take for zero) at all its sidewalls and the lower lid, but a different potential V at the top lid (z = c). Moreover, to demonstrate the power of the variable separation method, let us carry out all the calculations for a more general case when the top lid's potential is an arbitrary 2D function V(x, y).37

Fig. 2.13. The standard playground for the variable separation discussion: a rectangular box with five conducting, grounded walls and a fixed potential distribution V(x, y) on the top lid.

For this geometry, it is natural to use the Cartesian coordinates {x, y, z}, representing each of the partial solutions in Eq. (84) as the following product

35 (continued) …line is the curve to which the field vectors are tangential at each point. Hence the electric field lines are always normal to the equipotential surfaces, so it is always straightforward to sketch them, if desirable, from the equipotential surface pattern – like the one shown in Fig. 12.
36 This method was already discussed in CM Sec. 6.5 and then used also in Secs. 6.6 and 8.4 of that course. However, it is so important that I need to repeat its discussion in this part of my series, for the benefit of the readers who have skipped the Classical Mechanics course for whatever reason.
37 Such voltage distributions may be implemented in practice, for example, using the so-called mosaic electrodes consisting of many electrically-insulated and individually-biased panels.

Chapter 2

Page 23 of 68

Essential Graduate Physics

EM: Classical Electrodynamics
$$\phi_k = X(x)\, Y(y)\, Z(z). \tag{2.85}$$

Plugging it into the Laplace equation expressed in the Cartesian coordinates,

$$\frac{\partial^2 \phi_k}{\partial x^2} + \frac{\partial^2 \phi_k}{\partial y^2} + \frac{\partial^2 \phi_k}{\partial z^2} = 0, \tag{2.86}$$

and dividing the result by XYZ, we get

$$\frac{1}{X}\frac{d^2 X}{dx^2} + \frac{1}{Y}\frac{d^2 Y}{dy^2} + \frac{1}{Z}\frac{d^2 Z}{dz^2} = 0. \tag{2.87}$$

Here comes the punch line of the variable separation method: since the first term of this sum may depend only on x, the second one only on y, etc., Eq. (87) may be satisfied everywhere in the volume only if each of these terms equals a constant. In a minute we will see that for our current problem (Fig. 13), these constant x- and y-terms have to be negative; hence let us denote these variable separation constants as (−α²) and (−β²), respectively. Now Eq. (87) shows that the constant z-term has to be positive; denoting it as γ², we get the following relation:
$$\alpha^2 + \beta^2 = \gamma^2. \tag{2.88}$$

Now the variables are separated in the sense that for the functions X(x), Y(y), and Z(z) we have got separate ordinary differential equations,

$$\frac{d^2 X}{dx^2} + \alpha^2 X = 0, \qquad \frac{d^2 Y}{dy^2} + \beta^2 Y = 0, \qquad \frac{d^2 Z}{dz^2} - \gamma^2 Z = 0, \tag{2.89}$$

which are related only by Eq. (88) for their constant parameters. Let us start from the equation for X(x). Its general solution is the sum of the functions sin αx and cos αx, multiplied by arbitrary coefficients. Let us select these coefficients to satisfy our boundary conditions. First, since φk ∝ X should vanish at the back vertical wall of the box (i.e., with the coordinate origin choice shown in Fig. 13, at x = 0 for any y and z), the coefficient at cos αx should be zero. The remaining coefficient (at sin αx) may be included in the general factor ck in Eq. (84), so we may take X in the form

$$X = \sin \alpha x. \tag{2.90}$$

This solution satisfies the boundary condition at the opposite wall (x = a) only if the product αa is a multiple of π, i.e. if α is equal to any of the following numbers (commonly called eigenvalues):38
$$\alpha_n = \frac{\pi}{a}\, n, \qquad \text{with } n = 1, 2, \ldots \tag{2.91}$$

(Terms with negative values of n would not be linearly independent from those with positive n, and may be dropped from the sum (84). The value n = 0 is formally possible, but would give X = 0, i.e. φk = 0, at any x, i.e. no contribution to the sum (84), so it may be dropped as well.) Now we see that we indeed had to take α real, i.e. α² positive – otherwise, instead of the oscillating function (90) we would have a sum of two exponential functions, which cannot equal zero at two independent points of the x-axis.

38 Note that according to Eqs. (91)-(92), as the spatial dimensions a and b of the system are increased, the distances between the adjacent eigenvalues tend to zero. This fact implies that for spatially-infinite systems, the eigenvalue spectra are continuous, so the sums of the type (84) become integrals; however, the general approach remains the same. A few problems of this type are provided in Sec. 9 for the reader's exercise.

Since the equation (89) for the function Y(y) is similar to that for X(x), and the boundary conditions on the walls perpendicular to the y-axis (y = 0 and y = b) are similar to those for the x-walls, absolutely similar reasoning gives

$$Y = \sin \beta y, \qquad \beta_m = \frac{\pi}{b}\, m, \qquad \text{with } m = 1, 2, \ldots, \tag{2.92}$$

where the integer m may be selected independently of n. Now we see that according to Eq. (88), the separation constant γ depends on two integers n and m, so the relationship may be rewritten as
$$\gamma_{nm} = \left(\alpha_n^2 + \beta_m^2\right)^{1/2} = \pi \left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right]^{1/2}. \tag{2.93}$$

The corresponding solution of the differential equation for Z may be represented as a linear combination of two exponents exp{±γnm z}, or alternatively of two hyperbolic functions, sinh γnm z and cosh γnm z, with arbitrary coefficients. At our choice of coordinate origin, the latter option is preferable, because cosh γnm z cannot satisfy the zero boundary condition at the bottom lid of the box (z = 0). Hence we may take Z in the form

$$Z = \sinh \gamma_{nm} z, \tag{2.94}$$

which automatically satisfies that condition. Now it is the right time to merge Eqs. (84)-(85) and (90)-(94), replacing the temporary index k with the full set of possible eigenvalues, in our current case of two integer indices n and m:

Variable separation in Cartesian coordinates (example):
$$\phi(x, y, z) = \sum_{n,m=1}^{\infty} c_{nm} \sin\frac{\pi n x}{a} \sin\frac{\pi m y}{b} \sinh \gamma_{nm} z, \tag{2.95}$$

where nm is given by Eq. (93). This solution satisfies not only the Laplace equation but also the boundary conditions on all walls of the box, besides the top lid, for arbitrary coefficients cnm. The only job left is to choose these coefficients from the top-lid requirement:
$$\phi(x, y, c) = \sum_{n,m=1}^{\infty} c_{nm} \sin\frac{\pi n x}{a} \sin\frac{\pi m y}{b} \sinh \gamma_{nm} c = V(x, y). \tag{2.96}$$

It may look bad to have just one equation for the infinite set of coefficients cnm. However, the decisive help comes from the fact that the functions of x and y that participate in Eq. (96) form full, orthogonal sets of 1D functions. The latter term means that the integrals of the products of the functions with different integer indices over the region of interest equal zero. Indeed, a direct integration gives
$$\int_0^a \sin\frac{\pi n x}{a} \sin\frac{\pi n' x}{a}\, dx = \frac{a}{2}\, \delta_{nn'}, \tag{2.97}$$

where δnn' is the Kronecker symbol, and similarly for y (with the evident replacements a → b and n → m). Hence, a fruitful way to proceed is to multiply both sides of Eq. (96) by the product of the basis functions, with arbitrary indices n' and m', and integrate the result over x and y:


$$\sum_{n,m=1}^{\infty} c_{nm} \sinh \gamma_{nm} c \int_0^a \sin\frac{\pi n x}{a} \sin\frac{\pi n' x}{a}\, dx \int_0^b \sin\frac{\pi m y}{b} \sin\frac{\pi m' y}{b}\, dy = \int_0^a dx \int_0^b dy\, V(x, y) \sin\frac{\pi n' x}{a} \sin\frac{\pi m' y}{b}. \tag{2.98}$$

Due to Eq. (97), all terms on the left-hand side of the last equation, besides those with n = n’ and m = m’, vanish, and (replacing n’ with n, and m’ with m, for notation brevity) we finally get

nx my 4 sin . dx  dy V ( x, y ) sin  ab sinh  nm c 0 0 a b a

c nm 

b

(2.99)

The relations (93), (95), and (99) give the complete solution of the posed boundary problem; we can see both good and bad news here. The first bit of bad news is that in the general case we still need to work out the integrals (99) – formally, an infinite number of them. In some cases, it is possible to do this analytically, in one shot. For example, if the top lid in our problem is a single conductor, i.e. has a constant potential V0, we may take V(x, y) = V0 = const, and both 1D integrations are elementary; for example

$$\int_0^a \sin\frac{\pi n x}{a}\, dx = \frac{a}{\pi n}\left(1 - \cos \pi n\right) = \begin{cases} \dfrac{2a}{\pi n}, & \text{for } n \text{ odd}, \\ 0, & \text{for } n \text{ even}, \end{cases} \tag{2.100}$$

and similarly for the integral over y, so

$$c_{nm} = \frac{16 V_0}{\pi^2 n m \sinh \gamma_{nm} c} \times \begin{cases} 1, & \text{if both } n \text{ and } m \text{ are odd}, \\ 0, & \text{otherwise}. \end{cases} \tag{2.101}$$
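Since the solution is now fully explicit, it is easy to evaluate numerically. The following short sketch (in Python; the function and parameter names are purely illustrative) sums the series (95) with the coefficients (101) for a cubic box. A standard symmetry argument gives a convenient check: superposing six single-lid solutions of this kind reproduces a box with all six walls at V0, i.e. a uniform potential, so each lid must contribute exactly V0/6 at the center.

```python
from math import pi, sin, sinh, sqrt

def phi_box(x, y, z, a=1.0, b=1.0, c=1.0, V0=1.0, N=21):
    """Truncated sum of the series (95) with the constant-lid coefficients (101)."""
    total = 0.0
    for n in range(1, N + 1, 2):        # only odd n contribute, by Eq. (101)
        for m in range(1, N + 1, 2):    # only odd m contribute
            g = pi * sqrt((n / a) ** 2 + (m / b) ** 2)      # gamma_nm, Eq. (93)
            c_nm = 16 * V0 / (pi ** 2 * n * m * sinh(g * c))
            total += c_nm * sin(pi * n * x / a) * sin(pi * m * y / b) * sinh(g * z)
    return total

# At the center of the cube, each lid contributes exactly V0/6 by symmetry:
print(phi_box(0.5, 0.5, 0.5))   # ≈ 1/6
```

The convergence is indeed very fast: thanks to the sinh ratio, the terms decay exponentially with n and m, so even a modest truncation reproduces the exact center value to many digits.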

The second bad news is that even on such a happy occasion, we still have to sum up the series (95), so our result may only be called analytical with some reservations because in most cases we need to perform numerical summation to get the final numbers or plots. Now the first good news. Computers are very efficient for both operations (95) and (99), i.e. for the summation and integration. (As was discussed in Sec. 1.2, random errors are averaged out at these operations.) As an example, Fig. 14 shows the plots of the electrostatic potential in a cubic box (a = b = c), with an equipotential top lid (V = V0 = const), obtained by a numerical summation of the series (95), using the analytical expression (101). The remarkable feature of this calculation is a very fast convergence of the series; for the middle cross-section of the cubic box (z/c = 0.5), already the first term (with n = m = 1) gives an accuracy of about 6%, while the sum of four leading terms (with n, m = 1, 3) reduces the error to just 0.2%. (For a longer box, c > a, b, the convergence is even faster – see the discussion below.) Only very close to the corners between the top lid and the sidewalls, where the potential changes rapidly, several more terms are necessary to get a reasonable accuracy. The related piece of good news is that our “semi-analytical” result allows its ultimate limits to be explored analytically. For example, Eq. (93) shows that for a very flat box (with c R) from the cylinder, its effect on the electric field should vanish, and the potential should approach that of the uniform external field E = E0nx:
$$\phi \to -E_0 x = -E_0 \rho \cos\varphi, \qquad \text{for } \rho \to \infty. \tag{2.113}$$

This is only possible if, in Eq. (112), b0 = 0, and also all coefficients an with n ≠ 1 vanish, while the product a1c1 should be equal to (−E0). Thus the solution is reduced to the following form:
$$\phi(\rho, \varphi) = -E_0 \rho \cos\varphi + \sum_{n=1}^{\infty} \frac{B_n}{\rho^n} \cos n\varphi, \tag{2.114}$$

in which the coefficients Bn ≡ bncn should be found from the boundary condition at ρ = R:
$$\phi(R, \varphi) = 0. \tag{2.115}$$

This requirement yields the following equation,

$$\left(-E_0 R + \frac{B_1}{R}\right) \cos\varphi + \sum_{n=2}^{\infty} \frac{B_n}{R^n} \cos n\varphi = 0, \tag{2.116}$$

which should be satisfied for all φ. This equality, read backward, may be considered as an expansion of a function identically equal to zero into a series over mutually orthogonal functions cos nφ. It is evidently valid if all coefficients of the expansion, including (−E0R + B1/R), and all Bn for n ≥ 2, are equal to zero. Moreover, mathematics tells us that such expansions are unique, so this is the only possible solution of Eq. (116). So, B1 = E0R², and our final answer (valid only outside of the cylinder, i.e. for ρ ≥ R) is

$$\phi(\rho, \varphi) = -E_0 \left(\rho - \frac{R^2}{\rho}\right) \cos\varphi = -E_0 \left(1 - \frac{R^2}{x^2 + y^2}\right) x. \tag{2.117}$$

This result, which may be graphically represented by the equipotential surfaces shown in Fig. 15b, shows a smooth transition between the uniform field (113) far from the cylinder and the equipotential surface of the cylinder (with φ = 0). Such smoothening is very typical for Laplace equation solutions. Indeed, as we know from Chapter 1, these solutions correspond to the lowest integral of the potential gradient's square, i.e. to the lowest potential energy (1.65) possible at the given boundary conditions. To complete the problem, let us use Eq. (3) to calculate the distribution of the surface charge density over the cylinder's cross-section:
$$\sigma = \varepsilon_0 E_n \big|_{\text{surface}} = -\varepsilon_0 \left.\frac{\partial \phi}{\partial \rho}\right|_{\rho=R} = \varepsilon_0 E_0 \cos\varphi \left.\frac{\partial}{\partial \rho}\left(\rho - \frac{R^2}{\rho}\right)\right|_{\rho=R} = 2 \varepsilon_0 E_0 \cos\varphi. \tag{2.118}$$

This very simple formula shows that with the field direction shown in Fig. 15a (E0 > 0), the surface charge is positive on the right-hand side of the cylinder and negative on its left-hand side, thus creating a field directed from the right to the left, which exactly compensates the external field inside the conductor, where the net field is zero. (Please take one more look at the schematic Fig. 1a.) Note also that the net electric charge of the cylinder is zero, in correspondence with the problem symmetry.
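Eqs. (117)-(118) are easy to verify numerically – e.g., by checking that the cylinder's surface is equipotential and that the normal field there equals 2E0 cos φ. A minimal Python sketch (the names and the unit values of E0 and R are illustrative assumptions, not from the text):

```python
from math import cos, pi

E0, R = 1.0, 1.0     # external field strength and cylinder radius (illustrative units)

def phi(rho, ang):
    """Eq. (117): the potential outside the grounded cylinder."""
    return -E0 * (rho - R**2 / rho) * cos(ang)

h = 1e-6   # finite-difference step
for ang in (0.0, pi / 3, pi / 2, 2.0):
    assert phi(R, ang) == 0.0                   # the surface is equipotential, Eq. (115)
    # Normal field at the surface, E_n = -d(phi)/d(rho), by central difference:
    En = -(phi(R + h, ang) - phi(R - h, ang)) / (2 * h)
    assert abs(En - 2 * E0 * cos(ang)) < 1e-6   # sigma/eps0 = 2 E0 cos(ang), Eq. (118)
print("surface field checks passed")
```

The factor of 2 recovered at ang = 0 is exactly the field-concentration factor discussed in the next paragraph.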


Another useful by-product of the calculation (118) is that the surface electric field equals 2E0 cos φ, and hence its largest magnitude is twice the field far from the cylinder. Such electric field concentration is very typical for all convex conducting surfaces. The last observation gets additional confirmation from the second possible topology, when Eq. (110) is used to describe problems with no angular periodicity. A typical example of this situation is a cylindrical conductor with a cross-section that features a corner limited by two straight-line segments (Fig. 16). Indeed, we may argue that at ρ < R (where R is the radial extension of the planar sides of the corner, see Fig. 16), the Laplace equation may be satisfied by a sum of partial solutions R(ρ)F(φ), if the angular components of the products satisfy the boundary conditions on the corner sides. Taking (just for the simplicity of notation) the conductor's potential to be zero, and one of the corner's sides as the x-axis (φ = 0), these boundary conditions are

$$F(0) = F(\beta) = 0, \tag{2.119}$$

where the angle β may be anywhere between 0 and 2π – see Fig. 16.

Fig. 2.16. The cross-sections of cylindrical conductors with (a) a corner and (b) a wedge.

Comparing this condition with Eq. (110), we see that it requires s0 and all cν to vanish, and ν to take one of the values of the following discrete spectrum:
$$\nu_m \beta = \pi m, \qquad \text{with } m = 1, 2, \ldots. \tag{2.120}$$

Hence the full solution of the Laplace equation for this geometry takes the form

$$\phi = \sum_{m=1}^{\infty} a_m \rho^{\pi m/\beta} \sin\frac{\pi m \varphi}{\beta}, \qquad \text{for } \rho \le R,\ 0 \le \varphi \le \beta, \tag{2.121}$$

where the constants sν have been incorporated into am. The set of coefficients am cannot be universally determined, because it depends on the exact shape of the conductor outside the corner, and the externally applied electric field. However, whatever the set is, in the limit ρ → 0, the solution (121) is almost41 always dominated by the term with the lowest m = 1:
$$\phi \approx a_1 \rho^{\pi/\beta} \sin\frac{\pi \varphi}{\beta}, \tag{2.122}$$

because the higher terms go to zero faster. This potential distribution corresponds to the surface charge density

41 Exceptions are possible only for highly symmetric configurations when the external field is specially crafted to make a1 = 0. In this case, the solution at ρ → 0 is dominated by the first nonzero term of the series (121).

$$\sigma = \varepsilon_0 E_n \big|_{\text{surface}} = -\varepsilon_0 \left.\frac{1}{\rho}\frac{\partial \phi}{\partial \varphi}\right|_{\varphi=0} = -\frac{\pi \varepsilon_0 a_1}{\beta}\, \rho^{(\pi/\beta) - 1} = \text{const} \times \rho^{(\pi/\beta) - 1}. \tag{2.123}$$

(It is similar, with the opposite sign, on the opposite face of the angle.) The result (123) shows that if we are dealing with a concave corner (β < π, see Fig. 16a), the charge density (and the surface electric field) tends to zero at the corner. In the opposite case, at a "convex corner" with β > π (actually, a wedge – see Fig. 16b), both the charge and the field's strength concentrate, formally diverging at ρ → 0. (So, do not sit on a roof's ridge during a thunderstorm; rather hide in a ditch!) We have already seen qualitatively similar effects for the thin round disk and the split plane.

2.7. Variable separation – cylindrical coordinates

Now, let us discuss how to generalize the approach discussed in the previous section to problems whose geometry is still axially-symmetric, but where the electrostatic potential depends not only on the radial and angular coordinates but also on the axial coordinate: ∂φ/∂z ≠ 0. The classical example of such a problem is shown in Fig. 17. Here the sidewall and the bottom lid of a hollow round cylinder are kept at a fixed potential (say, φ = 0), but the potential V fixed at the top lid is different. Evidently, this problem is qualitatively similar to the rectangular box problem solved above (Fig. 13), and we will also try to solve it first for the case of an arbitrary voltage distribution over the top lid: V = V(ρ, φ).
Fig. 2.17. A cylindrical volume with conducting walls.

Following the main idea of the variable separation method, let us require that each partial function φk in Eq. (84) satisfies the Laplace equation, now in the full cylindrical coordinates {ρ, φ, z}:42

$$\frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho \frac{\partial \phi_k}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2 \phi_k}{\partial \varphi^2} + \frac{\partial^2 \phi_k}{\partial z^2} = 0. \tag{2.124}$$

Plugging φk in the form of the product R(ρ)F(φ)Z(z) into Eq. (124) and then dividing all resulting terms by this product, we get

$$\frac{1}{\rho R}\frac{d}{d\rho}\left(\rho \frac{dR}{d\rho}\right) + \frac{1}{\rho^2 F}\frac{d^2 F}{d\varphi^2} + \frac{1}{Z}\frac{d^2 Z}{dz^2} = 0. \tag{2.125}$$

Since the first two terms of Eq. (125) can only depend on the polar variables ρ and φ, while the third term, only on z, at least that term should equal a constant. Denoting it (just like we did in the rectangular box problem) by γ², we get the following set of two equations:

42 See, e.g., MA Eq. (10.3).


$$\frac{d^2 Z}{dz^2} = \gamma^2 Z, \tag{2.126}$$

$$\frac{1}{\rho R}\frac{d}{d\rho}\left(\rho \frac{dR}{d\rho}\right) + \gamma^2 + \frac{1}{\rho^2 F}\frac{d^2 F}{d\varphi^2} = 0. \tag{2.127}$$

Now, multiplying all the terms of Eq. (127) by ρ², we see that the last term of the result, (d²F/dφ²)/F, may depend only on φ, and thus should equal a constant. Calling that constant (−ν²) (just as in Sec. 6 above), we separate Eq. (127) into an angular equation,

$$\frac{d^2 F}{d\varphi^2} + \nu^2 F = 0, \tag{2.128}$$

and a radial equation (the Bessel equation):

$$\frac{d^2 R}{d\rho^2} + \frac{1}{\rho}\frac{dR}{d\rho} + \left(\gamma^2 - \frac{\nu^2}{\rho^2}\right)R = 0. \tag{2.129}$$

We see that the ordinary differential equations for the functions Z(z) and F(φ) (and hence their solutions) are identical to those discussed earlier in this chapter. However, Eq. (129) for the radial function R(ρ) (called the Bessel equation) is more complex than in the 2D case, and depends on two independent constant parameters, γ and ν. The latter challenge may be readily overcome if we notice that any change of γ may be reduced to the corresponding re-scaling of the radial coordinate ρ. Indeed, introducing a dimensionless variable ξ ≡ γρ,43 Eq. (129) may be reduced to an equation with just one parameter, ν:

$$\frac{d^2 R}{d\xi^2} + \frac{1}{\xi}\frac{dR}{d\xi} + \left(1 - \frac{\nu^2}{\xi^2}\right)R = 0. \tag{2.130}$$

Moreover, we already know that for angle-periodic problems, the spectrum of eigenvalues of Eq. (128) is discrete: ν = n, with integer n. Unfortunately, even in this case, Eq. (130), which is the canonical form of the Bessel equation, cannot be satisfied by a single "elementary" function. Its solutions that we need for our current problem are called the Bessel functions of the first kind of order ν, commonly denoted as Jν(ξ). Let me review in brief those properties of these functions that are most relevant for our problem – and many other problems discussed in this series.44

First of all, the Bessel function of a negative integer order is very simply related to that with the positive order:

$$J_{-n}(\xi) = (-1)^n J_n(\xi), \tag{2.131}$$

enabling us to limit our discussion to the functions with n ≥ 0. Figure 18 shows four of these functions with the lowest positive n.

43 Note that this normalization is specific for each value of the variable separation parameter γ. Also, please notice that the normalization is meaningless for γ = 0, i.e. for the case Z(z) = const. However, if we need the partial solutions for this particular value of γ, we can always use Eqs. (108)-(109).

44 For a more complete discussion of these functions, see the literature listed in MA Sec. 16, for example, Chapter 6 (written by F. Olver) in the famous collection compiled and edited by Abramowitz and Stegun, available online.


Fig. 2.18. Several Bessel functions Jn(ξ) of integer order. The dashed lines show the envelope of the asymptotes (135).

As its argument is increased, each function is initially close to a power law: J0(ξ) ≈ 1, J1(ξ) ≈ ξ/2, J2(ξ) ≈ ξ²/8, etc. This behavior follows from the Taylor series

$$J_n(\xi) = \left(\frac{\xi}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\left(\frac{\xi}{2}\right)^{2k}, \tag{2.132}$$

which is formally valid for any ξ, and may even serve as an alternative definition of the functions Jn(ξ). However, the series converges fast only at small arguments, while for ξ >> 1 the asymptotic formulas may be used for reasonable estimates starting already from n = 1. For example, max[J1(ξ)] is close to 0.58 and is reached at ξ ≈ 2.4, just about 30% away from the values given by the asymptotic formulas.
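The series (132) is easy to put to work numerically. For instance, the following short Python sketch (all names are illustrative) computes Jn from the truncated series and then recovers, by simple bisection, the first zero of J0 – a value that will be tabulated below (Table 1):

```python
from math import factorial

def J(n, x, K=40):
    """Bessel function of the first kind, from the truncated Taylor series (132)."""
    return sum((-1)**k / (factorial(k) * factorial(n + k)) * (x / 2)**(2 * k + n)
               for k in range(K))

# Bracket the first sign change of J0 and bisect it down to machine precision:
lo, hi = 2.0, 3.0
assert J(0, lo) > 0 > J(0, hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J(0, lo) * J(0, mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)   # ≈ 2.40483, the first zero of J0
```

For moderate arguments the alternating series converges in a handful of terms; for large ξ the asymptotic formulas are preferable, exactly as noted above.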


Now we are ready for our case study (Fig. 17). Since the functions Z(z) have to satisfy not only Eq. (126) but also the bottom-lid boundary condition Z(0) = 0, they are proportional to sinh γz – cf. Eq. (94). Then Eq. (84) becomes

$$\phi = \sum_{n=0}^{\infty} J_n(\gamma\rho) \left(c_n \cos n\varphi + s_n \sin n\varphi\right) \sinh \gamma z. \tag{2.136}$$

Next, we need to satisfy the zero boundary condition at the cylinder's side wall (ρ = R). This may be ensured by taking

$$J_n(\gamma R) = 0. \tag{2.137}$$

Since each function Jn(x) has an infinite number of positive zeros (see Fig. 18 again), which may be numbered by an integer index m = 1, 2, …, Eq. (137) may be satisfied with an infinite number of discrete values of the parameter γ:

$$\gamma_{nm} = \frac{\xi_{nm}}{R}, \tag{2.138}$$

where ξnm is the m-th zero of the function Jn(x) – see the top numbers in the cells of Table 1. (Very soon we will see what we need the bottom numbers for.)

Table 2.1. Approximate values of a few first zeros, ξnm, of a few lowest-order Bessel functions Jn(ξ) (the top number in each cell), and the values of dJn(ξ)/dξ at these points (the bottom number).

          m=1        m=2        m=3        m=4        m=5        m=6
n=0    2.40482    5.52008    8.65372   11.79215   14.93091   18.07106
      -0.51914   +0.34026   -0.27145   +0.23245   -0.20654   +0.18773
n=1    3.83171    7.01559   10.17347   13.32369   16.47063   19.61586
      -0.40276   +0.30012   -0.24970   +0.21836   -0.19647   +0.18006
n=2    5.13562    8.41724   11.61984   14.79595   17.95982   21.11700
      -0.33967   +0.27138   -0.23244   +0.20654   -0.18773   +0.17326
n=3    6.38016    9.76102   13.01520   16.22347   19.40942   22.58273
      -0.29827   +0.24942   -0.21828   +0.19644   -0.18005   +0.16718
n=4    7.58834   11.06471   14.37254   17.61597   20.82693   24.01902
      -0.26836   +0.23188   -0.20636   +0.18766   -0.17323   +0.16168
n=5    8.77148   12.33860   15.70017   18.98013   22.21780   25.43034
      -0.24543   +0.21743   -0.19615   +0.17993   -0.16712   +0.15669
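The table's entries may be reproduced with a dozen lines of code, computing Jn from the series (132) and differentiating it numerically. A sketch (illustrative code; the tolerances reflect the five-decimal rounding of the tabulated values):

```python
from math import factorial

def J(n, x, K=40):
    """Bessel function of the first kind, from the truncated Taylor series (132)."""
    return sum((-1)**k / (factorial(k) * factorial(n + k)) * (x / 2)**(2 * k + n)
               for k in range(K))

h = 1e-5   # central-difference step
# A few (n, zero, derivative) triples from Table 1:
table = [(0, 2.40482, -0.51914), (1, 3.83171, -0.40276), (0, 5.52008, +0.34026)]
for n, zero, deriv in table:
    assert abs(J(n, zero)) < 1e-4                      # xi_nm is indeed a zero of Jn
    d = (J(n, zero + h) - J(n, zero - h)) / (2 * h)    # dJn/dxi at that zero
    assert abs(d - deriv) < 1e-4
print("table entries reproduced")
```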

Hence, Eq. (136) may be represented in a more explicit form:

Variable separation in cylindrical coordinates (example):

$$\phi(\rho, \varphi, z) = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} J_n\!\left(\xi_{nm} \frac{\rho}{R}\right) \left(c_{nm} \cos n\varphi + s_{nm} \sin n\varphi\right) \sinh\!\left(\xi_{nm} \frac{z}{R}\right). \tag{2.139}$$

46 Eq. (135) and Fig. 18 clearly show the close analogy between the Bessel functions and the usual trigonometric functions, sine and cosine. To emphasize this similarity, and help the reader to develop more gut feeling of the Bessel functions, let me mention one result of the elasticity theory: while the sinusoidal functions describe, in particular, transverse standing waves on a guitar string, the functions Jn(ξ) describe, in particular, transverse standing waves on an elastic round membrane (say, a round drum), with J0(ξ) describing their lowest (fundamental) mode – the only mode with a nonzero amplitude of the membrane center's oscillations.

Here the coefficients cnm and snm have to be selected to satisfy the only remaining boundary condition – that on the top lid:

$$\phi(\rho, \varphi, l) = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} J_n\!\left(\xi_{nm} \frac{\rho}{R}\right) \left(c_{nm} \cos n\varphi + s_{nm} \sin n\varphi\right) \sinh\!\left(\xi_{nm} \frac{l}{R}\right) = V(\rho, \varphi). \tag{2.140}$$

To use it, let us multiply both sides of Eq. (140) by the product Jn(ξnm' ρ/R) cos n'φ, integrate the result over the lid area, and use the following property of the Bessel functions:

$$\int_0^1 J_n(\xi_{nm} s)\, J_n(\xi_{nm'} s)\, s\, ds = \frac{1}{2}\left[J_{n+1}(\xi_{nm})\right]^2 \delta_{mm'}. \tag{2.141}$$
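Relation (141) may itself be confirmed numerically, using the series (132) for Jn and the first two zeros of J0 from Table 1. A sketch (the trapezoid quadrature and the step counts are illustrative choices):

```python
from math import factorial

def J(n, x, K=40):
    """Bessel function of the first kind, from the truncated Taylor series (132)."""
    return sum((-1)**k / (factorial(k) * factorial(n + k)) * (x / 2)**(2 * k + n)
               for k in range(K))

def inner(n, a, b, N=2000):
    """Trapezoid approximation of the weighted integral in Eq. (141)."""
    h = 1.0 / N
    total = 0.0
    for i in range(N + 1):
        s = i * h
        w = 0.5 if i in (0, N) else 1.0   # half-weight at the endpoints
        total += w * J(n, a * s) * J(n, b * s) * s
    return total * h

xi01, xi02 = 2.40482, 5.52008            # first two zeros of J0, from Table 1
print(inner(0, xi01, xi02))              # ≈ 0: different m are orthogonal
print(inner(0, xi01, xi01), 0.5 * J(1, xi01)**2)   # both ≈ 0.1348: the norm
```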

As a small but important detour, the last relation expresses a very specific ("2D") orthogonality of the Bessel functions with different indices m – do not confuse them with the function orders n, please!47 Since it relates two Bessel functions of the same order n, it is natural to ask why its right-hand side contains the function with a different order (n + 1). Some gut feeling of that may come from one more very important property of the Bessel functions, the so-called recurrence relations:48

$$J_{n-1}(\xi) + J_{n+1}(\xi) = \frac{2n J_n(\xi)}{\xi}, \tag{2.142a}$$

$$J_{n-1}(\xi) - J_{n+1}(\xi) = 2\,\frac{dJ_n(\xi)}{d\xi}, \tag{2.142b}$$

which in particular yield the following formula (convenient for working out some Bessel function integrals):

$$\frac{d}{d\xi}\left[\xi^n J_n(\xi)\right] = \xi^n J_{n-1}(\xi). \tag{2.143}$$
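Both the recurrence (142a) and the differentiation formula (143) are easy to spot-check numerically with the series (132). A brief sketch (the sample point and step are arbitrary choices):

```python
from math import factorial

def J(n, x, K=40):
    """Bessel function of the first kind, from the truncated Taylor series (132)."""
    return sum((-1)**k / (factorial(k) * factorial(n + k)) * (x / 2)**(2 * k + n)
               for k in range(K))

x, h = 1.7, 1e-6   # sample argument and finite-difference step
for n in range(1, 4):
    # Eq. (142a): J_{n-1} + J_{n+1} = (2n/x) J_n
    assert abs(J(n - 1, x) + J(n + 1, x) - 2 * n / x * J(n, x)) < 1e-12
    # Eq. (143): d/dx [x^n J_n(x)] = x^n J_{n-1}(x), via central difference
    lhs = ((x + h)**n * J(n, x + h) - (x - h)**n * J(n, x - h)) / (2 * h)
    assert abs(lhs - x**n * J(n - 1, x)) < 1e-6
print("recurrence relations check out")
```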





Let us apply the recurrence relations at the special points ξnm. At these points, Jn vanishes, and the system of two equations (142) may be readily solved to get, in particular,

$$J_{n+1}(\xi_{nm}) = -\left.\frac{dJ_n(\xi)}{d\xi}\right|_{\xi = \xi_{nm}}, \tag{2.144}$$

so the square bracket on the right-hand side of Eq. (141) is just (dJn/dξ)² at ξ = ξnm. Thus the values of the Bessel function derivatives at the zero points of the function, given by the lower numbers in the cells of Table 1, are as important for boundary problem solutions as the zeros themselves. Now returning to our problem: since the angular functions cos nφ are also orthogonal – both to each other,47

47 The Bessel functions of the same argument but different orders are also orthogonal, but differently:

$$\int_0^\infty J_n(\xi)\, J_{n'}(\xi)\, \frac{d\xi}{\xi} = \frac{1}{n + n'}\, \delta_{nn'}.$$

48 These relations provide, in particular, a convenient way for numerical computation of all Jn(ξ) – after J0(ξ) has been computed. (The latter task is usually performed using Eq. (132) for smaller ξ and an extension of Eq. (135) for larger ξ.) Note that most mathematical software packages, including all those listed in MA Sec. 16(iv), include ready subroutines for calculation of the functions Jn(ξ) and other special functions used in this lecture series. In this sense, the conditional line separating these "special functions" from "elementary functions" is rather fine.


$$\int_0^{2\pi} \cos n\varphi\, \cos n'\varphi\, d\varphi = \pi \delta_{nn'}, \tag{2.145}$$

and to all functions sin n'φ, the integration over the lid area kills all terms of both series in Eq. (140), besides just one term proportional to cn'm', and hence gives an explicit expression for that coefficient. The counterpart coefficients sn'm' may be found by repeating the same procedure with the replacement of cos n'φ by sin n'φ. This evaluation (left for the reader's exercise) completes the solution of our problem for an arbitrary lid potential V(ρ, φ).

Still, before leaving the Bessel functions behind (for a while only :-), let me address two important issues. First, we have seen that in our cylinder problem (Fig. 17), the set of functions Jn(ξnm ρ/R) with different indices m (which characterize the degree of the Bessel function's stretch along the axis ρ) play a role similar to that of the functions sin(πnx/a) in the rectangular box problem shown in Fig. 13. In this context, what is the analog of the functions cos(πnx/a) – which may be important for some boundary problems? In a more formal language, are there any functions of the same argument ξ ≡ ξnm ρ/R that would be linearly independent of the Bessel functions of the first kind, while satisfying the same Bessel equation (130)? The answer is yes. For the definition of such functions, we first need to generalize our prior formulas for Jn(ξ), and in particular Eq. (132), to the case of arbitrary, not necessarily integer order ν. Mathematics says that the generalization may be performed in the following way:

$$J_\nu(\xi) = \left(\frac{\xi}{2}\right)^\nu \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\Gamma(\nu + k + 1)}\left(\frac{\xi}{2}\right)^{2k}, \tag{2.146}$$

where Γ(s) is the so-called gamma function that may be defined as49

$$\Gamma(s) \equiv \int_0^\infty \xi^{s-1} e^{-\xi}\, d\xi. \tag{2.147}$$

The simplest, and the most important property of the gamma function is that for integer values of its argument, it gives the factorial of the number smaller by one:

$$\Gamma(n + 1) = n! \equiv 1 \cdot 2 \cdot \ldots \cdot n, \tag{2.148}$$
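This property is easy to check with Python's built-in gamma implementation (a quick sketch; math.gamma is the standard library's gamma function):

```python
from math import gamma, factorial, sqrt, pi

# Eq. (148): Gamma(n+1) equals n! at integer arguments
for n in range(1, 12):
    assert abs(gamma(n + 1) - factorial(n)) < 1e-9 * factorial(n)

# Between the integers, the gamma function interpolates smoothly;
# one classic non-integer value is Gamma(1/2) = sqrt(pi):
assert abs(gamma(0.5) - sqrt(pi)) < 1e-12
print("gamma-function checks passed")
```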

So the gamma function is essentially a generalization of the notion of the factorial to all real numbers. The Bessel functions defined by Eq. (146) satisfy, after the replacements n → ν and n! → Γ(n + 1), virtually all the relations discussed above, including the Bessel equation (130), the asymptotic formula (135), the orthogonality condition (141), and the recurrence relations (142). Moreover, it may be shown that for ν ≠ n, the functions Jν(ξ) and J−ν(ξ) are linearly independent of each other, and hence their linear combination may be used to represent the general solution of the Bessel equation. Unfortunately, as Eq. (131) shows, for ν = n this is not true, and a solution linearly independent of Jn(ξ) has to be formed differently. The most common way to do that is first to define, for all ν ≠ n, the following functions:

$$Y_\nu(\xi) \equiv \frac{J_\nu(\xi) \cos \nu\pi - J_{-\nu}(\xi)}{\sin \nu\pi}, \tag{2.149}$$

49 See, e.g., MA Eq. (6.7a). Note that Γ(s) → ∞ at s → 0, −1, −2, …


called the Bessel functions of the second kind, or more often the Weber functions,50 and then to follow the limit ν → n. At this, both the numerator and the denominator of the right-hand side of Eq. (149) tend to zero, but their ratio tends to a finite value called Yn(ξ). It may be shown that the resulting functions are still solutions of the Bessel equation and are linearly independent of Jn(ξ), though they are related just as those functions if the sign of n changes:

$$Y_{-n}(\xi) = (-1)^n Y_n(\xi). \tag{2.150}$$

Figure 19 shows a few Weber functions of the lowest integer orders.

Fig. 2.19. A few Bessel functions of the second kind (a.k.a. the Weber functions, a.k.a. the Neumann functions).

The plots show that their asymptotic behavior is very much similar to that of the functions Jn(ξ):

$$Y_n(\xi) \to \left(\frac{2}{\pi\xi}\right)^{1/2} \sin\left(\xi - \frac{\pi n}{2} - \frac{\pi}{4}\right), \qquad \text{for } \xi \to \infty, \tag{2.151}$$

but with the phase shift necessary to make these functions orthogonal to the Bessel functions of the first kind – cf. Eq. (135). However, for small values of the argument ξ, the Bessel functions of the second kind behave completely differently from those of the first kind:

$$Y_n(\xi) \to \begin{cases} \dfrac{2}{\pi}\left(\ln \dfrac{\xi}{2} + \gamma\right), & \text{for } n = 0, \\[1ex] -\dfrac{(n-1)!}{\pi}\left(\dfrac{2}{\xi}\right)^n, & \text{for } n \neq 0, \end{cases} \tag{2.152}$$

where γ is the so-called Euler constant, defined as follows:

50 Sometimes, they are called the Neumann functions and denoted as Nν(ξ).

$$\gamma \equiv \lim_{n \to \infty}\left[1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n} - \ln n\right] = 0.5772157\ldots \tag{2.153}$$
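The limit (153) converges slowly (the partial sums approach γ only as 1/2n), which a few of them illustrate directly. A minimal sketch:

```python
from math import log

# Eq. (153): the partial sums H_n - ln(n) tend to the Euler constant.
for n in (10, 1000, 100000):
    H = sum(1.0 / k for k in range(1, n + 1))
    print(n, H - log(n))   # tends to 0.5772157...
```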

As Eqs. (152) and Fig. 19 show, the functions Yn(ξ) diverge at ξ → 0 and hence cannot describe the behavior of any physical variable, in particular the electrostatic potential. One may wonder: if this is true, when do we need these functions in physics? Figure 20 shows an example of a simple boundary problem of electrostatics, whose solution by the variable separation method involves both functions Jn(ξ) and Yn(ξ).

Fig. 2.20. A simple boundary problem that cannot be solved using just one kind of Bessel functions.

Here two round, conducting coaxial cylindrical tubes are kept at the same (say, zero) potential, but at least one of the two lids has a different potential. The problem is almost completely similar to that discussed above (Fig. 17), but now we need to find the potential distribution in the free space between the tubes, i.e. for R1 < ρ < R2. If we use the same variable separation as in the simpler counterpart problem, we need the radial functions R(ρ) to satisfy two zero boundary conditions: at ρ = R1 and ρ = R2. With the Bessel functions of just the first kind, Jn(γρ), this is impossible to do, because the two boundaries would impose two independent (and generally incompatible) conditions, Jn(γR1) = 0 and Jn(γR2) = 0, on one "stretching parameter" γ. The existence of the Bessel functions of the second kind immediately saves the day, because if the radial function solution is represented as a linear combination,

$$R = c_J J_n(\gamma\rho) + c_Y Y_n(\gamma\rho), \tag{2.154}$$

two zero boundary conditions give two equations for γ and the ratio c ≡ cY/cJ.51 (Due to the oscillating character of both Bessel functions, these conditions would be typically satisfied by an infinite set of discrete pairs {γ, c}.) Note, however, that generally none of these pairs would correspond to zeros of either Jn or Yn, so having an analog of Table 1 for the latter function would not help much. Hence, even the simplest problems of this kind (like the one shown in Fig. 20) typically require the numerical solution of transcendental algebraic equations.

51 A pair of independent linear functions, used for the representation of the general solution of the Bessel equation, may be also chosen differently, using the so-called Hankel functions

$$H_n^{(1,2)}(\xi) \equiv J_n(\xi) \pm i Y_n(\xi).$$

For representing the general solution of Eq. (130), this alternative is completely similar, for example, to using the pair of complex functions exp{±ix} ≡ cos x ± i sin x instead of the pair of real functions {cos x, sin x} for the representation of the general solution of Eq. (89) for X(x).

Page 39 of 68

Essential Graduate Physics

EM: Classical Electrodynamics

To complete the discussion of variable separation in the cylindrical coordinates, one more issue to address is the so-called modified Bessel functions: of the first kind, Iν(ξ), and of the second kind, Kν(ξ). They are two linearly-independent solutions of the modified Bessel equation,

Modified Bessel equation:

d²R/dξ² + (1/ξ) dR/dξ − (1 + ν²/ξ²) R = 0,    (2.155)

which differs from Eq. (130) "only" by the sign of one of its terms. Figure 21 shows a simple problem that leads (among many others) to this equation: a round, thin conducting cylindrical pipe is sliced, perpendicular to its axis, into rings of equal height h, which are kept at equal but sign-alternating potentials φ = ±V/2.

Fig. 2.21. A typical boundary problem whose solution may be conveniently described in terms of the modified Bessel functions.

If the system is very long (formally, infinite) in the z-direction, we may use the variable separation method for the solution of this problem, but now we evidently need periodic (rather than exponential) solutions along the z-axis, i.e. linear combinations of sin kz and cos kz with various real values of the constant k. Separating the variables, we arrive at a differential equation similar to Eq. (129), but with the negative sign before the separation constant:

d²R/dρ² + (1/ρ) dR/dρ − (k² + n²/ρ²) R = 0.    (2.156)

The same radial coordinate's normalization, ξ ≡ kρ, immediately leads us to Eq. (155), and hence (for ν = n) to the modified Bessel functions In(ξ) and Kn(ξ). Figure 22 shows the behavior of such functions of a few lowest orders. One can see that at ξ → 0, it is virtually similar to that of the "usual" Bessel functions – cf. Eqs. (132) and (152), with Kn(ξ) multiplied (for purely historical reasons) by an additional coefficient, π/2:

In(ξ) → (1/n!)(ξ/2)ⁿ;    Kn(ξ) → ln(2/ξ) for n = 0,    Kn(ξ) → ((n − 1)!/2)(2/ξ)ⁿ for n ≠ 0.    (2.157)

However, the asymptotic behavior of the modified functions is very much different, with In(ξ) exponentially growing, and Kn(ξ) exponentially dropping, at ξ → ∞:

In(ξ) → (1/2πξ)^(1/2) e^ξ,    Kn(ξ) → (π/2ξ)^(1/2) e^(−ξ).    (2.158)

Fig. 2.22. The modified Bessel functions of the first kind (left panel) and the second kind (right panel).
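As a quick numerical illustration of these limits (a sketch added for this edition, not part of the original text; plain Python, with In computed from its power series):

```python
import math

# Modified Bessel function of the first kind via its power series,
# I_n(x) = sum over k of (x/2)**(2k+n) / (k! (k+n)!),
# used to check the small- and large-argument limits (157)-(158).
def bessel_i(n, x, terms=60):
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

# Small-argument limit, Eq. (157): I_n(xi) -> (1/n!) (xi/2)**n
x = 1e-3
for n in range(4):
    limit = (x / 2.0) ** n / math.factorial(n)
    assert abs(bessel_i(n, x) - limit) / limit < 1e-5

# Large-argument limit, Eq. (158): I_n(xi) -> e**xi / (2 pi xi)**0.5,
# approached only slowly (the leading correction is of the order of 1/8xi)
x = 30.0
asym = math.exp(x) / math.sqrt(2.0 * math.pi * x)
assert abs(bessel_i(0, x) - asym) / asym < 0.01
```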

This behavior is completely natural in the context of the problem shown in Fig. 21, in which the electrostatic potential may be represented as a sum of terms proportional to In(kρ) inside the thin pipe, and of terms proportional to Kn(kρ) outside it.

To complete our brief survey of the Bessel functions, let me note that all of those discussed so far may be considered as particular cases of the Bessel functions of a complex argument, say Jn(z) and Yn(z), or, alternatively, Hn(1,2)(z) ≡ Jn(z) ± iYn(z).52 At that, the "usual" Bessel functions Jn(ξ) and Yn(ξ) may be considered as the sets of values of these generalized functions on the real axis (z = ξ), while the modified functions as their particular case on the imaginary axis, i.e. at z = iξ, also with real ξ:

Iν(ξ) = i^(−ν) Jν(iξ),    Kν(ξ) = (π/2) i^(ν+1) Hν(1)(iξ).    (2.159)

Moreover, this generalization of the Bessel functions to the whole complex plane z enables the use of their values along other directions on that plane, for example under the angles −π/4 and +3π/4. As a result, one arrives at the so-called Kelvin functions:

berν ξ + i beiν ξ ≡ Jν(ξ e^(−iπ/4)),    kerν ξ + i keiν ξ ≡ i(π/2) Hν(1)(ξ e^(i3π/4)),    (2.160)

which are also useful for some important problems in physics and engineering. Unfortunately, I do not have time/space to discuss these problems in this course.53

52 These complex functions still obey the general relations (143) and (146), with ξ replaced with z.
53 In the QM part of this series, we will run into the so-called spherical Bessel functions jn(ξ) and yn(ξ), which may be expressed via the Bessel functions of semi-integer orders. Surprisingly enough, these functions turn out to be simpler than Jn(ξ) and Yn(ξ).


2.8. Variable separation – spherical coordinates

The spherical coordinates are very important in physics, because of the (at least approximate) spherical symmetry of many physical objects – from nuclei and atoms, to water drops in clouds, to planets and stars. Let us again require each component φk of Eq. (84) to satisfy the Laplace equation. Using the full expression for the Laplace operator in spherical coordinates,54 we get

(1/r²) ∂/∂r (r² ∂φk/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂φk/∂θ) + (1/(r² sin²θ)) ∂²φk/∂φ² = 0.    (2.161)

Let us look for a solution of this equation in the following variable-separated form:

φk = (R(r)/r) P(cos θ) F(φ).    (2.162)

Separating the variables one by one, starting from φ, just as this has been done in the cylindrical coordinates, we get the following equations for the partial functions participating in this solution:

d²R/dr² − [l(l + 1)/r²] R = 0,    (2.163)

(d/dξ)[(1 − ξ²) dP/dξ] + [l(l + 1) − ν²/(1 − ξ²)] P = 0,    (2.164)

d²F/dφ² + ν² F = 0,    (2.165)

where ξ ≡ cos θ is a new variable used in lieu of θ (so −1 ≤ ξ ≤ +1), while ν² and l(l + 1) are the separation constants. (The reason for the selection of the latter one in this form will be clear in a minute.) One can see that Eq. (165) is very simple, and absolutely similar to Eq. (107) we have got for the polar and cylindrical coordinates. Moreover, the equation for the radial functions is simpler than in the cylindrical coordinates. Indeed, let us look for its partial solution in the form cr^ν – just as we have done with Eq. (106). Plugging this solution into Eq. (163), we immediately get the following condition on the parameter ν:

ν(ν − 1) = l(l + 1).    (2.166)

This quadratic equation has two roots, ν = l + 1 and ν = −l, so the general solution of Eq. (163) is

R = al r^(l+1) + bl/r^l.    (2.167)
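This is easy to verify numerically (a sketch added for this edition, not part of the original text):

```python
# Check Eqs. (163), (166)-(167): both partial solutions, R = r**(l+1) and
# R = r**(-l), have to satisfy R'' = l(l+1) R / r**2.  The second derivative
# is approximated by a central finite difference.
def radial_residual(R, l, r=1.7, h=1e-4):
    d2R = (R(r + h) - 2.0 * R(r) + R(r - h)) / h**2
    return abs(d2R - l * (l + 1) * R(r) / r**2)

for l in range(5):
    assert radial_residual(lambda r: r ** (l + 1), l) < 1e-4
    assert radial_residual(lambda r: r ** (-l), l) < 1e-4
    # The two exponents are exactly the roots of the quadratic Eq. (166):
    for nu in (l + 1, -l):
        assert nu * (nu - 1) == l * (l + 1)
```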

However, the general solution of Eq. (164) (called either the general or the associated Legendre equation) cannot be expressed via what is usually called elementary functions.55 Let us start its discussion from the axially-symmetric case when ∂φk/∂φ = 0. This means F(φ) = const, and thus ν = 0, so Eq. (164) is reduced to the so-called Legendre differential equation:

Legendre equation:

(d/dξ)[(1 − ξ²) dP/dξ] + l(l + 1) P = 0.    (2.168)

One can readily verify that the solutions of this equation for integer values of l are specific (Legendre) polynomials56 that may be described by the following Rodrigues' formula:

Legendre polynomials:

Pl(ξ) = (1/(2^l l!)) (d/dξ)^l (ξ² − 1)^l,    with l = 0, 1, 2,… .    (2.169)

According to this formula, the first few Legendre polynomials are pretty simple:

P0(ξ) = 1,
P1(ξ) = ξ,
P2(ξ) = (3ξ² − 1)/2,    (2.170)
P3(ξ) = (5ξ³ − 3ξ)/2,
P4(ξ) = (35ξ⁴ − 30ξ² + 3)/8, …,

though such explicit expressions become more and more bulky as l is increased. As Fig. 23 shows, all these polynomials, which are defined on the [−1, +1] segment, end at the same point, Pl(+1) = +1, while starting either at the same point or at the opposite point: Pl(−1) = (−1)^l. Between these two endpoints, the l-th Legendre polynomial has l zeros. It is straightforward to use Eq. (169) for proving that these polynomials form a full, orthogonal set of functions, with the following normalization rule:

∫ from −1 to +1: Pl(ξ) Pl'(ξ) dξ = [2/(2l + 1)] δll',    (2.171)

54 See, e.g., MA Eq. (10.9).
55 Again, there is no generally accepted line between the "elementary" and "special" functions.

so any function f(ξ) defined on the segment [−1, +1] may be represented as a unique series over these polynomials.57
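As a numerical aside (a sketch added for this edition, not part of the original text), the polynomials may be generated by the standard Bonnet recurrence relation, (l + 1)P(l+1) = (2l + 1)ξPl − lP(l−1), equivalent to Eq. (169); this makes the endpoint values and the orthogonality rule (171) easy to check:

```python
# Legendre polynomials from the Bonnet recurrence,
# (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}, with P_0 = 1 and P_1 = x.
def legendre(l, x):
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, l):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Endpoint values: P_l(+1) = 1, P_l(-1) = (-1)**l
for l in range(6):
    assert abs(legendre(l, 1.0) - 1.0) < 1e-12
    assert abs(legendre(l, -1.0) - (-1.0) ** l) < 1e-12

# Orthogonality and normalization, Eq. (171), by midpoint quadrature
def overlap(l1, l2, n=20000):
    h = 2.0 / n
    return sum(legendre(l1, -1.0 + (i + 0.5) * h) *
               legendre(l2, -1.0 + (i + 0.5) * h) for i in range(n)) * h

assert abs(overlap(3, 3) - 2.0 / 7.0) < 1e-6       # 2/(2l+1) for l = 3
assert abs(overlap(2, 4)) < 1e-6                   # orthogonality
```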

Thus, taking into account the additional division by r in Eq. (162), the general solution of any axially-symmetric Laplace problem may be represented as

Variable separation in spherical coordinates (for axial symmetry):

φ(r, θ) = Σ from l = 0 to ∞: [al r^l + bl/r^(l+1)] Pl(cos θ).    (2.172)

Note a strong similarity between this solution and Eq. (112) for the 2D Laplace problem in the polar coordinates. However, besides the difference in the angular functions, there is also a difference (by one) in the power of the second radial function, and this difference immediately shows up in problem solutions.

56 Just for reference: if l is not an integer, the general solution of Eq. (2.168) may be represented as a linear combination of the so-called Legendre functions (not polynomials!) of the first and second kind, Pl(ξ) and Ql(ξ).
57 This is why, at least for the purposes of this course, there is no good reason for pursuing the (more complicated) solutions of Eq. (168) for non-integer values of l, mentioned in the previous footnote.


Fig. 2.23. A few lowest Legendre polynomials Pl(ξ).

Indeed, let us solve a problem similar to that shown in Fig. 15: find the electric field around a conducting sphere of radius R, placed into an initially uniform external field E0 (whose direction I will now take for the z-axis) – see Fig. 24a.

Fig. 2.24. A conducting sphere in a uniform electric field: (a) the problem's geometry, and (b) the equipotential surface pattern given by Eq. (176). The pattern is qualitatively similar but quantitatively different from that for the conducting cylinder in a perpendicular field – cf. Fig. 15.

If we select the arbitrary constant in the electrostatic potential so that φ|z=0 = 0, then in Eq. (172) we should take a0 = b0 = 0. Now, just as has been argued for the cylindrical case, at r >> R the potential should approach that of the uniform field:

φ → −E0 z = −E0 r cos θ,    (2.173)

so in Eq. (172), only one of the coefficients al survives: al = −E0 δl,1. As a result, from the boundary condition on the surface, φ(R, θ) = 0, we get the following equation for the coefficients bl:

(−E0 R + b1/R²) cos θ + Σ from l = 2 to ∞: (bl/R^(l+1)) Pl(cos θ) = 0.    (2.174)

Now repeating the argumentation that led to Eq. (117), we may conclude that Eq. (174) is satisfied if

bl = E0 R³ δl,1,    (2.175)

so, finally, Eq. (172) is reduced to

φ = −E0 (r − R³/r²) cos θ.    (2.176)
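A numerical sanity check of this solution (a sketch added for this edition, not part of the original text): the surface r = R must be an equipotential, and the numerical radial derivative at r = R should reproduce the surface field En = 3E0 cos θ quoted in Eq. (177) below:

```python
import math

# Potential of a grounded conducting sphere (radius R) in a uniform external
# field E0, Eq. (176): phi = -E0 (r - R**3/r**2) cos(theta).  Units E0 = R = 1.
E0, R = 1.0, 1.0

def phi(r, theta):
    return -E0 * (r - R**3 / r**2) * math.cos(theta)

# The sphere's surface is an equipotential: phi(R, theta) = 0
for theta in (0.1, 0.7, 1.3, 2.5):
    assert abs(phi(R, theta)) < 1e-12

# Normal field at the surface, E_n = -d(phi)/dr at r = R, equals 3 E0 cos(theta)
h = 1e-6
for theta in (0.2, 1.0, 2.0):
    En = -(phi(R + h, theta) - phi(R, theta)) / h    # one-sided derivative
    assert abs(En - 3.0 * E0 * math.cos(theta)) < 1e-4

# Far from the sphere, the potential approaches that of the uniform field
assert abs(phi(100.0, 0.3) - (-E0 * 100.0 * math.cos(0.3))) / 100.0 < 1e-4
```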

This distribution, shown in Fig. 24b, is very similar to Eq. (117) for the cylindrical case (cf. Fig. 15b, with the account for a different plot orientation), but with a different power of the radius in the second term. This difference leads to a quantitatively different distribution of the surface electric field:

En = −∂φ/∂r|r=R = 3E0 cos θ,    (2.177)

so its maximal value is a factor of 3 (rather than 2) larger than the external field.

Now let me briefly (mostly just for the reader's reference) mention the Laplace equation solutions in the general case – with no axial symmetry. If the conductor-free space surrounds the origin from all sides, the solutions to Eq. (165) have to be 2π-periodic, and hence ν = n = 0, ±1, ±2,… Mathematics says that Eq. (164) with integer ν = n and a fixed integer l has a solution only for a limited range of n:58

−l ≤ n ≤ +l.    (2.178)

These solutions are called the associated Legendre functions (generally, they are not polynomials). For n ≥ 0, these functions may be defined via the Legendre polynomials, using the following formula:59

Pl^n(ξ) = (−1)^n (1 − ξ²)^(n/2) (d^n/dξ^n) Pl(ξ).    (2.179)

On the segment ξ ∈ [−1, +1], each set of the associated Legendre functions with a fixed index n and non-negative values of l forms a full, orthogonal set, with the normalization relation,

∫ from −1 to +1: Pl^n(ξ) Pl'^n(ξ) dξ = [2/(2l + 1)] [(l + n)!/(l − n)!] δll',    (2.180)

that is evidently a generalization of Eq. (171). Since these relations may seem a bit intimidating, let me write down the explicit expressions for a few Pl^n(cos θ) with the three lowest values of l and n ≥ 0, which are most important for applications:

l  0:

P 00 cos    1 ;

(2.181)

58

In quantum mechanics, the letter n is typically reserved for the “principal quantum number”, while the azimuthal functions are numbered by index m. However, here I will keep using n as their index because for this course’s purposes, this seems more logical, in view of the similarity of the spherical and cylindrical functions. 59 Note that some texts use different choices for the front factor (called the Condon-Shortley phase) in the functions Plm, which do not affect the final results for the spherical harmonics Ylm. Chapter 2

Page 45 of 68

Essential Graduate Physics

EM: Classical Electrodynamics

l  1:

 P10 cos    cos  ,  1 P1 cos     sin  ;



(2.182)



P 20 (cos  )  3 cos 2   1 / 2,  l  2 : P 21 (cos  )  2 sin  cos  ,  P 2 (cos  )  3 cos 2  .  2

(2.183)
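These expressions are easy to re-derive from Eq. (179) (a numerical sketch added for this edition, not part of the original text), representing the Pl as explicit polynomial coefficient lists:

```python
import math

# Associated Legendre functions from Eq. (179):
# P_l^n(x) = (-1)**n * (1 - x**2)**(n/2) * d^n P_l / dx^n,
# with the Legendre polynomials stored as coefficient lists (low power first).
P_COEFFS = {0: [1.0], 1: [0.0, 1.0], 2: [-0.5, 0.0, 1.5]}   # P_2 = (3x**2-1)/2

def poly_deriv(c):
    return [k * c[k] for k in range(1, len(c))]

def assoc_legendre(l, n, x):
    c = P_COEFFS[l]
    for _ in range(n):
        c = poly_deriv(c)
    return (-1) ** n * (1.0 - x * x) ** (n / 2) * sum(
        ck * x**k for k, ck in enumerate(c))

theta = 0.8
x, s = math.cos(theta), math.sin(theta)
# Reproduce the explicit forms (181)-(183):
assert abs(assoc_legendre(0, 0, x) - 1.0) < 1e-12
assert abs(assoc_legendre(1, 0, x) - x) < 1e-12
assert abs(assoc_legendre(1, 1, x) - (-s)) < 1e-12
assert abs(assoc_legendre(2, 0, x) - (3 * x * x - 1) / 2) < 1e-12
assert abs(assoc_legendre(2, 1, x) - (-3 * s * x)) < 1e-12
assert abs(assoc_legendre(2, 2, x) - 3 * s * s) < 1e-12

# Normalization (180) for l = 2, n = 1: the integral of (P_2^1)**2 over
# [-1, +1] is (2/5)(3!/1!) = 2.4; checked by midpoint quadrature.
N = 20000
h = 2.0 / N
norm = sum(assoc_legendre(2, 1, -1.0 + (i + 0.5) * h) ** 2 for i in range(N)) * h
assert abs(norm - 2.4) < 1e-4
```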

The reader should agree there is not much to fear in these functions – they are just certain sums of products of cos θ ≡ ξ and sin θ ≡ (1 − ξ²)^(1/2). Fig. 25 shows the plots of a few lowest functions Pl^n(ξ).

Fig. 2.25. A few lowest associated Legendre functions. (Adapted from an original by Geek3, available at https://en.wikipedia.org/wiki/Associated_Legendre_polynomials, under the GNU Free Documentation License.)

Using the associated Legendre functions, the general solution (162) of the Laplace equation in the spherical coordinates may be expressed as

Variable separation in spherical coordinates (general case):

φ(r, θ, φ) = Σ from l = 0 to ∞, Σ from n = 0 to l: [aln r^l + bln/r^(l+1)] Pl^n(cos θ) Fn(φ),    Fn(φ) = cn cos nφ + sn sin nφ.    (2.184)

Since the difference between the angles θ and φ is somewhat artificial, physicists prefer to think not in terms of the functions P and F in separation, but directly about their products that participate in this solution.60

60 In quantum mechanics, it is more convenient to use a slightly different, alternative set of basic functions of the same problem, namely the following complex functions called the spherical harmonics:

Yl^n(θ, φ) ≡ [((2l + 1)/4π) ((l − n)!/(l + n)!)]^(1/2) Pl^n(cos θ) e^(inφ),

which are defined for both positive and negative n (within the limits −l ≤ n ≤ +l) – see, e.g., QM Secs. 3.6 and 5.6. (Note again that in that field, our index n is traditionally denoted as m, and called the magnetic quantum number.)


As a rare exception for my courses, to save time I will skip giving an example of using the associated Legendre functions in electrostatics, because quite a few examples of their applications will be given in the quantum mechanics part of this series.

2.9. Charge images

So far, we have discussed various methods of solution of the Laplace boundary problem (35). Let us now move on to the discussion of its generalization, the Poisson equation (1.41), which we need when, besides conductors, there are also stand-alone charges with a known spatial distribution ρ(r). (This discussion will also allow us, better equipped, to revisit the Laplace problem in the next section.)

Let us start from a somewhat limited, but very useful charge image (or "image charge") method. Consider a very simple problem: a single point charge near a conducting half-space – see Fig. 26.

Fig. 2.26. The simplest problem readily solvable by the charge image method. The points' colors are used, as before, to denote the charges of the original (red) and opposite (blue) sign.

Let us prove that its solution, above the conductor's surface (z ≥ 0), may be represented as

φ(r) = (q/4πε0)(1/r1 − 1/r2) = (q/4πε0)(1/|r − r'| − 1/|r − r"|),    (2.185)

or in a more explicit form, using the cylindrical coordinates shown in Fig. 26:

φ(r) = (q/4πε0){[ρ² + (z − d)²]^(−1/2) − [ρ² + (z + d)²]^(−1/2)},    (2.186)

where ρ is the distance of the field observation point r from the "vertical" line on which the charge is located. Indeed, this solution satisfies both the boundary condition φ = 0 at the surface of the conductor (z = 0), and the Poisson equation (1.41), with the single δ-functional source at the point r' = {0, 0, +d} on its right-hand side, because the second singularity of the solution, at the point r" = {0, 0, −d}, is outside the region of the solution's validity (z ≥ 0). Physically, this solution may be interpreted as the sum of the fields of the actual charge (+q) at the point r', and an equal but opposite charge (−q) at the "mirror image" point r" (Fig. 26). This is the basic idea of the charge image method. Before moving on to more complex problems, let us discuss the situation shown in Fig. 26 in a little more detail, due to its fundamental importance.

First, we can use Eqs. (3) and (186) to calculate the surface charge density:

σ = −ε0 ∂φ/∂z|z=0 = −(q/4π) ∂/∂z {[ρ² + (z − d)²]^(−1/2) − [ρ² + (z + d)²]^(−1/2)}|z=0 = −(q/4π) · 2d/(ρ² + d²)^(3/2).    (2.187)

From this, the total surface charge is

Q = ∫S σ d²r = 2π ∫ from 0 to ∞: σ(ρ) ρ dρ = −(q/2) ∫ from 0 to ∞: 2d ρ dρ/(ρ² + d²)^(3/2).    (2.188)

This integral may be easily worked out using the substitution ξ ≡ ρ²/d² (giving dξ = 2ρ dρ/d²):

Q = −(q/2) ∫ from 0 to ∞: dξ/(ξ + 1)^(3/2) = −q.    (2.189)
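Both results are easy to verify numerically (a sketch added for this edition, not part of the original text):

```python
import math

# Induced surface charge density, Eq. (187), for a charge q at height d above
# a grounded plane, and its integral, Eq. (189).  Units: q = 1, d = 0.5.
q, d = 1.0, 0.5

def sigma(rho):
    return -q * d / (2.0 * math.pi * (rho**2 + d**2) ** 1.5)

# Total induced charge Q = 2 pi * integral of sigma(rho) rho d(rho) = -q.
# Midpoint rule up to rho_max, plus the analytic tail -q d/(rho_max**2+d**2)**0.5.
rho_max, n = 50.0, 100000
h = rho_max / n
Q = 2.0 * math.pi * sum(sigma((i + 0.5) * h) * (i + 0.5) * h for i in range(n)) * h
Q += -q * d / math.hypot(rho_max, d)
assert abs(Q - (-q)) < 1e-4

# The density peaks right under the charge and falls off on the scale of d
assert sigma(0.0) < sigma(5.0 * d) < 0.0
```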

This result is very natural: the conductor brings as much surface charge from its interior to the surface as necessary to fully compensate for the initial charge (+q) and hence kill the electric field at large distances as efficiently as possible, reducing the total electrostatic energy (1.65) to the lowest possible value. For a better feeling of this polarization charge of the surface, let us take our calculations to the extreme – to a charge q equal to one elementary charge e, placing a particle with this charge (for example, a proton) at a macroscopic distance – say 1 m – from the conductor's surface. Then, according to Eq. (189), the total polarization charge of the surface equals that of an electron, and according to Eq. (187), its spatial extent is of the order of d² = 1 m². This means that if we consider a much smaller part of the surface, A […]

[…] At large distances, d >> R, this attractive force decreases as 1/d³. This unusual dependence arises because, as Eq. (199) specifies, the induced charge of the sphere responsible for the force is not constant but decreases as 1/d. In the next chapter, we will see that such a force is also typical for the interaction between a point charge and a dipole.

All the previous formulas were for a sphere that is grounded to keep its potential equal to zero. But what if we keep the sphere galvanically insulated, so that its net charge is fixed – for example, equals zero? Instead of solving this problem from scratch, let us use (again!) the almighty linear superposition principle. For that, we may add to the previous problem an additional charge, equal to −Q = −q', to the sphere, and argue that this addition gives, at all points, an additional, spherically-symmetric potential that does not depend on the potential induced by the external charge q, and was calculated in Sec. 1.2 – see Eq. (1.19). For the interaction force, such an addition yields

F = qq'/[4πε0 (d − d')²] − qq'/(4πε0 d²) = −(q²/4πε0)[Rd/(d² − R²)² − R/d³].    (2.201)
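As a numerical sketch (added for this edition; it assumes the grounded-sphere image-charge values q' = −qR/d and d' = R²/d referenced via Eq. (199)), the cancellation of the 1/d³ terms in Eq. (201) leaves a force falling off as 1/d⁵:

```python
# Force between a point charge and a neutral, isolated conducting sphere,
# Eq. (201), in units where q**2/(4 pi eps0) = 1 and R = 1.
R = 1.0

def force(d):
    return -(R * d / (d**2 - R**2) ** 2 - R / d**3)

# Expanding Eq. (201) at d >> R gives F -> -2 R**3 / d**5 + O(1/d**7):
for d in (50.0, 100.0):
    assert abs(force(d) * d**5 + 2.0 * R**3) < 1e-2

# Doubling the distance cuts the force by about 2**5 = 32
assert abs(force(100.0) / force(200.0) - 32.0) < 0.1

# The residual force is still attractive (negative), though very weak
assert force(10.0) < 0.0
```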

At large distances, the two terms proportional to 1/d³ cancel each other, giving F ∝ 1/d⁵, so the potential energy of such an interaction behaves as U ∝ 1/d⁴. Such a rapid force decay is due to the fact that the field of the uncharged sphere is equivalent to that of two (equal and opposite) induced charges, +q' and −q', and the distance between them (d − d' = d − R²/d) tends to zero at d → ∞.

2.10. Green's functions

I have spent so much time/space discussing the potential distributions created by a single point charge in various conductor geometries, because for any of these geometries, the generalization of the results to an arbitrary distribution ρ(r) of free charges is straightforward. Namely, if a single charge q, located at some point r', creates at point r the electrostatic potential

φ(r) = (1/4πε0) q G(r, r'),    (2.202)

then, due to the linear superposition principle, an arbitrary charge distribution creates the potential

φ(r) = (1/4πε0) Σj qj G(r, rj) = (1/4πε0) ∫ ρ(r') G(r, r') d³r'.    (2.203)

The function G(r, r') is called the (spatial) Green's function – a notion very fruitful and hence popular in all fields of physics.68 Evidently, as Eq. (1.35) shows, in the unlimited free space

Spatial Green's function:

G(r, r') = 1/|r − r'|,    (2.204)

i.e. the Green's function depends only on one scalar argument – the distance between the field-observation point r and the field-source (charge) point r'. However, as soon as there are conductors around, the situation changes. In this course, I will only discuss the Green's functions defined to vanish as soon as the radius vector r points to the surface (S) of any conductor:69

G(r, r')|r∈S = 0.    (2.205)

With this definition, it is straightforward to deduce the Green's functions for the solutions of the last section's problems in which conductors were grounded, i.e. had potential φ = 0. For example, for a semi-space z ≥ 0 limited by a grounded conducting plane z = 0 (Fig. 26), Eq. (185) yields

G = 1/|r − r'| − 1/|r − r"|,    with ρ" = ρ' and z" = −z',    (2.206)

where ρ is the 2D radius vector. We see that in the presence of conductors (and, as we will see later, of any other polarizable media), the Green's function may depend not only on the difference r − r', but on each of these two arguments in a specific way.

So far, this is just re-naming our old results. The really non-trivial result of the Green's function formalism in electrostatics is that, somewhat counter-intuitively, the knowledge of this function for a system with grounded conductors (Fig. 30a) enables the calculation of the field created by voltage-biased conductors (Fig. 30b) with the same geometry. To show this, let us use the so-called Green's theorem of vector calculus.70 The theorem states that for any two scalar, differentiable functions f(r) and g(r), and any volume V,

∫V (f ∇²g − g ∇²f) d³r = ∮S (f ∂g/∂n − g ∂f/∂n) d²r,    (2.207)

where S is the surface limiting the volume. Applying this theorem to the electrostatic potential φ(r) and the Green's function G (also considered as a function of r), let us use the Poisson equation (1.41) to replace ∇²φ with (−ρ/ε0), and notice that G, considered as a function of r, obeys the Poisson equation with the δ-functional source:

∇²G(r, r') = −4π δ(r − r').    (2.208)

68 See, e.g., CM Sec. 5.1, QM Secs. 2.2 and 7.4, and SM Sec. 5.5. Note that the numerical coefficient in Eq. (202) (and hence in all the resulting formulas) is a matter of convention; this choice does not affect the final results.
69 The function G so defined is sometimes called the Dirichlet function.
70 See, e.g., MA Eq. (12.3). Actually, this theorem is a ready corollary of the better-known divergence ("Gauss") theorem, MA Eq. (12.2).


(Indeed, according to its definition (202), this function may be formally considered as the field of a point charge q = 4πε0.) Now swapping the notation of the radius vectors, r ↔ r', and using the Green's function symmetry, G(r, r') = G(r', r),71 we get

−4π φ(r) + (1/ε0) ∫V ρ(r') G(r, r') d³r' = ∮S [φ(r') ∂G(r, r')/∂n' − G(r, r') ∂φ(r')/∂n'] d²r'.    (2.209)

Fig. 2.30. The Green's function method allows the solution of a simpler boundary problem (a) to be used for the solution of a more complex problem (b), with the same conductor geometry.

Let us apply this relation to the volume V of free space between the conductors, and the boundary S drawn immediately outside of their surfaces. In this case, by its definition, the Green's function G(r, r') vanishes at the conductor surface, i.e. at r ∈ S – see Eq. (205). Now changing the sign of ∂/∂n' (so that it denotes the outer normal for the conductors, rather than for the free-space volume V), dividing all terms by 4π, and partitioning the total surface S into the parts Sk (numbered by the index k) corresponding to different conductors (possibly, kept at different potentials φk), we finally arrive at the famous result:72

Potential via Green's function:

φ(r) = (1/4πε0) ∫V ρ(r') G(r, r') d³r' + (1/4π) Σk φk ∮Sk [∂G(r, r')/∂n'] d²r'.    (2.210)

While the first term on the right-hand side of this relation is a direct and evident expression of the superposition principle, given by Eq. (203), the second term is highly non-trivial: it describes the effect of conductors with arbitrary potentials φk (Fig. 30b), using the Green's function calculated for the similar system with grounded conductors, i.e. with all φk = 0 (Fig. 30a). Let me emphasize that since our volume V excludes conductors, the first term on the right-hand side of Eq. (210) includes only the stand-alone charges in the system (in Fig. 30, marked q1, q2, etc.), but not the surface charges of the conductors – which are taken into account, indirectly, by the second term.

In order to illustrate what a powerful tool Eq. (210) is, let us use it to calculate the electrostatic field in two systems. In the first of them, a plane, circular conducting disk of radius R, separated by a very thin cut from the remaining conducting plane, is biased with potential φ = V, while the rest of the plane is grounded – see Fig. 31.

71 This symmetry, evident for the particular cases (204) and (206), may be readily proved for the general case by applying Eq. (207) to the functions f(r) ≡ G(r, r') and g(r) ≡ G(r, r"). With this substitution, the left-hand side of that equality becomes equal to −4π[G(r", r') − G(r', r")], while the right-hand side is zero, due to Eq. (205).
72 In some textbooks, the sign before the surface integral is negative, because their authors use the outer normal to the free-space region V rather than to the regions occupied by conductors – as I do.


Fig. 2.31. A voltage-biased conducting disk separated from the rest of a conducting plane.

If the width of the gap between the disk and the rest of the plane is negligible, we may apply Eq. (210) without stand-alone charges, ρ(r') = 0, and with the Green's function for the uncut plane – see Eq. (206).73 In the cylindrical coordinates, with the origin at the disk's center (Fig. 31), this function is

G(r, r') = [ρ² + ρ'² − 2ρρ' cos(φ − φ') + (z − z')²]^(−1/2) − [ρ² + ρ'² − 2ρρ' cos(φ − φ') + (z + z')²]^(−1/2).    (2.211)

(The sum of the first three terms under each square root in Eq. (211) is just the squared distance between the horizontal projections ρ and ρ' of the vectors r and r' (or r") correspondingly, while the last terms are the squares of their vertical displacements.) Now we can readily calculate the derivative participating in Eq. (210), for z ≥ 0:

∂G/∂n'|S = ∂G/∂z'|z'=0 = 2z/[ρ² + ρ'² − 2ρρ' cos(φ − φ') + z²]^(3/2).    (2.212)

Due to the axial symmetry of the system, we may take φ = 0. With this, Eqs. (210) and (212) yield

φ = (V/4π) ∮S [∂G(r, r')/∂n'] d²r' = (Vz/2π) ∫ from 0 to 2π: dφ' ∫ from 0 to R: ρ' dρ'/[ρ² + ρ'² − 2ρρ' cos φ' + z²]^(3/2).    (2.213)

This integral is not overly pleasing, but may be readily worked out at least for the points on the symmetry axis (ρ = 0, z ≥ 0):74

φ = Vz ∫ from 0 to R: ρ' dρ'/(ρ'² + z²)^(3/2) = (V/2) ∫ from 0 to R²/z²: dζ/(ζ + 1)^(3/2) = V[1 − z/(R² + z²)^(1/2)].    (2.214)

This result shows that if z → 0, the potential tends to V (as it should), while at z >> R,

φ ≈ V R²/2z².    (2.215)

Indeed, if all parts of the cut plane are grounded, a narrow cut does not change the field distribution, and hence the Green’s function, significantly. 74 There is no need to repeat the calculation for z  0: from the symmetry of the problem, (–z) = (z). Chapter 2

Page 56 of 68

Essential Graduate Physics

EM: Classical Electrodynamics

Now let us use the same Eq. (210) to solve the (in :-)famous problem of the cut sphere (Fig. 32). Again, if the gap between the two conducting semi-spheres is very thin (t > d), straight wires.) It is especially crucial to note the “vortex” character of the field: its lines go around the wire, forming rings with the centers on the current line. This is in sharp contrast to the electrostatic field lines, which can only begin and end on electric charges and never form closed loops (otherwise the Coulomb force qE would not be conservative). In the magnetic case, the vortex structure of the field may be reconciled with the potential character of the magnetic forces, which is evident from Eq. (1), due to the vector products in Eqs. (14)-(15).

Chapter 5

Page 6 of 42

Essential Graduate Physics

EM: Classical Electrodynamics

Now we may readily use Eq. (15), or rather its thin-wire version F  I  dr  B(r ) ,

(5.21)

l

to apply Eq. (20) to the two-wire problem (Fig. 2). Since for the second wire, the vectors dr and B are perpendicular to each other, we immediately arrive at our previous result (7), which was obtained directly from Eq. (1). The next important example of the application of the Biot-Savart law (14) is the magnetic field at the axis of a circular current loop (Fig. 4b). Due to the problem’s symmetry, the net field B has to be directed along the axis, but each of its elementary components dB is tilted by the angle  = tan-1(z/R) to this axis, so its axial component is  I dr' R . (5.22) dB z  dB cos   0 2 2 4 R  z R 2  z 2 1 / 2





Since the denominator of this expression remains the same for all wire components dr’, the integration over r’ is easy (dr’ = 2R), giving finally B

0 I 2

R

R2 2

 z2



3/ 2

.

(5.23)

Note that the magnetic field in the loop’s center (i.e., for z = 0), B

0 I

, (5.24) 2R is  times higher than that due to a similar current in a straight wire, at distance d = R from it. This difference is readily understandable, since all elementary components of the loop are at the same distance R from the observation point, while in the case of a straight wire, all its points but one are separated from the observation point by distances larger than d. Another notable fact is that at large distances (z2 >> R2), the field (23) is proportional to z-3: B

0 I R 2 2

z

3



 0 2m , 4 z 3

with m  IA ,

(5.25)

where A = R2 is the loop area. Comparing this expression with Eq. (3.13), for the particular case  = 0, we see that such field is similar to that of an electric dipole (at least along its direction), with the replacement of the electric dipole moment magnitude p with the m so defined – besides the front factor. Indeed, such a plane current loop is the simplest example of a system whose field, at distances much larger than R, is that of a magnetic dipole, with a dipole moment m – the notions to be discussed in much more detail in Sec. 4 below. 5.2. Vector potential and the Ampère law The reader could see that the calculations of the magnetic field using Eq. (14) or (18) are still somewhat cumbersome even for the very simple systems we have examined. As we saw in Chapter 1, similar calculations in electrostatics, at least for several important highly symmetric systems, could be

Chapter 5

Page 7 of 42

Essential Graduate Physics

EM: Classical Electrodynamics

substantially simplified using the Gauss law (1.16). A similar relation exists in magnetostatics as well, but has a different form, due to the vortex character of the magnetic field. To derive it, let us notice that in an analogy with the scalar case, the vector product under the integral (14) may be transformed as

$$ \frac{\mathbf{j}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|^3} = \nabla\times\frac{\mathbf{j}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}, \tag{5.26} $$

where the operator ∇ acts in the r-space. (This equality may be readily verified by Cartesian components, noticing that the current density is a function of r′ and hence its components are independent of r.) Plugging Eq. (26) into Eq. (14), and moving the operator ∇ out of the integral over r′, we see that the magnetic field may be represented as the curl of another vector field – the so-called vector potential, defined as:¹²

$$ \mathbf{B}(\mathbf{r}) = \nabla\times\mathbf{A}(\mathbf{r}), \tag{5.27} $$

and in our current case equal to

$$ \mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int_{V'} \frac{\mathbf{j}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^3r'. \tag{5.28} $$

Please note a beautiful analogy between Eqs. (27)-(28) and, respectively, Eqs. (1.33) and (1.38).¹³ This analogy implies that the vector potential A plays, for the magnetic field, essentially the same role as the scalar potential φ plays for the electric field (hence the name "potential"), with due respect to the vortex character of B. This notion will be discussed in more detail below.

Now let us see what equations we may get for the spatial derivatives of the magnetic field. First, vector algebra says that the divergence of any curl is zero.¹⁴ In application to Eq. (27), this means that

$$ \nabla\cdot\mathbf{B} = 0. \tag{5.29} $$

Comparing this equation with Eq. (1.27), we see that Eq. (29) may be interpreted as the absence of a magnetic analog of an electric charge, on which magnetic field lines could originate or end. Numerous searches for such hypothetical magnetic charges, called magnetic monopoles, using very sensitive and sophisticated experimental setups,¹⁵ have not given any reliable evidence of their existence in Nature.

Proceeding to the alternative vector derivative of the magnetic field, i.e. to its curl, and using Eq. (28), we obtain

$$ \nabla\times\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\nabla\times\left(\nabla\times\int_{V'} \frac{\mathbf{j}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^3r'\right). \tag{5.30} $$

This expression may be simplified by using the following general vector identity:¹⁶

$$ \nabla\times\left(\nabla\times\mathbf{c}\right) = \nabla\left(\nabla\cdot\mathbf{c}\right) - \nabla^2\mathbf{c}, \tag{5.31} $$

applied to the vector c(r) ≡ j(r′)/|r – r′|:

¹² In the Gaussian units, Eq. (27) remains the same, and hence in Eq. (28), μ0/4π is replaced with 1/c.
¹³ In Eq. (1.38), there was no real need for the additional clarification provided by the integration volume label V′.
¹⁴ See, e.g., MA Eq. (11.2).
¹⁵ For a recent example, see B. Acharya et al., Nature 602, 63 (2022).
¹⁶ See, e.g., MA Eq. (11.3).

$$ \nabla\times\mathbf{B} = \frac{\mu_0}{4\pi}\,\nabla\!\int_{V'} \mathbf{j}(\mathbf{r}')\cdot\nabla\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^3r' - \frac{\mu_0}{4\pi}\int_{V'} \mathbf{j}(\mathbf{r}')\,\nabla^2\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^3r'. \tag{5.32} $$

As was already discussed during our study of electrostatics in Sec. 3.1,

$$ \nabla^2\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|} = -4\pi\delta(\mathbf{r}-\mathbf{r}'), \tag{5.33} $$

so the last term of Eq. (32) is just μ0j(r). On the other hand, inside the first integral we can replace ∇ with (–∇′), where the prime means differentiation in the space of the radius vectors r′. Integrating that term by parts, we get

$$ \nabla\times\mathbf{B} = -\frac{\mu_0}{4\pi}\,\nabla\left(\oint_{S'} \frac{j_n(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^2r' - \int_{V'} \frac{\nabla'\cdot\mathbf{j}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^3r'\right) + \mu_0\mathbf{j}(\mathbf{r}). \tag{5.34} $$

Applying this equality to the volume V′ limited by a surface S′ either sufficiently distant from the field concentration, or with no current crossing it, we may neglect the first term on the right-hand side of Eq. (34), while the second term always equals zero in statics, due to the dc charge continuity – see Eq. (4.6). As a result, we arrive at a very simple differential equation¹⁷

$$ \nabla\times\mathbf{B} = \mu_0\mathbf{j}. \tag{5.35} $$

This is (the dc form of) the inhomogeneous Maxwell equation – which in magnetostatics plays a role similar to Eq. (1.27) in electrostatics. Let me display, for the first time in this course, this fundamental system of equations (at this stage, for statics only), and give the reader a minute to stare, in silence, at their beautiful symmetry – which has inspired so much of the later development of physics:

$$ \nabla\times\mathbf{B} = \mu_0\mathbf{j}, \qquad \nabla\times\mathbf{E} = 0, $$
$$ \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0. \tag{5.36} $$

Their only asymmetry, two zeros on the right-hand sides (for the magnetic field's divergence and the electric field's curl), is due to the absence in Nature of magnetic monopoles and their currents. I will discuss these equations in more detail in Sec. 6.7, after the first two equations (for the fields' curls) have been generalized to their full, time-dependent versions.

Returning now to our current, more mundane but important task of calculating the magnetic field induced by simple current configurations, we can benefit from an integral form of Eq. (35). For that, let us integrate this equation over an arbitrary surface S limited by a closed contour C, and apply to the result the Stokes theorem.¹⁸ The resulting expression,

$$ \oint_C \mathbf{B}\cdot d\mathbf{r} = \mu_0\int_S j_n\,d^2r = \mu_0 I, \tag{5.37} $$

where I is the net electric current crossing surface S, is called the Ampère law.

¹⁷ As in all earlier formulas for the magnetic field, in the Gaussian units, the coefficient μ0 in this relation is replaced with 4π/c.
¹⁸ See, e.g., MA Eq. (12.1) with f = B.
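The Ampère law lends itself to a direct numerical sanity check. The short sketch below (Python; the contour shape, current value, and point counts are arbitrary choices, not from the text) integrates the straight-wire field (20) around a closed polygon and verifies that the circulation equals μ0I when the wire is enclosed, and vanishes when it is not.

```python
import math

MU0 = 4 * math.pi * 1e-7  # SI magnetic constant

def B_wire(x, y, current):
    """Field of an infinite straight wire on the z-axis, Eq. (20):
    B = mu0*I/(2*pi*rho), directed azimuthally."""
    rho2 = x * x + y * y
    pref = MU0 * current / (2 * math.pi * rho2)
    return (-pref * y, pref * x)

def circulation(current, vertices, n_per_side=2000):
    """Midpoint-rule line integral of B around a closed polygon."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]):
        dx, dy = (x1 - x0) / n_per_side, (y1 - y0) / n_per_side
        for k in range(n_per_side):
            bx, by = B_wire(x0 + (k + 0.5) * dx, y0 + (k + 0.5) * dy, current)
            total += bx * dx + by * dy
    return total

I0 = 3.0  # amperes (arbitrary test value)
enclosing = [(-1.0, -0.5), (2.0, -0.5), (2.0, 1.5), (-1.0, 1.5)]  # wire inside
outside = [(2.0, 2.0), (3.0, 2.0), (3.0, 3.0), (2.0, 3.0)]        # wire outside
print(circulation(I0, enclosing) / (MU0 * I0))  # close to 1
print(circulation(I0, outside) / (MU0 * I0))    # close to 0
```

Note that the result is independent of the contour's shape and of its position relative to the wire, as long as the wire is (or is not) enclosed.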


As the first example of its application, let us return to the current in a straight wire (Fig. 4a). With the Ampère law in our arsenal, we can readily pursue an even more ambitious goal than that achieved in the previous section, namely to calculate the magnetic field both outside and inside of a wire of an arbitrary radius R, with an arbitrary (albeit axially-symmetric) current distribution j(ρ) – see Fig. 5.

Fig. 5.5. The simplest application of the Ampère law: the magnetic field of a straight current.

Selecting the Ampère-law contour C in the form of a ring of some radius ρ in the plane normal to the wire's axis z, we have B·dr = Bρdφ, where φ is the azimuthal angle, so Eq. (37) yields:

$$ 2\pi\rho B = \mu_0\times\begin{cases} \displaystyle 2\pi\int_0^{\rho} j(\rho')\,\rho'\,d\rho', & \text{for } \rho\le R,\\[1ex] \displaystyle 2\pi\int_0^{R} j(\rho')\,\rho'\,d\rho' \equiv I, & \text{for } \rho\ge R. \end{cases} \tag{5.38} $$

Thus we have not only recovered our previous result (20), with the notation replacement d → ρ, in a much simpler way but also have been able to calculate the magnetic field's distribution inside the wire. In the most common particular case when the current is uniformly distributed over its cross-section, j(ρ) = const, the first of Eqs. (38) immediately yields B ∝ ρ for ρ ≤ R.

Another important system is a straight, long solenoid (Fig. 6a), with dense winding: n²A >> 1, where n is the number of wire turns per unit length, and A is the area of the solenoid's cross-section.
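For the uniform-current case, the two branches of Eq. (38) can be cross-checked numerically. The sketch below (Python; wire radius and current are arbitrary test values) evaluates the enclosed-current integral of Eq. (38) by a midpoint sum and compares it with the closed forms: a field growing linearly inside the wire and falling off as 1/ρ outside, continuous at ρ = R.

```python
import math

MU0 = 4 * math.pi * 1e-7  # SI magnetic constant

def B_closed_form(rho, current, R):
    """Eq. (38) evaluated for j = I/(pi R^2): linear growth inside, 1/rho decay outside."""
    if rho <= R:
        return MU0 * current * rho / (2 * math.pi * R * R)
    return MU0 * current / (2 * math.pi * rho)

def B_from_ampere(rho, current, R, n=1000):
    """Direct use of Eq. (38): 2*pi*rho*B = mu0 * 2*pi * integral of j(rho') rho' d rho'."""
    j = current / (math.pi * R * R)  # uniform current density
    upper = min(rho, R)              # only the current inside the contour counts
    h = upper / n
    enclosed = 2 * math.pi * sum(j * ((k + 0.5) * h) * h for k in range(n))
    return MU0 * enclosed / (2 * math.pi * rho)

I0, R = 10.0, 2e-3
for rho in (0.5e-3, 1e-3, 2e-3, 5e-3):
    assert math.isclose(B_closed_form(rho, I0, R), B_from_ampere(rho, I0, R), rel_tol=1e-9)
print("Eq. (38) check passed: B grows linearly inside the wire, drops as 1/rho outside")
```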

Fig. 5.6. Calculating magnetic fields of (a) straight and (b) toroidal solenoids.

From the symmetry of this problem, the longitudinal (in Fig. 6a, vertical) component Bz of the magnetic field may only depend on the distance ρ of the observation point from the solenoid's axis. First taking a plane Ampère contour C1, with both long sides outside the solenoid, we get Bz(ρ2) – Bz(ρ1) = 0,


because the total current piercing the contour equals zero. This is only possible if Bz = 0 at any ρ outside of the solenoid, provided that it is infinitely long.¹⁹ With this result on hand, from the Ampère law applied to the contour C2, we get the following relation for the only (z-) component of the internal field:

$$ Bl = \mu_0 NI, \tag{5.39} $$

where N is the number of wire turns passing through the contour of length l. This means that regardless of the exact position of the internal side of the contour, the result is the same:

$$ B = \mu_0\frac{N}{l}I = \mu_0 n I. \tag{5.40} $$

Thus, the field inside an infinitely long solenoid (with an arbitrary shape of its cross-section) is uniform; in this sense, a long solenoid is a magnetic analog of a wide plane capacitor, explaining why this system is so widely used in physical experiment. As should be clear from its derivation, the obtained results, especially that the field outside of the solenoid equals zero, are conditional on the solenoid length being very large in comparison with its lateral size. (From Eq. (25), we may predict that for a solenoid of a finite length l, the close-range external field is a factor of ~A/l² lower than the internal one.) A much better suppression of such "fringe" fields may be obtained using toroidal solenoids (Fig. 6b). The application of the Ampère law to this geometry shows that in the limit of dense winding (N >> 1), there is no fringe field at all (for any relation between the two radii of the torus), while inside the solenoid, at distance ρ from the system's axis,

$$ B = \frac{\mu_0 NI}{2\pi\rho}. \tag{5.41} $$

We see that a possible drawback of this system for practical applications is that the internal field does depend on ρ, i.e. is not quite uniform; however, if the torus is relatively thin, this deficiency is minor.

Next let us discuss a very important question: how can we solve the problems of magnetostatics for systems whose low symmetry does not allow getting easy results from the Ampère law? (The examples are of course too numerous to list; for example, we cannot use this approach even to reproduce Eq. (23) for a round current loop.) From the deep analogy with electrostatics, we may expect that in this case, we could calculate the magnetic field by solving a certain boundary problem for the field's potential – in our current case, the vector potential A defined by Eq. (28). However, despite the similarity of this formula and Eq. (1.38) for φ, which was noticed above, there is an additional issue we should tackle in the magnetic case – besides the obvious fact that calculating the vector potential distribution means determining three scalar functions (say, Ax, Ay, and Az), rather than just one (φ). To reveal the issue, let us plug Eq. (27) into Eq. (35):

$$ \nabla\times\left(\nabla\times\mathbf{A}\right) = \mu_0\mathbf{j}, \tag{5.42} $$

and then apply to the left-hand side of this equation the same identity (31). The result is

$$ \nabla\left(\nabla\cdot\mathbf{A}\right) - \nabla^2\mathbf{A} = \mu_0\mathbf{j}. \tag{5.43} $$

¹⁹ Applying the Ampère law to a circular contour of radius ρ, coaxial with the solenoid, we see that the field outside (but not inside!) it has an azimuthal component Bφ, similar to that of the straight wire (see Eq. (38) above) and hence (at N >> 1) much weaker than the longitudinal field inside the solenoid – see Eq. (40).
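The crossover between the single-loop field (23) and the infinite-solenoid field (40) can also be made quantitative by stacking loop fields. The sketch below (Python; the coil parameters are arbitrary test values, not from the text) integrates Eq. (23) over a finite winding of length l and shows the on-axis central field approaching μ0nI as l grows beyond the radius R.

```python
import math

MU0 = 4 * math.pi * 1e-7  # SI magnetic constant

def B_center(n_per_m, current, R, length, steps=20000):
    """On-axis field at the center of a finite solenoid, obtained by summing
    the circular-loop field (23) over the turn density n (midpoint rule)."""
    h = length / steps
    total = 0.0
    for k in range(steps):
        z = -length / 2 + (k + 0.5) * h
        total += R * R / (R * R + z * z) ** 1.5 * h
    return MU0 * n_per_m * current * total / 2

n, I0, R = 1000.0, 2.0, 0.01
for length in (0.05, 0.2, 1.0):
    ratio = B_center(n, I0, R, length) / (MU0 * n * I0)
    print(f"l/R = {length / R:5.0f}:  B_center / (mu0 n I) = {ratio:.4f}")  # tends to 1
```

The same integral can be done in a closed form, B_center = μ0nI (l/2)/[R² + (l/2)²]^1/2, which makes the approach to the infinite-solenoid limit explicit.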

On the other hand, as we know from electrostatics (please compare Eqs. (1.38) and (1.41)), the vector potential A(r) given by Eq. (28) has to satisfy a simpler ("vector-Poisson") equation

$$ \nabla^2\mathbf{A} = -\mu_0\mathbf{j}, \tag{5.44} $$

which is just a set of three usual Poisson equations for each Cartesian component of A. To resolve the difference between these results, let us note that Eq. (43) is reduced to Eq. (44) if ∇·A = 0. In this context, let us discuss what discretion we have in the choice of the potential. In electrostatics, we may add, to the scalar function φ′ that satisfies Eq. (1.33) for the given field E, not only an arbitrary constant but even an arbitrary function of time:

$$ -\nabla\left[\varphi' + f(t)\right] = -\nabla\varphi' = \mathbf{E}. \tag{5.45} $$

Similarly, using the fact that the curl of the gradient of any scalar function equals zero,²⁰ we may add to any vector function A′ that satisfies Eq. (27) for the given field B, not only any constant but even a gradient of an arbitrary scalar function χ(r, t), because

$$ \nabla\times\left(\mathbf{A}' + \nabla\chi\right) = \nabla\times\mathbf{A}' + \nabla\times\left(\nabla\chi\right) = \nabla\times\mathbf{A}' = \mathbf{B}. \tag{5.46} $$

Such additions, which keep the fields intact, are called gauge transformations.²¹ Let us see what such a transformation does to the divergence of A′:

$$ \nabla\cdot\left(\mathbf{A}' + \nabla\chi\right) = \nabla\cdot\mathbf{A}' + \nabla^2\chi. \tag{5.47} $$

For any choice of such a function A′, we can always choose the function χ in such a way that it satisfies the Poisson equation ∇²χ = –∇·A′, and hence makes the divergence of the transformed vector potential, A ≡ A′ + ∇χ, equal to zero everywhere:

$$ \nabla\cdot\mathbf{A} = 0, \tag{5.48} $$

thus reducing Eq. (43) to Eq. (44). To summarize, the set of distributions A′(r) that satisfy Eq. (27) for a given field B(r) is not limited to the vector potential A(r) given by Eq. (44), but is reduced to it upon the additional Coulomb gauge condition (48). However, as we will see in a minute, even this condition still leaves some degrees of freedom in the choice of the vector potential. To illustrate this fact, and also to get a better gut feeling of the vector potential's distribution in space, let us calculate A(r) for two very basic cases.

First, let us revisit the straight wire problem shown in Fig. 5. As Eq. (28) shows, in this case the vector potential A has just one component (along the axis z). Moreover, due to the problem's axial symmetry, its magnitude may only depend on the distance ρ from the axis: A = nzA(ρ). Hence, the gradient of A is directed across the z-axis, so Eq. (48) is satisfied at all points. For our symmetry (∂/∂φ = ∂/∂z = 0), the Laplace operator, written in cylindrical coordinates, has just one term,²² reducing Eq. (44) to

²⁰ See, e.g., MA Eq. (11.1).
²¹ The use of the term "gauge" (originally meaning "a measure" or "a scale") in this context is purely historic, so the reader should not try to find too much hidden sense in it.
²² See, e.g., MA Eq. (10.3).


$$ \frac{1}{\rho}\frac{d}{d\rho}\left(\rho\,\frac{dA}{d\rho}\right) = -\mu_0 j(\rho). \tag{5.49} $$

Multiplying both sides of this equation by ρ and integrating them over the coordinate once, we get

$$ \rho\,\frac{dA}{d\rho} = -\mu_0\int_0^{\rho} j(\rho')\,\rho'\,d\rho' + \mathrm{const}. \tag{5.50} $$

Since in the cylindrical coordinates, for our symmetry, B = –dA/dρ,²³ Eq. (50) is nothing else than our old result (38) for the magnetic field.²⁴ However, let us continue the integration, at least for the region outside the wire, where the function A(ρ) depends only on the full current I rather than on the current distribution. Dividing both parts of Eq. (50) by ρ, and integrating them over this argument again, we get

$$ A(\rho) = -\frac{\mu_0 I}{2\pi}\ln\rho + \mathrm{const}, \qquad \text{where } I = 2\pi\int_0^{R} j(\rho)\,\rho\,d\rho, \quad \text{for } \rho\ge R. \tag{5.51} $$

As a reminder, we had similar logarithmic behavior for the electrostatic potential outside a uniformly charged straight line. This is natural because the Poisson equations for both cases are similar.

Now let us find the vector potential for the long solenoid (Fig. 6a), with its uniform magnetic field. Since Eq. (28) tells us that the vector A should follow the direction of the inducing current, we may start with looking for it in the form A = nφA(ρ). (This is especially natural if the solenoid's cross-section is circular.) With this orientation of A, the same general expression for the curl operator in cylindrical coordinates yields ∇×A = nz(1/ρ)d(ρA)/dρ. According to Eq. (27), this expression should be equal to B – in our current case to nzB, with a constant B – see Eq. (40). Integrating this equality, and selecting the integration constant so that A(0) is finite, we get

B , 2

i.e. A 

B n . 2

(5.52)

Plugging this result into the general expression for the Laplace operator in the cylindrical coordinates,25 we see that the Poisson equation (44) with j = 0 (i.e. the Laplace equation) is satisfied again – which is natural since, for this distribution, the Coulomb gauge condition (48) is satisfied: A = 0. However, Eq. (52) is not the unique (or even the simplest) vector potential that gives the same uniform field B = nzB. Indeed, using the well-known expression for the curl operator in Cartesian coordinates,26 it is straightforward to check that each of the vector functions A’ = nyBx and A”= –nxBy also has the same curl, and also satisfies the Coulomb gauge condition (48).27 If such solutions do not look very natural because of their anisotropy in the [x, y] plane, please consider the fact that they represent the uniform magnetic field regardless of its source – for example, regardless of the shape of the long solenoid’s cross-section. Such choices of the vector potential may be very convenient for some

See, e.g., MA Eq. (10.5) with / = /z = 0. Since the magnetic field at the wire’s axis has to be zero (otherwise, being normal to the axis, where would it be directed?), the integration constant in Eq. (50) has to equal zero. 25 See, e.g., MA Eq. (10.6). 26 See, e.g., MA Eq. (8.5). 27 The axially-symmetric vector potential (52) is just a weighted sum of these two functions: A = (A’ + A”)/2. 23 24

Chapter 5

Page 13 of 42

Essential Graduate Physics

EM: Classical Electrodynamics

problems, for example for the quantum-mechanical analysis of the 2D motion of a charged particle in the perpendicular magnetic field, giving the famous Landau energy levels.28 5.3. Magnetic energy, flux, and inductance Considering the currents flowing in a system as generalized coordinates, the magnetic forces (1) between them are their unique functions, and in this sense, the energy U of their magnetic interaction may be considered the potential energy of the system. The apparent (but somewhat deceptive) way to derive an expression for this energy is to use the analogy between Eq. (1) and its electrostatic analog, Eq. (2). Indeed, Eq. (2) may be transformed into Eq. (1) with just three replacements: (i) (r)’(r’) should be replaced with [j(r)j’(r’)], (ii) 0 should be replaced with 1/0, and (iii) the sign before the double integral has to be replaced with the opposite one. Hence we may avoid repeating the calculation made in Chapter 1, by making these replacements in Eq. (1.59), which gives the electrostatic potential energy of the system with (r) and ’(r’) describing the same charge distribution, i.e. with ’(r) = (r), to get the following expression for the magnetic potential energy in the system with, similarly, j’(r) = j(r):29 Uj 

0 1 3 j(r )  j(r' ) d r  d 3 r' .  r  r' 4 2

(5.53)

But this is not the unique answer! Indeed, Eq. (53) describes the proper potential energy of the system (in particular, giving the correct result for the current interaction forces) only in the case when the interacting currents are fixed – just as Eq. (1.59) is adequate when the interacting charges are fixed. Here comes a substantial difference between electrostatics and magnetostatics: due to the fundamental fact of electric charge conservation (already discussed in Secs. 1.1 and 4.1), keeping electric charges fixed does not require external work, while the maintenance of currents generally does. As a result, Eq. (53) describes the energy of the magnetic interaction plus of the system keeping the currents constant – or rather of its part depending on the system under our consideration. In this situation, using the terminology already used in Sec. 3.5 (see also a general discussion in CM Sec. 1.4.), Uj may be called the Gibbs potential energy of our magnetic system. Now to exclude from Uj the contribution due to the interaction with the current-supporting system(s), i.e. calculate the potential energy U of our system as such, we need to know this contribution. The simplest way to do this is to use the Faraday induction law that describes this interaction and will be discussed at the beginning of the next chapter. This is why let me postpone the derivation until that point, and for now ask the reader to believe me that the removal of the interaction leads to an expression similar to Eq. (53), but with the opposite sign: U

0 1 3 j(r )  j(r' ) d r  d 3 r' ,  r  r' 4 2

(5.54)

28

See, e.g., QM Sec. 3.2. Just as in electrostatics, for the interaction of two independent current distributions j(r) and j’(r’), the factor ½ should be dropped. 29

Chapter 5

Page 14 of 42

Magnetic interaction energy

Essential Graduate Physics

EM: Classical Electrodynamics

I will prove this result in Sec. 6.2, but actually, this sign dichotomy should not be quite surprising to the attentive reader, in the context of a similar duality of Eqs. (3.73) and (3.81) for the electrostatic energies including and excluding the interaction with the field source. Due to the importance of Eq. (54), let us rewrite it in several other forms, convenient for various applications. First of all, just as in electrostatics, it may be recast into a potential-based form. Indeed, with the definition (28) of the vector potential A(r), Eq. (54) becomes

$$ U = \frac{1}{2}\int \mathbf{j}(\mathbf{r})\cdot\mathbf{A}(\mathbf{r})\,d^3r. \tag{5.55} $$

This formula, which is a clear magnetic analog of Eq. (1.60) of electrostatics, is very popular among field theorists, because it is very handy for their manipulations; it is also useful for some practical applications. However, for many calculations, it is more convenient to have a direct expression of the energy via the magnetic field. Again, this may be done very similarly to what had been done for electrostatics in Sec. 1.3, i.e. by plugging into Eq. (55) the current density expressed from Eq. (35) and then transforming it as³⁰

$$ U = \frac{1}{2}\int \mathbf{j}\cdot\mathbf{A}\,d^3r = \frac{1}{2\mu_0}\int \mathbf{A}\cdot\left(\nabla\times\mathbf{B}\right)d^3r = \frac{1}{2\mu_0}\int \mathbf{B}\cdot\left(\nabla\times\mathbf{A}\right)d^3r - \frac{1}{2\mu_0}\int \nabla\cdot\left(\mathbf{A}\times\mathbf{B}\right)d^3r. \tag{5.56} $$

Now using the divergence theorem, the second integral may be transformed into a surface integral of (A×B)_n. According to Eqs. (27)-(28), if the current distribution j(r) is localized, this vector product drops, at large distances, faster than 1/r², so if the integration volume is large enough, the surface integral is negligible. In the remaining first integral in Eq. (56), we may use Eq. (27) to rewrite ∇×A as B. As a result, we get a very simple and fundamental formula:

$$ U = \frac{1}{2\mu_0}\int B^2\,d^3r. \tag{5.57a} $$

Just as with the electric field, this expression may be interpreted as a volume integral of the magnetic energy density u:

$$ U = \int u(\mathbf{r})\,d^3r, \qquad \text{with } u(\mathbf{r}) \equiv \frac{B^2(\mathbf{r})}{2\mu_0}, \tag{5.57b} $$

clearly similar to Eq. (1.65).³¹ Again, the conceptual choice between the spatial localization of magnetic energy – either at the location of electric currents only, as implied by Eqs. (54) and (55), or in all regions where the magnetic field exists, as apparent from Eq. (57b) – cannot be made within the framework of magnetostatics; only electrodynamics gives a decisive preference for the latter choice.

For the practically important case of currents flowing in several thin wires, Eq. (54) may be first integrated over the cross-section of each wire, just as was done at the derivation of Eq. (4). As before, since the integral of the current density over the kth wire's cross-section is just the current Ik in the wire, and cannot change along its length, it may be taken out of the remaining integrals, giving

³⁰ For that, we may use MA Eq. (11.7) with f = A and g = B, giving A·(∇×B) = B·(∇×A) – ∇·(A×B).
³¹ The transfer to the Gaussian units in Eqs. (57) may be accomplished by the usual replacement μ0 → 4π, thus giving, in particular, u = B²/8π.


$$ U = \frac{\mu_0}{4\pi}\,\frac{1}{2}\sum_{k,k'} I_k I_{k'} \oint_{l_k}\oint_{l_{k'}} \frac{d\mathbf{r}_k\cdot d\mathbf{r}_{k'}}{\left|\mathbf{r}_k-\mathbf{r}_{k'}\right|}, \tag{5.58} $$

where lk is the full length of the kth wire loop. Note that Eq. (58) is valid if all currents Ik are independent of each other, because the double sum counts each current pair twice, compensating the coefficient ½ in front of the sum. It is useful to decompose this relation as

$$ U = \frac{1}{2}\sum_{k,k'} I_k I_{k'} L_{kk'}, \tag{5.59} $$

where the coefficients Lkk′ are independent of the currents:

$$ L_{kk'} = \frac{\mu_0}{4\pi}\oint_{l_k}\oint_{l_{k'}} \frac{d\mathbf{r}_k\cdot d\mathbf{r}_{k'}}{\left|\mathbf{r}_k-\mathbf{r}_{k'}\right|}. \tag{5.60} $$

The coefficient Lkk′ with k ≠ k′ is called the mutual inductance between the kth and k′th current loops, while the diagonal coefficient Lk ≡ Lkk is called the self-inductance (or just inductance) of the kth loop.³² From the symmetry of Eq. (60) with respect to the index swap k ↔ k′, it is evident that the matrix of coefficients Lkk′ is symmetric:³³

$$ L_{kk'} = L_{k'k}, \tag{5.61} $$

so for the practically most important case of two interacting currents I1 and I2, Eq. (59) reads

$$ U = \frac{1}{2}L_1 I_1^2 + M I_1 I_2 + \frac{1}{2}L_2 I_2^2, \tag{5.62} $$

where M ≡ L12 = L21 is the mutual inductance coefficient. These formulas clearly show the importance of the self- and mutual inductances, so I will demonstrate their calculation for at least a few basic geometries. Before doing that, however, let me recast Eq. (58) into one more form that may facilitate such calculations. Namely, let us notice that for the magnetic field induced by the current Ik in a thin wire, Eq. (28) is reduced to

$$ \mathbf{A}_k(\mathbf{r}) = \frac{\mu_0}{4\pi} I_k \oint_{l_k} \frac{d\mathbf{r}_k}{\left|\mathbf{r}-\mathbf{r}_k\right|}, \tag{5.63} $$

so Eq. (58) may be rewritten as

$$ U = \frac{1}{2}\sum_{k,k'} I_k \oint_{l_k} \mathbf{A}_{k'}(\mathbf{r}_k)\cdot d\mathbf{r}_k. \tag{5.64} $$

But according to the same Stokes theorem that was used earlier in this chapter to derive the Ampère law, and Eq. (27), the integral in Eq. (64) is nothing else than the magnetic field's flux Φ (more frequently called just the magnetic flux) through a surface S limited by the contour l:

³² As evident from Eq. (60), these coefficients depend only on the geometry of the system. Moreover, in the Gaussian units, in which Eq. (60) is valid without the factor μ0/4π, the inductance coefficients have the dimension of length (centimeters). The SI unit of inductance is called the henry, abbreviated H – after Joseph Henry, who in particular discovered the effect of electromagnetic induction (see Sec. 6.1) independently of Michael Faraday.
³³ Note that the matrix of the mutual inductances Ljj′ is very much similar to the matrix of the reciprocal capacitance coefficients pkk′ – for example, compare Eq. (62) with Eq. (2.21).

$$ \oint_l \mathbf{A}(\mathbf{r})\cdot d\mathbf{r} = \int_S \left(\nabla\times\mathbf{A}\right)_n d^2r = \int_S B_n\,d^2r \equiv \Phi \tag{5.65} $$

– in the particular case of Eq. (64), the flux Φkk′ of the field induced by the k′th current through the loop of the kth current.³⁴ As a result, Eq. (64) may be rewritten as

$$ U = \frac{1}{2}\sum_{k,k'} I_k \Phi_{kk'}. \tag{5.66} $$

Comparing this expression with Eq. (59), we see that

$$ \Phi_{kk'} = \int_{S_k} \left(B_{k'}\right)_n d^2r = L_{kk'} I_{k'}. \tag{5.67} $$

This expression not only gives us one more means for calculating the coefficients Lkk’, but also shows their physical sense: the mutual inductance characterizes what part of the magnetic flux (colloquially, “what fraction of field lines”) induced by the current Ik' pierces the kth loop’s area Sk – see Fig. 7.
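The Neumann formula (60) is also directly computable. The sketch below (Python; the ring radii, separation, and discretization are arbitrary test values, not from the text) evaluates the double contour integral for two coaxial circular loops by a double midpoint sum, and compares the result with the magnetic-dipole estimate that follows from Eq. (25) at large separations, M ≈ μ0πR1²R2²/2d³.

```python
import math

MU0 = 4 * math.pi * 1e-7  # SI magnetic constant

def neumann_coaxial(R1, R2, d, n=400):
    """Eq. (60) for two coaxial circular loops whose planes are a distance d apart
    (double midpoint sum over the two azimuthal angles)."""
    dphi = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        for k in range(n):
            delta = (i - k) * dphi  # only the angle difference matters
            # dr1 . dr2 = R1 R2 cos(delta) dphi^2; |r1 - r2| from the law of cosines
            dist = math.sqrt(R1 * R1 + R2 * R2 - 2 * R1 * R2 * math.cos(delta) + d * d)
            total += R1 * R2 * math.cos(delta) * dphi * dphi / dist
    return MU0 / (4 * math.pi) * total

R1 = R2 = 0.1
d = 2.0  # separation much larger than the radii
M_num = neumann_coaxial(R1, R2, d)
M_dipole = MU0 * math.pi * R1 ** 2 * R2 ** 2 / (2 * d ** 3)
print(M_num, M_dipole)  # the two agree to about 1% at this separation
```

The sum is also manifestly symmetric under R1 ↔ R2, in agreement with Eq. (61).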

Fig. 5.7. The physical sense of the mutual inductance coefficient Lkk′ ≡ Φkk′/Ik′ – schematically.

Due to the linear superposition principle, the total flux piercing the kth loop may be represented as

$$ \Phi_k \equiv \sum_{k'} \Phi_{kk'} = \sum_{k'} L_{kk'} I_{k'}. \tag{5.68} $$

For example, for the system of two currents, this expression is reduced to a clear analog of Eqs. (2.19):

$$ \Phi_1 = L_1 I_1 + M I_2, \qquad \Phi_2 = M I_1 + L_2 I_2. \tag{5.69} $$

For the even simpler case of a single current,

$$ \Phi = L I, \tag{5.70} $$

so the magnetic energy of the current may be represented in several equivalent forms:

³⁴ The SI unit of magnetic flux is called the weber, abbreviated Wb – after Wilhelm Edward Weber (1804-1891), who in particular co-invented (with Carl Gauss) the electromagnetic telegraph. More importantly for this course, in 1856 he was the first (together with Rudolf Kohlrausch) to notice that the value of (in modern terms) 1/(ε0μ0)^1/2, derived from electrostatic and magnetostatic measurements, coincides with the independently measured speed of light c. This observation gave an important motivation for Maxwell's theory.

$$ U = \frac{L}{2} I^2 = \frac{1}{2} I\Phi = \frac{\Phi^2}{2L}. \tag{5.71} $$

These relations, similar to Eqs. (2.14)-(2.15) of electrostatics, show that the self-inductance L of a current loop may be considered a measure of the system's magnetic energy. However, as we will see in Sec. 6.1, this measure is adequate only if the flux Φ, rather than the current I, is fixed.

Now we are well equipped for the calculation of inductance coefficients for particular systems, having three options. The first one is to use Eq. (60) directly.³⁵ The second one is to calculate the magnetic field energy from Eq. (57) as the function of all currents Ik in the system, and then use Eq. (59) to find all coefficients Lkk′. For example, for a system with just one current, Eq. (71) yields

$$ L = \frac{U}{I^2/2}. \tag{5.72} $$

Finally, if the system consists of thin wires, so the loop areas Sk and hence the fluxes Φkk′ are well defined, we may calculate them from Eq. (65), and then use Eq. (67) to find the inductances. Usually, the third option is simpler, but the first two may be very useful even for thin-wire systems, if the notion of magnetic flux in them is not quite apparent.

As an important example, let us find the self-inductance of a long solenoid – see Fig. 6a again. We have already calculated the magnetic field inside it – see Eq. (40) – so, due to the field uniformity, the magnetic flux piercing each turn of the wire is just

$$ \Phi_1 = BA = \mu_0 n I A, \tag{5.73} $$

where A is the area of the solenoid's cross-section – for example πR² for a round solenoid, though Eq. (40), and hence Eq. (73), are valid for cross-sections of any shape. Comparing Eq. (73) with Eq. (70), one might wrongly conclude that L = Φ1/I = μ0nA (WRONG!), i.e. that the solenoid's inductance is independent of its length. Actually, the magnetic flux Φ1 pierces each wire turn, so the total flux through the whole current loop, consisting of N turns, is

$$ \Phi = N\Phi_1 = \mu_0 n^2 l A I, \tag{5.74} $$

and the correct expression for the long solenoid's self-inductance is

$$ L = \frac{\Phi}{I} = \mu_0 n^2 l A \equiv \frac{\mu_0 N^2 A}{l}, \tag{5.75} $$

i.e. at fixed A and l, the inductance scales as N², not as N. Since this reasoning may seem not quite evident, it is prudent to verify this result by using Eq. (72), with the full magnetic energy inside the solenoid (neglecting minor fringe field contributions), given by Eq. (57) with B = const within the internal volume V = lA, and zero outside of it:

$$ U = \frac{1}{2\mu_0} B^2 A l = \frac{1}{2\mu_0}\left(\mu_0 n I\right)^2 A l = \mu_0 n^2 l A\,\frac{I^2}{2}. \tag{5.76} $$

Plugging this relation into Eq. (72) immediately confirms the result (75). 35
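As a numerical illustration (the coil parameters below are hypothetical, chosen only for a round-number estimate), the flux route of Eq. (75) and the energy route of Eqs. (72) and (76) can be coded side by side; they must agree identically:

```python
import math

MU0 = 4 * math.pi * 1e-7  # SI magnetic constant

def L_flux(N, length, area):
    """Eq. (75): the total flux linkage N * Phi_1 divided by the current."""
    return MU0 * N ** 2 * area / length

def L_energy(N, length, area, current=1.0):
    """Eq. (72) with the energy (76): U = B^2 V / (2 mu0), B = mu0 n I, V = A l."""
    n = N / length
    B = MU0 * n * current
    U = B ** 2 * (area * length) / (2 * MU0)
    return U / (current ** 2 / 2)

N, length, radius = 1000, 0.2, 0.01  # hypothetical: 1000 turns, 20 cm long, 1 cm radius
area = math.pi * radius ** 2
print(L_flux(N, length, area))       # about 2.0e-3 H, i.e. roughly 2 mH
```

Note how the (WRONG!) guess L = μ0nA would give a value N times smaller, only about 2 μH for this coil.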

³⁵ Numerous applications of this Neumann formula (derived in 1845 by F. Neumann) to electrical engineering problems may be found, for example, in the classical text by F. Grover, Inductance Calculations, Dover, 1946.


This energy-based approach becomes virtually inevitable for continuously distributed currents. As an example, let us calculate the self-inductance L of a long coaxial cable with the cross-section shown in Fig. 8,³⁶ and the full current in the outer conductor equal and opposite to that (I) in the inner conductor.

Fig. 5.8. The cross-section of a coaxial cable.

Let us assume that the current is uniformly distributed over the cross-sections of both conductors. (As we know from the previous chapter, this is indeed the case if both the internal and external conductors are made of a uniform resistive material.) First, we should calculate the radial distribution of the magnetic field – which has only one, azimuthal component because of the axial symmetry of the problem. This distribution may be immediately found applying the Ampère law (37) to circular contours of radii ρ within four different ranges:

$$ 2\pi\rho B = \mu_0 I_{\text{piercing the contour}} = \mu_0 I \times \begin{cases} \rho^2/a^2, & \text{for } \rho\le a,\\ 1, & \text{for } a\le\rho\le b,\\ \dfrac{c^2-\rho^2}{c^2-b^2}, & \text{for } b\le\rho\le c,\\ 0, & \text{for } c\le\rho. \end{cases} \tag{5.77} $$

Now, an easy integration yields the magnetic energy per unit length of the cable: 2 2 2 b c   c2   2  0 I 2  a    1   2 1 U 2 2       d         B d r B d d d        a    b   (c 2  b 2 )   0 0 2 0  4  0  a 2  l   0  b c2  c2 c 1  I 2  ln   .  ln  2  a c 2  b 2  c 2  b 2 b 2  2

(5.78)
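The closed-form bracket in Eq. (78) can be cross-checked by integrating the piecewise field (77) numerically; a quick sketch (the radii a, b, c are arbitrary sample values, not from the text):

```python
# Numerical cross-check of Eq. (5.78): integrate the piecewise field (5.77)
# of the coaxial cable and compare with the closed-form bracket
# ln(b/a) + c^2/(c^2-b^2) * (c^2/(c^2-b^2) * ln(c/b) - 1/2).
# Radii are arbitrary sample values.
from math import log

a, b, c = 1.0, 2.0, 3.0

def g(rho):
    """Dimensionless field profile: B = (mu0 I / 2 pi rho) * g(rho), Eq. (5.77)."""
    if rho <= a:
        return rho**2 / a**2
    if rho <= b:
        return 1.0
    if rho <= c:
        return (c**2 - rho**2) / (c**2 - b**2)
    return 0.0

# U/l = (mu0 I^2 / 4 pi) * integral of g(rho)^2 / rho drho  (midpoint rule)
n_steps, numeric = 200_000, 0.0
h = c / n_steps
for i in range(n_steps):
    rho = (i + 0.5) * h
    numeric += g(rho)**2 / rho * h

closed = log(b / a) + c**2 / (c**2 - b**2) * (c**2 / (c**2 - b**2) * log(c / b) - 0.5)
assert abs(numeric - closed) < 1e-4
```

Note that the 1/4 contributed by the inner conductor is exactly cancelled in the closed form, which is why it does not appear in the final bracket.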

From here, and Eq. (72), we get the final answer:

L 0  b c2  c2 c 1   ln   . (5.79)  ln  2 2  2 2 l 2  a c  b  c  b b 2  Note that for the particular case of a thin outer conductor, c – b 0, i. e.  > 0) or diamagnets (with m < 0, i.e.  < 0). Table 5.1. Susceptibility (m)SI of a few representative and/or important magnetic materials(a)

“Mu-metal” (75% Ni + 15% Fe + a few %% of Cu and Mo)

~20,000(b)

Permalloy (80% Ni + 20% Fe)

~8,000(b)

“Electrical” (or “transformer”) steel (Fe + a few %% of Si)

~4,000(b)

Nickel

~100

Aluminum

+210-5

Oxygen (at ambient conditions)

+0.210-5

Water

–0.910-5

Diamond

–210-5

Copper

–710-5

Bismuth (the strongest non-superconducting diamagnet)

–1710-5

(a)

The table does not include bulk superconductors, which may be described, in a so-called coarse-scale approximation, as perfect diamagnets (with B = 0, i.e. formally with m = –1 and  = 0), though the actual physics of this phenomenon is different – see Sec. 6.3 below. (b) The exact values of m >> 1 for soft ferromagnetic materials (see, e.g., the upper three rows of the table) depend not only on their composition but also on their thermal processing (“annealing”). Moreover, due to unintentional vibrations, the extremely high values of m of such materials may decay with time, though they may be restored to the original values by new annealing. The reason for such behavior is discussed in the text below.

The reason for this difference is that in dielectrics, two different polarization mechanisms (schematically illustrated by Fig. 3.7) lead to the same sign of the average polarization – see the discussion in Sec. 3.3. One of these mechanisms, illustrated by Fig. 3.7b, i.e. the ordering of spontaneous dipoles by the applied field, is also possible for magnetization – for the atoms and molecules with spontaneous internal magnetic dipoles of magnitude m0 ~ μB, due to their net spins. Again, in the absence of an external magnetic field the spins, and hence the dipole moments m0, may be disordered, but according to Eq. (100), the external magnetic field tends to align the dipoles along its direction. As a result, the average direction of the spontaneous elementary moments m0, and hence the direction of the arising magnetization M, is the same as that of the microscopic field B at the points of the dipole location (i.e., for diluted media, of H ≈ B/μ0), resulting in a positive susceptibility χm, i.e. in the paramagnetism, such as that of oxygen and aluminum – see Table 1.⁴⁸

⁴⁸ This fact also explains the misleading term "magnetic field" for H.


However, in contrast to the electric polarization of atoms/molecules with no spontaneous electric dipoles, which gives the same sign of χe ≡ κ – 1 (see Fig. 3.7a and its discussion), the magnetic materials with no spontaneous atomic magnetic dipole moments have χm < 0 – the effect called the orbital (or "Larmor"⁴⁹) diamagnetism. As the simplest model of this effect, let us consider the orbital motion of an atomic electron about an atomic nucleus as that of a classical particle of mass m0, with an electric charge q, about an immobile attracting center. As classical mechanics tells us, the central attractive force does not change the particle's angular momentum L ≡ m0 r × v, but the applied magnetic field B (that may be taken uniform on the atomic scale) does, due to the torque (101) it exerts on the magnetic moment (95):

$$\frac{d\mathbf{L}}{dt} = \boldsymbol{\tau} = \mathbf{m}\times\mathbf{B} = \frac{q}{2m_0}\,\mathbf{L}\times\mathbf{B} . \tag{5.114}$$

The diagram in Fig. 13 shows that in the limit of a relatively weak field, when the magnitude of the angular momentum L may be considered constant, this equation describes the rotation (called the torque-induced precession⁵⁰) of the vector L about the direction of the vector B, with the angular frequency Ω = –qB/2m0, independent of the angle Θ. According to Eqs. (91) and (114), the resulting additional (field-induced) magnetic moment Δm ∝ qΩ ∝ –q²B/m0 has, irrespectively of the sign of q, a direction opposite to the field. Hence, according to Eq. (111) with H ≈ B/μ0, the susceptibility χm ∝ Δm/H is indeed negative. (Let me leave its quantitative estimate within this classical model for the reader's exercise.) The quantum-mechanical treatment confirms this qualitative picture of the Larmor diamagnetism, giving only quantitative corrections to the classical result for χm.⁵¹

Fig. 5.13. The torque-induced precession of a classical charged particle in a magnetic field.
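Eq. (114) can also be integrated numerically to confirm the precession picture: for B along the z-axis, the transverse part of L rotates at the frequency qB/2m0 while |L| and the tilt angle stay fixed. A minimal sketch (all parameter values are arbitrary, chosen so the frequency equals 1 rad/s):

```python
# Numerical check of the precession equation (5.114): dL/dt = (q/2m0) L x B.
# For B along z this rotates the transverse part of L at frequency qB/2m0,
# leaving |L| and L_z unchanged. Parameters are arbitrary sample values.
import math

q, m0, B = 2.0, 1.0, 1.0
w = q * B / (2 * m0)              # precession frequency (here 1.0 rad/s)

Lx, Ly, Lz = 1.0, 0.0, 1.0        # initial L, tilted 45 degrees from B
dt = 1e-5
steps = int(math.pi / (w * dt))   # integrate over half a period

for _ in range(steps):
    # (L x B) = (Ly*B, -Lx*B, 0) for B = B e_z; explicit Euler step
    Lx, Ly = Lx + dt * w * Ly, Ly - dt * w * Lx

# After half a period the transverse component has rotated by 180 degrees,
# while L_z and (to integration accuracy) |L| are unchanged:
assert abs(Lx + 1.0) < 1e-3 and abs(Ly) < 1e-3
assert abs(Lx * Lx + Ly * Ly + Lz * Lz - 2.0) < 1e-3
```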

A simple estimate (also left for the reader's exercise) shows that in atoms with spontaneous nonzero net spins, the magnetic dipole orientation mechanism prevails over the orbital diamagnetism, so the materials incorporating such atoms usually exhibit net paramagnetism – see Table 1. Due to possible strong quantum interaction between the spin dipole moments, the magnetism of such materials is rather complex, with numerous interesting phenomena and elaborate theories. Unfortunately, all this physics is well outside the framework of this course, and I have to refer the interested reader to special literature,⁵² but still will mention some key facts.

⁴⁹ Named after Sir Joseph Larmor, who was the first (in 1897) to describe this effect mathematically.
⁵⁰ For a detailed discussion of this effect see, e.g., CM Sec. 4.5.
⁵¹ See, e.g., QM Sec. 6.4. Quantum mechanics also explains why in most common (s-) ground states, the average contribution (95) of the orbital angular momentum L to the net vector m vanishes.
⁵² See, e.g., D. J. Jiles, Introduction to Magnetism and Magnetic Materials, 2nd ed., CRC Press, 1998, or R. C. O'Handley, Modern Magnetic Materials, Wiley, 1999.


Most importantly, a sufficiently strong magnetic dipole-dipole interaction may lead to their spontaneous ordering, even in the absence of the applied field. This ordering may correspond to either parallel alignment of the dipoles (ferromagnetism) or anti-parallel alignment of the adjacent dipoles (antiferromagnetism). Evidently, the external effects of ferromagnetism are stronger, because this phase corresponds to a substantial spontaneous magnetization M even in the absence of an external magnetic field. (The corresponding magnitude of B = μ0M is called the remanence field, BR.) The direction of the vector BR may be switched by the application of an external magnetic field, with a magnitude above a certain value HC called coercivity, leading to the well-known hysteretic loops on the [H, B] plane (see Fig. 14 for a typical example) – similar to those in ferroelectrics, already discussed in Sec. 3.3.

Fig. 5.14. Experimental magnetization curves of specially processed (cold-rolled) electrical steel – a solid solution of ~10% C and ~6% Si in Fe. (Reproduced from www.thefullwiki.org/Hysteresis under the Creative Commons BY-SA 3.0 license.)

Just as the ferroelectrics, the ferromagnets may also be hard or soft – in the magnetic rather than mechanical sense. In hard ferromagnets (also called permanent magnets), the dipole interaction is so strong that B stays close to BR in all applied fields below HC, so the hysteretic loops are virtually rectangular. Hence, in lower fields, the magnetization M of a permanent magnet may be considered constant, with the magnitude BR/μ0. Such hard ferromagnetic materials (notably, rare-earth compounds such as SmCo5, Sm2Co17, and especially Nd2Fe14B), with high remanence fields (~1 T) and high coercivity (~10⁶ A/m), have numerous practical applications.⁵³ Let me give just two, most important examples.

First, permanent magnets are the core components of most electric motors. By the way, this venerable (~150-year-old) technology is currently experiencing a quiet revolution, driven mostly by electric car development. In the most advanced type of motors, called permanent-magnet synchronous machines (PMSM), the remanence magnetic field BR of a permanent-magnet rotating part of the machine (called the rotor) interacts with the magnetic field of ac currents passed through wire windings in the external, static part of the motor (called the stator). The resulting torque may drive the rotor to extremely high speeds, exceeding 10,000 rotations per minute, enabling the motor to deliver several kilowatts of mechanical power from each kilogram of its mass.

As the second important example, despite the decades of the exponential (Moore's-law) progress of semiconductor electronics, most computer data storage systems (e.g., in data centers) are still based

⁵³ Currently, the neodymium-iron-boron compound holds nearly 95% of the world permanent-magnet application market, due to its combination of high BR and HC with lower fabrication costs.


on hard disk drives whose active media are submicron-thin layers of hard ferromagnets, with the data bits stored in the form of the direction of the remanent magnetization of small film spots. This technology has reached fantastic sophistication, with the recorded data density of the order of 10¹² bits per square inch.⁵⁴ (Only recently it started to be seriously challenged by solid-state drives based on the floating-gate semiconductor memories already mentioned in Chapter 3.)⁵⁵

In contrast, in soft ferromagnets, with their lower magnetic dipole interactions, the magnetization is constant only inside each of the spontaneously formed magnetic domains, while the volume and shape of the domains are affected by the applied magnetic field. As a result, the hysteresis loop's shape of soft ferromagnets depends on the cycled field's amplitude and cycling history – see Fig. 14. At high fields, their B (and hence M) is driven into saturation, with B ≈ BR, but at low fields, they behave essentially as linear magnetics with very high values of χm and hence μ – see the top rows of Table 1. (The magnetic domain interaction, and hence the low-field susceptibility of such soft ferromagnets, are highly dependent on the material's fabrication technology and its post-fabrication thermal and mechanical treatments.) Due to these high values of μ, soft ferromagnets, especially iron and its alloys (e.g., various special steels), are extensively used in electrical engineering – for example in the cores of transformers – see the next section.

Due to the relative weakness of the magnetic dipole interaction in some materials, their ferromagnetic ordering may be destroyed by thermal fluctuations, if the temperature is increased above some value called the Curie temperature TC, specific for each material.
The transition between the ferromagnetic and paramagnetic phases at T = TC is a classical example of a continuous phase transition, with the average polarization M playing the role of the so-called order parameter that (in the absence of external fields) becomes different from zero only at T < TC, increasing gradually at the further temperature reduction.⁵⁶

5.6. Systems with magnetic materials

Just as the electrostatics of linear dielectrics, the magnetostatics is very simple in the particular case when all essential stand-alone currents are embedded into a linear magnetic medium with a constant permeability μ. Indeed, let us assume that we know the solution B0(r) of the magnetic pair of

⁵⁴ "A magnetic head slider [the read/write head – KKL] flying over a disk surface with a flying height of 25 nm with a relative speed of 20 meters/second [all realistic parameters – KKL] is equivalent to an aircraft flying at a physical spacing of 0.2 µm at 900 kilometers/hour." B. Bhushan, as quoted in the (generally good) book by G. Hadjipanayis, Magnetic Storage Systems Beyond 2000, Springer, 2001.
⁵⁵ The high-frequency properties of hard ferromagnets are also very non-trivial. For example, according to Eq. (101), an external magnetic field Bext exerts torque τ = M×Bext on the spontaneous magnetic moment M of a unit volume of a ferromagnet. In some nearly-isotropic, mechanically fixed ferromagnetic samples, this torque causes the precession, around the direction of Bext (very similar to that illustrated in Fig. 13), of not the sample as such, but of the magnetization M inside it, with a certain frequency ωr. If the frequency ω of an additional ac field becomes very close to ωr, its absorption sharply increases – the so-called ferromagnetic resonance. Moreover, if ω is somewhat higher than ωr, the effective magnetic permeability μ(ω) of the material for the ac field may become negative, enabling a series of interesting effects and practical applications. Very unfortunately, I could not find time for their discussion in this series and have to refer the interested reader to literature, for example the monograph by A. Gurevich and G. Melkov, Magnetization Oscillations and Waves, CRC Press, 1996.
⁵⁶ In this series, a quantitative discussion of such transitions is given in SM Chapter 4.


the genuine ("microscopic") Maxwell equations (36) in free space, i.e. when the genuine current density j coincides with that of stand-alone currents. Then the macroscopic Maxwell equations (109) and the linear constitutive equation (110) are satisfied with the pair of functions

$$\mathbf{H}(\mathbf{r}) = \frac{\mathbf{B}_0(\mathbf{r})}{\mu_0}, \qquad \mathbf{B}(\mathbf{r}) = \mu\mathbf{H}(\mathbf{r}) = \frac{\mu}{\mu_0}\,\mathbf{B}_0(\mathbf{r}) . \tag{5.115}$$

Hence the only effect of the complete filling of a fixed-current system with a uniform, linear magnetic medium is the change of the magnetic field B at all points by the same constant factor μ/μ0 ≡ 1 + χm, which may be either larger or smaller than 1. (As a reminder, a similar filling of a system of fixed stand-alone charges with a uniform, linear dielectric always leads to a reduction of the electric field E by a factor of ε/ε0 ≡ 1 + χe – the difference whose physics was already discussed at the end of Sec. 4.)

However, this simple result is generally invalid in the case of nonuniform (or piecewise-uniform) magnetic samples. To analyze this case, let us first integrate the macroscopic Maxwell equation (107) along a closed contour C limiting a smooth surface S. Now using the Stokes theorem just as at the derivation of Eq. (37), we get the macroscopic version of the Ampère law (37):

Macroscopic Ampère law:

$$\oint_C \mathbf{H}\cdot d\mathbf{r} = I . \tag{5.116}$$

Let us apply this relation to a sharp boundary between two regions with different magnetic materials, with no stand-alone currents on the interface, similarly to how this was done for the field E in Sec. 3.4 – see Fig. 3.5. The result is similar as well:

$$H_\tau = \text{const} . \tag{5.117}$$

On the other hand, the integration of the Maxwell equation (29) over a Gaussian pillbox enclosing a border fragment (again just as shown in Fig. 3.5 for the field D) yields a result similar to Eq. (3.35):

$$B_n = \text{const} . \tag{5.118}$$

For linear magnetic media, with B = μH, the latter boundary condition is reduced to

$$\mu H_n = \text{const} . \tag{5.119}$$
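As a side illustration (not part of the original argument), the pair of conditions (117)–(118) implies the standard "refraction" law for field lines at an interface between two linear media, tan θ1/tan θ2 = μ1/μ2, with θ measured from the interface normal. A minimal numeric sketch with assumed permeabilities:

```python
# Boundary conditions (5.117)-(5.118) in action: H_tau and B_n are
# continuous across an interface between two linear media, which yields
# tan(theta_1)/tan(theta_2) = mu_1/mu_2 for the field-line angles
# (measured from the normal). The mu values below are assumptions.
import math

mu0 = 4e-7 * math.pi
mu1, mu2 = mu0, 4000 * mu0          # e.g. vacuum and "electrical" steel

# Field in medium 1, at 45 degrees to the normal (normal along x):
B1n, B1t = 1.0, 1.0                 # tesla
H1t = B1t / mu1

# Apply the boundary conditions:
B2n = B1n                           # Eq. (5.118)
H2t = H1t                           # Eq. (5.117)
B2t = mu2 * H2t

theta1 = math.atan2(B1t, B1n)
theta2 = math.atan2(B2t, B2n)
assert abs(math.tan(theta1) / math.tan(theta2) - mu1 / mu2) < 1e-12
# Inside the high-mu medium the field line bends almost parallel to the surface:
assert theta2 > math.radians(89.9)
```

This strong bending of field lines into high-μ material is the microscopic reason for the field concentration discussed next.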

Let us use these boundary conditions, first of all, to see what happens with a long cylindrical sample of a uniform magnetic material, placed parallel to a uniform external magnetic field B0 – see Fig. 15. Such a sample cannot noticeably disturb the field in the free space outside it, at most of its length: Bext = B0, Hext = Bext/μ0 = B0/μ0. Now applying Eq. (117) to the dominating surfaces of the sample, we get Hint = H0.⁵⁷ For a linear magnetic material, these relations yield Bint = μHint = (μ/μ0)B0.⁵⁸ For high-μ media, this means that Bint >> B0. This effect may be vividly represented as the concentration of the magnetic field lines in high-μ samples – see Fig. 15 again. (The concentration affects the external field

⁵⁷ The independence of H of the magnetic properties of the sample in this geometry explains why this field's magnitude is commonly used as the argument in plots like Fig. 14: such measurements are typically carried out by placing an elongated sample of the material under study into a long solenoid with a controllable current I, so according to Eq. (116), H0 = nI, regardless of the sample.
⁵⁸ The reader is highly encouraged to carry out a similar analysis of the fields inside narrow gaps cut in a linear magnetic material, similar to that carried out in Sec. 3.3 for linear dielectrics – see Fig. 3.6 and its discussion.


distribution only at distances of the order of (μ/μ0)t.) […] This minimizes the number of "stray" field lines, and makes the magnetic flux Φ piercing each wire turn of either coil virtually the same – the equality important for the secondary voltage induction – see the next chapter.

Fig. 5.15. Magnetic field concentration in long, high-μ magnetic samples (schematically): a sample of thickness t << l in an external field B0, with the internal field B ≈ (μ/μ0)B0 >> B0.

Samples of other geometries may create strong perturbations of the external field, extended to distances of the order of the sample's dimensions. To analyze such problems, we may benefit from a simple, partial differential equation for a scalar function, e.g., the Laplace equation, because in Chapter 2 we have learned how to solve it for many simple geometries. In magnetostatics, the introduction of a scalar potential is generally impossible due to the vortex-like magnetic field lines. However, if there are no stand-alone currents within the region we are interested in, then the macroscopic Maxwell equation (107) for the field H is reduced to ∇×H = 0, similar to Eq. (1.28) for the electric field, showing that we may introduce the scalar potential of the magnetic field, φm, using a relation similar to Eq. (1.33):

$$\mathbf{H} = -\nabla\phi_m . \tag{5.120}$$

Combining it with the homogeneous Maxwell equation (29) for the magnetic field, ∇·B = 0, and Eq. (110) for a linear magnetic material, we arrive at a single differential equation, ∇·(μ∇φm) = 0. For a uniform medium (μ(r) = const), it is reduced to our beloved Laplace equation:

$$\nabla^2\phi_m = 0 . \tag{5.121}$$

Moreover, Eqs. (117) and (119) give us very familiar boundary conditions: the first of them,

$$\frac{\partial\phi_m}{\partial\tau} = \text{const} , \tag{5.122a}$$

being equivalent to

$$\phi_m = \text{const} , \tag{5.122b}$$

while the second one giving

$$\mu\,\frac{\partial\phi_m}{\partial n} = \text{const} . \tag{5.123}$$

Indeed, these boundary conditions are absolutely similar to Eqs. (3.37) and (3.56) of electrostatics, with the replacement ε → μ.⁵⁹

⁵⁹ This similarity may seem strange because earlier we have seen that the parameter μ is physically more similar to 1/ε. The reason for this paradox is that in magnetostatics, the magnetic potential φm is traditionally used to


Let us analyze the geometric effects on magnetization, first using the (too?) familiar structure: a sphere, made of a linear magnetic material, placed into a uniform external field H0 ≡ B0/μ0. Since the differential equation and the boundary conditions are similar to those of the corresponding electrostatics problem (see Fig. 3.11 and its discussion), we can use the above analogy to reuse the solution we already have – see Eqs. (3.63). Just as in the electric case, the field outside the sphere, with

$$\phi_m\big|_{r\ge R} = H_0\left(-r + \frac{\mu-\mu_0}{\mu+2\mu_0}\,\frac{R^3}{r^2}\right)\cos\theta , \tag{5.124}$$

is a sum of the uniform external field H0, with the potential –H0 r cos θ ≡ –H0 z, and the dipole field (99) with the following induced magnetic dipole moment of the sphere:⁶⁰

$$\mathbf{m} = 4\pi\,\frac{\mu-\mu_0}{\mu+2\mu_0}\,R^3\,\mathbf{H}_0 . \tag{5.125}$$

On the contrary, the internal field is perfectly uniform, and directed along the external one:

$$\phi_m\big|_{r\le R} = -H_0\,\frac{3\mu_0}{\mu+2\mu_0}\,r\cos\theta , \quad\text{so that}\quad \frac{H_{\rm int}}{H_0} = \frac{3\mu_0}{\mu+2\mu_0}, \qquad \frac{B_{\rm int}}{B_0} = \frac{\mu H_{\rm int}}{\mu_0 H_0} = \frac{3\mu}{\mu+2\mu_0} . \tag{5.126}$$

Note that the field Hint inside the sphere is not equal to the applied external field H0. This example shows that the interpretation of H as the "would-be" magnetic field generated by external stand-alone currents j should not be exaggerated by saying that its distribution is independent of the magnetic bodies in the system. In the limit μ >> μ0, Eqs. (126) yield Hint/H0 << 1, while Bint > B0, as was discussed above – see Fig. 15.

Now let us calculate the field distribution in a similar, but slightly more complex (and practically important) system: a round cylindrical shell, made of a linear magnetic material, placed into a uniform external field H0 normal to its axis – see Fig. 16.

Fig. 5.16. Cylindrical magnetic shield (inner radius a, outer radius b), in the polar coordinates x = ρ cos φ, y = ρ sin φ.

describe the "would-be field" H, while in electrostatics, the potential φ describes the actual electric field E. (This tradition persists from the days when H was perceived as a genuine magnetic field.)
⁶⁰ To derive Eq. (125), we may either calculate the gradient of the φm given by Eq. (124), or use the similarity of Eqs. (3.13) and (99) to derive from Eq. (3.17) a similar expression for the magnetic dipole's potential:

$$\phi_m = \frac{1}{4\pi}\,\frac{m\cos\theta}{r^2} .$$

Now comparing this formula with the second term of Eq. (124), we immediately get Eq. (125).

Since there are no stand-alone currents in the region of our interest, we can again represent the field H(r) by the gradient of the magnetic potential φm – see Eq. (120). Inside each of the three constant-μ regions, i.e. at ρ < a, a < ρ < b, and b < ρ (where ρ is the 2D distance from the cylinder's axis), the potential obeys the Laplace equation (121). In the convenient polar coordinates (see Fig. 16), we may, guided by the general solution (2.112) of the Laplace equation and our experience in its application to axially-symmetric geometries, look for φm in the following form:

$$\phi_m = \begin{cases} \left(-H_0\,\rho + b_1'/\rho\right)\cos\varphi , & \text{for } b \le \rho , \\ \left(a_1\rho + b_1/\rho\right)\cos\varphi , & \text{for } a \le \rho \le b , \\ -H_{\rm int}\,\rho\cos\varphi , & \text{for } \rho \le a . \end{cases} \tag{5.127}$$

Plugging this solution into the boundary conditions (122)-(123) at both interfaces (ρ = b and ρ = a), we get the following system of four equations:

$$-H_0 b + b_1'/b = a_1 b + b_1/b , \qquad \mu_0\!\left(-H_0 - b_1'/b^2\right) = \mu\!\left(a_1 - b_1/b^2\right),$$
$$a_1 a + b_1/a = -H_{\rm int}\, a , \qquad \mu\!\left(a_1 - b_1/a^2\right) = -\mu_0 H_{\rm int} , \tag{5.128}$$

for four unknown coefficients a1, b1, b1', and Hint. Solving the system, we get, in particular:

$$\frac{H_{\rm int}}{H_0} = \frac{\lambda_c - 1}{\lambda_c - (a/b)^2}, \qquad\text{with } \lambda_c \equiv \left(\frac{\mu+\mu_0}{\mu-\mu_0}\right)^2 . \tag{5.129}$$
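Eq. (129) can be cross-checked by solving the linear system (128) directly; a small sketch with permalloy-like sample parameters (all values assumed for illustration):

```python
# Cross-check of Eq. (5.129): solve the four equations (5.128) for
# (a1, b1, b1', H_int) by Gaussian elimination and compare H_int/H0 with
# the closed form (lam - 1)/(lam - (a/b)^2), lam = ((mu+mu0)/(mu-mu0))^2.
# Shield parameters are arbitrary sample values.
import math

mu0 = 4e-7 * math.pi
mu, a, b, H0 = 8000 * mu0, 0.05, 0.07, 1.0   # permalloy-like shield

# Unknowns x = (a1, b1, b1p, Hint); Eqs. (5.128) rearranged as M x = r:
M = [[b,    1 / b,       -1 / b,        0.0],   # potential continuity at rho = b
     [mu,  -mu / b**2,    mu0 / b**2,   0.0],   # mu * dphi/drho continuity at b
     [a,    1 / a,        0.0,          a  ],   # potential continuity at rho = a
     [mu,  -mu / a**2,    0.0,          mu0]]   # mu * dphi/drho continuity at a
r = [-H0 * b, -mu0 * H0, 0.0, 0.0]

# Tiny Gaussian elimination with partial pivoting (no external libraries):
for i in range(4):
    p = max(range(i, 4), key=lambda k: abs(M[k][i]))
    M[i], M[p] = M[p], M[i]
    r[i], r[p] = r[p], r[i]
    for k in range(i + 1, 4):
        f = M[k][i] / M[i][i]
        M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
        r[k] -= f * r[i]
x = [0.0] * 4
for i in range(3, -1, -1):
    x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, 4))) / M[i][i]
H_int = x[3]

lam = ((mu + mu0) / (mu - mu0))**2
closed = H0 * (lam - 1) / (lam - (a / b)**2)
assert abs(H_int - closed) < 1e-9 * abs(closed)
```

For these sample numbers Hint/H0 comes out of order 10⁻³, i.e. a roughly thousandfold field suppression inside the shield.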

According to these formulas, at μ > μ0, the field in the free space inside the cylinder is lower than the external field. This fact allows using such structures, made of high-μ materials such as permalloy (see Table 1), for passive shielding⁶¹ from unintentional magnetic fields (e.g., the Earth's field) – the task very important for the design of many physical experiments. As Eq. (129) shows, the larger is μ, the closer is λc to 1, and the smaller is the ratio Hint/H0, i.e. the better is the shielding, for the same a/b ratio. On the other hand, for a given magnetic material, i.e. for a fixed parameter λc, the shielding is improved by making the ratio a/b < 1 smaller, i.e. the shield thicker. On the other hand, as Fig. 16 shows, a smaller a leaves less space for the shielded samples, calling for a compromise.

Note that in the limit μ/μ0 → ∞, both Eq. (126) and Eq. (129), describing different geometries, yield Hint/H0 → 0. Indeed, as it follows from Eq. (119), in this limit the field H tends to zero inside magnetic samples of virtually any geometry. (The formal exception is the longitudinal cylindrical geometry shown in Fig. 15, with t/l → 0, where Hint = H0 for any finite μ, but even in it, the last equality holds only if t/l […]

[…] virtually all field lines are confined to the interior of the core. Then, applying the macroscopic Ampère law (116) to a contour C that follows a magnetic field line inside the core (see, for example, the dashed line in Fig. 17), we get the following approximate expression (exactly valid only in the limit μk/μ0 → ∞, lk²/Ak → ∞):

⁶¹ Another approach to the reduction of undesirable magnetic fields is "active shielding" – the external field's compensation with the counter-field induced by controlled currents in specially designed wire coils.


$$\oint_C \mathbf{H}\cdot d\mathbf{l} = \sum_k H_k l_k = \sum_k \frac{B_k}{\mu_k}\, l_k = NI . \tag{5.130}$$

However, since the magnetic field lines stay in the core, the magnetic flux Φk ≡ BkAk should be the same (≡ Φ) for each section, so Bk = Φ/Ak. Plugging this condition into Eq. (130), we get

Magnetic Ohm law and reluctance:

$$\Phi = \frac{NI}{\sum_k \mathcal{R}_k}, \qquad\text{where } \mathcal{R}_k \equiv \frac{l_k}{\mu_k A_k} . \tag{5.131}$$

Fig. 5.17. Deriving the "magnetic Ohm law" (131): a magnetic core (sections of lengths lk, permeabilities μk >> μ0, and cross-section areas Ak) with an N-turn winding carrying current I; the dashed contour C follows a field line inside the core.
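A minimal worked example of Eq. (131), with all dimensions and permeabilities chosen as illustrative assumptions (not from the text): an iron core with a narrow air gap, where even a millimeter-scale gap dominates the total reluctance because μiron >> μ0:

```python
# A minimal use of the "magnetic Ohm law" (5.131): an iron core of path
# length l_core with a narrow air gap l_gap, driven by an N-turn winding.
# All dimensions and mu values are illustrative assumptions.
import math

mu0 = 4e-7 * math.pi
mu_iron = 4000 * mu0
A = 1e-4                          # common cross-section area, m^2
l_core, l_gap = 0.2, 1e-3         # path lengths, m
N, I = 500, 1.0                   # turns and current

R_core = l_core / (mu_iron * A)   # reluctance of the iron path, Eq. (5.131)
R_gap = l_gap / (mu0 * A)         # reluctance of the air gap
flux = N * I / (R_core + R_gap)   # "magnetic Ohm law"
B = flux / A                      # flux density in the core/gap

# The 1 mm air gap carries ~95% of the total reluctance here:
assert R_gap / (R_core + R_gap) > 0.9
assert 0.0 < B < 1.0              # a plausible sub-tesla field
```

This is exactly the series-resistor analogy described in the next paragraph: the gap plays the role of the largest resistor in the chain.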

Note a close analogy of the first of these equations with the usual Ohm law for several resistors connected in series, with the magnetic flux playing the role of the electric current, while the product NI, the role of the voltage applied to the chain of resistors. This analogy is fortified by the fact that the second of Eqs. (131) is similar to the expression for the resistance R = l/σA of a long, uniform conductor, with the magnetic permeability μ playing the role of the electric conductivity σ. (To sound similar, but still different from the resistance R, the parameter 𝓡 is called reluctance.) This is why Eq. (131) is called the magnetic Ohm law; it is very useful for approximate analyses of systems like ac transformers, magnetic energy storage systems, etc.

Now let me proceed to a brief discussion of systems with permanent magnets. First of all, using the definition (108) of the field H, we may rewrite the Maxwell equation (29) for the field B as

$$\nabla\cdot\mathbf{B} = \mu_0\,\nabla\cdot(\mathbf{H}+\mathbf{M}) = 0 , \qquad\text{i.e. as}\quad \nabla\cdot\mathbf{H} = -\nabla\cdot\mathbf{M} . \tag{5.132}$$

While this relation is general, it is especially convenient in permanent magnets, where the magnetization vector M may be approximately considered field-independent.⁶² In this case, Eq. (132) for H is an exact analog of Eq. (1.27) for E, with the fixed term –∇·M playing the role of the fixed charge density (more exactly, of ρ/ε0). For the scalar potential φm, defined by Eq. (120), this gives the Poisson equation

$$\nabla^2\phi_m = \nabla\cdot\mathbf{M} , \tag{5.133}$$

similar to those solved, for quite a few geometries, in the previous chapters. In the case when M is not only field-independent but also uniform inside a permanent magnet's volume, then the right-hand sides of Eqs. (132) and (133) vanish both inside the volume and in the

⁶² Note that in this approximation, there is no difference between the remanence magnetization MR ≡ BR/μ0 of the magnet and its saturation magnetization MS ≡ lim_{H→∞}[B(H)/μ0 – H].


surrounding free space, and give a non-zero effective charge only on the magnet's surface. Integrating Eq. (132) along a short path normal to the surface and crossing it, we get the following boundary conditions:

$$\Delta H_n \equiv H_n\big|_{\text{in free space}} - H_n\big|_{\text{in magnet}} = M_n = M\cos\theta , \tag{5.134}$$

where θ is the angle between the magnetization vector and the outer normal to the magnet's surface. This relation is an exact analog of Eq. (1.24) for the normal component of the field E, with the effective surface charge density σ (or rather σ/ε0) equal to M cos θ.

This analogy between the magnetic field induced by a fixed, constant magnetization and the electric field induced by surface electric charges enables one to reuse the solutions of quite a few problems considered in Chapters 1-3. Leaving a few such problems for the reader's exercise (see Sec. 7), let me demonstrate the power of this analogy on just two examples specific to magnetic systems. First, let us calculate the force necessary to detach the flat ends of two long, uniform rod magnets, of length l and cross-section area A […]

[…] the right-hand side of Eq. (133) is substantially different from zero only in two relatively small areas at the needle's ends. Integrating the equation over each area, we see that at distances r >> A¹ᐟ² from each end, we may reduce Eq. (132) to

$$\nabla\cdot\mathbf{H} = \frac{q_m}{\mu_0}\left[\delta(\mathbf{r}-\mathbf{r}_+) - \delta(\mathbf{r}-\mathbf{r}_-)\right], \qquad\text{i.e.}\quad \nabla\cdot\mathbf{B} = q_m\left[\delta(\mathbf{r}-\mathbf{r}_+) - \delta(\mathbf{r}-\mathbf{r}_-)\right], \tag{5.137}$$

where r are ends’ positions, and qm  0M0A, with A being the needle’s cross-section area.63 This expression for B is completely similar to Eq. (3.32) for the electric displacement D, for the particular case of two equal and opposite point charges, i.e. with  = q[(r – r+) – (r – r+)], with the only replacement q  qm. Since we know the resulting electric field all too well (see, e.g., Eq. (1.7) for E  D/0), we may immediately write a similar expression for the field H: r  r  q  r  r . (5.138) H r   m   3 4 0  r  r 3 r  r   (a)

Fig. 5.19. (a) "Magnetic charges" ±qm at the ends of a thin permanent-magnet needle (magnetization M0) and (b) the result of its breaking into two parts (schematically).
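Two properties of the field (138) are easy to verify numerically: it is divergence-free away from the "charges", and near one end it approaches the Coulomb-like form qm/4πμ0r². A sketch with arbitrary sample positions and qm:

```python
# Checks of Eq. (5.138): the two-"charge" field H is divergence-free away
# from the needle's ends, and near one end its magnitude approaches
# q_m / (4 pi mu0 r^2). Charge value and positions are sample numbers.
import math

mu0 = 4e-7 * math.pi
qm = 1e-6
rp = (0.0, 0.0, 0.5)      # "+" end position
rm = (0.0, 0.0, -0.5)     # "-" end position

def H(x, y, z):
    """Field of Eq. (5.138) at point (x, y, z)."""
    out = [0.0, 0.0, 0.0]
    for (cx, cy, cz), s in ((rp, +1.0), (rm, -1.0)):
        dx, dy, dz = x - cx, y - cy, z - cz
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        c = s * qm / (4 * math.pi * mu0 * d3)
        out[0] += c * dx; out[1] += c * dy; out[2] += c * dz
    return out

# Numerical divergence at a generic point away from both ends:
e = 1e-5
x0, y0, z0 = 0.3, -0.2, 0.1
div = ((H(x0 + e, y0, z0)[0] - H(x0 - e, y0, z0)[0]) +
       (H(x0, y0 + e, z0)[1] - H(x0, y0 - e, z0)[1]) +
       (H(x0, y0, z0 + e)[2] - H(x0, y0, z0 - e)[2])) / (2 * e)
assert abs(div) < 1e-5

# Close to the "+" end the field magnitude is Coulomb-like:
r = 1e-3
Hmag = math.sqrt(sum(h * h for h in H(0.0, 0.0, 0.5 + r)))
assert abs(Hmag - qm / (4 * math.pi * mu0 * r**2)) < 1e-3 * Hmag
```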

The resulting magnetic field H(r) exerts on another "magnetic charge" q'm, located at some point r', the force F = q'mH(r').⁶⁴ Hence if two ends of different needles are separated by an intermediate distance R (A¹ᐟ² […]

[…] ωτ >> 1, we may neglect the scattering at all. Classically, we can describe the charge carriers in such a "perfect conductor" as particles with a nonzero mass m, which are accelerated by the electric field following the 2nd Newton law (4.11),

$$m\dot{\mathbf{v}} = \mathbf{F} = q\mathbf{E} , \tag{6.39}$$

so the current density j = qnv that they create changes in time as

$$\dot{\mathbf{j}} = qn\dot{\mathbf{v}} = \frac{q^2 n}{m}\,\mathbf{E} . \tag{6.40}$$

In terms of the Fourier amplitudes of the functions j(t) and E(t), this means

$$-i\omega\,\mathbf{j}_\omega = \frac{q^2 n}{m}\,\mathbf{E}_\omega . \tag{6.41}$$

Comparing this formula with the relation jω = σEω implied in the last section, we see that we can use all its results with the following replacement:

$$\sigma \to \frac{q^2 n}{-i\omega m} . \tag{6.42}$$

This change replaces the characteristic equation (29) with

$$-i\omega = \frac{-i\omega\,\kappa^2 m}{\mu q^2 n} , \qquad\text{i.e. } \kappa^2 = \frac{\mu q^2 n}{m} , \tag{6.43}$$

i.e. replaces the skin effect with the field penetration to the following frequency-independent depth:

London's penetration depth:

$$\delta \equiv \frac{1}{\kappa} = \left(\frac{m}{\mu q^2 n}\right)^{1/2} . \tag{6.44}$$

Superficially, this means that the field decay into the superconductor does not depend on frequency:

²¹ Named so to acknowledge the pioneering theoretical work of brothers Fritz and Heinz London – see below.
²² It is hardly fair to shorten this name to just the "Meissner effect", as is frequently done, because of the reportedly crucial contribution by Robert Ochsenfeld, then a Walther Meissner's student, to the discovery.

Chapter 6, Page 11 of 38

$$H(x,t) = H(0,t)\, e^{-x/\delta} , \tag{6.45}$$

thus explaining the Meissner-Ochsenfeld effect. However, there are two problems with this result. First, for the parameters typical for good metals (q = –e, n ~ 10²⁹ m⁻³, m ~ mₑ, μ ≈ μ0), Eq. (44) gives δ ~ 10⁻⁸ m, one or two orders of magnitude lower than the experimental values of δL. Experiment also shows that the penetration depth diverges at T → Tc, which is not predicted by Eq. (44).

The second, much more fundamental problem with Eq. (44) is that it has been derived for ωτ >> 1. Even if we assume that somehow there is no scattering at all, i.e. τ = ∞, at ω → 0 both parts of the characteristic equation (43) vanish, and we cannot make any conclusion about κ. This is not just a mathematical artifact we could ignore. For example, let us place a non-magnetic metal into a static external magnetic field at T > Tc. The field would completely penetrate the sample. Now let us cool it. As soon as the temperature is decreased below Tc, the above calculations would become valid, forbidding the penetration into the superconductor of any change of the field, so the initial field would be "frozen" inside the sample. The Meissner-Ochsenfeld experiments have shown something completely different: as T is lowered below Tc, the initial field is being expelled out of the sample.

The resolution of these contradictions is provided by quantum mechanics. As was explained in 1957 in a seminal work by J. Bardeen, L. Cooper, and J. Schrieffer (commonly referred to as the BCS theory), superconductivity is due to the correlated motion of electron pairs, with opposite spins and nearly opposite momenta. Such Cooper pairs, each with the electric charge q = –2e and zero spin, may form only in a narrow energy layer near the Fermi surface, of a certain thickness Δ(T). This parameter Δ(T), which may be also interpreted as the binding energy of the pair, tends to zero at T → Tc, while at T […] >> 1, meaning that the wavefunctions of the pairs are strongly overlapped in space.
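The order-of-magnitude estimate quoted above (δ ~ 10⁻⁸ m for typical good-metal parameters) follows directly from Eq. (44):

```python
# The classical estimate above: plugging q = -e, n ~ 1e29 m^-3, m ~ m_e,
# mu ~ mu0 into Eq. (6.44) gives a penetration depth of order 1e-8 m
# (tens of nanometers). n is an order-of-magnitude sample value.
import math

mu0 = 4e-7 * math.pi
e = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg
n = 1e29                   # carrier density, m^-3 (order of magnitude)

delta = math.sqrt(m_e / (mu0 * e**2 * n))   # Eq. (6.44) with mu = mu0, q = -e
assert 1e-8 < delta < 1e-7                  # ~1.7e-8 m for these inputs
```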
Due to their integer spin, Cooper pairs behave like bosons, which means in particular that at low temperatures they exhibit the so-called Bose-Einstein condensation onto the same ground energy level εg.²³ This means that the quantum frequency ω

²³ A quantitative discussion of the Bose-Einstein condensation of bosons may be found in SM Sec. 3.4, though the full theory of superconductivity is more complicated because it has to describe the condensation taking place simultaneously with the formation of effective bosons (Cooper pairs) from fermions (single electrons). For a detailed, but still very readable coverage of the physics of superconductors, I can recommend the reader the monograph by M. Tinkham, Introduction to Superconductivity, 2nd ed., McGraw-Hill, 1996.

Essential Graduate Physics – EM: Classical Electrodynamics, Chapter 6, page 12 of 38

= εg/ħ of the time evolution of each pair's wavefunction ψ ∝ exp{−iωt} is exactly the same, and that the phases φ of the wavefunctions, defined by the relation

ψ = |ψ| e^{iφ},    (6.47)

coincide, so the electric current is carried not by individual Cooper pairs but rather by their Bose-Einstein condensate described by a single wavefunction (47). Due to this coherence, the quantum effects (which are, in the usual Fermi-gases of single electrons, masked by the statistical spread of their energies, and hence of their phases), become very explicit – "macroscopic". To illustrate this, let us write the well-known quantum-mechanical formula for the probability current density of a free, non-relativistic particle,²⁴

j_w = (iħ/2m)(ψ∇ψ* − c.c.) = (1/2m)[ψ*(−iħ∇)ψ + c.c.],    (6.48)

where c.c. means the complex conjugate of the previous expression. Now let me borrow one result that will be proved later in this course (in Sec. 9.7) when we discuss the analytical mechanics of a charged particle moving in an electromagnetic field. Namely, to account for the magnetic field effects, the particle's kinetic momentum p ≡ mv (where v ≡ dr/dt is the particle's velocity) has to be distinguished from its canonical momentum,²⁵

P ≡ p + qA,    (6.49)

where A is the field's vector potential defined by Eq. (5.27). In contrast with the Cartesian components pj = mvj of the kinetic momentum p, the canonical momentum's components are the generalized momenta corresponding to the Cartesian components rj of the radius-vector r, considered as generalized coordinates of the particle: Pj = ∂L/∂vj, where L is the particle's Lagrangian function. According to the general rules of transfer from classical to quantum mechanics,²⁶ it is the vector P whose operator (in the coordinate representation) equals −iħ∇, so the operator of the kinetic momentum p = P − qA is −iħ∇ − qA. Hence, to account for the magnetic field²⁷ effects, we should make the following replacement,

−iħ∇ → −iħ∇ − qA,    (6.50)

in all quantum-mechanical relations. In particular, Eq. (48) has to be generalized as

j_w = (1/2m)[ψ*(−iħ∇ − qA)ψ + c.c.].    (6.51)

This expression becomes more transparent if we take the wavefunction in the form (47); then

²⁴ See, e.g., QM Sec. 1.4, in particular Eq. (1.47).
²⁵ I am sorry to use the traditional notations p and P for the momenta – the same symbols which were used for the electric dipole moment and polarization in Chapter 3. I hope there will be no confusion because the latter notions are not used in this section.
²⁶ See, e.g., CM Sec. 10.1, in particular Eq. (10.26).
²⁷ The account of the electric field is easier, because the related energy qφ of the particle may be directly included in the potential energy operator.

j_w = (ħ/m)|ψ|² (∇φ − (q/ħ)A).    (6.52)

This relation means, in particular, that in order to keep j_w gauge-invariant, the transformation (8)-(9) has to be accompanied by a simultaneous transformation of the wavefunction's phase:

φ → φ + (q/ħ)χ.    (6.53)

It is fascinating that the quantum-mechanical wavefunction (or more exactly, its phase) is not gauge-invariant, meaning that you may change it in your mind – at your free will! Again, this does not change any observable (such as j_w or the probability density ψψ*), i.e. any experimental results. Now for the electric current density of the whole superconducting condensate, Eq. (52) yields the following constitutive relation:

j ≡ j_w q np = (qħnp/m)|ψ|² (∇φ − (q/ħ)A).    (6.54)

The formula shows that this supercurrent may be induced by the dc magnetic field alone and does not require any electric field. Indeed, for the simple 1D geometry shown in Fig. 2a, j(r) = j(x)n_z, A(r) = A(x)n_z, and ∂/∂z = 0, so the Coulomb gauge condition (5.48) is satisfied for any choice of the gauge function χ(x). For the sake of simplicity we can choose this function to provide φ(r) ≡ const,²⁸ so

j = −(q²np/m) A ≡ −(1/μλL²) A,    (6.55)

where λL is given by Eq. (46), and the field is assumed to be small and hence not affecting the probability |ψ|² (here normalized to 1 in the absence of the field). This is the so-called London equation, proposed (in a different form) by F. and H. London in 1935 for the Meissner-Ochsenfeld effect's explanation. Combining it with Eq. (5.44), generalized for a linear magnetic medium by the replacement μ₀ → μ, we get

∇²A = A/λL².    (6.56)

This simple differential equation, similar to Eq. (23), for our 1D geometry has an exponential solution similar to Eq. (32):

A(x) = A(0) exp{−x/λL},   B(x) = B(0) exp{−x/λL},   j(x) = j(0) exp{−x/λL},    (6.57)

which shows that the magnetic field and the supercurrent penetrate into a superconductor only by London's penetration depth λL, regardless of frequency.²⁹ By the way, integrating the last result through the penetration layer, and using the vector potential's definition, B = ∇×A (for our geometry, giving

²⁸ This is the so-called London gauge; for our simple geometry, it is also the Coulomb gauge (5.48).
²⁹ Since at T > 0, not all electrons in a superconductor form Cooper pairs, at any frequency ω ≠ 0 the unpaired electrons provide energy-dissipating Ohmic currents, which are not described by Eq. (54). These losses become very substantial when the frequency ω becomes so high that the skin-effect length δs of the material becomes less than λL. For typical metallic superconductors, this crossover takes place at frequencies of a few hundred GHz, so even for microwaves, Eq. (57) still gives a fairly accurate description of the field penetration.


B(x) = dA(x)/dx = −A(x)/λL), we may readily verify that the linear density J of the surface supercurrent still satisfies the universal coarse-grain relation (38).

This universality should bring to our attention the following common feature of the skin effect (in "normal" conductors) and the Meissner-Ochsenfeld effect (in superconductors): if the linear size of a bulk sample is much larger than, respectively, δs or λL, then B = 0 in the dominating part of its interior. According to Eq. (5.110), a formal description of such conductors (valid only on a coarse-grain scale much larger than either δs or λL) may be achieved by formally treating the sample as an ideal diamagnet, with μ = 0. In particular, we can use this description and Eq. (5.124) to immediately obtain the magnetic field's distribution outside of a bulk sphere:

B = μ₀H = −μ₀∇φm,   with φm = −H₀ (r + R³/2r²) cosθ,   for r ≥ R.    (6.58)

Figure 3 shows the corresponding surfaces of equal potential φm. It is evident that the magnetic field lines (which are normal to the equipotential surfaces) bend to become parallel to the surface near it.

Fig. 6.3. Equipotential surfaces φm = const around a conducting sphere of radius R >> δs (or λL), placed into a uniform magnetic field H₀, calculated within the coarse-grain (ideal-diamagnet) approximation μ = 0.
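The behavior of the field lines can be checked directly from Eq. (6.58). Below is a minimal numeric sketch (the component formulas follow from φm by differentiation in spherical coordinates; H₀ and R are set to 1 for illustration): the normal field component vanishes at the sphere's surface, while the tangential field at the equator is enhanced to (3/2)H₀.

```python
import math

# From phi_m = -H0 (r + R^3/(2 r^2)) cos(theta), Eq. (6.58):
#   H_r     = -d(phi_m)/dr          = H0 (1 - R^3/r^3) cos(theta)
#   H_theta = -(1/r) d(phi_m)/dtheta = -H0 (1 + R^3/(2 r^3)) sin(theta)
def H_r(r, theta, H0=1.0, R=1.0):
    return H0 * (1 - R**3 / r**3) * math.cos(theta)

def H_theta(r, theta, H0=1.0, R=1.0):
    return -H0 * (1 + R**3 / (2 * r**3)) * math.sin(theta)

print(H_r(1.0, 0.7))                  # normal component at the surface: 0
print(H_theta(1.0, math.pi / 2))      # tangential field at the equator: -1.5 * H0
```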

This pattern also helps to answer the question that might arise at making the assumption (24): what happens to bulk conductors placed into a normal ac magnetic field – and to superconductors in a normal dc magnetic field as well? The answer is: the field is deformed outside of the conductor to sustain the following coarse-grain boundary condition:³⁰

Bn|surface = 0,    (6.59)

which follows from Eq. (5.118) and the coarse-grain requirement Binside = 0. This answer should be taken with reservations. For normal conductors it is only valid at sufficiently high frequencies, where the skin depth (33) is sufficiently small: δs << …

6.6. … a solenoid with N >> 1 turns, on its "walls" (windings), and the forces exerted by the field on the solenoid's ends, for two cases:
(i) the current through the solenoid is fixed by an external source, and
(ii) after the initial current setting, the ends of the solenoid's wire, with negligible resistance, are connected, so that it continues to carry a non-zero current.
Compare the results, and give a physical interpretation of the direction of these forces.

6.7. The electromagnetic railgun is a projectile launch system consisting of two long, parallel conducting rails and a sliding conducting projectile, shorting the current I fed into the system by a powerful source – see panel (a) in the figure on the right. Calculate the force exerted on the projectile, using two approaches:
(i) by a direct calculation, assuming that the cross-section of the system has the simple shape shown on panel (b) of the figure above, with t …

… (ka >> 1), the first maxima (60) correspond to small scattering angles. … N > 2 similar, equidistant scatterers, located along the same straight line, shows that the positions (60) of the constructive interference maxima do not change (because the derivation of this condition is still applicable to each pair of adjacent scatterers), but the increase of N makes these peaks sharper and sharper. Leaving the quantitative analysis of this system for the reader's exercise, let me jump immediately to the limit of very large N, in which we may ignore the scatterers' discreteness. The resulting pattern is similar to that at scattering by a continuous thin rod (see Fig. 6b), so let us first spell out the Born scattering formula for an arbitrary extended, continuous, uniform dielectric body. Transferring Eq. (56) from the sum to an integral, for the differential cross-section we get

dσ/dΩ = (k²/4π)² (κ − 1)² F(q) sin²Θ = (k²/4π)² (κ − 1)² I²(q) sin²Θ,    (8.62)

where I(q) now becomes the phase integral,²⁷

I(q) = ∫_V exp{−iq·r′} d³r′,    (8.63)

with the dimensionality of volume. Now we may return to the particular case of a thin rod (with both dimensions of the cross-section's area A much smaller than λ, but an arbitrary length a), otherwise keeping the same simple geometry as for two point scatterers – see Fig. 6b. In this case, the phase integral is just the Fraunhofer diffraction integral:

I q   A

a / 2

 exp{iq

a / 2

a

x' )dx'  A

exp{iq a a / 2}  exp{iq a a / 2} sin  V ,  iq 

(8.64)

where V = Aa is the volume of the rod, and  is the dimensionless argument defined as q a ka sin   a  . (8.65) 2 2 The fraction participating in the last form of Eq. (64) is met in physics so frequently that it has deserved the special name of the sinc (not “sync”, please!) function (see Fig. 7): 26

²⁶ This experiment was described in 1803 by Thomas Young – one more universal genius of science, who also introduced the Young modulus in the elasticity theory (see, e.g., CM Chapter 7), besides numerous other achievements – including deciphering Egyptian hieroglyphs! It is fascinating that the first clear observation of wave interference was made as early as 1666 by another genius, Sir Isaac Newton, in the form of so-called Newton's rings. Unbelievably, Newton failed to give the most natural explanation of his observations – perhaps because he was vehemently opposed to the very idea of light as a wave, which was promoted in his times by others, notably by Christiaan Huygens. Due to Newton's enormous authority, only Young's two-slit experiments more than a century later have firmly established the wave picture of light – to be replaced by the dualistic wave/photon picture formalized by quantum electrodynamics (see, e.g., QM Ch. 9), in one more century.
²⁷ Since the observation point's position r does not participate in this formula explicitly, the prime sign in r′ could be dropped, but I keep it as a reminder that the integral is taken over the points r′ of the scattering object.

Chapter 8, page 15 of 38

sinc ζ ≡ sin ζ / ζ.    (8.66)

It vanishes at all points ζn = πn with integer n, besides the point with n = 0: sinc 0 = 1.
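In code, this function has to be defined with its removable singularity at ζ = 0 in mind (a minimal sketch):

```python
import math

def sinc(z):
    """Unnormalized sinc of Eq. (66): sin(z)/z, with sinc(0) = 1."""
    return 1.0 if z == 0 else math.sin(z) / z

print(sinc(0.0), sinc(math.pi))   # -> 1.0 and ~0.0 (a zero at z = pi)
```

Note that `numpy.sinc(x)` implements the normalized convention sin(πx)/(πx), so `numpy.sinc(ζ/π)` equals the sinc ζ defined here.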

Fig. 8.7. The sinc function.

The function F(q) = V² sinc²ζ, given by Eq. (64) and plotted with the red line in Fig. 8, is called the Fraunhofer diffraction pattern.²⁸

Fig. 8.8. The Fraunhofer diffraction pattern (solid red line) and its envelope 1/ζ² (dashed red line). For comparison, the blue line shows the basic interference pattern cos²ζ – cf. Eq. (61). The horizontal axis is ζ/π = (ka sinθ)/2π.

Note that it oscillates with the same period Δ(ka sinθ) = 2π of the argument as the interference pattern (61) from two point scatterers (shown with the blue line in Fig. 8). However, at the interference, the scattered wave intensity vanishes at angles θn′ that satisfy the condition

(ka sinθn′)/2 = (n + ½)π,    (8.67)

i.e. when the optical path difference a sinθ is equal to a semi-integer number of wavelengths λ/2 = π/k, and hence the two waves from the scatterers reach the observer in anti-phase – the so-called destructive interference. On the other hand, for the diffraction on a continuous rod the minima occur at a different set of scattering angles:

(ka sinθn)/2 = πn,    (8.68)

²⁸ It is named after Joseph von Fraunhofer (1787-1826) – who invented the spectroscope, developed the diffraction grating (see below), and also discovered the dark Fraunhofer lines in the Sun's spectrum.


i.e. exactly where the two-point interference pattern has its maxima – please have a look at Fig. 8 again. The reason for this relation is that the wave diffraction on the rod may be considered as a simultaneous interference of waves from all its elementary fragments, and exactly at the observation angles when the rod edges give waves with phases shifted by 2πn, the interior points of the rod give waves with all phases within this difference, with their algebraic sum equal to zero. As even more visible in Fig. 8, at the diffraction, the intensity oscillations are limited by a rapidly decreasing envelope function 1/ζ² – while at the two-point interference, the oscillations retain the same amplitude. The reason for this fast decrease is that with each Fraunhofer diffraction period, a smaller and smaller fraction of the rod gives an unbalanced contribution to the scattered wave.

If the rod's length is small (ka << 1), … In the opposite limit, a >> λ, the first zeros of the function I(q) correspond to very small angles θ, for which sinΘ ≈ 1, so the differential cross-section is just

dσ/dΩ = (k²/4π)² (κ − 1)² V² sinc²(kaθ/2),    (8.69)
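This interchange of diffraction minima and interference maxima is easy to confirm numerically (a sketch reusing the functional forms of Eqs. (61) and (66); the zeros of sinc²ζ at ζ = πn, n ≠ 0, fall exactly on the maxima of cos²ζ):

```python
import math

def fraunhofer(z):          # sinc^2 factor of the diffraction pattern, cf. Eq. (69)
    return 1.0 if z == 0 else (math.sin(z) / z) ** 2

def two_point(z):           # cos^2, the two-scatterer interference pattern
    return math.cos(z) ** 2

for n in range(1, 4):
    z = math.pi * n
    print(n, fraunhofer(z), two_point(z))   # ~0.0 vs 1.0 at every z = pi*n
```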

i.e. Fig. 8 shows the scattering intensity as a function of the direction toward the observation point – if this point is within the plane containing the rod.

Finally, let us discuss a problem of large importance for applications: calculate the positions of the maxima of the interference pattern arising at the incidence of a plane wave on a very large 3D periodic system of point scatterers. For that, first of all, let us quantify the notion of 3D periodicity. The periodicity in one dimension is simple: the system we are considering (say, the positions of point scatterers) should be invariant with respect to the linear translation by some period a, and hence by any multiple sa of this period, where s is any integer. Anticipating the 3D generalization, we may require any of the possible translation vectors R, with respect to which the system is invariant, to be equal to sa, where the primitive vector a is directed along the (so far, the only) axis of the 1D system. Now we are ready for the common definition of the 3D periodicity – as the invariance of the system with respect to the translation by any vector of the following set:

R = Σ_{l=1}^{3} s_l a_l,    (8.70)

where s_l are three independent integers, and {a_l} is a set of three linearly-independent primitive vectors. The set of geometric points described by Eq. (70) is called the Bravais lattice (first analyzed in detail, circa 1850, by Auguste Bravais). Perhaps the most nontrivial feature of this relation is that the vectors a_l should not necessarily be orthogonal to each other. (That requirement would severely restrict the set of possible lattices, and make it unsuitable for the description, for example, of many solid-state crystals.) For the scattering problem we are considering, we will assume that the position r_j of each point scatterer coincides with one of the points R of some Bravais lattice, with a given set of primitive vectors a_l, so in the basic Eq. (57), the index j is coding the set of three integers {s₁, s₂, s₃}.

Now let us consider a similarly defined Bravais lattice, but in the reciprocal (wave-number) space, numbered by three independent integers {t₁, t₂, t₃}:

Q = Σ_{m=1}^{3} t_m b_m,   with b_m = 2π (a_{m″} × a_{m′}) / [a_m · (a_{m″} × a_{m′})],    (8.71)
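Eq. (71) is easy to verify numerically. The sketch below builds the b_m for an arbitrary, non-orthogonal, purely illustrative set of primitive vectors and checks the property a_l·b_m = 2πδ_lm that underlies Eq. (72). (The cyclic cross-product order used in the code differs from Eq. (71) only by a sign that cancels between the numerator and the denominator.)

```python
import numpy as np

def reciprocal_vectors(a1, a2, a3):
    """Reciprocal primitive vectors b_m of Eq. (71), via cyclic cross products."""
    a = [np.asarray(a1, float), np.asarray(a2, float), np.asarray(a3, float)]
    b = []
    for m in range(3):
        am1, am2 = a[(m + 1) % 3], a[(m + 2) % 3]
        b.append(2 * np.pi * np.cross(am1, am2) / np.dot(a[m], np.cross(am1, am2)))
    return b

# Non-orthogonal example lattice; check a_l . b_m = 2*pi*delta_lm
a = ([1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.2, 0.3, 1.0])
b = reciprocal_vectors(*a)
G = np.array([[np.dot(al, bm) for bm in b] for al in a])
print(np.round(G / (2 * np.pi), 12))   # identity matrix
```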


where in the last expression, the indices m, m′, and m″ are all different. This is the so-called reciprocal lattice, which plays an important role in all physics of periodic structures, in particular in the quantum energy-band theory.²⁹ To reveal its most important property, and thus justify the above introduction of the primitive vectors b_m, let us calculate the following scalar product:

R·Q = Σ_{l,m=1}^{3} s_l t_m (a_l · b_m) = 2π Σ_{l,m=1}^{3} s_l t_m [a_l · (a_{m″} × a_{m′})] / [a_m · (a_{m″} × a_{m′})].    (8.72)

Applying to the numerator of the last fraction the operand rotation rule of vector algebra,³⁰ we see that it is equal to zero if l ≠ m, while for l = m the whole fraction is evidently equal to 1. Thus the double sum (72) is reduced to a single sum:

R·Q = 2π Σ_{l=1}^{3} s_l t_l ≡ 2π Σ_{l=1}^{3} n_l,    (8.73)

where each of the products n_l ≡ s_l t_l is an integer, and hence their sum,

n ≡ Σ_{l=1}^{3} n_l = s₁t₁ + s₂t₂ + s₃t₃,    (8.74)

is an integer as well, so the main property of the direct/reciprocal lattice couple is very simple:

R·Q = 2πn,   and hence exp{iR·Q} = 1.    (8.75)

Now returning to the scattering function (56) for a Bravais lattice of point scatterers, we see that if the vector q ≡ k − k₀ coincides with any vector Q of the reciprocal lattice, then all terms of the phase sum (57) take their largest possible values (equal to 1), and hence the sum as a whole is the largest as well, giving a constructive interference maximum. This equality, q = Q, where Q is given by Eq. (71), is called the von Laue condition (named after Max von Laue) of the constructive interference; it is, in particular, the basis of the whole field of the X-ray crystallography of solids and polymers – the main tool for revealing their atomic/molecular structure.³¹

In order to recast the von Laue condition in a more vivid geometric form, let us consider one of the vectors Q of the reciprocal lattice, corresponding to a certain integer n in Eq. (75), and notice that if that relation is satisfied for one point R of the direct Bravais lattice (70), i.e. for one set of the integers {s₁, s₂, s₃}, it is also satisfied for a 2D system of other integer sets, which may be parameterized, for example, by two integers S₁ and S₂:

s₁′ = s₁ + S₁t₃,   s₂′ = s₂ + S₂t₃,   s₃′ = s₃ − S₁t₁ − S₂t₂.    (8.76)

Indeed, each of these sets has the same value of the integer n, defined by Eq. (74), as the original one:

n′ ≡ s₁′t₁ + s₂′t₂ + s₃′t₃ = (s₁ + S₁t₃)t₁ + (s₂ + S₂t₃)t₂ + (s₃ − S₁t₁ − S₂t₂)t₃ = n.    (8.77)

Since according to Eq. (75), the vector of the distance between any pair of the corresponding points of the direct Bravais lattice (70),

²⁹ See, e.g., QM Sec. 3.4, where several particular Bravais lattices R, and their reciprocals Q, are considered.
³⁰ See, e.g., MA Eq. (7.6).
³¹ For more reading on this important topic, I can recommend, for example, the classical monograph by B. Cullity, Elements of X-Ray Diffraction, 2nd ed., Addison-Wesley, 1978. (Note that its title uses the alternative name of the field, once again illustrating how blurry the boundary between the interference and diffraction is.)

ΔR = S₁t₃a₁ + S₂t₃a₂ − (S₁t₁ + S₂t₂)a₃,    (8.78)

satisfies the condition ΔR·Q = 2πΔn = 0, this vector is normal to the (fixed) vector Q. Hence, all the points corresponding to the 2D set (76) with arbitrary integers S₁ and S₂, are located on one geometric plane, called the crystal (or "lattice") plane. In a 3D system of N >> 1 scatterers (such as N ~ 10²⁰ atoms in a ~1-mm³ solid crystal), with all linear dimensions comparable, such a plane contains ~N²ᐟ³ >> 1 points. As a result, the constructive interference peaks are very sharp. Now rewriting Eq. (75) as a relation for the vector R's component along the vector Q,

R_Q = (2π/Q) n,   where R_Q ≡ R·n_Q ≡ R·Q/Q,   and Q ≡ |Q|,    (8.79)

we see that the parallel crystal planes corresponding to different numbers n (but the same Q) are located in space periodically, with the smallest distance

d = 2π/Q,    (8.80)

so the von Laue condition q = Q may be rewritten as the following rule for the possible magnitudes of the scattering vector q ≡ k − k₀:

q = 2πn/d.    (8.81)

Figure 9a shows the diagram of the three wave vectors k, k₀, and q, taking into account the elastic scattering condition |k| = |k₀| = k ≡ 2π/λ. From the diagram, we immediately get the famous Bragg rule³² for the (equal) angles α ≡ θ/2 between the crystal plane and each of the vectors k and k₀:

k sinα = q/2 = πn/d,   i.e. 2d sinα = nλ.    (8.82)
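A short numeric sketch of the Bragg rule (82); the plane spacing and the wavelength below are illustrative numbers, not data for any particular crystal.

```python
import math

def bragg_angles(d, lam):
    """Angles alpha (radians) satisfying 2 d sin(alpha) = n*lam, n = 1, 2, ..."""
    angles = []
    n = 1
    while n * lam <= 2 * d:
        angles.append(math.asin(n * lam / (2 * d)))
        n += 1
    return angles

# Example: d = 0.30 nm plane spacing, lambda = 0.14 nm X-rays
alphas = bragg_angles(0.30e-9, 0.14e-9)
print([round(math.degrees(a), 1) for a in alphas])   # -> [13.5, 27.8, 44.4, 69.0]
```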

The physical sense of this relation is very simple – see Fig. 9b, drawn in the "direct" space of the radius-vectors r, rather than in the reciprocal space of the wave vectors, as Fig. 9a. It shows that if the Bragg condition (82) is satisfied, the total difference 2d sinα of the optical paths of two waves, partly reflected from the adjacent crystal planes, is equal to an integer number of wavelengths, so these waves interfere constructively.

Fig. 8.9. Deriving the Bragg rule: (a) from the von Laue condition (in the reciprocal space), and (b) from a direct-space diagram. Note that the scattering angle θ equals 2α.

³² Named after Sir William Bragg and his son, Sir William Lawrence Bragg, who were the first to demonstrate (in 1912) the X-ray diffraction by atoms in crystals. The Braggs' experiments have made the existence of atoms (before that, a somewhat hypothetical notion ignored by many physicists) indisputable.

Finally, note that the von Laue and Bragg rules, as well as the similar condition (60) for the 1D system of scatterers, are valid not only in the Born approximation but also follow from any adequate theory of scattering, because the phase sum (57) does not depend on the magnitude of the wave propagating from each elementary scatterer, provided that they are all equal.

8.5. The Huygens principle

As the reader could see, the Born approximation is very convenient for tracing the basic features of (and the difference between) the phenomena of interference and diffraction. Unfortunately, this approximation, based on the relative weakness of the scattered wave, cannot be used to describe more typical experimental implementations of these phenomena, for example, Young's two-slit experiment, or diffraction on a single slit or orifice – see, e.g., Fig. 10. Indeed, in such experiments, the orifice size a is typically much larger than the light's wavelength λ, and as a result, no clear decomposition of the fields into the "incident" and "scattered" waves is possible inside it.³³

Fig. 8.10. Deriving the Huygens principle. (The figure shows a wave source, an opaque screen with an orifice, the radius-vectors r′ of the orifice points and r of the observation point, and the observation angle θ.)

However, another approximation, called the Huygens (or "Huygens-Fresnel") principle,³⁴ is very instrumental for the description of such situations. In this approach, the wave beyond the screen is represented as a linear superposition of spherical waves of the type (17), as if they were emitted by every point of the incident wave's front that has arrived at the orifice. This approximation is valid if the following strong conditions are satisfied:

λ << a << r,    (8.83)

where r is the distance of the observation point from the orifice. In addition, as we have seen in the last section, at small λ/a the diffraction phenomena are confined to angles θ ~ 1/ka ~ λ/a << 1. …

… if kR >> 1, then the differentiation in Eq. (97) may be, in both instances, limited to the rapidly changing exponents, giving

f_ω(r) = −(i/4π) ∫_orifice (k + k₀)·n′ f_ω(r′) (e^{ikR}/R) d²r′.    (8.99)

Second, if all observation angles are small, we may take k·n′ ≈ k₀·n′ ≈ −k. With that, Eq. (99) is reduced to the Huygens principle in its form (91).

³⁹ Actually, this is a nontrivial point of the proof. Indeed, it may be shown that the exact solution of Eq. (94) is identically equal to zero if f(r′) and ∂f(r′)/∂n′ vanish together at any part of the boundary of a non-zero area. A more careful analysis of this issue (it is the task of the formal vector theory of diffraction, which I will not have time to pursue) confirms the validity of the described intuition-based approach at a >> λ.
⁴⁰ It follows, e.g., from Eq. (16) with a monochromatic source q(t) = q_ω exp{−iωt}, with the amplitude q_ω = 4π… that fits the right-hand side of Eq. (93).

It is clear that the principle immediately gives a very simple description of the interference of waves passing through two small holes in the screen. Indeed, if the holes' sizes are negligible in comparison with the distance a between them (though still much larger than the wavelength!), Eq. (91) yields

f_ω(r) = c₁ e^{ikR₁} + c₂ e^{ikR₂},   with c₁,₂ = k f₁,₂ A₁,₂ / 2πiR₁,₂,    (8.100)

where R₁,₂ are the distances between the holes and the observation point, and A₁,₂ are the hole areas. For the wave intensity, Eq. (100) gives

S ∝ f_ω f_ω* = |c₁|² + |c₂|² + 2|c₁||c₂| cos[k(R₁ − R₂) + φ],   where φ ≡ arg c₁ − arg c₂.    (8.101)
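Eqs. (100)-(102) can be checked with a few lines of code (a sketch with illustrative real amplitudes c₁,₂; the phase argument below stands for the whole combination k(R₁ − R₂) + φ):

```python
import math

# Two-hole intensity per Eq. (101), and the contrast ratio of Eq. (102)
def intensity(c1, c2, phase):
    """S ~ |c1 + c2 exp(i*phase)|^2 for real amplitudes c1, c2."""
    return abs(c1 + c2 * complex(math.cos(phase), math.sin(phase))) ** 2

c1, c2 = 1.0, 0.6                     # illustrative amplitudes
s_max = intensity(c1, c2, 0.0)        # constructive interference
s_min = intensity(c1, c2, math.pi)    # destructive interference
print(s_max / s_min, ((c1 + c2) / (c1 - c2)) ** 2)   # both ~ 16
```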

The first two terms in the last expression clearly represent the intensities of the partial waves passed through each hole, while the last one is the result of their interference. The interference pattern's contrast ratio,

S_max/S_min = [(|c₁| + |c₂|) / (|c₁| − |c₂|)]²,    (8.102)

is the largest (infinite) when both waves have equal amplitudes. The analysis of the interference pattern is simple if the line connecting the holes is perpendicular to the wave vector k ≈ k₀ – see Fig. 6a. Selecting the coordinate axes as shown in that figure, and using for the distances R₁,₂ the same expansion as in Eq. (86), for the interference term in Eq. (101) we get

cos[k(R₁ − R₂) + φ] ≈ cos(kxa/z + φ).    (8.103)

This means that the term does not depend on y, i.e. the interference pattern in the plane of constant z is a set of straight, parallel strips, perpendicular to the vector a, with the period given by Eq. (60), i.e. by the Bragg law.⁴¹ This result is strictly valid only at y² << …

… the diffraction patterns at the two edges of the slit are well separated. Hence, near each edge (for example, near x′ = −a/2) we may simplify Eq. (107) as

I_x(x) ≈ ∫_{−a/2}^{+∞} exp{ik(x − x′)²/2z} dx′ = (2z/k)^{1/2} ∫_{−∞}^{(k/2z)^{1/2}(x + a/2)} exp{iξ²} dξ,    (8.111)

and express it via the special functions called the Fresnel integrals:⁴³

C(ξ) ≡ (2/π)^{1/2} ∫₀^ξ cos(ζ²) dζ,   S(ξ) ≡ (2/π)^{1/2} ∫₀^ξ sin(ζ²) dζ,    (8.112)

whose plots are shown in Fig. 13a. As was mentioned above, at large values of their argument (ξ → ∞), both functions tend to ½.
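The Fresnel integrals (112) are easy to evaluate by brute-force quadrature (a numeric sketch; the midpoint rule and the point count are arbitrary choices), confirming that both functions approach ½ at large argument:

```python
import math

def fresnel_C(xi, n=200_000):
    """C(xi) of Eq. (112) by midpoint integration of cos(zeta^2)."""
    h = xi / n
    s = sum(math.cos(((i + 0.5) * h) ** 2) for i in range(n))
    return math.sqrt(2 / math.pi) * h * s

def fresnel_S(xi, n=200_000):
    """S(xi) of Eq. (112) by midpoint integration of sin(zeta^2)."""
    h = xi / n
    s = sum(math.sin(((i + 0.5) * h) ** 2) for i in range(n))
    return math.sqrt(2 / math.pi) * h * s

# Both are close to 1/2 at xi = 10, oscillating with a slowly decaying envelope
print(round(fresnel_C(10.0), 3), round(fresnel_S(10.0), 3))
```

Note that `scipy.special.fresnel` uses a different normalization (integrands sin and cos of πt²/2), so its values match Eq. (112) only after the argument rescaling ξ → ξ(2/π)^{1/2}.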

Fig. 8.13. (a) The Fresnel integrals and (b) their parametric representation.

Note also that since in this limit ka² >> …, S(0, z) ~ S₀ (a/Δx) … at x > −a/2; however, in the latter region (i.e. inside the slit) the diffracted wave overlaps the incident wave passing through the slit directly, and their interference reveals the phase oscillations, making them visible in the observed intensity as well. Note that according to Eq. (113), the linear scale Δx of the Fresnel diffraction pattern is of the order of (2z/k)^{1/2}, i.e. complies with the estimate given by Eq. (90). If the slit is gradually narrowed so that its width a becomes comparable to Δx,⁴⁴ the Fresnel diffraction patterns from both edges start to "collide" (interfere). The resulting wave, fully described by Eq. (107), is just a sum of two contributions of the type (111) from both edges of the slit. The resulting interference pattern is somewhat complicated, and only when a becomes substantially less than Δx, it is reduced to the simple Fraunhofer pattern (110).

⁴⁴ Note that this condition may be also rewritten as a ~ Δx, i.e. z/a ~ a/λ.


Of course, this crossover from the Fresnel to Fraunhofer diffraction may be also observed, at fixed wavelength λ and slit width a, by increasing z, i.e. by measuring the diffraction pattern farther and farther away from the slit. Note also that the Fraunhofer limit is always valid if the diffraction is measured as a function of the diffraction angle θ alone. This may be done, for example, by collecting the diffracted wave with a "positive" (converging) lens and observing the diffraction pattern in its focal plane.

8.7. Geometrical optics placeholder

I would not like the reader to miss, behind all these details, the main feature of the Fresnel diffraction, which has an overwhelming practical significance. Namely, besides narrow diffraction "cones" (actually, parabolic-shaped regions) with the lateral scale Δx ~ (λz)^{1/2}, the wave far behind a slit of width a >> λ, Δx reproduces the field just behind the slit, i.e. reproduces the unperturbed incident wave inside it, and has a negligible intensity in the shadow regions outside it. An evident generalization of this fact is that when a plane wave (in particular an electromagnetic wave) passes any opaque object of a large size a >> λ, it propagates around it, by distances z up to ~a²/λ, along straight lines, with virtually negligible diffraction effects. This fact gives the strict foundation for the notion of the wave ray (or beam), as the line perpendicular to the local front of a quasi-plane wave. In a uniform medium such a ray follows a straight line,⁴⁵ but it refracts in accordance with the Snell law at the interface of two media with different values of the wave speed v, i.e. different values of the refraction index. The concept of rays enables the whole field of geometric optics, devoted mostly to ray tracing in various (sometimes very sophisticated) optical systems.

This is why, at this point, an E&M course that followed the scientific logic more faithfully than this one would give an extended discussion of the geometric and quasi-geometric optics, including (as a minimum⁴⁶) such vital topics as:
- the so-called lensmaker's equation, expressing the focal length f of a lens via the curvature radii of its spherical surfaces and the refraction index of the lens material,
- the thin lens formula, relating the image distance from the lens via f and the source distance,
- the concepts of basic optical instruments such as glasses, telescopes, and microscopes,
- the concepts of the spherical, angular, and chromatic aberrations (image distortions).

However, since I have made a (possibly, wrong) decision to follow the common tradition in selecting the main topics for this course, I do not have time/space left for such a discussion. Still, I am using this "placeholder" pseudo-section to relay my deep conviction that any educated physicist has to know the geometric optics basics. If the reader has not been exposed to this subject during their undergraduate studies, I highly recommend at least browsing one of the available textbooks.⁴⁷
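As a small illustration of the second bullet above, here is a sketch of the thin lens formula 1/s + 1/s′ = 1/f; the sign convention (s, s′ > 0 for a real object and a real image) is one of several used in textbooks.

```python
def image_distance(f, s):
    """Return s' solving 1/s + 1/s' = 1/f (diverges for an object at the focus)."""
    return 1.0 / (1.0 / f - 1.0 / s)

# Object 30 cm from a converging lens with f = 10 cm: real image at 15 cm
print(image_distance(f=0.10, s=0.30))   # -> 0.15 m (up to float rounding)
```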

In application to optical waves, this notion may be traced back to at least the work by Hero (a.k.a. Heron) of Alexandria (circa 170 AD). Curiously, he correctly described light reflection from one or several plane mirrors, starting from the completely wrong idea of light propagation from the eye of the observer to the observed object. 46 Admittedly, even this list leaves aside several spectacular effects, including such a beauty as conical refraction in biaxial crystals – see, e.g., Chapter 15 of the textbook by M. Born and E. Wolf, cited in the end of Sec. 7.1. 47 My top recommendation for that purpose would be Chapters 3-6 and Sec. 8.6 in Born and Wolf. A simpler alternative is Chapter 10 in G. Fowles, Introduction to Modern Optics, 2nd ed., Dover, 1989. Note also that the venerable field of optical microscopy is currently revitalized by holographic/tomographic methods, using the


8.8. Fraunhofer diffraction from more complex scatterers

So far, our quantitative analysis of diffraction has been limited to a very simple geometry – a single slit in an otherwise opaque screen (Fig. 12). However, in the most important Fraunhofer limit, z >> ka², it is easy to get a very simple expression for the plane wave diffraction/interference by a plane orifice (with a linear size scale a) of arbitrary shape. Indeed, the evident 2D generalization of the approximation (106)-(107) is

$$I_x I_y = \int_{\text{orifice}} \exp\left\{\frac{ik\left[(x-x')^2+(y-y')^2\right]}{2z}\right\} dx'dy' = \exp\left\{\frac{ik(x^2+y^2)}{2z}\right\}\int_{\text{orifice}} \exp\left\{-i\,\frac{kxx'}{z}\right\}\exp\left\{-i\,\frac{kyy'}{z}\right\} dx'dy', \tag{8.115}$$

so, besides the inconsequential total phase factor, Eq. (105) is reduced to

General Fraunhofer diffraction pattern

$$f(\boldsymbol{\rho}) \propto f_0 \int_{\text{orifice}} \exp\{-i\boldsymbol{\kappa}\cdot\boldsymbol{\rho}'\}\, d^2\rho' = f_0 \int_{\text{all screen}} T(\boldsymbol{\rho}')\,\exp\{-i\boldsymbol{\kappa}\cdot\boldsymbol{\rho}'\}\, d^2\rho'. \tag{8.116}$$

Here the 2D vector κ (not to be confused with the wave vector k, which is virtually perpendicular to κ!) is defined as

$$\boldsymbol{\kappa} \equiv k\,\frac{\boldsymbol{\rho}}{z} = \mathbf{q} \equiv \mathbf{k} - \mathbf{k}_0, \tag{8.117}$$

and ρ = {x, y} and ρ' = {x', y'} are 2D radius vectors in, respectively, the observation and orifice planes – both nearly normal to the vectors k and k₀.48 In the last form of Eq. (116), the function T(ρ') describes the screen's transparency at point ρ', and the integral is over the whole screen plane z' = 0. (Though the two forms of Eq. (116) are strictly equivalent only if T(ρ') is equal to either 1 or 0, the last form may be readily obtained from Eq. (91) with f(r') = T(ρ')f₀ for any transparency profile, provided that T(ρ') is any function that changes substantially only at distances much larger than λ ≡ 2π/k.)

From the mathematical point of view, the last form of Eq. (116) is just the 2D spatial Fourier transform of the function T(ρ'), with the variable κ defined by the observation point's position: ρ ≡ (z/k)κ ≡ (zλ/2π)κ. This interpretation is useful because of the experience we all have with the Fourier transform, if only in the context of its time/frequency applications. For example, if the orifice is a single small hole, T(ρ') may be approximated by a delta function, so Eq. (116) yields f(ρ) ≈ const. This result corresponds (at least for the small diffraction angles θ ≡ ρ/z, for which the Huygens approximation is valid) to a spherical wave spreading from the point-like orifice. Next, for two small holes, Eq. (116) immediately gives the interference pattern (103).

Let me now use Eq. (116) to analyze the simplest (and most important) 1D transparency profiles, leaving a few 2D cases for the reader's exercise.

(i) A single slit of width a (Fig. 12) may be described by the transparency T(x') = 1 for |x'| < a/2, and 0 otherwise.
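The Fourier-transform reading of Eq. (116) is easy to verify numerically. The following sketch (my own illustration, not from the original text; the slit width, grid size, and sample points are arbitrary choices) evaluates the 1D Fraunhofer integral for such a slit by direct summation and checks it against the well-known sinc² intensity pattern:

```python
import cmath, math

# 1D Fraunhofer integral f(kappa) ∝ ∫ T(x') exp(-i kappa x') dx'
# for a slit: T = 1 at |x'| < a/2, 0 otherwise (midpoint Riemann sum).
a = 1.0        # slit width, arbitrary units
N = 2000       # integration points across the slit

def fraunhofer(kappa):
    dx = a / N
    return sum(cmath.exp(-1j * kappa * (-a/2 + (j + 0.5) * dx))
               for j in range(N)) * dx

def intensity(kappa):
    return abs(fraunhofer(kappa)) ** 2

def analytic(kappa):            # |a sinc(kappa a / 2)|^2
    u = kappa * a / 2
    return a**2 if u == 0 else (a * math.sin(u) / u) ** 2

for kappa in (0.0, 1.0, 2 * math.pi / a):   # center, generic point, first zero
    print(f"kappa = {kappa:.3f}: numeric {intensity(kappa):.6f}, "
          f"analytic {analytic(kappa):.6f}")
```

The first intensity zero indeed appears at κ = 2π/a, i.e. at the observation angle θ = κ/k = λ/a, in agreement with the Fraunhofer single-slit result.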

scattered wave's phase information. These methods are especially productive in biology and medicine – see, e.g., M. Brezinski, Optical Coherence Tomography, Academic Press, 2006, and G. Popescu, Quantitative Phase Imaging of Cells and Tissues, McGraw-Hill, 2011. 48 Note that for a thin uniform plate of the same shape as the orifice we are discussing now, the Born phase integral (63) with q …

… a result similar to Eq. (24):

Magnetic dipole radiation: field

$$\mathbf{B}_m(\mathbf{r},t) = -\frac{\mu}{4\pi r v^2}\left[\mathbf{n}\times\ddot{\mathbf{m}}\left(t-\frac{r}{v}\right)\right] \equiv \frac{\mu}{4\pi r v^2}\left[\ddot{\mathbf{m}}\left(t-\frac{r}{v}\right)\times\mathbf{n}\right]. \tag{8.137}$$

According to this expression, just as at the electric dipole radiation, the vector B is perpendicular to the vector n, and its magnitude is also proportional to sin Θ, where Θ is now the angle between the direction toward the observation point and the second time derivative of the vector m – rather than p:

$$B_m = \frac{\mu}{4\pi r v^2}\,\ddot{m}\left(t-\frac{r}{v}\right)\sin\Theta. \tag{8.138}$$

As a result, the intensity of this magnetic dipole radiation has a similar angular distribution:

Magnetic dipole radiation: power

$$S_r = Z H^2 = \frac{Z}{(4\pi v^2 r)^2}\left[\ddot{m}\left(t-\frac{r}{v}\right)\right]^2 \sin^2\Theta \tag{8.139}$$

52 If you still need it, see MA Eq. (7.5).


- cf. Eq. (26), besides the (generally) different meaning of the angle Θ.

Note, however, that this radiation is usually much weaker than its electric-dipole counterpart. For example, for a non-relativistic particle with electric charge q, moving on a trajectory of linear size ~a, the electric dipole moment is of the order of qa, while its magnetic moment scales as qa²ω, where ω is the motion frequency. As a result, the ratio of the magnetic and electric dipole radiation intensities is of the order of (aω/v)², i.e. the squared ratio of the particle's speed to the speed of the emitted waves – which has to be much smaller than 1 for our non-relativistic calculation to be valid.

The angular distribution of the electric quadrupole radiation, described by Eq. (136), is more complicated. To show this, we may add to A_q a vector parallel to n (i.e. along the wave's propagation), getting

$$\mathbf{A}_q(\mathbf{r},t) = \frac{\mu}{24\pi r v}\,\dddot{\boldsymbol{\mathcal{Q}}}\left(t-\frac{r}{v}\right), \quad\text{where}\quad \boldsymbol{\mathcal{Q}} \equiv \sum_k q_k\left[3\mathbf{r}_k(\mathbf{n}\cdot\mathbf{r}_k) - \mathbf{n}\,r_k^2\right], \tag{8.140}$$

because this addition does not give any contribution to the transverse component of the electric and magnetic fields, i.e. to the radiated wave. According to the above definition of the vector Q, its Cartesian components may be represented as

$$\mathcal{Q}_j = \sum_{j'=1}^{3} \mathcal{Q}_{jj'}\, n_{j'}, \tag{8.141}$$

where Q_jj' are the elements of the electric quadrupole tensor of the system – see the last of Eqs. (3.4):53

$$\mathcal{Q}_{jj'} = \sum_k q_k\left(3 r_j r_{j'} - r^2\delta_{jj'}\right)_k. \tag{8.142}$$

Now taking the curl of the first of Eqs. (140) at r >> λ, we get

$$\mathbf{B}_q(\mathbf{r},t) = -\frac{\mu}{24\pi r v^2}\,\mathbf{n}\times\dddot{\boldsymbol{\mathcal{Q}}}\left(t-\frac{r}{v}\right). \tag{8.143}$$

This expression is similar to Eqs. (24) and (137), but according to Eqs. (140) and (142), the components of the vector Q do depend on the direction of the vector n, leading to a different angular dependence of S_r. As the simplest example, let us consider the system of two equal point electric charges moving symmetrically, at equal distances d(t) …

… R behind the disk's center – see the figure on the right. Discuss the result.

S ( z)  ? orifice

S0

0

disk

S0

2R

0

z

S ( z)  ? 2R z

8.19. Use the Huygens principle to analyze the Fraunhofer diffraction of a plane wave normally incident on a square-shaped hole, of size a×a, in an opaque screen. Sketch the diffraction pattern you would observe at a sufficiently large distance, and quantify the expression “sufficiently large” for this case.

8.20. Use the Huygens principle to analyze the propagation of a monochromatic Gaussian beam described by Eq. (7.181), with the initial characteristic width a₀ >> λ, in a uniform, isotropic medium. Use the result for a semi-quantitative derivation of the so-called Abbe limit for the spatial resolution of an optical system: w_min = λ/2sinθ, where θ is the half-angle of the wave cone propagating from the object, and captured by the system.

8.21. Within the Fraunhofer approximation, analyze the pattern produced by a 1D diffraction grating with the periodic transparency profile shown in the figure on the right, for the normal incidence of a monochromatic plane wave.

(figure: transparency profile T(x) of period d, equal to 1 within strips of width w and to 0 between them)

8.22. N equal point charges are attached, at equal intervals, to a circle rotating with a constant angular velocity about its center – see the figure on the right. For what values of N does the system emit: (i) the electric dipole radiation? (ii) the magnetic dipole radiation? (iii) the electric quadrupole radiation?

(figure: N point charges q attached to a circle of radius R at equal angular intervals 2π/N, rotating with angular velocity ω about the center)

8.23. What general statements can you make about: (i) the electric dipole radiation, and (ii) the magnetic dipole radiation, due to a collision of an arbitrary number of similar non-relativistic classical particles?

8.24. Calculate the angular distribution and the total power radiated by a small planar loop antenna of radius R, fed with ac current with frequency ω and amplitude I₀, into free space.

8.25. The orientation of a magnetic dipole, with a constant magnitude m of its moment, is rotating about a certain axis with an angular velocity ω, with the angle Θ between them staying constant. Calculate the angular distribution and the average power of its radiation into free space.

8.26. Solve Problem 12 (also in the low-frequency limit kR << 1) in the limits of (i) a relatively large skin depth (δ_s >> R), and (ii) of a very small skin depth (δ_s << R).

… many (N ~ γ³ >> 1) harmonics of the particle rotation frequency ω_c, with comparable amplitudes. However, if the orbital frequency fluctuates even slightly (δω_c/ω_c > 1/N ~ 1/γ³), as happens in most practical systems, the radiation pulses are not coherent, so the average radiation power spectrum may be calculated as that of one pulse, multiplied by the number of pulses per second. In this case, the spectrum is continuous, extending from low frequencies all the way to approximately

$$\omega_{\max} \sim \frac{1}{\Delta t} \sim \gamma^3 \omega_c. \tag{10.50}$$
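The reciprocity behind Eq. (50) – a pulse of duration Δt has a spectrum extending to ω ~ 1/Δt – may be illustrated with a toy numeric Fourier transform (my own sketch, using an arbitrary Gaussian pulse shape rather than the actual synchrotron waveform):

```python
import math

# Fourier amplitude of a Gaussian pulse exp(-t^2 / 2 dt^2), computed by
# direct numerical integration; the pulse is even, so the transform is real.
def spectrum(omega, dt, T=20.0, n=4000):
    h = 2 * T / n
    return abs(sum(math.exp(-(-T + (j + 0.5) * h) ** 2 / (2 * dt ** 2)) *
                   math.cos(omega * (-T + (j + 0.5) * h))
                   for j in range(n)) * h)

def half_width(dt):
    """Frequency at which the spectrum falls to half of its omega = 0 value."""
    peak, w = spectrum(0.0, dt), 0.0
    while spectrum(w, dt) > peak / 2:
        w += 0.01
    return w

w_long, w_short = half_width(1.0), half_width(0.5)
print(w_long, w_short)   # halving the pulse duration doubles the spectral width
```

Halving the pulse duration indeed doubles the spectral half-width, which is exactly the scaling that turns the ~1/γ³ω_c pulse duration into the ~γ³ω_c spectral extent of Eq. (50).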

In order to verify and quantify this result, let us calculate the spectrum of radiation due to a single pulse. For that, we should first make the general notion of the radiation spectrum quantitative. Let us represent an arbitrary electric field (say that of the synchrotron radiation we are studying now) observed at a fixed point r, as a function of the observation time t, as a Fourier integral:14

$$\mathbf{E}(t) = \int_{-\infty}^{+\infty} \mathbf{E}_\omega\, e^{-i\omega t}\, d\omega. \tag{10.51}$$



13

The fact that the in-plane component of each electric field's pulse E(t) is antisymmetric with respect to its central point, and hence vanishes at that point (as Fig. 7b shows), readily follows from Eq. (19). 14 In contrast to the single-frequency case (i.e. a monochromatic wave), we may avoid taking the real part of the complex function E_ω e^{-iωt} by requiring that in Eq. (51), E_{-ω} = E_ω*. However, it is important to remember the factor ½ required for the transition to a monochromatic wave of frequency ω₀ and with real amplitude E₀: E_ω = E₀[δ(ω – ω₀) + δ(ω + ω₀)]/2.


This expression may be plugged into the formula for the total energy of the radiation pulse (i.e. of the loss of the particle's energy ℰ) per unit solid angle:15

$$\frac{d\mathcal{E}}{d\Omega} = \int_{-\infty}^{+\infty} S_n(t)\, R^2\, dt = \frac{R^2}{Z_0}\int_{-\infty}^{+\infty} E^2(t)\, dt. \tag{10.52}$$

This substitution, followed by a natural change of the integration order, yields

$$\frac{d\mathcal{E}}{d\Omega} = \frac{R^2}{Z_0}\int_{-\infty}^{+\infty} d\omega \int_{-\infty}^{+\infty} d\omega'\; \mathbf{E}_\omega\cdot\mathbf{E}_{\omega'}\int_{-\infty}^{+\infty} dt\, e^{-i(\omega+\omega')t}. \tag{10.53}$$

But the inner integral (over t) is just 2πδ(ω + ω').16 This delta function kills one of the frequency integrals (say, one over ω'), and Eq. (53) gives us a result that may be recast as

$$\frac{d\mathcal{E}}{d\Omega} = \int_0^{\infty} I(\omega)\, d\omega, \quad\text{with}\quad I(\omega) \equiv \frac{4\pi R^2}{Z_0}\,\mathbf{E}_\omega\cdot\mathbf{E}_{-\omega} = \frac{4\pi R^2}{Z_0}\,\mathbf{E}_\omega\cdot\mathbf{E}_\omega^*, \tag{10.54}$$

where the evident frequency symmetry of the scalar product E_ω·E_{-ω} has been utilized to fold the integral of I(ω) to positive frequencies only. The first of Eqs. (54) makes the physical sense of the function I(ω) very clear: this is the so-called spectral density of the electromagnetic radiation (per unit solid angle).17

To calculate the spectral density, we can express the function E_ω via E(t) using the Fourier transform reciprocal to Eq. (51):

$$\mathbf{E}_\omega = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \mathbf{E}(t)\, e^{i\omega t}\, dt. \tag{10.55}$$

In the particular case of radiation by a single point charge, we may use here the second (radiative) term of Eq. (19):

$$\mathbf{E}_\omega = \frac{1}{2\pi}\,\frac{q}{4\pi\varepsilon_0}\,\frac{1}{cR}\int_{-\infty}^{+\infty}\left[\frac{\mathbf{n}\times\{(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\}}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^3}\right]_{\rm ret} e^{i\omega t}\, dt. \tag{10.56}$$

Since the vectors n and β are more natural functions of the radiation's emission (retarded) time t_ret, let us use Eqs. (5) and (16) to exclude the observation time t from this integral:

$$\mathbf{E}_\omega = \frac{1}{2\pi}\,\frac{q}{4\pi\varepsilon_0}\,\frac{1}{cR}\int_{-\infty}^{+\infty}\left[\frac{\mathbf{n}\times\{(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\}}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^2}\,\exp\left\{i\omega\left(t_{\rm ret}+\frac{R_{\rm ret}}{c}\right)\right\}\right]_{\rm ret} dt_{\rm ret}. \tag{10.57}$$

Assuming that the observer is sufficiently far from the particle,18 we may treat the unit vector n as a constant and also use the approximation (8.19) to reduce Eq. (57) to

15 Note that the expression under this integral differs from dP/dΩ defined by Eq. (29) by the absence of the term (1 – β·n) = ∂t_ret/∂t – see Eq. (16). This is natural because now we are calculating the wave energy arriving at the observation point r during the time interval dt rather than dt_ret. 16 See, e.g., MA Eq. (14.4). 17 The notion of spectral density may be readily generalized to random processes – see, e.g., SM Sec. 5.4. 18 According to the estimate (49), for a synchrotron radiation's pulse, this restriction requires the observer to be much farther than Δr' ~ cΔt ~ R/γ³ from the particle. With the values R ~ 10⁴ m and γ ~ 10⁵ mentioned above, Δr' ~ 10⁻¹¹ m, so this requirement is satisfied for any realistic radiation detector.

$$\mathbf{E}_\omega = \frac{1}{2\pi}\,\frac{q}{4\pi\varepsilon_0}\,\frac{1}{cR}\,\exp\left\{\frac{i\omega r}{c}\right\}\int_{-\infty}^{+\infty}\left[\frac{\mathbf{n}\times\{(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\}}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^2}\,\exp\left\{i\omega\left(t-\frac{\mathbf{n}\cdot\mathbf{r}'}{c}\right)\right\}\right]_{\rm ret} dt_{\rm ret}. \tag{10.58}$$

Plugging this expression into Eq. (54), and then using the definitions c ≡ 1/(ε₀μ₀)^{1/2} and Z₀ ≡ (μ₀/ε₀)^{1/2}, we get19

$$I(\omega) = \frac{Z_0 q^2}{16\pi^3}\left|\int_{-\infty}^{+\infty}\left[\frac{\mathbf{n}\times\{(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\}}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^2}\,\exp\left\{i\omega\left(t-\frac{\mathbf{n}\cdot\mathbf{r}'}{c}\right)\right\}\right]_{\rm ret} dt_{\rm ret}\right|^2. \tag{10.59}$$





This result may be further simplified by noticing that the fraction before the exponent may be represented as a full derivative over t_ret,

$$\left[\frac{\mathbf{n}\times\{(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\}}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^2}\right]_{\rm ret} = \left[\frac{d}{dt}\,\frac{\mathbf{n}\times(\mathbf{n}\times\boldsymbol{\beta})}{1-\boldsymbol{\beta}\cdot\mathbf{n}}\right]_{\rm ret}, \tag{10.60}$$

and working out the resulting integral by parts. At this operation, the time differentiation of the parentheses in the exponent gives d[t_ret – n·r'(t_ret)/c]/dt_ret = (1 – n·u/c)_ret ≡ (1 – β·n)_ret, leading to the cancellation of the remaining factor in the denominator and hence to a very simple general result:20

Relativistic radiation: spectral density

$$I(\omega) = \frac{Z_0 q^2 \omega^2}{16\pi^3}\left|\int_{-\infty}^{+\infty}\left[\mathbf{n}\times(\mathbf{n}\times\boldsymbol{\beta})\,\exp\left\{i\omega\left(t-\frac{\mathbf{n}\cdot\mathbf{r}'}{c}\right)\right\}\right]_{\rm ret} dt_{\rm ret}\right|^2. \tag{10.61}$$

Now returning to the particular case of the synchrotron radiation, it is beneficial to choose the origin of time t_ret so that at t_ret = 0, the angle θ between the vectors n and β takes its smallest value θ₀, i.e., in terms of Fig. 5, the vector n is within the [y, z]-plane. Fixing this direction of the axes so that they do not move, we can redraw that figure as shown in Fig. 8.

Fig. 10.8. Deriving the synchrotron radiation's spectral density. The vector n is static within the [y, z]-plane, at angle θ₀ to the z-axis, while the vectors r'(t_ret) and β_ret rotate, within the [x, z]-plane, with the angular velocity ω_c of the particle; α ≡ ω_c t_ret.

In this “lab” reference frame, the vector n does not depend on time, while the vectors r'(t_ret) and β_ret do depend on it via the angle α ≡ ω_c t_ret:

19 Note that for our current purposes of calculation of the spectral density of radiation by a single particle, the factor exp{iωr/c} has got canceled. However, as we have seen in Chapter 8, this factor plays a central role in the interference of radiation from several (many) sources. Such interference is important, in particular, in undulators and free-electron lasers – the devices to be (qualitatively) discussed below. 20 Actually, this simplification is not occasional. According to Eq. (10b), the expression under the derivative in the last form of Eq. (60) is just the transverse component of the vector potential A (give or take a constant factor), and from the discussion in Sec. 8.2 we know that this component determines the electric dipole radiation of a system, which dominates the radiation in our current case of a single particle with a non-zero electric charge.

n  0, sin  0 , cos  0 , r' t ret   R1  cos  , 0, R sin  , β ret   sin  , 0,  cos   . (10.62) Now an easy multiplication yields

n  (n  β)ret





  sin  , sin  0 cos  0 cos  ,  sin 2  0 sin  ,

(10.63)

   n  r'    R  (10.64)   expi  t ret  cos  0 sin   . expi  t  c  ret c       As we already know, in the (most interesting) ultra-relativistic limit  >> 1, most radiation is confined to short pulses, so only small angles  ~ ctret ~  –1 may contribute to the integral in Eq. (61). Moreover, since most radiation goes to small angles  ~ 0 ~  –1, it makes sense to consider only such small angles. Expanding both trigonometric functions of these small angles, participating in parentheses of Eq. (64), into the Taylor series, and keeping only the leading terms, we get

t ret 

R  c3 3 R  02 R R  c t ret  cos  0 sin   t ret   c t ret  t ret . c 6 c 2 c c

(10.65)

Since (R/c)c = u/c =   1, in the two last terms we may approximate this parameter by 1. However, it is crucial to distinguish the difference between the two first terms, proportional to (1 – )tret, from zero; as we have done before, we may approximate it with tret/22. On the right-hand side of Eq. (63), which does not have such a critical difference, we may be bolder, taking21

 sin  , sin  0 cos  0 cos  ,  sin 2  0 sin     ,  0 , 0   c t ret ,  0 , 0. As a result, Eq. (61) is reduced to

$$I(\omega) = \frac{Z_0 q^2}{16\pi^3}\left|a_x\mathbf{n}_x + a_y\mathbf{n}_y\right|^2 = \frac{Z_0 q^2}{16\pi^3}\left(|a_x|^2 + |a_y|^2\right), \tag{10.67}$$

where a_x and a_y are the following dimensionless factors:

$$a_x = \omega\int_{-\infty}^{+\infty} \omega_c t_{\rm ret}\,\exp\left\{\frac{i\omega}{2}\left[\left(\theta_0^2+\gamma^{-2}\right)t_{\rm ret}+\frac{\omega_c^2\, t_{\rm ret}^3}{3}\right]\right\} dt_{\rm ret},$$
$$a_y = \omega\int_{-\infty}^{+\infty} \theta_0\,\exp\left\{\frac{i\omega}{2}\left[\left(\theta_0^2+\gamma^{-2}\right)t_{\rm ret}+\frac{\omega_c^2\, t_{\rm ret}^3}{3}\right]\right\} dt_{\rm ret}, \tag{10.68}$$

that describe the frequency spectra of two components of the synchrotron radiation, with mutually perpendicular polarization planes. Defining the following dimensionless parameter

$$\zeta \equiv \frac{\omega}{3\omega_c}\left(\theta_0^2+\gamma^{-2}\right)^{3/2}, \tag{10.69}$$

21 This expression confirms that the in-plane (x) component of the electric field is an odd function of t_ret, and hence of t – t₀ (see its sketch in Fig. 7b), while the normal (y) component is an even function of this difference. Also, note that for an observer exactly in the rotation plane (θ₀ = 0) the latter component equals zero for all times – a fact that could be predicted from the very beginning because of the evident mirror symmetry of the problem with respect to the particle's rotation plane.

which is proportional to the observation frequency, and changing the integration variable to ξ ≡ ω_c t_ret/(θ₀² + γ⁻²)^{1/2}, the integrals (68) may be reduced to the modified Bessel functions of the second kind, but with fractional indices:

$$a_x = \frac{\omega}{\omega_c}\left(\theta_0^2+\gamma^{-2}\right)\int_{-\infty}^{+\infty}\xi\,\exp\left\{i\,\frac{3\zeta}{2}\left(\xi+\frac{\xi^3}{3}\right)\right\} d\xi = \frac{2\sqrt{3}\, i\,\zeta}{\left(\theta_0^2+\gamma^{-2}\right)^{1/2}}\, K_{2/3}(\zeta),$$
$$a_y = \frac{\omega}{\omega_c}\,\theta_0\left(\theta_0^2+\gamma^{-2}\right)^{1/2}\int_{-\infty}^{+\infty}\exp\left\{i\,\frac{3\zeta}{2}\left(\xi+\frac{\xi^3}{3}\right)\right\} d\xi = \frac{2\sqrt{3}\,\zeta\,\theta_0}{\theta_0^2+\gamma^{-2}}\, K_{1/3}(\zeta). \tag{10.70}$$

Figure 9a shows the dependence of the Bessel factors defining the amplitudes a_x and a_y on the normalized observation frequency ζ. It shows that the radiation intensity changes with frequency relatively slowly (note the log-log scale of the plot!) until the normalized frequency defined by Eq. (69) is increased beyond ~1. For the most important observation angles θ₀ ~ γ⁻¹, this means that our estimate (50) is indeed correct, though formally the frequency spectrum extends to infinity.22

(figure: panel (a) shows the factors ζK_{2/3}(ζ) and ζK_{1/3}(ζ), and panel (b) the function ζ∫_ζ^∞ K_{5/3}(ξ)dξ, both on log-log scales over 0.01 ≤ ζ ≤ 10)

Fig. 10.9. The frequency spectra of: (a) two components of the synchrotron radiation, at a fixed angle θ₀, and (b) its total (polarization- and angle-averaged) intensity.
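For a quick numeric look at such spectra without special-function libraries (my own sketch, not part of the original text; the quadrature limits and step are arbitrary), K_ν may be evaluated directly from its standard integral representation K_ν(ζ) = ∫₀^∞ e^{−ζ cosh t} cosh(νt) dt:

```python
import math

# Modified Bessel function of the second kind, from the integral
# representation K_nu(z) = ∫_0^∞ exp(-z cosh t) cosh(nu t) dt.
def K(nu, z, tmax=20.0, n=4000):
    h = tmax / n
    return sum(math.exp(-z * math.cosh((j + 0.5) * h)) *
               math.cosh(nu * (j + 0.5) * h) for j in range(n)) * h

# Sanity check against the closed form K_{1/2}(z) = sqrt(pi/2z) exp(-z):
print(K(0.5, 1.0), math.sqrt(math.pi / 2) * math.exp(-1.0))

# The factors zeta*K_{2/3}(zeta) and zeta*K_{1/3}(zeta) of Fig. 9a;
# both fall off roughly as exp(-zeta) once zeta exceeds ~1.
for zeta in (0.01, 0.1, 1.0, 10.0):
    print(zeta, zeta * K(2/3, zeta), zeta * K(1/3, zeta))
```

The ratio K(2/3, 5)/K(2/3, 4) is close to e⁻¹ (times a slowly varying prefactor), illustrating the exponential high-frequency cutoff of the synchrotron spectrum.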

Naturally, the spectral density integrated over the full solid angle exhibits a similar frequency behavior. Without performing the integration,23 let me just give the result (also valid for γ >> 1 only) for the reader's reference:

$$\oint_{4\pi} I(\omega)\, d\Omega = \frac{\sqrt{3}}{4\pi}\, Z_0 q^2\,\gamma\,\varsigma \int_\varsigma^{\infty} K_{5/3}(\xi)\, d\xi, \quad\text{where}\quad \varsigma \equiv \frac{2}{3}\,\frac{\omega}{\gamma^3\omega_c}. \tag{10.71}$$

Figure 9b shows the dependence of this integral on the normalized frequency ς. (This plot is sometimes called the “universal flux curve”.) In accordance with the estimate (50), it reaches the maximum at

$$\varsigma_{\max} \approx 0.3, \quad\text{i.e.}\quad \omega_{\max} \approx \frac{\omega_c}{2}\,\gamma^3. \tag{10.72}$$

22 The law of the spectral density decrease at large ζ may be readily obtained from the second of Eqs. (2.158), which is valid for any (even non-integer) Bessel function index n: a_x ∝ a_y ∝ ζ^{-1/2}exp{-ζ}. Here the exponential factor is certainly the most important one. 23 For that, and many other details, the interested reader may be referred, for example, to the fundamental review collection by E. Koch et al. (eds.), Handbook on Synchrotron Radiation (in 5 vols.), North-Holland, 1983–1991, or to a more concise monograph by A. Hofmann, The Physics of Synchrotron Radiation, Cambridge U. Press, 2007.

For example, in the National Synchrotron Light Source (NSLS-II) in the Brookhaven National Laboratory near our SBU campus, with its ring's circumference of 792 m, the electron revolution period T is 2.64 μs. With ω_c = 2π/T ≈ 2.4×10⁶ s⁻¹, for the achieved γ ≈ 6×10³ (ℰ ≈ 3 GeV), we get ω_max ~ 3×10¹⁷ s⁻¹, i.e. the photon energy ℏω_max ~ 200 eV corresponding to soft X-rays. In light of this estimate, the reader may be surprised by Fig. 10, which shows the calculated spectra of the radiation that this facility was designed to produce, with the intensity maxima at photon energies up to a few keV.
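This back-of-envelope estimate can be reproduced in a few lines (my own sketch; the ring parameters are those quoted in the text, and the ω_max ≈ γ³ω_c/2 peak position follows Eq. (72)):

```python
import math

# Back-of-envelope reproduction of the NSLS-II estimate in the text.
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # J*s
eV = 1.602e-19       # J

circumference = 792.0            # m
T = circumference / c            # revolution period ~ 2.64 us
omega_c = 2 * math.pi / T        # ~ 2.4e6 1/s

E = 3e9 * eV                     # electron energy, 3 GeV
mc2 = 0.511e6 * eV               # electron rest energy
gamma = E / mc2                  # ~ 6e3

omega_max = 0.5 * gamma**3 * omega_c      # spectrum peak ~ gamma^3 omega_c / 2
photon_eV = hbar * omega_max / eV         # ~ 200 eV (soft X-rays)
print(f"T = {T*1e6:.2f} us, gamma = {gamma:.0f}")
print(f"omega_max ~ {omega_max:.1e} 1/s, photon energy ~ {photon_eV:.0f} eV")
```

The result lands in the soft X-ray range, consistent with the ~200 eV figure quoted above for the bare bending-magnet orbit.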

Fig. 10.10. Design brightness of various synchrotron radiation sources of the NSLS-II facility. For the bend magnets and wigglers, the “brightness” may be obtained by multiplication of the one-pulse spectral density I(ω) calculated above, by the number of electrons passing the source per second. (Note the non-SI units used by the synchrotron radiation community.) However, for undulators, there is an additional factor due to the partial coherence of radiation – see below. (Adapted from the document NSLS-II Source Properties and Floor Layout that was available online at https://www.bnl.gov/ps/docs/pdf/SourceProperties.pdf in 2011-2020.)

The reason for this discrepancy is that in the NSLS-II, and in all modern synchrotron light sources, most radiation is produced not by the circular orbit itself (which is, by the way, not exactly circular, but consists of a series of straight and bend-magnet sections), but by such bend sections, and by the devices called wigglers and undulators: strings of several strong magnets with alternating field direction (Fig. 11), which induce periodic bending (“wiggling”) of the electron's trajectory, with the synchrotron radiation emitted at each bend.

Fig. 10.11. The generic structure of the wigglers, undulators, and free-electron lasers: an electron beam passing a string of magnets of alternating polarity with spatial period λ_u, emitting an X-ray beam. (Adapted from http://www.xfel.eu/overview/how_does_it_work/.)

The difference between the wigglers and the undulators is more quantitative than qualitative: the former devices have a larger spatial period λ_u (the distance between the adjacent magnets of the same polarity, see Fig. 11), giving enough space for the electron beam to bend by an angle larger than γ⁻¹, i.e. larger than the radiation cone's width. As a result, the radiation reaches an in-plane observer as a periodic sequence of individual pulses – see Fig. 12a.

Fig. 10.12. Waveforms of the radiation emitted by (a) a wiggler and (b) an undulator – schematically; the pulse spacing is Δt ≈ λ_u/2γ²c.

The shape of each pulse, and hence its frequency spectrum, are essentially similar to those discussed above,24 but with much higher local values of ω_c, and hence ω_max – see Fig. 10. Another difference is a much higher frequency of the pulses. Indeed, the fundamental Eq. (16) allows us to calculate the time distance between them, for the observer, as

$$\Delta t = \frac{\partial t}{\partial t_{\rm ret}}\,\Delta t_{\rm ret} \approx (1-\beta)\,\frac{\lambda_u}{u} \approx \frac{1}{2\gamma^2}\,\frac{\lambda_u}{c}\,, \tag{10.73}$$

24 Indeed, the period λ_u is typically a few centimeters (see the numbers in Fig. 10), i.e. is much larger than the interval Δr' ~ R/γ³ estimated above. Hence the synchrotron radiation results may be applied locally, to each electron beam's bend. (In this context, a simple problem for the reader: use Eqs. (19) and (63) to explain the difference between the shapes of the in-plane electric field pulses emitted at the opposite magnetic poles of the wiggler, which is schematically shown in Fig. 12a.)

where the first two relations are valid at λ_u …

… d³k >> 1/V of the k-space is dN = d³k/(Δk_x Δk_y Δk_z) = (V/π³)d³k. Frequently, it is more convenient to work with traveling waves (88); in this case, we should take into account that, as was just discussed, there are two different traveling wave59

59 In some systems (e.g., a particle interacting with a potential well of a finite depth), a discrete energy spectrum within a certain energy interval may coexist with a continuous spectrum in a complementary interval. However, the conceptual philosophy of eigenfunctions and eigenvalues remains the same even in this case. 60 This is, of course, the general property of waves of any physical nature, propagating in a linear medium – see, e.g., CM Sec. 6.5 and/or EM Sec. 7.3.

numbers (say, +k_x and –k_x) corresponding to each standing wave vector's k_x > 0. Hence the same number of physically different states corresponds to a 2³ = 8-fold larger k-space or, equivalently, to an 8-fold smaller number of states per unit volume d³k:

Number of 3D states

$$dN = \frac{V}{(2\pi)^3}\, d^3k. \tag{1.90}$$

For dN >> 1, this expression is independent of the boundary conditions and is frequently represented as the following summation rule

$$\lim_{k^3 V \to \infty}\sum_{\mathbf{k}} f(\mathbf{k}) = \int f(\mathbf{k})\, dN = \frac{V}{(2\pi)^3}\int f(\mathbf{k})\, d^3k, \tag{1.91}$$

Summation over 3D states

where f(k) is any function of k. Note that if the same wave vector k corresponds to several internal quantum states (such as spin – see Chapter 4), the right-hand side of Eq. (91) requires its multiplication by the corresponding degeneracy factor of orbital states.61 Finally, note that in systems with a reduced wavefunction dimensionality, Eq. (90) for the number of states at large k (i.e., for an essentially free particle motion) should be replaced accordingly: in a 2D system of area A >> 1/k²,

Number of 2D states

$$dN = \frac{A}{(2\pi)^2}\, d^2k, \tag{1.92}$$

while in a 1D system of length l >> 1/k,

Number of 1D states

$$dN = \frac{l}{2\pi}\, dk, \tag{1.93}$$

with the corresponding changes of the summation rule (91). This change has important implications for the density of states on the energy scale, dN/dE: it is straightforward (and hence left for the reader :-) to use Eqs. (90), (99), and (100) to show that for free 3D particles the density increases with E (proportionally to E^{1/2}), for free 2D particles it does not depend on energy at all, while for free 1D particles it scales as E^{–1/2}, i.e. decreases with energy.

1.8. Exercise problems

1.1. The actual postulate made by N. Bohr in his original 1913 paper was not directly Eq. (8), but rather the assumption that at quantum leaps between adjacent electron orbits with n >> 1, the hydrogen atom either emits or absorbs the energy ΔE = ℏω, where ω is its classical radiation frequency – according to classical electrodynamics, equal to the angular velocity of the electron's rotation.62 Prove that this postulate, complemented with the natural requirement that L = 0 at n = 0, is equivalent to Eq. (8).

1.2. Generalize the Bohr theory for a hydrogen-like atom/ion with a nucleus with the electric charge Q = Ze, to the relativistic case.

61

The front factor 2 in Eq. (1) for the number of electromagnetic wave modes is just one embodiment of the degeneracy factor, in that case describing two different polarizations of the waves with the same wave vector. 62 See, e.g., EM Sec. 8.2.

1.3. A hydrogen atom, initially in the lowest excited state, returns to its ground state by emitting a photon propagating in a certain direction. Use the same approach as in Sec. 1(iv) to calculate the photon's frequency reduction due to atomic recoil.

1.4. Use Eq. (53) to prove that the linear operators of quantum mechanics are commutative: $\hat{A}_2 + \hat{A}_1 = \hat{A}_1 + \hat{A}_2$, and associative: $(\hat{A}_1 + \hat{A}_2) + \hat{A}_3 = \hat{A}_1 + (\hat{A}_2 + \hat{A}_3)$.

1.5. Prove that for any time-independent Hamiltonian operator Ĥ and two arbitrary complex functions f(r) and g(r),

$$\int f(\mathbf{r})\,\hat{H} g(\mathbf{r})\, d^3r = \int \left[\hat{H} f(\mathbf{r})\right] g(\mathbf{r})\, d^3r.$$

1.6. Prove that the Schrödinger equation (25) with the Hamiltonian operator given by Eq. (41) is Galilean form-invariant, provided that the wavefunction is transformed as

$$\Psi'(\mathbf{r}', t') = \Psi(\mathbf{r}, t)\exp\left\{-i\,\frac{m\mathbf{v}\cdot\mathbf{r}}{\hbar} + i\,\frac{mv^2 t}{2\hbar}\right\},$$

where the prime sign marks the variables measured in the reference frame 0' that moves, without rotation and with a constant velocity v, relative to the “lab” frame 0. Give a physical interpretation of this transformation.

1.7.* Prove the so-called Hellmann-Feynman theorem:63

E n H ,    n where  is some c-number parameter, on which the time-independent Hamiltonian Hˆ , and hence its eigenenergies En, depend. 1.8.* Use Eqs. (73) and (74) to analyze the effect of phase locking of Josephson oscillations on the dc current flowing through a weak link between two superconductors (frequently called the Josephson junction), assuming that an external source applies to the junction a sinusoidal ac voltage with frequency  and amplitude A. 1.9. Calculate x, px, x, and px for the eigenstate {nx, ny, nz} of a particle in a rectangular hard-wall box described by Eq. (77), and compare the product xpx with the Heisenberg’s uncertainty relation. 1.10. Looking at the lower (red) line in Fig. 8, it seems plausible that the lowest-energy eigenfunction (84) of the 1D boundary problem (83) may be well approximated with an inverted quadratic parabola: X(x)  Cx(ax – x), where C is a normalization constant. Explore how good this approximation is.

63

Despite this common name, H. Hellmann (in 1937) and R. Feynman (in 1939) were not the first ones in the long list of physicists who had (apparently, independently) discovered this equality. Indeed, it has been traced back to a 1922 paper by W. Pauli and was carefully proved by P. Güttinger in 1931.


1.11. A particle placed in a hard-wall rectangular box with sides {a_x, a_y, a_z} is in its ground state. Calculate the average force acting on each face of the box. Can the forces be characterized by a certain pressure?

1.12. A 1D quantum particle was initially in the ground state of a very deep, flat potential well of width a:

$$U(x) = \begin{cases} 0, & \text{for } -a/2 < x < +a/2, \\ +\infty, & \text{otherwise.} \end{cases}$$

At some instant, the well's width is abruptly increased to a new value a' > a, leaving the potential symmetric with respect to the point x = 0, and then is kept constant. Calculate the probability that after the change, the particle is still in the ground state of the system.

1.13. At t = 0, a 1D particle of mass m is placed into a hard-wall, flat-bottom potential well

$$U(x) = \begin{cases} 0, & \text{for } 0 \le x \le a, \\ +\infty, & \text{otherwise,} \end{cases}$$

in a 50/50 linear superposition of the lowest-energy (ground) state and the first excited state. Calculate:

(i) the normalized wavefunction Ψ(x, t) for arbitrary time t ≥ 0, and
(ii) the time evolution of the expectation value ⟨x⟩ of the particle's coordinate.

1.14. Calculate the potential profiles U(x) for which the following wavefunctions,

(i) Ψ = c exp{–ax² – ibt}, and
(ii) Ψ = c exp{–a|x| – ibt}

(with real coefficients a > 0 and b), satisfy the 1D Schrödinger equation for a particle with mass m. For each case, calculate ⟨x⟩, ⟨p_x⟩, δx, and δp_x, and compare the product δx·δp_x with the Heisenberg uncertainty relation.

1.15. The wavefunction of an excited stationary state of a 1D particle moving in a potential profile U(x) is related to that of its ground state as ψ_e(x) ∝ xψ_g(x). Calculate the function U(x).

1.16. A 1D particle of mass m, moving in a potential well U(x), has the following stationary eigenfunction: ψ(x) = C/cosh(κx), where C is the normalization constant, and κ is a real constant. Calculate the function U(x) and the state's eigenenergy E.

1.17. Calculate the density dN/dE of traveling-wave quantum states inside large hard-wall rectangular boxes of various dimensions: d = 1, 2, and 3.

1.18.* A 1D particle is confined in a potential well of width a, with a flat bottom and hard, infinitely high walls. Use the finite-difference method with steps a/2 and a/3 to find as many eigenenergies as possible. Compare the results with each other, and with the exact formula.64

⁶⁴ You may like to start by reading about the finite-difference method – see, e.g., CM Sec. 8.5 or EM Sec. 2.11.

Chapter 1, Page 26 of 26

Essential Graduate Physics
QM: Quantum Mechanics

Chapter 2. 1D Wave Mechanics

Even the simplest, 1D version of wave mechanics enables quantitative analysis of many important quantum-mechanical effects. The order of their discussion in this chapter is dictated mostly by mathematical convenience – going from the simplest potential profiles to more complex ones, so that we may build upon the previous results. However, the reader is advised to focus not on the math, but rather on the physics of the non-classical phenomena it describes, ranging from particle penetration into classically-forbidden regions, to quantum-mechanical tunneling, to the metastable state decay, to covalent bonding, to quantum oscillations, to energy bands and gaps.

2.1. Basic relations

In many important cases, the wavefunction may be represented in the form Ψ(x, t)χ(y, z), where Ψ(x, t) satisfies the 1D version of the Schrödinger equation:¹

$$ i\hbar\,\frac{\partial \Psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \Psi(x,t)}{\partial x^2} + U(x,t)\,\Psi(x,t). \tag{2.1} $$
(Schrödinger equation)

If the transverse factor χ(y, z) is normalized:

$$ \int |\chi(y,z)|^2\, dy\, dz = 1, \tag{2.2} $$

then the similar integration of Eq. (1.22b) over the [y, z] plane gives the following probability to find the particle on a segment [x₁, x₂]:

$$ W(t) = \int_{x_1}^{x_2} \Psi(x,t)\,\Psi^*(x,t)\, dx. \tag{2.3a} $$
(Probability)

In particular, if the particle under analysis is definitely somewhere inside the system, the normalization of its 1D wavefunction Ψ(x, t) may be provided by extending this integral to the whole x-axis:

$$ \int_{-\infty}^{+\infty} w(x,t)\, dx = 1, \quad \text{where } w(x,t) \equiv \Psi(x,t)\,\Psi^*(x,t). \tag{2.3b} $$
(Normalization)


Similarly, the [y, z]-integration of Eq. (1.23) shows that in this case, the expectation value of any observable depending only on the coordinate x (and possibly time) may be expressed as

$$ \langle A \rangle(t) = \int \Psi^*(x,t)\, \hat{A}\, \Psi(x,t)\, dx, \tag{2.4} $$
(Expectation value)

and that of Eq. (1.47) makes it valid for the whole probability current along the x-axis (a scalar):¹

¹ Note that for this reduction, it is not sufficient for the potential energy U(r, t) to depend on just one spatial coordinate (x). Actually, Eq. (1) is a more robust model for the description of the opposite situations when the potential energy changes within the [y, z] plane much faster than in the x-direction, so that the transverse factor χ(y, z) is confined in space much more than Ψ, if the confining potential profile is independent of x and t. Let me leave a semi-quantitative analysis of this issue for the reader's exercise. (See also Sec. 3.1.)

© K. Likharev


$$ I(x,t) \equiv \int j_x\, dy\, dz = \frac{\hbar}{m}\, \mathrm{Im}\!\left( \Psi^* \frac{\partial \Psi}{\partial x} \right) = \frac{\hbar}{m}\, |\Psi(x,t)|^2\, \frac{\partial \varphi}{\partial x}, \tag{2.5} $$
(Probability current)

where φ is the wavefunction's phase. Then the continuity equation (1.48) for any segment [x₁, x₂] takes the form

$$ \frac{dW}{dt} + I(x_2) - I(x_1) = 0. \tag{2.6} $$
(Continuity equation)

The above formulas are sufficient for the analysis of 1D problems of wave mechanics, but before proceeding to particular cases, let me deliver on my earlier promise to prove that Heisenberg's uncertainty relation (1.35) is indeed valid for any wavefunction Ψ(x, t). For that, let us consider the following positive (or at least non-negative) integral:

$$ J(\lambda) \equiv \int \left| \lambda x \Psi + \frac{\partial \Psi}{\partial x} \right|^2 dx \geq 0, \tag{2.7} $$

where  is an arbitrary real constant, and assume that at x   the wavefunction vanishes, together with its first derivative – as we will see below, a very common case. Then the left-hand side of Eq. (7) may be recast as 2 *         J     x   dx    x    x    dx x x  x     (2.8)      *   *  *  * 2 2  dx       x  dx    x   dx . x x x x     

According to Eq. (4), the first term in the last form of Eq. (8) is just λ²⟨x²⟩, while the second and the third integrals may be worked out by parts:

$$ \int x \left( \Psi \frac{\partial \Psi^*}{\partial x} + \Psi^* \frac{\partial \Psi}{\partial x} \right) dx = \int x\, \frac{\partial (\Psi \Psi^*)}{\partial x}\, dx = \int x\, d(\Psi \Psi^*) = \left. x\, \Psi \Psi^* \right|_{x=-\infty}^{x=+\infty} - \int \Psi \Psi^*\, dx = -1, \tag{2.9} $$

$$ \int \frac{\partial \Psi}{\partial x} \frac{\partial \Psi^*}{\partial x}\, dx = \left. \Psi^* \frac{\partial \Psi}{\partial x} \right|_{x=-\infty}^{x=+\infty} - \int \Psi^* \frac{\partial^2 \Psi}{\partial x^2}\, dx = \frac{1}{\hbar^2} \int \Psi^*\, \hat{p}_x^2\, \Psi\, dx = \frac{\langle p_x^2 \rangle}{\hbar^2}. \tag{2.10} $$

As a result, Eq. (7) takes the following form:

$$ J(\lambda) = \lambda^2 \langle x^2 \rangle - \lambda + \frac{\langle p_x^2 \rangle}{\hbar^2} \geq 0, \quad \text{i.e.} \quad \lambda^2 + a\lambda + b \geq 0, \quad \text{with } a \equiv -\frac{1}{\langle x^2 \rangle},\ \ b \equiv \frac{\langle p_x^2 \rangle}{\hbar^2 \langle x^2 \rangle}. \tag{2.11} $$

This inequality should be valid for any real λ, so the corresponding quadratic equation, λ² + aλ + b = 0, can have either one (degenerate) real root or no real roots at all. This is only possible if its determinant, Det = a² – 4b, is non-positive, leading to the following requirement:

$$ \langle x^2 \rangle \langle p_x^2 \rangle \geq \frac{\hbar^2}{4}. \tag{2.12} $$

In particular, if ⟨x⟩ = 0 and ⟨px⟩ = 0, then according to Eq. (1.33), Eq. (12) takes the form

$$ \langle \tilde{x}^2 \rangle \langle \tilde{p}_x^2 \rangle \geq \frac{\hbar^2}{4}, \tag{2.13} $$
(Heisenberg's uncertainty relation)


which, according to the definition (1.34) of the r.m.s. uncertainties, is equivalent to Eq. (1.35).²

Now let us notice that Heisenberg's uncertainty relation looks very similar to the commutation relation between the corresponding operators:

$$ [\hat{x}, \hat{p}_x]\Psi \equiv (\hat{x}\hat{p}_x - \hat{p}_x \hat{x})\Psi = x \left( -i\hbar \frac{\partial \Psi}{\partial x} \right) - \left( -i\hbar \frac{\partial}{\partial x} \right)(x\Psi) = i\hbar\, \Psi. \tag{2.14a} $$

Since this relation is valid for any wavefunction Ψ(x, t), it may be represented as an operator equality:

$$ [\hat{x}, \hat{p}_x] = i\hbar \neq 0. \tag{2.14b} $$
(Coordinate/momentum operators' commutator)
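Eq. (14a) is easy to confirm numerically. The sketch below (an illustration added here, not from the original text; units with ℏ = 1 are an assumption) applies x̂p̂ₓ − p̂ₓx̂ to a sample Gaussian wavefunction on a grid, taking the derivative spectrally, and checks the result against iℏΨ:

```python
import numpy as np

hbar = 1.0                                   # illustration units (assumption)
N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

def p_op(psi):                               # momentum operator -i*hbar*d/dx via FFT
    return -1j*hbar*np.fft.ifft(1j*k*np.fft.fft(psi))

psi = np.exp(-x**2/4 + 1j*1.3*x)             # sample (unnormalized) Gaussian packet

commutator = x*p_op(psi) - p_op(x*psi)       # (x p - p x) acting on psi
print(np.allclose(commutator, 1j*hbar*psi, atol=1e-8))
```

The agreement holds to machine precision for any sufficiently smooth, decaying test function, as it must, since (14a) is an identity.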

In Sec. 4.5 we will see that the relation between Eqs. (13) and (14) is just a particular case of a general relation between the expectation values of non-commuting operators and their commutators.

2.2. Free particle: Wave packets

Let us start our discussion of particular problems with the free 1D motion, i.e. with U(x, t) = 0. From Eq. (1.29), it is evident that in the 1D case, a similar "fundamental" (i.e. a particular but the most important) solution of the Schrödinger equation (1) is a sinusoidal ("monochromatic") wave

$$ \Psi_0(x,t) = \text{const} \times \exp\{ i(k_0 x - \omega_0 t) \}. \tag{2.15} $$

According to Eqs. (1.32), it describes a particle with a definite momentum³ p₀ = ℏk₀ and energy E₀ = ℏω₀ = ℏ²k₀²/2m. However, for this wavefunction, the product ΨΨ* does not depend on either x or t, so the particle is completely delocalized, i.e. the probability to find it is the same along the whole x-axis, at all times. In order to describe a space-localized state, let us form, at the initial moment of time (t = 0), a wave packet of the type shown in Fig. 1.6, by multiplying the sinusoidal waveform (15) by some smooth envelope function A(x). As the most important particular example, consider the Gaussian wave packet

$$ \Psi(x,0) = A(x)\, e^{i k_0 x}, \quad \text{with } A(x) = \frac{1}{(2\pi)^{1/4} (\delta x)^{1/2}} \exp\left\{ -\frac{x^2}{(2\,\delta x)^2} \right\}. \tag{2.16} $$
(Gaussian wave packet: t = 0)

(By the way, Fig. 1.6a shows exactly such a packet.) The pre-exponential factor in this envelope function has been selected to have the initial probability density,

$$ w(x,0) = \Psi^*(x,0)\,\Psi(x,0) = A^*(x) A(x) = \frac{1}{(2\pi)^{1/2}\, \delta x} \exp\left\{ -\frac{x^2}{2(\delta x)^2} \right\}, \tag{2.17} $$

normalized as in Eq. (3b), for any parameters δx and k₀.⁴

² Eq. (13) may be proved even if ⟨x⟩ and ⟨px⟩ are not equal to zero, by making the replacements x → x – ⟨x⟩ and ∂/∂x → ∂/∂x + i⟨p⟩/ℏ in Eq. (7), and then repeating all the calculations – which in this case become somewhat bulky. In Chapter 4, equipped with the bra-ket formalism, we will derive a more general uncertainty relation, which includes Heisenberg's relation (13) as a particular case, in a more efficient way.
³ From this point on to the end of this chapter, I will drop the index x in the x-components of the vectors k and p.
⁴ This fact may be readily proved using the well-known integral of the Gaussian function (17), in infinite limits – see, e.g., MA Eq. (6.9b). It is also straightforward to use MA Eq. (6.9c) to prove that for the wave packet (16), the parameter δx is indeed the r.m.s. uncertainty (1.34) of the coordinate x, thus justifying its notation.
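Both claims of footnote 4 are easy to verify on a grid; the following sketch (an added illustration, not from the original text, with arbitrary test values of the parameters) checks that the density (17) integrates to 1 and that δx is indeed the r.m.s. width:

```python
import numpy as np

dx_rms, k0 = 1.7, 5.0                       # arbitrary test values of the parameters
x = np.linspace(-40, 40, 20001)
dxg = x[1] - x[0]

A = (2*np.pi)**(-0.25) * dx_rms**(-0.5) * np.exp(-x**2/(2*dx_rms)**2)
w = A**2                                    # probability density, Eq. (17)

norm = np.sum(w)*dxg                        # should equal 1 for any dx_rms, k0
mean_x2 = np.sum(x**2*w)*dxg                # should equal dx_rms**2
print(abs(norm - 1) < 1e-9, abs(np.sqrt(mean_x2) - dx_rms) < 1e-6)
```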


To explore the evolution of this wave packet in time, we could try to solve Eq. (1) with the initial condition (16) directly, but in the spirit of the discussion in Sec. 1.5, it is easier to proceed differently. Let us first represent the initial wavefunction (16) as a sum (1.67) of the eigenfunctions ψk(x) of the corresponding stationary 1D Schrödinger equation (1.60), in our current case

$$ -\frac{\hbar^2}{2m}\,\frac{d^2 \psi_k}{dx^2} = E_k\, \psi_k, \quad \text{with } E_k = \frac{\hbar^2 k^2}{2m}, \tag{2.18} $$

which are simply monochromatic waves,

$$ \psi_k = a_k\, e^{ikx}. \tag{2.19} $$

Since (as was discussed in Sec. 1.7) at the unconstrained motion the spectrum of possible wave numbers k is continuous, the sum (1.67) should be replaced with an integral:⁵

$$ \Psi(x,0) = \int a_k\, e^{ikx}\, dk. \tag{2.20} $$

Now let us notice that from the point of view of mathematics, Eq. (20) is just the usual Fourier transform from the variable k to the "conjugate" variable x, and we can use the well-known formula of the reciprocal Fourier transform to write

$$ a_k = \frac{1}{2\pi} \int \Psi(x,0)\, e^{-ikx}\, dx = \frac{1}{2\pi}\, \frac{1}{(2\pi)^{1/4} (\delta x)^{1/2}} \int \exp\left\{ -\frac{x^2}{(2\,\delta x)^2} - i\tilde{k} x \right\} dx, \quad \text{where } \tilde{k} \equiv k - k_0. \tag{2.21} $$

This Gaussian integral may be worked out by the following standard method, which will be used many times in this course. Let us complement the exponent to the full square of a linear combination of x and k, adding a compensating term independent of x:

$$ \frac{x^2}{(2\,\delta x)^2} + i\tilde{k} x = \left\{ \frac{1}{2\,\delta x} \left[ x + 2i (\delta x)^2\, \tilde{k} \right] \right\}^2 + \tilde{k}^2 (\delta x)^2. \tag{2.22} $$

Since the integration in the right-hand side of Eq. (21) should be performed at constant k̃, in the infinite limits of x, its result would not change if we replace dx with dx̃ ≡ d[x + 2i(δx)²k̃]. As a result, we get:⁶

$$ a_k = \frac{1}{2\pi}\, \frac{1}{(2\pi)^{1/4} (\delta x)^{1/2}}\, \exp\left\{ -\tilde{k}^2 (\delta x)^2 \right\} \int \exp\left\{ -\frac{\tilde{x}^2}{(2\,\delta x)^2} \right\} d\tilde{x} = \left( \frac{1}{2\pi} \right)^{1/2} \frac{1}{(2\pi)^{1/4} (\delta k)^{1/2}} \exp\left\{ -\frac{\tilde{k}^2}{(2\,\delta k)^2} \right\}, \tag{2.23} $$

so ak also has a Gaussian distribution, now along the k-axis, centered to the value k₀ (Fig. 1.6b), with the constant δk defined as

$$ \delta k \equiv \frac{1}{2\,\delta x}. \tag{2.24} $$

Thus we may represent the initial wave packet (16) as

⁵ For the notation brevity, from this point on the infinite limit signs will be dropped in all 1D integrals.
⁶ The fact that the argument's shift is imaginary is not important. (Let me leave proof of this fact for the reader's exercise.)


 1   ( x,0)     2 

1/ 2

 (k  k 0 ) 2  ikx 1 e dk . exp  2  (2 )1/4 ( k )1 / 2   (2 k ) 

(2.25)

From the comparison of this formula with Eq. (16), it is evident that the r.m.s. uncertainty of the wave number k in this packet is indeed equal to δk defined by Eq. (24), thus justifying the notation. The comparison of the last relation with Eq. (1.35) shows that the Gaussian packet represents the ultimate case in which the product δx·δp = ℏδx·δk has the lowest possible value (ℏ/2); for any other envelope's shape, the uncertainty product may only be larger. We could of course get the same result for δk from Eq. (16) using the definitions (1.23), (1.33), and (1.34); the real advantage of Eq. (25) is that it can be readily generalized to t > 0. Indeed, we already know that the time evolution of the wavefunction is always given by Eq. (1.69), for our current case⁷

$$ \Psi(x,t) = \left( \frac{1}{2\pi} \right)^{1/2} \int \frac{1}{(2\pi)^{1/4} (\delta k)^{1/2}} \exp\left\{ -\frac{(k - k_0)^2}{(2\,\delta k)^2} \right\} e^{ikx} \exp\left\{ -i \frac{\hbar k^2}{2m} t \right\} dk. \tag{2.26} $$
(Gaussian wave packet: arbitrary time)
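Eq. (26) is straightforward to evaluate numerically. The sketch below (an added illustration, not from the original text; units ℏ = m = 1 and the packet parameters are assumptions) propagates a Gaussian packet via its k-components and checks that its center moves at ℏk₀/m while its width grows according to the standard spreading law δx′(t)² = δx² + (ℏt/2mδx)²:

```python
import numpy as np

hbar, m = 1.0, 1.0                         # illustration units (assumption)
k0, dx_rms = 5.0, 1.0                      # packet parameters (arbitrary test values)
N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
dxg = x[1] - x[0]

psi0 = (2*np.pi)**(-0.25) * dx_rms**(-0.5) * np.exp(-x**2/(2*dx_rms)**2 + 1j*k0*x)

def evolve(psi, t):
    # Eq. (26) in discrete form: each k-component gets the phase factor exp{-i*hbar*k^2*t/2m}
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j*hbar*k**2*t/(2*m)))

for t in (0.0, 5.0):
    w = np.abs(evolve(psi0, t))**2
    xc = np.sum(x*w)*dxg                   # packet centroid <x>
    width = np.sqrt(np.sum((x - xc)**2*w)*dxg)
    width_th = np.sqrt(dx_rms**2 + (hbar*t/(2*m*dx_rms))**2)
    print(abs(xc - hbar*k0/m*t) < 1e-6, abs(width - width_th) < 1e-3)
```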

Fig. 1 shows several snapshots of the real part of the wavefunction (26), for a particular case δk = 0.1 k₀.

Fig. 2.1. Typical time evolution of a 1D wave packet on (a) smaller and (b) larger time scales. The dashed lines show the packet envelopes, i.e. ±|Ψ|.

The plots clearly show the following effects:

⁷ Note that Eq. (26) differs from Eq. (16) only by an exponent of a purely imaginary number, and hence this wavefunction is also properly normalized to 1 – see Eq. (3). Hence the wave packet introduction offers a natural solution to the problem of the traveling de Broglie wave's normalization, which was mentioned in Sec. 1.2.


(i) the wave packet as a whole (as characterized by its envelope) moves along the x-axis with a certain group velocity vgr;

(ii) the "carrier" quasi-sinusoidal wave inside the packet moves with a different, phase velocity vph, which may be defined as the velocity of the spatial points where the wave's phase φ(x, t) ≡ arg Ψ takes a certain fixed value (say, φ = π/2, where Re Ψ vanishes); and

(iii) the wave packet's spatial width gradually increases with time – the packet spreads.

All these effects are common for waves of any physical nature.⁸ Indeed, let us consider a 1D wave packet of the type (26), but a more general one:

$$ \Psi(x,t) = \int a_k\, e^{i(kx - \omega t)}\, dk, \tag{2.27} $$
(Arbitrary 1D wave packet)

propagating in a medium with an arbitrary (but smooth!) dispersion relation ω(k), under the assumption that the wave number distribution ak is narrow: δk << k₀. [...] The packet's width δx′(t) at t > 0 depends on its initial width δx′(0) = δx in a non-monotonic way, tending to infinity at both δx → 0 and δx → ∞. Because of that, for a given time interval t, there is an optimal value of δx that minimizes δx′:

x' min

2 x opt



 t    m

1/ 2

.

(2.40)

This expression may be used to estimate the spreading effect's magnitude. Due to the smallness of the Planck constant ℏ on the human scale of things, for macroscopic bodies the spreading is extremely small even for very long time intervals; however, for light particles it may be very noticeable: for an electron (m = me ≈ 10⁻³⁰ kg), and t = 1 s, Eq. (40) yields (δx′)min ~ 1 cm.

Note also that for any t ≠ 0, the wave packet retains its Gaussian envelope, but the ultimate relation (24) is not satisfied, δx′·δp > ℏ/2 – due to a gradually accumulated phase shift between the component monochromatic waves.

The last remark on this topic: in quantum mechanics, the wave packet spreading is not a ubiquitous effect! For example, in Chapter 5 we will see that in a quantum oscillator, the spatial width of a Gaussian packet (for that system, called the Glauber state of the oscillator) does not grow monotonically but rather either stays constant or oscillates in time.

Now let us briefly discuss the case when the initial wave packet is not Gaussian but is described by an arbitrary initial wavefunction. To make the forthcoming result more aesthetically pleasing, it is beneficial to generalize our calculations to an arbitrary initial time t₀; it is evident that if U does not depend on time explicitly, it is sufficient to replace t with (t – t₀) in the above formulas. With this replacement, Eq. (27) becomes

$$ \Psi(x,t) = \int a_k\, e^{i[kx - \omega (t - t_0)]}\, dk, \tag{2.41} $$

and the reciprocal transform (21) reads

$$ a_k = \frac{1}{2\pi} \int \Psi(x, t_0)\, e^{-ikx}\, dx. \tag{2.42} $$

If we want to express these two formulas with one relation, i.e. plug Eq. (42) into Eq. (41), we should give the integration variable x some other name, e.g., x₀. (Such notation is appropriate because this variable describes the coordinate argument in the initial wave packet.) The result is

$$ \Psi(x,t) = \frac{1}{2\pi} \int dk \int dx_0\, \Psi(x_0, t_0)\, e^{i[k(x - x_0) - \omega (t - t_0)]}. \tag{2.43} $$

Changing the order of integration, this expression may be represented in the following general form:

$$ \Psi(x,t) = \int G(x,t;\, x_0,t_0)\, \Psi(x_0,t_0)\, dx_0, \tag{2.44} $$
(1D propagator: definition)

where the function G, usually called the kernel in mathematics, in quantum mechanics is called the propagator.¹⁰ Its physical sense may be understood by considering the following special initial condition:¹¹

$$ \Psi(x_0, t_0) = \delta(x_0 - x'), \tag{2.45} $$

where x′ is a certain point within the particle's motion domain. In this particular case, Eq. (44) gives

$$ \Psi(x,t) = G(x,t;\, x',t_0). \tag{2.46} $$

Hence, the propagator, considered as a function of its arguments x and t only, is just the wavefunction of the particle, at the δ-functional initial condition (45). Thus, just as Eq. (41) may be understood as a mathematical expression of the linear superposition principle in the momentum (i.e., reciprocal) space domain, Eq. (44) is an expression of this principle in the direct space domain: the system's "response" Ψ(x, t) to an arbitrary initial condition Ψ(x₀, t₀) is just a sum of its responses to elementary spatial "slices" of this initial function, with the propagator G(x, t; x₀, t₀) representing the weight of each slice in the final sum. According to Eqs. (43) and (44), in the particular case of a free particle the propagator is equal to

$$ G(x,t;\, x_0,t_0) = \frac{1}{2\pi} \int e^{i[k(x - x_0) - \omega (t - t_0)]}\, dk. \tag{2.47} $$

Calculating this integral, one should remember that here ω is not a constant but a function of k, given by the dispersion relation for the partial waves. In particular, for the de Broglie waves, with ω = ℏk²/2m,

$$ G(x,t;\, x_0,t_0) = \frac{1}{2\pi} \int \exp\left\{ i \left[ k(x - x_0) - \frac{\hbar k^2}{2m}(t - t_0) \right] \right\} dk. \tag{2.48} $$

This is a Gaussian integral again, and it may be readily calculated just as it was done (twice) above, by completing the exponent to the full square. The result is

$$ G(x,t;\, x_0,t_0) = \left( \frac{m}{2\pi i \hbar (t - t_0)} \right)^{1/2} \exp\left\{ -\frac{m (x - x_0)^2}{2 i \hbar (t - t_0)} \right\}. \tag{2.49} $$
(Free particle's propagator)
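As a sanity check of Eq. (49) (an added illustration, not from the original text; units ℏ = m = 1, the time, and the packet width are assumptions), one can convolve a Gaussian initial packet with this propagator per Eq. (44) and compare the resulting probability density with the spread Gaussian of width δx′(t)² = δx² + (ℏt/2mδx)²:

```python
import numpy as np

hbar, m, t = 1.0, 1.0, 1.0                 # illustration units and time (assumptions)
dx_rms = 1.0
x0 = np.linspace(-15, 15, 6001)            # integration grid for Eq. (44)
h = x0[1] - x0[0]

psi0 = (2*np.pi)**(-0.25) * np.exp(-x0**2/(2*dx_rms)**2)   # Eq. (16) with k0 = 0

def G(x, x0):                              # free-particle propagator, Eq. (49)
    return np.sqrt(m/(2j*np.pi*hbar*t)) * np.exp(-m*(x - x0)**2/(2j*hbar*t))

xs = np.linspace(-3, 3, 13)
w_num = np.array([abs(np.sum(G(xi, x0)*psi0)*h)**2 for xi in xs])

dxp2 = dx_rms**2 + (hbar*t/(2*m*dx_rms))**2                # squared width after spreading
w_exact = (2*np.pi*dxp2)**-0.5 * np.exp(-xs**2/(2*dxp2))
print(np.max(np.abs(w_num - w_exact)) < 1e-4)
```

The oscillatory tails of G are tamed here by the Gaussian decay of the initial packet, so a simple Riemann sum suffices for the convolution integral.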

Please note the following features of this complex function:

(i) It depends only on the differences (x – x₀) and (t – t₀). This is natural because the free-particle propagation problem is translation-invariant both in space and time.

(ii) The function's shape (Fig. 2) does not depend on its arguments – they just rescale the same function: as a function of x, it just becomes broader and lower with time. It is curious that the spatial broadening scales as (t – t₀)^{1/2} – just as at the classical diffusion, indicating a deep mathematical analogy between quantum mechanics and classical statistics – to be discussed further in Chapter 7.

¹⁰ Its standard notation by the letter G stems from the fact that the propagator is essentially the spatial-temporal Green's function of the Schrödinger equation (1), defined very similarly to Green's functions of other ordinary and partial differential equations describing various physics systems – see, e.g., CM Sec. 5.1 and/or EM Secs. 2.7 and 7.3.
¹¹ Note that this initial condition is mathematically not equivalent to a δ-functional initial probability density (3).

Page 9 of 76

Essential Graduate Physics

QM: Quantum Mechanics

(iii) In accordance with the uncertainty relation, the ultimately compressed wave packet (45) has an infinite width of momentum distribution, and the quasi-sinusoidal tails of the free-particle’s propagator, clearly visible in Fig. 2, are the results of the free propagation of the fastest (highestmomentum) components of that distribution, in both directions from the packet’s center. 0.5

Re  G ( x, t ; x 0 , t 0 )   1/ 2 Im  m /  (t  t 0 )

0

 0.5  10

0

( x  x 0 ) /  (t  t 0 ) / m 

10

Fig. 2.2. The real (solid line) and imaginary (dotted line) parts of the 1D free particle’s propagator (49).

1/ 2

In the following sections, I will mostly focus on monochromatic wavefunctions (which, for unconfined motion, may be interpreted as wave packets of a very large spatial width x), and only rarely discuss wave packets. My best excuse is the linear superposition principle, i.e. our conceptual ability to restore the general solution from that of monochromatic waves of all possible energies. However, the reader should not forget that, as the above discussion has illustrated, mathematically such restoration is not always trivial. 2.3. Particle reflection and tunneling Now, let us proceed to the cases when a 1D particle moves in various potential profiles U(x) that are constant in time. Conceptually, the simplest of such profiles is a potential step – see Fig. 3. classically accessible

classically forbidden

E U (x)

Fig. 2.3. Classical 1D motion in a potential profile U(x).

xc classical turning point

As I am sure the reader knows, in classical mechanics the particle’s kinetic energy p2/2m cannot be negative, so if the particle is incident on such a step (in Fig. 3, from the left), it can only move within the classically accessible region where its (conserved) full energy, E

p2  U ( x) , 2m

(2.50)

is larger than the local value U(x). Let, for example, the initial velocity v = p/m be positive, i.e. directed toward the step. Before it has reached the classical turning point xc, defined by equality

Chapter 2

Page 10 of 76

Essential Graduate Physics

QM: Quantum Mechanics

U ( xc )  E ,

(2.51)

the particle’s kinetic energy p2/2m is positive, so it continues to move in the initial direction. On the other hand, a classical particle cannot penetrate that classically forbidden region x > xc, because there its kinetic energy would be negative. Hence when the particle reaches the point x = xc, its velocity has to change its sign, i.e. the particle is reflected back from the classical turning point. In order to see what the wave mechanics says about this situation, let us start from the simplest, sharp potential step shown with the bold black line in Fig. 4:  0, at x  0, U ( x)  U 0 ( x)   U 0 , at 0  x.

(2.52)

For this choice, and any energy within the interval 0 < E < U0, the classical turning point is xc = 0. A B

U ( x), E  x

U0

C Fig. 2.4. Reflection of a monochromatic de Broglie wave from a potential step U0 > E. (This particular wavefunction’s shape is for U0 = 5E.) The wavefunction is plotted with the same schematic vertical offset by E as those in Fig. 1.8.

E 0

x

Let us represent the incident particle with a wave packet so long that the spread k ~ 1/x of its wave-number spectrum is sufficiently small to make the energy uncertainty E =  = (d/dk)k negligible in comparison with its average value E < U0, as well as with (U0 – E). In this case, E may be considered as a given constant, the time dependence of the wavefunction is given by Eq. (1.62), and we can calculate its spatial factor (x) from the 1D version of the stationary Schrödinger equation (1.65):12 

 2 d 2  U ( x)  E . 2m dx 2

(2.53)

At x < 0, i.e. at U = 0, the equation is reduced to the Helmholtz equation (1.78), and may be satisfied with either of two traveling waves, proportional to, respectively, exp{+ikx} and exp{-ikx}, with k satisfying the dispersion equation (1.30): 2mE k2  2 . (2.54)  Incident and reflected waves

Thus the general solution of Eq. (53) in this region may be represented as

  x   Ae ikx  Be ikx ,

for x  0 .

(2.55)

12

Note that this is not an eigenproblem like the one we have solved in Sec. 1.4 for a potential well. Indeed, now the energy E is considered given – e.g., by the initial conditions that launch a long wave packet upon the potential step – in Fig. 4, from the left side.

Chapter 2

Page 11 of 76

Essential Graduate Physics

QM: Quantum Mechanics

The second term on the right-hand side of Eq. (55) evidently describes a (formally, infinitely long) wave packet traveling to the left, arising because of the particle’s reflection from the potential step. If B = –A, Eq. (55) is reduced to Eq. (1.84) for a potential well with infinitely high walls, but for our current case of a finite step height U0, the relation between the coefficients B and A may be different. To show this, let us solve Eq. (53) for x > 0, where U = U0 > E. In this region, the equation may be rewritten as d 2  (2.56)   2  , 2 dx where  is a real and positive constant defined by a formula similar in structure to Eq. (54):

2 

2m(U 0  E )  0. 2

(2.57)

The general solution of Eq. (56) is the sum of exp{+x} and exp{–x}, with arbitrary pre-exponential coefficients. However, in our particular case the wavefunction should be finite at x  +, so only the latter exponent is acceptable:

   x   Ce x ,

for x  0 .

(2.58)

Such penetration of the wavefunction into the classically forbidden region, and hence a non-zero probability to find the particle there, is one of the most fascinating predictions of quantum mechanics, and has been repeatedly observed in experiment – for example, via tunneling experiments – see the next section.¹³ From Eq. (58), it is evident that the constant κ defined by Eq. (57) may be interpreted as the reciprocal penetration depth. Even for the lightest particles, this depth is usually very small. Indeed, for any E << U₀ [...]

What happens if E > U₀, i.e. if there is no classically forbidden region in the problem? In classical mechanics, the incident particle would continue to move to the right, though with a reduced velocity corresponding to the new kinetic energy E – U₀, so there would be no reflection. In quantum mechanics, however, the situation is different. To analyze it, it is not necessary to solve the whole problem again; it is sufficient to note that all our calculations, and hence Eqs. (63), are still valid if we take¹⁴

$$ \kappa = -ik', \quad \text{with } k'^2 = \frac{2m(E - U_0)}{\hbar^2} > 0. \tag{2.65} $$

With this replacement, Eq. (63) becomes¹⁵

$$ \frac{B}{A} = \frac{k - k'}{k + k'}, \qquad \frac{C}{A} = \frac{2k}{k + k'}. \tag{2.66} $$

The most important result of this change is that now the particle's reflection is not total: |B| < |A|. To evaluate this effect quantitatively, it is fairer to use not the B/A or C/A ratios, but rather that of the probability currents (5) carried by the de Broglie waves traveling to the right, with amplitudes C and A, in the corresponding regions (respectively, for x > 0 and x < 0):

¹⁴ Our earlier discarding of the particular solution exp{κx}, now becoming exp{–ik′x}, is still valid, but now on different grounds: this term would describe a wave packet incident on the potential step from the right, and this is not the problem under our current consideration.
¹⁵ These formulas are completely similar to those describing the partial reflection of classical waves from a sharp interface between two uniform media, at normal incidence (see, e.g., CM Sec. 6.4 and EM Sec. 7.4), with the effective impedance Z of de Broglie waves being proportional to their wave number k.


$$ T \equiv \frac{I_C}{I_A} = \frac{k'}{k} \left| \frac{C}{A} \right|^2 = \frac{4kk'}{(k + k')^2} = \frac{4 \left[ E(E - U_0) \right]^{1/2}}{\left[ E^{1/2} + (E - U_0)^{1/2} \right]^2}. \tag{2.67} $$
(Potential step's transparency)

(The parameter T so defined is called the transparency of the system, in our current case of the potential step of height U₀, at the particle's energy E.) The result given by Eq. (67) is plotted in Fig. 5a as a function of the U₀/E ratio. Note its most important features:

(i) At U₀ = 0, the transparency is full, T = 1 – naturally, because there is no step at all.

(ii) At U₀ → E, the transparency drops to zero, giving a proper connection to the case E < U₀.

(iii) Nothing in our solution's procedure prevents us from using Eq. (67) even for U₀ < 0, i.e. for the step-down (or "cliff") potential profile – see Fig. 5b. Very counter-intuitively, the particle is (partly) reflected even from such a cliff, and the transmission diminishes (though rather slowly) at U₀ → –∞.

Fig. 2.5. (a) The transparency of a potential step with U₀ < E as a function of its height, according to Eq. (67), and (b) the "cliff" potential profile, with U₀ < 0.
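The algebra behind Eq. (67) is easy to verify numerically; the sketch below (an added illustration, not from the original text; units ℏ = m = 1 and U₀ = 1 are assumptions) computes T from the amplitudes (66) via the current ratio, compares it with the closed form (67), and checks probability conservation, T + R = 1, with R ≡ |B/A|²:

```python
import numpy as np

U0 = 1.0                                    # units hbar = m = 1, U0 = 1 (assumptions)
for E in (1.2, 2.0, 5.0):
    k, kp = np.sqrt(2*E), np.sqrt(2*(E - U0))
    B_A, C_A = (k - kp)/(k + kp), 2*k/(k + kp)      # Eq. (66)
    T = (kp/k)*C_A**2                       # transmitted/incident current ratio
    R = B_A**2                              # reflection probability
    T67 = 4*np.sqrt(E*(E - U0))/(np.sqrt(E) + np.sqrt(E - U0))**2   # Eq. (67)
    print(abs(T - T67) < 1e-12, abs(T + R - 1) < 1e-12)
```

Note that R + T = 1 holds exactly, as it must, while the ratios |B/A|² and |C/A|² alone would not balance, which is why the currents, not the amplitudes, must be compared.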

The most important conceptual conclusion of this analysis is that the quantum particle is partly reflected from a potential step with U₀ < E, in the sense that there is a non-zero probability T < 1 to find it passed over the step, while there is also some probability, (1 – T) > 0, to have it reflected.

The last property is exhibited, for any relation between E and U₀, by another simple potential profile U(x), the famous potential (or "tunnel") barrier. Fig. 6 shows its simple flat-top ("rectangular") version:

$$ U(x) = \begin{cases} 0, & \text{for } x < -d/2, \\ U_0, & \text{for } -d/2 < x < +d/2, \\ 0, & \text{for } +d/2 < x. \end{cases} \tag{2.68} $$

To analyze this problem, it is sufficient to look for the solution to the Schrödinger equation in the form (55) at x ≤ –d/2. At x ≥ +d/2, i.e., behind the barrier, we may use the arguments presented above (no wave source on the right!) to keep just one traveling wave, now with the same wave number:

$$ \psi_+(x) = F e^{ikx}. \tag{2.69} $$

However, under the barrier, i.e. at –d/2 ≤ x ≤ +d/2, we should generally keep both exponential terms,

$$ \psi_b(x) = C e^{-\kappa x} + D e^{+\kappa x}, \tag{2.70} $$

because our previous argument, used in the potential step problem's solution, is no longer valid. (Here k and κ are still defined, respectively, by Eqs. (54) and (57).) In order to express the coefficients B, C, D, and F via the amplitude A of the incident wave, we need to plug these solutions into the boundary conditions similar to Eqs. (61), but now at two boundary points, x = ±d/2.

Fig. 2.6. A rectangular potential barrier, and the de Broglie waves taken into account in its analysis.

Solving the resulting system of 4 linear equations, we get 4 ratios B/A, C/A, etc.; in particular,

$$ \frac{F}{A} = \left[ \cosh \kappa d + \frac{i}{2} \left( \frac{\kappa}{k} - \frac{k}{\kappa} \right) \sinh \kappa d \right]^{-1} e^{-ikd}, \tag{2.71a} $$

and hence the barrier's transparency

$$ T \equiv \left| \frac{F}{A} \right|^2 = \left[ \cosh^2 \kappa d + \left( \frac{\kappa^2 - k^2}{2k\kappa} \right)^2 \sinh^2 \kappa d \right]^{-1} = \left[ 1 + \left( \frac{\kappa^2 + k^2}{2k\kappa} \right)^2 \sinh^2 \kappa d \right]^{-1}. \tag{2.71b} $$
(Rectangular tunnel barrier's transparency)

So, quantum mechanics indeed allows particles with energies E < U₀ to pass "through" the potential barrier – see Fig. 6 again. This is the famous effect of quantum-mechanical tunneling. Fig. 7a shows the barrier transparency as a function of the particle energy E, for several characteristic values of its thickness d, or rather of the ratio d/δ, with δ defined by Eq. (59).
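Eq. (71b) can be checked directly against the matching conditions themselves; the sketch below (an added illustration, not from the original text; units ℏ = m = 1 and the barrier parameters are assumptions) solves the four boundary-condition equations at x = ∓d/2 as a linear system, with A = 1, and compares |F|² with the closed-form transparency:

```python
import numpy as np

U0, d = 1.0, 1.5                            # units hbar = m = 1 (assumptions)
for E in (0.3, 0.7):
    k, kap = np.sqrt(2*E), np.sqrt(2*(U0 - E))
    ep, em = np.exp(kap*d/2), np.exp(-kap*d/2)
    f = np.exp(1j*k*d/2)
    # unknowns: B, C, D, F, with psi_b = C e^{-kap x} + D e^{+kap x} and A = 1
    M = np.array([
        [f,       -ep,      -em,      0       ],  # psi  continuous at x = -d/2
        [1j*k*f,  -kap*ep,   kap*em,  0       ],  # psi' continuous at x = -d/2
        [0,        em,       ep,     -f       ],  # psi  continuous at x = +d/2
        [0,       -kap*em,   kap*ep, -1j*k*f  ],  # psi' continuous at x = +d/2
    ], dtype=complex)
    rhs = np.array([-1/f, 1j*k/f, 0, 0], dtype=complex)
    B, C, D, F = np.linalg.solve(M, rhs)
    T_num = abs(F)**2
    T_formula = 1/(1 + ((kap**2 + k**2)/(2*k*kap))**2 * np.sinh(kap*d)**2)
    print(abs(T_num - T_formula) < 1e-10)
```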

Fig. 2.7. The transparency of a rectangular potential barrier as a function of the particle's energy E: (a) T vs. E/U₀ on a linear scale, and (b) T vs. (1 – E/U₀)^{1/2} on a logarithmic scale, for several values of the ratio d/δ (0.3, 1.0, 3.0, 10, and 30).


The plots show that generally, the transparency grows with the particle's energy. This growth is natural because the penetration constant κ decreases with the growth of E, i.e., the wavefunction penetrates more and more into the barrier, so more and more of it is "picked up" at the second interface (x = +d/2) and transferred into the wave F exp{ikx} propagating behind the barrier.

Now let us consider the important limit of a very thin and high rectangular barrier with κd << 1 [...]

²⁴ An alternative way to solve the connection problem, without involving the Airy functions but using an analytical extension of the WKB formulas to the complex-argument plane, may be found, e.g., in Sec. 47 of the textbook by L. Landau and E. Lifshitz, Quantum Mechanics, Non-Relativistic Theory, 3rd ed., Pergamon, 1977.
²⁵ Note the following (exact) integral formulas,

$$ \mathrm{Ai}(\zeta) = \frac{1}{\pi} \int_0^\infty \cos\left( \frac{\xi^3}{3} + \xi\zeta \right) d\xi, \qquad \mathrm{Bi}(\zeta) = \frac{1}{\pi} \int_0^\infty \left[ \exp\left\{ -\frac{\xi^3}{3} + \xi\zeta \right\} + \sin\left( \frac{\xi^3}{3} + \xi\zeta \right) \right] d\xi, $$

frequently more convenient for practical calculations of the Airy functions than the differential equation (101).

Page 21 of 76

Essential Graduate Physics

QM: Quantum Mechanics

Ai( ) 

1



1/ 2



1/ 4

1  2 3/ 2  for   ,  2 exp 3  ,     sin  2   3 / 2   , for   .   3 4

(2.102)

Now let us apply the WKB approximation to the Airy equation (101). Taking the classical turning point ( = 0) for the lower limit, for  > 0 we get

 2 ( )   ,

 ( )   1 / 2 ,



2

  (' )d'  3 

3/ 2

(2.103)

,

0

i.e. exactly the exponent in the top line of Eq. (102). Making a similar calculation for ζ < 0, with the natural assumption |b| = |a| (full reflection from the potential step), we arrive at the following result:

$$ \mathrm{Ai}_{\mathrm{WKB}}(\zeta) = \frac{1}{|\zeta|^{1/4}} \begin{cases} c' \exp\left\{-\dfrac{2}{3}\zeta^{3/2}\right\}, & \text{for } \zeta > 0, \\[1ex] a' \sin\left(\dfrac{2}{3}|\zeta|^{3/2} + \varphi\right), & \text{for } \zeta < 0. \end{cases} \tag{2.104} $$

This approximation differs from the exact solution at small values of |ζ|, i.e. close to the classical turning point – see Fig. 9b. However, at |ζ| >> 1, Eqs. (104) describe the Airy function exactly, provided that

$$ \varphi = \frac{\pi}{4}, \qquad c' = \frac{a'}{2}. \tag{2.105} $$

These connection formulas may be used to rewrite Eq. (104) as

$$ \mathrm{Ai}_{\mathrm{WKB}}(\zeta) = \frac{a'}{2|\zeta|^{1/4}} \begin{cases} \exp\left\{-\dfrac{2}{3}\zeta^{3/2}\right\}, & \text{for } \zeta > 0, \\[1ex] \dfrac{1}{i}\left[\exp\left\{i\left(\dfrac{2}{3}|\zeta|^{3/2} + \dfrac{\pi}{4}\right)\right\} - \exp\left\{-i\left(\dfrac{2}{3}|\zeta|^{3/2} + \dfrac{\pi}{4}\right)\right\}\right], & \text{for } \zeta < 0, \end{cases} \tag{2.106} $$

and hence may be described by the following two simple mnemonic rules:

(i) If the classical turning point is taken for the lower limit in the WKB integrals in the classically allowed and the classically forbidden regions, then the moduli of the quasi-amplitudes of the exponents are equal.

(ii) Reflecting from a “soft” potential step, the wavefunction acquires an additional phase shift Δφ = π/2, if compared with its reflection from a “hard”, infinitely high potential wall located at the point xc (for which, according to Eq. (63) with κ = 0, we have B = –A).

In order for the connection formulas (105)-(106) to be valid, deviations from the linear approximation (99) of the potential profile should be relatively small within the region where the WKB approximation differs from the exact Airy function: |ζ| ~ 1, i.e. |x – xc| ~ x0. These deviations may be estimated using the next term of the Taylor expansion, dropped in Eq. (99): (d²U/dx²)(x – xc)²/2. As a result, the condition of validity of the connection formulas (i.e. of the “softness” of the reflecting potential profile) may be expressed as |d²U/dx²| x0 << |dU/dx| at x ≈ xc. Strictly speaking, the WKB quantization is only justified for n >> 1. However, we will see in Sec. 9 below that Eq. (114) is actually exactly correct for all energy levels – thanks to the special properties of the potential profile (111).

Now let us use the mnemonic rule (i) to examine the particle’s penetration into the classically forbidden region of an abrupt potential step of a height U0 > E. For this case, the rule, i.e. the second of Eqs. (105), yields the following relation of the quasi-amplitudes in Eqs. (94) and (98): |c| = |a|/2. If we now naively applied this relation to the sharp step sketched in Fig. 4, forgetting that it does not satisfy Eq. (107), we would get the following relation of the full amplitudes defined by Eqs. (55) and (58):

$$ |C| = \frac{1}{2}\left(\frac{k}{\kappa}\right)^{1/2} |A|. \qquad \text{(WRONG!)} \tag{2.115} $$

This result differs from the correct Eq. (63), and hence we may expect that the WKB approximation’s prediction for more complex potentials, most importantly for tunneling through a soft potential barrier (Fig. 11), should also be different from the exact result (71) for the rectangular barrier shown in Fig. 6.

[Fig. 2.11 shows a smooth barrier U(x) with a maximum Umax at x = xm, crossed by the energy level E at the classical turning points xc and xc′, with dWKB ≡ xc′ – xc; the labels a, b, c, d, and f mark the five partial waves.]

Fig. 2.11. Tunneling through a soft 1D potential barrier.

To analyze tunneling through such a soft barrier, we need (just as in the case of a rectangular barrier) to take into consideration five partial waves, but now they should be taken in the WKB form:

$$ \psi_{\mathrm{WKB}} = \begin{cases} \dfrac{a}{k^{1/2}(x)} \exp\left\{ i\displaystyle\int^x k(x')\,dx' \right\} + \dfrac{b}{k^{1/2}(x)} \exp\left\{ -i\displaystyle\int^x k(x')\,dx' \right\}, & \text{for } x < x_c, \\[1ex] \dfrac{c}{\kappa^{1/2}(x)} \exp\left\{ -\displaystyle\int^x \kappa(x')\,dx' \right\} + \dfrac{d}{\kappa^{1/2}(x)} \exp\left\{ +\displaystyle\int^x \kappa(x')\,dx' \right\}, & \text{for } x_c < x < x_c', \\[1ex] \dfrac{f}{k^{1/2}(x)} \exp\left\{ i\displaystyle\int^x k(x')\,dx' \right\}, & \text{for } x_c' < x, \end{cases} \tag{2.116} $$

where the lower limits of integrals are arbitrary (each within the corresponding range of x). Since on the right of the left classical turning point we have two exponents rather than one, and on the right of the second point, one traveling wave rather than two, the connection formulas (105) have to be generalized by using asymptotic formulas not only for Ai(ζ), but also for the second Airy function, Bi(ζ). The analysis, similar to that carried out above (though naturally a bit bulkier),27 gives a remarkably simple result:

Soft potential barrier: transparency

$$ \mathcal{T}_{\mathrm{WKB}} \equiv \left|\frac{f}{a}\right|^2 = \exp\left\{ -2\int_{x_c}^{x_c'} \kappa(x)\,dx \right\} \equiv \exp\left\{ -\frac{2}{\hbar}\int_{x_c}^{x_c'} \left[2m\left(U(x)-E\right)\right]^{1/2} dx \right\}, \tag{2.117} $$

with the pre-exponential factor equal to 1 – the fact that might be readily expected from the mnemonic rule (i) of the connection formulas. This formula is broadly used in applied quantum mechanics, despite the approximate character of its pre-exponential coefficient for insufficiently soft barriers that do not satisfy Eq. (107). For example, Eq. (80) shows that for a rectangular barrier with thickness d >> δ, the WKB approximation (117) with dWKB = d underestimates T by a factor of [4kκ/(k² + κ²)]² – equal, for example, to 4 if k = κ, i.e. if U0 = 2E. However, on the appropriate logarithmic scale (see Fig. 7b), such a factor, smaller than an order of magnitude, is just a small correction.

Note also that when E approaches the barrier’s top Umax (Fig. 11), the points xc and xc' merge, so according to Eq. (117), T_WKB → 1, i.e. the particle reflection vanishes at E = Umax. So, the WKB approximation does not describe the effect of the over-barrier reflection at E > Umax. (This fact could be noticed already from Eq. (95): in the absence of the classical turning points, the WKB probability current is constant for any barrier profile.) This conclusion is incorrect even for apparently smooth barriers where one could naively expect the WKB approximation to work perfectly. Indeed, near the point x = xm where the potential reaches its maximum (i.e. U(xm) = Umax), we may approximate almost any smooth function U(x) with the quadratic term of the Taylor expansion, i.e. with an inverted parabola:

$$ U(x) = U_{\max} - \frac{m\omega_0^2 (x - x_m)^2}{2}. \tag{2.118} $$

Calculating the derivatives dU/dx and d²U/dx² for this function and plugging the results into the condition (107), we may see that the WKB approximation is only valid if |Umax – E| >> ħω0. Just for the reader’s reference: an exact analysis of tunneling through the barrier (118) gives the so-called Kemble formula,

$$ \mathcal{T} = \frac{1}{1 + \exp\left\{-2\pi (E - U_{\max})/\hbar\omega_0\right\}}, \tag{2.119} $$

valid for any sign of the difference E – Umax.
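For the inverted-parabola barrier (118), the WKB integral in Eq. (117) can be taken analytically, giving T_WKB = exp{–2π(Umax – E)/ħω0} at E < Umax. A minimal numerical sketch of this result (assuming SciPy is available, in units ħ = m = ω0 = 1, with arbitrarily chosen Umax and E):

```python
# WKB transparency (117) of the inverted-parabola barrier (118),
# in units hbar = m = omega_0 = 1; analytically, T_WKB = exp{-2*pi*(Umax - E)}.
import numpy as np
from scipy.integrate import quad

Umax, E = 1.0, 0.5
U = lambda x: Umax - 0.5 * x**2            # Eq. (118) with x_m = 0
xt = np.sqrt(2.0 * (Umax - E))             # classical turning points at +/- xt

integral, _ = quad(lambda x: np.sqrt(2.0 * (U(x) - E)), -xt, xt)
T_wkb = np.exp(-2.0 * integral)            # Eq. (117)
T_analytic = np.exp(-2.0 * np.pi * (Umax - E))
print(T_wkb, T_analytic)                   # both close to 0.0432
```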

27 For the most important case T_WKB << 1, …

If the waves A1exp{ikx} and B2exp{–ikx} are simultaneously incident on a 1D scatterer located within some interval (x1 < x < x2), the resulting scattered wave amplitudes A2 and B1 are just the sums of their values for separate incident waves:

$$ B_1 = S_{11}A_1 + S_{12}B_2, \qquad A_2 = S_{21}A_1 + S_{22}B_2. \tag{2.123} $$

These two relations may be conveniently represented using the so-called scattering matrix S:

Scattering matrix: definition

$$ \begin{pmatrix} B_1 \\ A_2 \end{pmatrix} = S \begin{pmatrix} A_1 \\ B_2 \end{pmatrix}, \qquad \text{with } S \equiv \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}. \tag{2.124} $$

Scattering matrices, duly generalized, are an important tool for the analysis of wave scattering in higher dimensions than one; for 1D problems, however, another matrix is often more convenient to represent the same linear relations (123). Indeed, let us solve this system for A2 and B2. The result is

Transfer matrix: definition

$$ A_2 = T_{11}A_1 + T_{12}B_1, \qquad B_2 = T_{21}A_1 + T_{22}B_1, \qquad \text{i.e. } \begin{pmatrix} A_2 \\ B_2 \end{pmatrix} = T \begin{pmatrix} A_1 \\ B_1 \end{pmatrix}, \tag{2.125} $$

where T is the transfer matrix, with the following elements:

$$ T_{11} = S_{21} - \frac{S_{11}S_{22}}{S_{12}}, \qquad T_{12} = \frac{S_{22}}{S_{12}}, \qquad T_{21} = -\frac{S_{11}}{S_{12}}, \qquad T_{22} = \frac{1}{S_{12}}. \tag{2.126} $$

The matrices S and T have some universal properties, valid for an arbitrary (but time-independent) scatterer; they may be readily found from the probability current conservation and the time-reversal symmetry of the Schrödinger equation. Let me leave finding these relations for the reader’s exercise. The results show, in particular, that the scattering matrix may be rewritten in the following form:

$$ S = e^{i\theta} \begin{pmatrix} r e^{i\varphi} & t \\ t & -r e^{-i\varphi} \end{pmatrix}, \tag{2.127a} $$

where the four real parameters r, t, θ, and φ satisfy the following universal relation:

$$ r^2 + t^2 = 1, \tag{2.127b} $$

so only three of these parameters are independent. As a result of this symmetry, T11 may be also represented in a simpler form similar to T22: T11 = exp{iθ}/t = 1/S12* = 1/S21*. The last form allows a ready expression of the scatterer’s transparency via just one coefficient of the transfer matrix:

$$ \mathcal{T} \equiv \left|\frac{A_2}{A_1}\right|^2_{B_2 = 0} = |S_{21}|^2 = \frac{1}{|T_{11}|^2}. \tag{2.128} $$
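This relation is easy to verify numerically on a scattering matrix of the form (127a) — a sketch, with the real parameters r, θ, φ chosen arbitrarily:

```python
# Check of Eq. (128): the transparency |S21|^2 must equal 1/|T11|^2,
# with T11 expressed via the S-matrix elements by Eq. (126).
import numpy as np

r, theta, phi = 0.6, 0.3, 1.1       # arbitrary real parameters
t = np.sqrt(1.0 - r**2)             # Eq. (127b)
S = np.exp(1j * theta) * np.array([[r * np.exp(1j * phi), t],
                                   [t, -r * np.exp(-1j * phi)]])   # Eq. (127a)
T11 = S[1, 0] - S[0, 0] * S[1, 1] / S[0, 1]                        # Eq. (126)
print(abs(S[1, 0])**2, 1.0 / abs(T11)**2)    # both equal t^2 = 0.64
```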

In our current context, the most important property of the 1D transfer matrices is that to find the total transfer matrix T of a system consisting of several (say, N) sequential arbitrary scatterers (Fig. 13), it is sufficient to multiply their matrices.

[Fig. 2.13 shows N scatterers located near points x1 < x2 < … < xN+1 on the x-axis, with the wave amplitudes A1, B1; A2, B2; A3, B3; …; AN+1, BN+1 in the gaps between them.]

Fig. 2.13. A sequence of several 1D scatterers.

Indeed, extending the definition (125) to other points xj (j = 1, 2, …, N + 1), we can write

$$ \begin{pmatrix} A_2 \\ B_2 \end{pmatrix} = T_1 \begin{pmatrix} A_1 \\ B_1 \end{pmatrix}, \qquad \begin{pmatrix} A_3 \\ B_3 \end{pmatrix} = T_2 \begin{pmatrix} A_2 \\ B_2 \end{pmatrix} = T_2 T_1 \begin{pmatrix} A_1 \\ B_1 \end{pmatrix}, \quad \text{etc.} \tag{2.129} $$

(where the matrix indices correspond to the scatterers’ order on the x-axis), so

$$ \begin{pmatrix} A_{N+1} \\ B_{N+1} \end{pmatrix} = T_N T_{N-1} \cdots T_1 \begin{pmatrix} A_1 \\ B_1 \end{pmatrix}. \tag{2.130} $$

But we can also define a total transfer matrix similarly to Eq. (125), i.e. as

$$ \begin{pmatrix} A_{N+1} \\ B_{N+1} \end{pmatrix} = T \begin{pmatrix} A_1 \\ B_1 \end{pmatrix}, \tag{2.131} $$

so comparing Eqs. (130) and (131) we get

$$ T = T_N T_{N-1} \cdots T_1. \tag{2.132} $$

This formula is valid even if the flat-potential gaps between the component scatterers are shrunk to zero, so it may be applied to a scatterer with an arbitrary profile U(x), by fragmenting its length into many small segments Δx = xj+1 – xj, and treating each fragment as a rectangular barrier of the average height (Uj)ef = [U(xj+1) + U(xj)]/2 – see Fig. 14. Since very efficient numerical algorithms are readily available for fast multiplication of matrices (especially as small as 2×2 in our case), this approach is broadly used in practice for the computation of transparency of 1D potential barriers with complicated profiles U(x). (This procedure is much more efficient than the direct numerical solution of the stationary Schrödinger equation.)
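A minimal sketch of this staircase procedure (in units ħ = m = 1, with hypothetical parameter values). Here the amplitudes are defined by ψ = A e^{ikx} + B e^{–ikx} in each region, and the 2×2 matrices match ψ and dψ/dx at every interface; with this (slightly different) amplitude convention, the transparency is expressed via det T and T22 rather than via T11, but the composition rule (132) is the same. The result is checked against the exact formula for a single rectangular barrier:

```python
# Transparency of a 1D barrier via the transfer-matrix (staircase) method,
# in units hbar = m = 1.
import numpy as np

def transparency(E, U_segments, xs):
    """U_segments[j] is the constant potential on (xs[j], xs[j+1]);
    the potential vanishes outside [xs[0], xs[-1]]."""
    k = lambda u: np.sqrt(2.0 * (E - u) + 0j)   # becomes i*kappa under the barrier
    M = lambda q, x: np.array([[np.exp(1j * q * x), np.exp(-1j * q * x)],
                               [1j * q * np.exp(1j * q * x), -1j * q * np.exp(-1j * q * x)]])
    qs = [k(0.0)] + [k(u) for u in U_segments] + [k(0.0)]
    T = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):                  # match psi and psi' at interface x
        T = np.linalg.solve(M(qs[j + 1], x), M(qs[j], x)) @ T
    return abs(np.linalg.det(T) / T[1, 1])**2   # incidence from the left

E, U0, d = 0.5, 1.0, 1.0
k0, kap = np.sqrt(2 * E), np.sqrt(2 * (U0 - E))
exact = 1.0 / (1.0 + ((k0**2 + kap**2)**2 / (4 * k0**2 * kap**2)) * np.sinh(kap * d)**2)
one_step = transparency(E, [U0], [0.0, d])
staircase = transparency(E, [U0] * 64, list(np.linspace(0.0, d, 65)))
print(exact, one_step, staircase)   # all three agree
```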


Transfer matrix: composite scatterer

[Fig. 2.14 shows an arbitrary potential profile U(x), approximated by a staircase of small segments between points x1, x2, …, xj, xj+1, …, xN+1, each of the average height (Uj)ef, with the wave amplitudes A1, B1 and AN+1, BN+1 outside the barrier.]

Fig. 2.14. The transfer matrix approach to a potential barrier with an arbitrary profile.

In order to apply this approach to several particular, conceptually important systems, let us calculate the transfer matrices for a few elementary scatterers, starting from the delta-functional barrier located at x = 0 – see Fig. 8. Taking x1, x2 → 0, we can merely change the notation of the wave amplitudes in Eq. (78) to get

$$ S_{11} = \frac{-i\alpha}{1 + i\alpha}, \qquad S_{21} = \frac{1}{1 + i\alpha}. \tag{2.133} $$

An absolutely similar analysis of the wave incidence from the left yields

$$ S_{22} = \frac{-i\alpha}{1 + i\alpha}, \qquad S_{12} = \frac{1}{1 + i\alpha}, \tag{2.134} $$

and using Eqs. (126), we get

Transfer matrix: short scatterer

$$ T = \begin{pmatrix} 1 - i\alpha & -i\alpha \\ i\alpha & 1 + i\alpha \end{pmatrix}. \tag{2.135} $$

As a sanity check, Eq. (128), applied to this result, immediately brings us back to Eq. (79).
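This sanity check is easy to reproduce numerically (a sketch; the value of α is arbitrary):

```python
# Eq. (128) applied to the delta-barrier transfer matrix (135)
# must reproduce the transparency (79): T = 1/(1 + alpha^2).
import numpy as np

def T_delta(alpha):                                # Eq. (135)
    return np.array([[1 - 1j * alpha, -1j * alpha],
                     [1j * alpha, 1 + 1j * alpha]])

alpha = 1.5
T = 1.0 / abs(T_delta(alpha)[0, 0])**2             # Eq. (128)
print(T, 1.0 / (1.0 + alpha**2))                   # both 0.30769...
```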

Identity transfer matrix

The next example may seem strange at first glance: what if there is no scatterer at all between points x1 and x2? If the points coincide, the answer is indeed trivial and can be obtained, e.g., from Eq. (135) by taking W = 0, i.e. α = 0:

$$ T_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \equiv I \tag{2.136} $$

– the so-called identity matrix. However, we are free to choose the reference points x1,2 participating in Eq. (120) as we wish. For example, what if x2 – x1 = a? Let us first take the forward-propagating wave alone: B2 = 0 (and hence B1 = 0); then

 2   1  A1e

ik ( x  x1 )

 A1e

ik ( x2  x1 ) ik ( x  x2 )

e

.

(2.137)

The comparison of this expression with the definition (120) for j = 2 shows that A2 = A1 exp{ik(x2 – x1)} = A1 exp{ika}, i.e. T11 = exp{ika}. Repeating the calculation for the back-propagating wave, we see that T22 = exp{–ika}, and since the space interval provides no particle reflection, we finally get

Transfer matrix: spatial interval

$$ T_a = \begin{pmatrix} e^{ika} & 0 \\ 0 & e^{-ika} \end{pmatrix}, \tag{2.138} $$

independently of a common shift of points x1 and x2. At a = 0, we naturally recover the special case (136).

Now let us use these simple results to analyze the double-barrier system shown in Fig. 15. We could of course calculate its properties as before, writing down explicit expressions for all five traveling waves symbolized by arrows in Fig. 15, then using the boundary conditions (75) and (76) at each of the points x1,2 to get a system of four linear equations, and finally, solving it for four amplitude ratios.

[Fig. 2.15 shows two delta-functional barriers, Wδ(x – x1) and Wδ(x – x2), separated by the distance a = x2 – x1, with the particle energy E below the barrier tops.]

Fig. 2.15. The double-barrier system. The dashed lines show (schematically) the quasilevels of the metastable-state energies.

However, the transfer matrix approach simplifies the calculations, because we may immediately use Eqs. (132), (135), and (138) to write

$$ T = T_\alpha T_a T_\alpha = \begin{pmatrix} 1 - i\alpha & -i\alpha \\ i\alpha & 1 + i\alpha \end{pmatrix} \begin{pmatrix} e^{ika} & 0 \\ 0 & e^{-ika} \end{pmatrix} \begin{pmatrix} 1 - i\alpha & -i\alpha \\ i\alpha & 1 + i\alpha \end{pmatrix}. \tag{2.139} $$

Let me hope that the reader remembers the “row by column” rule of the multiplication of square matrices;29 using it for the last two matrices, we may reduce Eq. (139) to

$$ T = \begin{pmatrix} 1 - i\alpha & -i\alpha \\ i\alpha & 1 + i\alpha \end{pmatrix} \begin{pmatrix} (1 - i\alpha)e^{ika} & -i\alpha e^{ika} \\ i\alpha e^{-ika} & (1 + i\alpha)e^{-ika} \end{pmatrix}. \tag{2.140} $$

Now there is no need to calculate all elements of the full product T, because, according to Eq. (128), for the calculation of the barrier’s transparency T we need only one of its elements, T11:

Double barrier: transparency

$$ \mathcal{T} = \frac{1}{|T_{11}|^2} = \frac{1}{\left| \alpha^2 e^{-ika} + (1 - i\alpha)^2 e^{ika} \right|^2}. \tag{2.141} $$

This result is somewhat similar to that following from Eq. (71) for E > U0: the transparency is a π-periodic function of the product ka, reaching its maximum (T = 1) at some point of each period – see Fig. 16a. However, Eq. (141) is different in that for α >> 1, the resonance peaks of the transparency are very narrow, reaching their maxima at ka ≈ kna ≡ πn, with n = 1, 2,… The physics of this resonant tunneling effect30 is the so-called constructive interference, absolutely similar to that of electromagnetic waves (for example, light) in a Fabry-Perot resonator formed by two parallel semi-transparent mirrors.31 Namely, the incident de Broglie wave may be

29 In the analytical form: $(AB)_{jj'} = \sum_{j''=1}^{N} A_{jj''} B_{j''j'}$, where N is the square matrix rank (in our current case, N = 2).

30 In older literature, it is sometimes called the Townsend (or “Ramsauer-Townsend”) effect. However, it is more common to use the last term only for a similar effect at 3D scattering – to be discussed in Chapter 3.

31 See, e.g., EM Sec. 7.9.

thought to either tunnel through the two barriers or undertake, on its way out, several sequential reflections from these semi-transparent walls. At k = kn, i.e. at 2ka = 2kna = 2πn, the phase differences between all these partial waves are multiples of 2π, so they add up in phase – “constructively”. Note that the same constructive interference of numerous reflections from the walls may be used to interpret the standing-wave eigenfunctions (1.84), so the resonant tunneling at α >> 1 may be also considered as a result of the incident wave’s resonance induction of such a standing wave, with a very large amplitude, in the space between the barriers, with the transmitted wave’s amplitude proportionately increased.

  0.3

(b)

2 1  1 2  1

Im

0.6

T

k 0.4

0

1.0

0.2 0

k

3.0 0

0.5

1

ka / 

1.5

2

Re

Fig. 2.16. Resonant tunneling through a potential well with delta-functional walls: (a) the system’s transparency as a function of ka, and (b) calculating the resonance’s FWHM at  >> 1.
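The resonance curves of Fig. 16a are easy to reproduce by multiplying the transfer matrices numerically — a sketch, with an arbitrarily chosen α = 3.0:

```python
# Transparency of the double-barrier system vs ka, from T = T_alpha T_a T_alpha
# (Eqs. (135), (138), (139)); cf. Fig. 2.16a.
import numpy as np

def T_delta(alpha):
    return np.array([[1 - 1j * alpha, -1j * alpha],
                     [1j * alpha, 1 + 1j * alpha]])

alpha = 3.0
ka = np.linspace(0.01, np.pi, 20001)
T = np.array([1.0 / abs((T_delta(alpha) @ np.diag([np.exp(1j * q), np.exp(-1j * q)])
                         @ T_delta(alpha))[0, 0])**2 for q in ka])
# The same transparency, directly from Eq. (141):
T141 = 1.0 / np.abs(alpha**2 * np.exp(-1j * ka) + (1 - 1j * alpha)**2 * np.exp(1j * ka))**2
print(T.max(), ka[T.argmax()])   # a sharp peak with T = 1 just below ka = pi
```

The peak transparency indeed reaches 1, even though each individual barrier has T = 1/(1 + α²) = 0.1.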

As a result of this resonance, the maximum transparency of the system is perfect (Tmax = 1) even at α → ∞, i.e. in the case of very low transparency of each of the component barriers. Indeed, the denominator in Eq. (141) may be interpreted as the squared length of the difference between two 2D vectors, one of length α² and another of length |(1 – iα)²| = 1 + α², with the angle θ = 2ka + const between them – see Fig. 16b. At the resonance, the vectors are aligned, and their difference is smallest (equal to 1) so Tmax = 1. (This result is exact only if the two barriers are exactly equal.) The same vector diagram may be used to calculate the so-called FWHM, the common acronym for the Full Width [of the resonance curve at its] Half-Maximum. By definition, this is the difference Δk = k+ – k- between such two values of k, on the opposite slopes of the same resonance curve, at which T = Tmax/2 – see the arrows in Fig. 16a. Let the vectors in Fig. 16b, drawn for α >> 1, be misaligned by a small angle θ << 1.

Now let us put a particle into the well between two such barriers, with α >> 1, and leave it there alone, without any incident wave. To simplify the analysis, let us assume that the initial state of the particle coincides with one of the stationary states of the infinite-wall well of the same size – see Eq. (1.84):


$$ \Psi(x,0) = \psi_n(x) = \left(\frac{2}{a}\right)^{1/2} \sin\left[k_n(x - x_1)\right], \qquad \text{where } k_n = \frac{\pi n}{a},\ n = 1, 2, \ldots \tag{2.143} $$

At α → ∞, this is just an eigenstate of the system, and from our analysis in Sec. 1.5 we know the time evolution of its wavefunction:

$$ \Psi(x,t) = \psi_n(x) \exp\{-i\omega_n t\} = \left(\frac{2}{a}\right)^{1/2} \sin\left[k_n(x - x_1)\right] \exp\{-i\omega_n t\}, \qquad \text{with } \omega_n = \frac{E_n}{\hbar} = \frac{\hbar k_n^2}{2m}, \tag{2.144} $$

telling us that the particle remains in the well at all times with a constant probability: W(t) = W(0) = 1. However, if the parameter α is large but finite, the de Broglie wave would slowly “leak out” from the well, so W(t) would slowly decrease. Such a state is called metastable. Let us derive the law of its time evolution, assuming that the slow leakage, with a characteristic time τ >> 1/ωn, does not affect the instant wave distribution inside the well, besides the gradual, slow reduction of W.32 Then we can generalize Eq. (144) as

$$ \Psi(x,t) = \left(\frac{2W}{a}\right)^{1/2} \sin\left[k_n(x - x_1)\right] \exp\{-i\omega_n t\} = A \exp\{i(k_n x - \omega_n t)\} + B \exp\{-i(k_n x + \omega_n t)\}, \tag{2.145} $$

making the probability of finding the particle in the well equal to some W ≤ 1. As the last form of Eq. (145) shows, this is the sum of two traveling waves, with equal magnitudes of their amplitudes and hence equal but opposite probability currents (5):

$$ |A| = |B| = \left(\frac{W}{2a}\right)^{1/2}, \qquad I_A = \frac{\hbar k_n}{m}|A|^2 = \frac{\hbar k_n}{m}\frac{W}{2a} = \frac{\pi n \hbar}{2ma^2} W, \qquad I_B = -I_A. \tag{2.146} $$

But we already know from Eq. (79) that at α >> 1, the delta-functional wall’s transparency T equals 1/α², so the wave carrying the current I_A, incident on the right wall from the inside, induces an outcoming wave outside of the well (Fig. 17) with the following probability current:

$$ I_R = \mathcal{T} I_A = \frac{1}{\alpha^2} I_A = \frac{1}{\alpha^2} \frac{\pi n \hbar}{2ma^2} W. \tag{2.147} $$

[Fig. 2.17 shows the well with the energy level E1, the internal currents I_A and I_B, and the two emitted wave packets of spatial extension ~v_gr·τ, carrying the currents I_R and I_L, whose fronts have moved to the distances ±v_gr·t from the well.]

Fig. 2.17. Metastable state’s decay in the simple model of a 1D potential well formed by two low-transparent walls – schematically.

Absolutely similarly,

$$ I_L = \frac{1}{\alpha^2} I_B = -I_R. \tag{2.148} $$

This (virtually evident) assumption finds its formal justification in the perturbation theory to be discussed in Chapter 6. Chapter 2

Page 32 of 76

Essential Graduate Physics

QM: Quantum Mechanics

Now we may combine the 1D version (6) of the probability conservation law for the well’s interior: dW  IR  IL  0 , dt

(2.149)

with Eqs. (147)-(148) to write

1  n dW  2 W. dt  ma 2 This is just the standard differential equation,

(2.150)

1 dW  W ,  dt

Metastable state: decay law

(2.151)

of an exponential decay, W(t) = W(0)exp{-t/}, where the constant  is called the metastable state’s lifetime. In our particular case, ma 2 2   , (2.152) n . Using Eq. (2.33b) for the de Broglie waves’ group velocity, for our particular wave vector giving vgr = kn/m = n/ma, Eq. (152) may be rewritten in a more general form, Metastable state: lifetime



ta

T

(2.153)

,

where the attempt time ta is equal to a/vgr, and (in our particular case) T = 1/2. Eq. (153) is valid for a broad class of similar metastable systems;33 it may be interpreted in the following semi-classical way. The particle travels back and forth between the confining potential barriers, with the time interval ta between the sequential moments of incidence, each time attempting to leak through the wall, with the success probability equal to T, so the reduction of W per each incidence is W = –WT. In the limit T > 1, i.e. at  >> ta), we may use it to discuss two important conceptual issues of quantum mechanics. First, the decay is one of the simplest examples of systems that may be considered, from two different points of view, as either Hamiltonian (and hence time-reversible), or open (and hence irreversible). Indeed, from the former point of view, our particular system is certainly described by a time-independent Hamiltonian (1.41), with the potential energy U  x   W   x  x1     x  x 2 

(2.156)

– see Fig. 15 again. In this point of view, the total probability of finding the particle somewhere on the x-axis remains equal to 1, and the full system’s energy, calculated from Eq. (1.23), 

E 

  x, t  Hˆ x, t  d *

3

x,

(2.157)



remains constant. On the other hand, since the “emitted” wave packets would never return to the potential well,37 it makes much sense to look at the well’s region alone. For such a truncated, open system (for which the space beyond the interval [x1, x2] serves as its environment), the probability W of

35 See, e.g., V. Braginsky and F. Khalili, Quantum Measurement, Cambridge U. Press, 1992.

36 On this issue, I dare to refer the reader to my own old work K. Likharev, Int. J. Theor. Phys. 21, 311 (1982), which provided a constructive proof (for a particular system) that at reversible computation, whose idea had been put forward in 1973 by C. Bennett (see, e.g., SM Sec. 2.3), energy dissipation may be lower than this apparent “quantum limit”.

37 For more realistic 2D and 3D systems, this statement is true even if the system as a whole is confined inside some closed volume, much larger than the potential well housing the metastable states. Indeed, if the walls providing such confinement are even slightly uneven, the emitted plane-wave packets will be reflected from them, but would never return to the well intact. (See SM Sec. 2.1 for a more detailed discussion of this issue.)

finding the particle inside this interval, and hence its energy E = W·En, decay exponentially per Eq. (151) – the decay equation typical for irreversible systems. We will return to the discussion of the dynamics of such open quantum systems in Chapter 7.

Second, the same model enables a preliminary discussion of one important aspect of quantum measurements. As Eq. (151) and Fig. 17 show, at t >> τ, the well becomes virtually empty (W → 0), and the whole probability is localized in two clearly separated wave packets with equal amplitudes, moving from each other with the speed v_gr, each “carrying the particle away” with a probability of 50%. Now assume that an experiment has detected the particle on the left side of the well. Though the formalisms suitable for quantitative analysis of the detection process will not be discussed until Chapter 7, due to the wide separation Δx = 2v_gr·t >> 2v_gr·τ of the packets, we may safely assume that such detection may be done without any actual physical effect on the counterpart wave packet.38 But if we know that the particle has been found on the left side, there is no chance of finding it on the right side. If we attributed the full wavefunction to all stages of this particular experiment, this situation might be rather confusing. Indeed, that would mean that the wavefunction at the right packet’s location should instantly turn into zero – the so-called wave packet reduction (or “collapse”) – a hypothetical, irreversible process that cannot be described by the Schrödinger equation for this system, even including the particle detectors. However, if (as was already discussed in Sec. 1.3) we attribute the wavefunction to a certain statistical ensemble of similar experiments, there is no need to involve such artificial notions. The two-wave-packet picture we have calculated (Fig. 17) describes the full ensemble of experiments with all systems prepared in the initial state (143), i.e. does not depend on the particle detection results.
On the other hand, the “reduced packet” picture (with no wave packet on the right of the well) describes only a sub-ensemble of such experiments, in which the particles have been detected on the left side. As was discussed on classical examples in Sec. 1.3, for such a redefined ensemble, the probability distribution is rather different. So, the “wave packet reduction” is just a result of a purely accounting decision of the observer.39 I will return to this important issue in Sec. 10.1 – on the basis of the forthcoming discussion of open systems in Chapters 7 and 8.

2.6. Localized state coupling, and quantum oscillations

Now let us discuss one more effect specific to quantum mechanics. Its mathematical description may be simplified using a model potential consisting of two very short and deep potential wells. For that, let us first analyze the properties of a single well of this type (Fig. 18), which may be modeled similarly to the short and high potential barrier – see Eq. (74), but with a negative “weight”:

$$ U(x) = -W\delta(x), \qquad \text{with } W > 0. \tag{2.158} $$

In contrast to its tunnel-barrier counterpart (74), such potential sustains a stationary state with a negative eigenenergy E < 0, and a localized eigenfunction ψ, with ψ → 0 at x → ±∞.

38 This argument is especially convincing if the particle’s detection time is much shorter than the time tc = 2v_gr·t/c, where c is the speed of light in vacuum, i.e. the maximum velocity of any information transfer.

39 “The collapse of the wavefunction after measurement represents nothing more than the updating of that scientist’s expectations.” N. D. Mermin, Phys. Today 72, 53 (Jan. 2013).

[Fig. 2.18 shows the delta-functional potential well U(x) = –Wδ(x), its energy level E0 < 0, and the localized eigenfunction of spatial extension ~1/κ.]

Fig. 2.18. Delta-functional potential well and its localized eigenstate (schematically).

Indeed, at x ≠ 0, U(x) = 0, so the 1D Schrödinger equation is reduced to the Helmholtz equation (1.83), whose localized solutions with E < 0 are single exponents vanishing at large distances:40

$$ \psi(x) = \psi_0(x) = \begin{cases} A e^{\kappa x}, & \text{for } x < 0, \\ A e^{-\kappa x}, & \text{for } x > 0, \end{cases} \qquad \text{with } \frac{\hbar^2 \kappa^2}{2m} = -E, \quad \kappa > 0. \tag{2.159} $$

Here the pre-exponential coefficients are taken equal to satisfy the boundary condition (76) of the wavefunction’s continuity at x = 0. Plugging Eq. (159) into the second boundary condition, given by Eq. (75) but now with the negative sign before W, we get

$$ -\kappa A - \kappa A = -\frac{2mW}{\hbar^2} A, \tag{2.160} $$

(2.160)

in which the common factor A  0 may be canceled. This equation41 has one solution for any W > 0: mW (2.161)   0  2 ,  and hence the system has only one localized state, with the following eigenenergy:42  2 02 mW 2 E  E0    . (2.162) 2m 2 2 Now we are ready to analyze localized states of the two-well potential shown in Fig. 19:

  a a   U ( x)  W   x      x  , with W  0 . (2.163) 2 2     Here we may still use the single-exponent solutions similar to Eq. (159), for the wavefunction outside the interval [-a/2, +a/2], but inside the interval, we need to take into account both possible exponents:

  C  ex  C  e x  C A sinh x  CS cosh x,

for 

a a x , 2 2

(2.164)

with the parameter  defined as in Eq. (159). The latter of these two equivalent expressions is more convenient because due to the symmetry of the potential (163) with respect to the central point x = 0, the system’s eigenfunctions should be either symmetric (even) or antisymmetric (odd) functions of x (see 40

See Eqs. (56)-(58), with U0 = 0. Such algebraic equations are frequently called characteristic. 42 Note that this E is equal, by magnitude, to the constant E that participates in Eq. (79). Note also that this result 0 0 was actually already obtained, “backward”, in the solution of Problem 1.12(ii), but that solution did not address the issue of whether the calculated potential (158) could sustain any other localized eigenstates. 41


Fig. 19), so they may be analyzed separately, only for one half of the system, say x ≥ 0, and using just one of the hyperbolic functions (164) in each case.

[Fig. 2.19 shows the two wells at x = ±a/2, the symmetric eigenfunction ψS with its (lower) energy ES, and the antisymmetric eigenfunction ψA with its (higher) energy EA.]

Fig. 2.19. A system of two coupled potential wells, and its localized eigenstates (schematically).

For the antisymmetric eigenfunction, Eqs. (159) and (164) yield

$$ \psi_A = C_A \times \begin{cases} \sinh \kappa x, & \text{for } 0 \le x \le \dfrac{a}{2}, \\[1ex] \sinh \dfrac{\kappa a}{2}\, \exp\left\{-\kappa\left(x - \dfrac{a}{2}\right)\right\}, & \text{for } \dfrac{a}{2} \le x, \end{cases} \tag{2.165} $$

where the front coefficient in the lower line is adjusted to satisfy the condition (76) of the wavefunction’s continuity at x = +a/2 – and hence at x = –a/2. What remains is to satisfy the condition (75), with a negative sign before W, for the derivative’s jump at that point. This condition yields the following characteristic equation:

$$ \sinh \frac{\kappa a}{2} + \cosh \frac{\kappa a}{2} = \frac{2mW}{\hbar^2 \kappa} \sinh \frac{\kappa a}{2}, \qquad \text{i.e. } 1 + \coth \frac{\kappa a}{2} = \frac{2\kappa_0 a}{\kappa a}, \tag{2.166} $$

where κ0, given by Eq. (161), is the value of κ for a single well, i.e. the reciprocal spatial width of its localized eigenfunction – see Fig. 18.

Figure 20a shows both sides of Eq. (166) as functions of the dimensionless product κa, for several values of the parameter κ0a, i.e. of the normalized distance between the two wells. The plots show, first of all, that as the parameter κ0a is decreased, the left-hand-side and right-hand-side plots cross (i.e. Eq. (166) has a solution) at lower and lower values of κa. The solution exists only at κ0a > 1, i.e. if the distance a between the two narrow potential wells is larger than the following value,

$$ a_{\min} = \frac{1}{\kappa_0} = \frac{\hbar^2}{mW}, \tag{2.167} $$

which is equal to the characteristic spread of the wavefunction in a single well – see Fig. 18. (At a → a_min, κ → 0, meaning that the state’s localization becomes weaker and weaker.) In the opposite limit of large distances between the potential wells, i.e. κ0a >> 1, Eq. (166) shows that κa >> 1 as well, so its left-hand side may be approximated as 2(1 + exp{–κa}), and the equation yields


[Fig. 2.20 plots the left-hand sides of Eq. (166) (panel (a)) and Eq. (172) (panel (b)), together with their common right-hand side 2κ0a/κa, as functions of κa, for κ0a = 0.5, 1.0, and 1.5.]

Fig. 2.20. Graphical solutions of the characteristic equations of the two-well system, for: (a) the antisymmetric eigenstate (165), and (b) the symmetric eigenstate (171).

$$ \kappa = \kappa_0 \left(1 - \exp\{-\kappa_0 a\}\right) < \kappa_0. \tag{2.168} $$

This result means that the eigenfunction is an antisymmetric superposition of two virtually unperturbed wavefunctions (159) of each partial potential well:

$$ \psi_A(x) = \frac{1}{\sqrt{2}}\left[\psi_R(x) - \psi_L(x)\right], \qquad \text{with } \psi_R(x) = \psi_0\!\left(x - \frac{a}{2}\right), \quad \psi_L(x) = \psi_0\!\left(x + \frac{a}{2}\right), \tag{2.169} $$

where the front coefficient is selected in such a way that if the eigenfunction ψ0 of each well is normalized, so is ψA. Plugging the middle (more exact) form of Eq. (168) into the last of Eqs. (159), we can see that in this limit the antisymmetric state’s energy is slightly higher than the eigenenergy E0 of a single well, given by Eq. (162):

$$ E_A = E_0\left(1 - 2\exp\{-\kappa_0 a\}\right) \equiv E_0 + \delta, \qquad \text{where } \delta \equiv \frac{mW^2}{\hbar^2}\exp\{-\kappa_0 a\} > 0. \tag{2.170} $$

The symmetric eigenfunction has a form similar to Eq. (165), but is still different from it:

$$ \psi_S = C_S \times \begin{cases} \cosh \kappa x, & \text{for } 0 \le x \le \dfrac{a}{2}, \\[1ex] \cosh \dfrac{\kappa a}{2}\, \exp\left\{-\kappa\left(x - \dfrac{a}{2}\right)\right\}, & \text{for } \dfrac{a}{2} \le x, \end{cases} \tag{2.171} $$

giving a characteristic equation similar in structure to Eq. (166), but with a different left-hand side:

$$ 1 + \tanh \frac{\kappa a}{2} = \frac{2\kappa_0 a}{\kappa a}. \tag{2.172} $$

Figure 20b shows both sides of this equation for several values of the parameter κ₀a. It is evident that in contrast to Eq. (166), Eq. (172) has a unique solution (and hence the system has a localized symmetric eigenstate) for any value of the parameter κ₀a, i.e. for any distance between the partial wells. In the limit of very close wells (i.e. their strong coupling), κ₀a << 1, Eq. (172) yields κa << 1 as well, so its left-hand side is close to 1, and the solution tends to

κ → 2κ₀, (2.173)

meaning that the two close wells act on the particle as a single well of double weight, 2W. In the opposite limit of large distances between the wells, κ₀a >> 1, Eq. (172) shows that κa >> 1 as well, so its left-hand side may be approximated as 2(1 − exp{−κa}), and the equation yields

κ = κ₀[1 + exp{−κ₀a}] > κ₀. (2.174)
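Both characteristic equations are easy to check numerically. Below is a minimal sketch (in units with a = 1, so that κ₀a is the single dimensionless parameter); note that the explicit forms used for Eqs. (166) and (172), namely 1 + coth(κa/2) = 2κ₀a/κa and 1 + tanh(κa/2) = 2κ₀a/κa, are an assumption of this illustration, chosen to be consistent with the large-κa approximations quoted above:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Bisection root finder; f(lo) and f(hi) must have opposite signs."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

k0a = 4.0   # kappa_0 * a >> 1: weakly coupled wells

# Antisymmetric state, Eq. (166): 1 + coth(ka/2) = 2*k0a/ka
fA = lambda x: 1.0 + 1.0 / math.tanh(0.5 * x) - 2.0 * k0a / x
# Symmetric state, Eq. (172): 1 + tanh(ka/2) = 2*k0a/ka
fS = lambda x: 1.0 + math.tanh(0.5 * x) - 2.0 * k0a / x

kAa = bisect(fA, 1e-6, 4.0 * k0a)   # exists only for k0a > 1, cf. Eq. (167)
kSa = bisect(fS, 1e-6, 4.0 * k0a)   # exists for any k0a

# Weak-coupling asymptotes, Eqs. (168) and (174): kappa = kappa_0*(1 -/+ e^{-k0a})
print(kAa, k0a * (1.0 - math.exp(-k0a)))   # antisymmetric: kappa < kappa_0
print(kSa, k0a * (1.0 + math.exp(-k0a)))   # symmetric:     kappa > kappa_0
```

For κ₀a = 4, the numerical roots agree with the asymptotes to better than one percent, and the ordering κ_A < κ₀ < κ_S is reproduced.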

In this limit, the eigenfunction is a symmetric superposition of two virtually unperturbed wavefunctions (159) of each partial potential well:

ψ_S(x) = (1/√2)[ψ_R(x) + ψ_L(x)], (2.175)

and the eigenenergy is also close to the energy E₀ of each partial well, but is slightly lower than it:

E_S = E₀(1 + 2exp{−κ₀a}) ≡ E₀ − δ,  so that  E_A − E_S = 2δ, (2.176)

where δ is again given by the last of Eqs. (170). So, the eigenenergy of the symmetric state is always lower than that of the antisymmetric state. The physics of this effect (which remains qualitatively the same in more complex two-component systems, most importantly in diatomic molecules such as H₂) is evident from the sketch of the wavefunctions ψ_A and ψ_S, given by Eqs. (165) and (171), in Fig. 19. In the antisymmetric mode, the wavefunction has to vanish at the center of the system, so each of its halves is squeezed into one half of the system's spatial extension. Such a squeeze increases the function's gradient, and hence its kinetic energy (1.27), and hence its total energy. On the contrary, in the symmetric mode, the wavefunction effectively spreads into the counterpart well. As a result, it changes in space more slowly, and hence its kinetic energy is also lower. Even more importantly, the symmetric state's energy level goes down as the distance a is decreased, corresponding to an effective attraction of the partial wells. This is a good toy model of the strongest (and most important) type of atomic cohesion – the covalent (or "chemical") bonding.43 In the simplest case of the H₂ molecule, each of the two electrons of the system, in its ground state,44 reduces its kinetic energy by spreading its wavefunction around both hydrogen nuclei (protons), rather than being confined near one of them – as it had to be in a single atom. The resulting bonding is very strong: in chemical units, it is 429 kJ/mol, i.e. about 4.45 eV per molecule. Perhaps counter-intuitively, this quantum-mechanical covalent bonding is even stronger than the strongest classical (ionic) bonding due to electron transfer between atoms, leading to the Coulomb attraction of the resulting ions. (For example, the atomic cohesion in the NaCl molecule is just 3.28 eV.)
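The conversion between the chemical (molar) and atomic energy units quoted here is easy to verify. A minimal sketch, using the CODATA values of the Avogadro constant and the elementary charge:

```python
# Convert a molar bond energy to eV per molecule: E[eV] = E[J/mol] / (N_A * e)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (CODATA)
e = 1.602176634e-19   # elementary charge, C (1 eV = e joules)

E_H2 = 429e3 / (N_A * e)   # H2 covalent bond, converted from 429 kJ/mol
print(round(E_H2, 2))      # about 4.45 eV per molecule
```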

43 Historically, the development of the quantum theory of such bonding in the H₂ molecule (by Walter Heinrich Heitler and Fritz Wolfgang London in 1927) was the breakthrough decisive for the acceptance of the then-emerging quantum mechanics by the community of chemists.
44 Due to the opposite spins of these electrons, the Pauli principle allows them to be in the same orbital ground state – see Chapter 8.

Now let us analyze the dynamic properties of our model system (Fig. 19) more carefully, because such a pair of weakly coupled potential wells is our first example of the very important class of two-level systems.45 This is easiest to do in the weak-coupling limit κ₀a >> 1, when the simple results (168)-(170) and (174)-(176) are quantitatively valid. In particular, Eqs. (169) and (175) enable us to represent the quasi-localized states of the particle in each partial well as linear combinations of its two eigenstates:

ψ_R(x) = (1/√2)[ψ_S(x) + ψ_A(x)],    ψ_L(x) = (1/√2)[ψ_S(x) − ψ_A(x)]. (2.177)

Let us perform the following thought ("gedanken") experiment: place a particle, at t = 0, into one of these quasi-localized states, say ψ_R(x), and leave the system alone to evolve, so

Ψ(x, 0) = ψ_R(x) = (1/√2)[ψ_S(x) + ψ_A(x)]. (2.178)

According to the general solution (1.69) of the time-dependent Schrödinger equation, the time dynamics of this wavefunction may be obtained simply by multiplying each eigenfunction by the corresponding complex-exponential time factor:

Ψ(x, t) = (1/√2)[ψ_S(x) exp{−iE_S t/ħ} + ψ_A(x) exp{−iE_A t/ħ}]. (2.179)

From here, using Eqs. (170) and (176), and then Eqs. (169) and (175) again, we get

Ψ(x, t) = (1/√2)[ψ_S(x) exp{iδt/ħ} + ψ_A(x) exp{−iδt/ħ}] exp{−iE₀t/ħ}
        = [ψ_R(x) cos(δt/ħ) + i ψ_L(x) sin(δt/ħ)] exp{−iE₀t/ħ}. (2.180)

This result implies, in particular, that the probabilities W_R and W_L to find the particle, respectively, in the right and left wells change with time as

W_R = cos²(δt/ħ),    W_L = sin²(δt/ħ), (2.181)

mercifully leaving the total probability constant: W_R + W_L = 1. (If our calculation had not passed this sanity check, we would be in big trouble.) This is the famous effect of quantum oscillations46 of the particle's wavefunction between two coupled localized states, with the frequency

Ω = 2δ/ħ ≡ (E_A − E_S)/ħ. (2.182)
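The oscillation law (181) may be cross-checked directly against the eigenstate decomposition (178)-(179). A minimal sketch (ħ = 1; the values of E₀ and δ below are arbitrary illustrative numbers, not ones from the text):

```python
import cmath
import math

hbar = 1.0
E0, delta = -1.0, 0.05           # single-well energy and half-split, Eq. (170)
ES, EA = E0 - delta, E0 + delta  # Eq. (176)

def amplitudes(t):
    """Project Psi(x, t) of Eq. (179) back onto psi_R, psi_L via Eq. (177)."""
    cS = cmath.exp(-1j * ES * t / hbar) / math.sqrt(2)  # coefficient of psi_S
    cA = cmath.exp(-1j * EA * t / hbar) / math.sqrt(2)  # coefficient of psi_A
    aR = (cS + cA) / math.sqrt(2)
    aL = (cS - cA) / math.sqrt(2)
    return aR, aL

for t in [0.0, 3.0, 10.0, 31.4]:
    aR, aL = amplitudes(t)
    WR, WL = abs(aR)**2, abs(aL)**2
    # Eq. (181): WR = cos^2(delta*t/hbar), WL = sin^2(delta*t/hbar), WR + WL = 1
    assert abs(WR - math.cos(delta * t / hbar)**2) < 1e-12
    assert abs(WR + WL - 1.0) < 1e-12
print("oscillation frequency Omega =", 2 * delta / hbar)  # Eq. (182)
```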

In its last form, this result does not depend on the assumption of weak coupling, though the simple form (181) of the oscillations, with its 100% probability variations, does. (Indeed, at a strong coupling of two

45 As we will see later in Chapter 4, these properties are similar to those of spin-½ particles; hence two-level systems are sometimes called spin-½-like systems.
46 Sometimes they are called the Bloch oscillations, but more commonly the last term is reserved for a related but different effect in spatially-periodic systems – to be discussed in Sec. 8 below.


subsystems, the very notion of the quasi-localized states ψ_R and ψ_L is ambiguous.) Qualitatively, this effect may be interpreted as follows: the particle placed into one of the potential wells tries to escape from it via tunneling through the potential barrier separating the wells. (In our particular system, shown in Fig. 17, the barrier is formed by the spatial segment of length a, which has the potential energy, U = 0, higher than the eigenstate energy −E₀.) However, in the two-well system, the particle can only escape into the adjacent well. After the tunneling into that counterpart well, the particle tries to escape from it, and hence comes back, etc. – very much like a classical 1D oscillator, initially deflected from its equilibrium position, at negligible damping. Some care is required when using this interpretation for quantitative conclusions. In particular, let us compare the period T ≡ 2π/Ω of the oscillations (181) with the metastable state's lifetime discussed in the previous section. For our particular model, we may use the second of Eqs. (170) to write



Ω = (4E₀/ħ) exp{−κ₀a},  i.e.  T ≡ 2π/Ω = (πħ/2E₀) exp{κ₀a} ≡ (t_a/4) exp{κ₀a},  for κ₀a >> 1, (2.183)

where t_a ≡ 2π/ω₀ ≡ 2πħ/E₀ is the effective attempt time. On the other hand, according to Eq. (80), the transparency T of our potential barrier in this limit scales as exp{−2κ₀a},47 so that, according to the general relation (153), the lifetime τ is of the order of t_a exp{2κ₀a} >> T. This is a rather counter-intuitive result: the speed of particle tunneling into a similar adjacent well is much higher than that, through a similar barrier, into the free space! In order to show that this important result is not an artifact of our simple delta-functional model of the potential wells, and also to compare T and τ more directly, let us analyze the quantum oscillations between two weakly coupled wells, now assuming that the (symmetric) potential profile U(x) is sufficiently soft (Fig. 21), so that all its eigenfunctions ψ_S and ψ_A are at least differentiable at all points.48


Fig. 2.21. Weak coupling between two similar, soft potential wells.

If the barrier's transparency is low, the quasi-localized wavefunctions ψ_R(x) and ψ_L(x) = ψ_R(−x) and their eigenenergies may be found approximately by solving the Schrödinger equations in one of the wells, neglecting the tunneling through the barrier, but the calculation of δ requires a little bit more care. Let us write the stationary Schrödinger equations for the symmetric and antisymmetric solutions as

47 It is hard to use Eq. (80) for a more exact evaluation of T in our current system, with its infinitely deep potential wells, because the meaning of the wave number k is not quite clear. However, this is not too important, because, in the limit κ₀a >> 1, the tunneling exponent makes the dominant contribution to the transparency – see, again, Fig. 2.7b.
48 Such smooth wells may have more than one localized eigenstate, so the proper state (and energy) index n is implied in all remaining formulas of this section.


[E_A − U(x)]ψ_A = −(ħ²/2m) d²ψ_A/dx²,    [E_S − U(x)]ψ_S = −(ħ²/2m) d²ψ_S/dx², (2.184)

multiply the former equation by ψ_S and the latter one by ψ_A, subtract them from each other, and then integrate the result from 0 to ∞. The result is

(E_A − E_S) ∫₀^∞ ψ_S ψ_A dx = (ħ²/2m) ∫₀^∞ [ψ_A (d²ψ_S/dx²) − ψ_S (d²ψ_A/dx²)] dx. (2.185)

If U(x), and hence d²ψ_{A,S}/dx², are finite for all x, we may integrate the right-hand side by parts to get

(E_A − E_S) ∫₀^∞ ψ_S ψ_A dx = (ħ²/2m) [ψ_A (dψ_S/dx) − ψ_S (dψ_A/dx)]₀^∞. (2.186)

So far, this result is exact (provided that the derivatives participating in it are finite at each point); for weakly coupled wells, it may be further simplified. Indeed, in this case, the left-hand side of Eq. (186) may be approximated as

(E_A − E_S) ∫₀^∞ ψ_S ψ_A dx ≈ (E_A − E_S)/2 ≡ δ, (2.187)

because this integral is dominated by the vicinity of the point x = a/2, where the second terms in each of Eqs. (169) and (175) are negligible, so that, assuming the proper normalization of the function ψ_R(x), the integral is equal to ½. On the right-hand side of Eq. (186), the substitution at x = ∞ vanishes (due to the wavefunction's decay in the classically forbidden region), and so does the first term at x = 0, because for the antisymmetric solution, ψ_A(0) = 0. As a result, the energy half-split δ may be expressed in any of the following (equivalent) forms:



d d d 2 2 2  S (0) A (0)   R (0) R (0)    L (0) L (0). 2m dx m dx m dx

(2.188)

It is straightforward (and hence left for the reader's exercise) to show that within the limits of the WKB approximation's validity, Eq. (188) may be reduced to

δ = (ħ/t_a) exp{−∫_{x_c}^{x_c'} κ(x') dx'},  so that  T = π t_a exp{∫_{x_c}^{x_c'} κ(x') dx'}, (2.189)

where t_a is the time period of the classical motion of the particle, with the energy E ≈ E_A ≈ E_S, inside each well, the function κ(x) is defined by Eq. (82), and x_c and x_c' are the classical turning points limiting the potential barrier at the level E of the particle's eigenenergy – see Fig. 21. The result (189) is evidently a natural generalization of Eq. (183), so the strong relationship between the times of particle tunneling into the continuum of states and into a discrete eigenstate is indeed not specific to the delta-functional model. We will return to this fact, in its more general form, at the end of Chapter 6.
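The splitting formula (189) is straightforward to evaluate for a concrete smooth double well. A minimal numerical sketch (ħ = m = 1) for the quartic profile U(x) = U₀[(x/b)² − 1]², with an illustrative fixed energy E well below the barrier top (in a real calculation, E would be the quantized level energy Eₙ; this particular potential is just a convenient example, not one used in the text):

```python
import math

def wkb_half_split(U0, b=1.0, E=0.5, n=50_000):
    """Evaluate Eq. (189): delta = (hbar/t_a) * exp(-Int kappa dx), hbar = m = 1."""
    U = lambda x: U0 * ((x / b)**2 - 1.0)**2
    # Turning points from U(x) = E: (x/b)^2 = 1 -/+ sqrt(E/U0)
    s = math.sqrt(E / U0)
    xc = b * math.sqrt(1.0 - s)   # inner turning point (barrier edge)
    xo = b * math.sqrt(1.0 + s)   # outer turning point of the right well
    # Classical period t_a = 2 * Int dx / v over one well, v = sqrt(2(E - U))
    h = (xo - xc) / n
    t_a = 2.0 * sum(h / math.sqrt(2.0 * (E - U(xc + (i + 0.5) * h)))
                    for i in range(n))
    # Tunneling exponent over the barrier: Int kappa dx, kappa = sqrt(2(U - E))
    h = 2.0 * xc / n
    S = sum(h * math.sqrt(2.0 * (U(-xc + (i + 0.5) * h) - E)) for i in range(n))
    return math.exp(-S) / t_a

d5, d10 = wkb_half_split(5.0), wkb_half_split(10.0)
print(d5, d10)
assert 0 < d10 < d5   # a higher barrier gives an exponentially smaller split
```

The midpoint sums handle the integrable 1/√v singularities at the turning points with accuracy that is plenty for this qualitative check.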

2.7. Periodic systems: Energy bands and gaps

Let us now proceed to the discussion of one of the most consequential issues of wave mechanics: particle motion through a periodic system. As a precursor to this discussion, let us calculate the transparency of the potential profile shown in Fig. 22 (frequently called the Dirac comb): a sequence of N similar, equidistant delta-functional potential barriers separated by (N − 1) potential-free intervals of length a.



Fig. 2.22. Tunneling through a Dirac comb: a system of N similar, equidistant barriers, i.e. (N – 1) similar coupled potential wells.

According to Eq. (132), its transfer matrix is the following product of (2N − 1) operands:

T = T_α T_a T_α ... T_a T_α, (2.190)

with the N component matrices T_α given by Eq. (135), the (N − 1) matrices T_a given by Eq. (138), and the barrier height parameter α defined by the last of Eqs. (78). Remarkably, this multiplication may be carried out analytically for arbitrary N,49 giving

T ≡ |T₁₁|⁻² = { cos² Nqa + [(sin ka − α cos ka)/sin qa]² sin² Nqa }⁻¹, (2.191a)

where q is a new parameter, with the wave number dimensionality, defined by the following relation:

cos qa = cos ka + α sin ka. (2.191b)

For N = 1, Eqs. (191) immediately yield our old result (79), while for N = 2 they may be readily reduced to Eq. (141) – see Fig. 16a. Fig. 23 shows their predictions for two larger numbers N, and several values of the dimensionless parameter α.
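The closed form (191) can be cross-checked against a brute-force multiplication of the matrices in Eq. (190). A minimal sketch, taking the one-barrier matrix as T_α = [1 − iα, −iα; iα, 1 + iα] and the free-interval matrix as T_a = diag(exp{ika}, exp{−ika}) (explicit forms written out here as an assumption, since Eqs. (135) and (138) are not reproduced in this excerpt); the complex arccosine keeps Eq. (191a) valid inside the energy gaps, where qa is imaginary:

```python
import cmath
import math

def matmul(X, Y):
    """2x2 complex matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def transparency_brute(N, ka, alpha):
    """|T11|^-2 for the brute-force product T_alpha (T_a T_alpha)^(N-1), Eq. (190)."""
    Tb = [[1 - 1j*alpha, -1j*alpha], [1j*alpha, 1 + 1j*alpha]]  # one barrier
    Ta = [[cmath.exp(1j*ka), 0], [0, cmath.exp(-1j*ka)]]        # one interval
    T = Tb
    for _ in range(N - 1):
        T = matmul(Tb, matmul(Ta, T))
    return 1.0 / abs(T[0][0])**2

def transparency_closed(N, ka, alpha):
    """Eq. (191a), with qa defined by cos qa = cos ka + alpha sin ka, Eq. (191b)."""
    qa = cmath.acos(math.cos(ka) + alpha * math.sin(ka))
    t11 = cmath.cos(N*qa) + 1j * (math.sin(ka) - alpha*math.cos(ka)) \
          * cmath.sin(N*qa) / cmath.sin(qa)
    return 1.0 / abs(t11)**2

for N, ka, alpha in [(1, 0.7, 0.3), (3, 0.7, 0.3), (10, 2.0, 1.0)]:
    assert abs(transparency_brute(N, ka, alpha) - transparency_closed(N, ka, alpha)) < 1e-9
print(transparency_closed(1, 0.7, 0.3), 1/(1 + 0.3**2))  # N = 1 recovers 1/(1 + alpha^2)
```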

N 3

  0.3

0.8

0.8

0.6

T

T

1.0

0.2 0



0.6

0.4

3.0 0

0.2

0.4

0.6

ka / 

0.8

(b)

N  10

0.3

0.4

1.0

0.2 0

3.0 0

0.2

0.4

0.6

ka / 

0.8

Fig. 2.23. The Dirac comb's transparency as a function of the product ka for three values of α. Since the function T(ka) is π-periodic (just like it is for N = 2, see Fig. 16a), only one period is shown.

Let us start the discussion of the plots from the case N = 3, when three barriers limit two coupled potential wells between them. Comparison of Fig. 23a and Fig. 16a shows that the transmission patterns,

49 This formula will be easier to prove after we have discussed the properties of Pauli matrices in Chapter 4.


and their dependence on the parameter α, are very similar, except that in the coupled-well system, each resonant tunneling peak splits into two, with the ka-difference between them scaling as 1/α. From the discussion in the last section, we may now readily interpret this result: each pair of resonance peaks of transparency corresponds to the alignment of the incident particle's energy E with the pair of energy levels of the symmetric (E_S) and antisymmetric (E_A) states of the system. However, in contrast to the system shown in Fig. 19, these states are metastable, because the particle may leak out from these states just as it could in the system studied in Sec. 5 – see Fig. 15 and its discussion. As a result, each of the resonant peaks has a non-zero energy width ΔE, obeying Eq. (155). A further increase of N (see Fig. 23b) results in the increase of the number of resonant peaks per period to (N − 1), and at N → ∞ the peaks merge into the so-called allowed energy bands (frequently called just the "energy bands") with average transparency T ~ 1, separated from similar bands in the adjacent periods of the function T(ka) by energy gaps50 where T → 0. Notice the following important features of the pattern:

(i) at N → ∞, the band/gap edges become sharp for any α, and tend to fixed positions (determined by α but independent of N);
(ii) the larger the well coupling (the smaller α), the broader the allowed energy bands and the narrower the gaps between them.

Our previous discussion of the resonant tunneling gives us a clue for a semi-quantitative interpretation of these features: if (N − 1) potential wells are weakly coupled by tunneling through the potential barriers separating them, the system's energy spectrum consists of groups of (N − 1) metastable energy levels, each group being close to one of the unperturbed eigenenergies of the well. (According to Eq. (1.84), for our current example shown in Fig. 22, with its rectangular potential wells, these eigenenergies correspond to k_n a = nπ.) Now let us recall that in the case N = 2 analyzed in the previous section, the eigenfunctions (169) and (175) differed only by the phase shift Δφ between their localized components ψ_R(x) and ψ_L(x), with Δφ = 0 for one of them (ψ_S) and Δφ = π for its counterpart. Hence it is natural to expect that for other N as well, each metastable energy level corresponds to an eigenfunction that is a superposition of similar localized functions in each potential well, but with certain phase shifts Δφ between them. Moreover, we may expect that at N → ∞, i.e. for periodic structures,51 with

U(x + a) = U(x),

(2.192)

when the system does not have the ends that could affect its properties, the phase shifts Δφ between the localized wavefunctions in all couples of adjacent potential wells should be equal, i.e.

ψ(x + a) = ψ(x) exp{iΔφ} (2.193a)

for all x.52 This equality is the much-celebrated Bloch theorem,53 or rather its 1D version. Mathematical rigor aside,54 it is a virtually evident fact because the particle's density w(x) = ψ*(x)ψ(x), which has to

50 In solid-state (especially semiconductor) physics and electronics, the term bandgaps is more common.
51 This is a reasonable 1D model, for example, for solid-state crystals, whose samples may feature up to ~10⁹ similar atoms or molecules in each direction of the crystal lattice.


be periodic in this a-periodic system, may be so only if Δφ is constant. For what follows, it is more convenient to represent the real constant Δφ in the form qa, so that the Bloch theorem takes the form

Bloch theorem: 1D version

ψ(x + a) = ψ(x) exp{iqa}. (2.193b)

The physical sense of the parameter q will be discussed in detail below, but we may immediately notice that according to Eq. (193b), an addition of (2π/a) to this parameter yields the same wavefunction; hence all observables have to be (2π/a)-periodic functions of q.55 Now let us use the Bloch theorem to calculate the eigenfunctions and eigenenergies for the infinite version of the system shown in Fig. 22, i.e. for an infinite set of delta-functional potential barriers – see Fig. 24.


Fig. 2.24. The simplest periodic potential: an infinite Dirac comb.

To start, let us consider two points separated by one period a: one of them, x_j, just left of one of the barriers, and another one, x_{j+1}, just left of the following barrier – see Fig. 24 again. The eigenfunctions at each of the points may be represented as linear superpositions of two simple waves exp{±ikx}, and the amplitudes of their components should be related by a 2×2 transfer matrix T of the potential fragment separating them. According to Eq. (132), this matrix may be found as the product of the matrix (135) of one delta-functional barrier by the matrix (138) of one zero-potential interval a:

(A_{j+1}; B_{j+1}) = T_a T_α (A_j; B_j) = [ exp{ika}, 0; 0, exp{−ika} ] [ 1 − iα, −iα; iα, 1 + iα ] (A_j; B_j). (2.194)

However, according to the Bloch theorem (193b), the component amplitudes should be also related as

(A_{j+1}; B_{j+1}) = exp{iqa} (A_j; B_j) = [ exp{iqa}, 0; 0, exp{iqa} ] (A_j; B_j). (2.195)

52 A reasonably fair classical image of Δφ is the geometric angle between similar objects – e.g., similar paper clips – attached at equal distances to a long, uniform rubber band. If the band's ends are twisted, the twist is equally distributed between the structure's periods, representing the constancy of Δφ.
53 Named after F. Bloch, who applied this concept to wave mechanics in 1929, i.e. very soon after its formulation. Note, however, that an equivalent statement in mathematics, called the Floquet theorem, has been known since at least 1883.
54 I will recover this rigor in two steps. Later in this section, we will see that the function obeying Eq. (193) is indeed a solution to the Schrödinger equation. However, to save time/space, it will be better for us to postpone until Chapter 4 the proof that any eigenfunction of the equation, with periodic boundary conditions, obeys the Bloch theorem. As a partial reward for this delay, that proof will be valid for an arbitrary spatial dimensionality.
55 The product ħq, which has the linear momentum's dimensionality, is called either the quasimomentum or (especially in solid-state physics) the "crystal momentum" of the particle. Informally, it is very convenient (and common) to use the name "quasimomentum" for the bare q as well, despite its evidently different dimensionality.

The condition of self-consistency of these two equations gives the following characteristic equation:

det( [ exp{ika}, 0; 0, exp{−ika} ] [ 1 − iα, −iα; iα, 1 + iα ] − [ exp{iqa}, 0; 0, exp{iqa} ] ) = 0. (2.196)

In Sec. 5, we have already calculated the matrix product participating in this equation – see the second operand in Eq. (140). Using it, we see that Eq. (196) is reduced to the same simple Eq. (191b) that has jumped at us from the solution of the somewhat different (resonant tunneling) problem. Let us explore that simple result in detail. First of all, the left-hand side of Eq. (191b) is a sinusoidal function of the product qa with unit amplitude, while its right-hand side is a sinusoidal function of the product ka, with amplitude (1 + α²)^{1/2} > 1 – see Fig. 25.
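The band/gap alternation can also be seen from the eigenvalues of the one-period transfer matrix T_a T_α: since its determinant equals 1, its two eigenvalues (the Bloch multipliers exp{±iqa}) are unimodular exactly when |cos qa| ≤ 1. A minimal sketch, again taking the explicit matrix forms T_α = [1 − iα, −iα; iα, 1 + iα] and T_a = diag(exp{ika}, exp{−ika}) as an assumption of this illustration:

```python
import cmath
import math

alpha = 1.0   # dimensionless strength of one delta barrier

def cos_qa(ka):
    # Half-trace of the one-period matrix T_a T_alpha; with unit determinant,
    # its eigenvalues exp(+/-iqa) obey cos qa = (half-trace), cf. Eq. (191b)
    tr = cmath.exp(1j*ka) * (1 - 1j*alpha) + cmath.exp(-1j*ka) * (1 + 1j*alpha)
    return tr / 2

def multipliers(ka):
    """Bloch multipliers: roots of lambda^2 - 2*cos(qa)*lambda + 1 = 0."""
    c = cos_qa(ka)
    r = cmath.sqrt(c * c - 1.0)
    return c + r, c - r

# The half-trace is real and equals cos ka + alpha sin ka, as in Eq. (191b):
c = cos_qa(2.0)
assert abs(c.imag) < 1e-12
assert abs(c.real - (math.cos(2.0) + alpha * math.sin(2.0))) < 1e-12

l1, l2 = multipliers(2.0)   # |cos qa| < 1 here: allowed band, q is real
assert abs(abs(l1) - 1.0) < 1e-12 and abs(abs(l2) - 1.0) < 1e-12
l1, l2 = multipliers(1.0)   # cos 1 + sin 1 > 1 here: energy gap, no real q
assert abs(l1) > 1.0 > abs(l2)   # growing and decaying solutions instead
print("ka = 2.0 lies in an allowed band, ka = 1.0 in a gap (for alpha = 1)")
```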


Fig. 2.25. The graphical representation of the characteristic equation (191b) for a fixed value of the parameter α. The ranges of ka that yield |cos qa| < 1 correspond to allowed energy bands, while those with |cos qa| > 1 correspond to energy gaps between them.

As a result, within each half-period Δ(ka) = π of the right-hand side, there is an interval where the magnitude of the right-hand side is larger than 1, so the characteristic equation does not have a real solution for q. These intervals correspond to the energy gaps (see Fig. 23 again), while the complementary intervals of ka, where a real solution for q exists, correspond to the allowed energy bands. In contrast, the parameter q can take any real values, so it is more convenient to plot the eigenenergy E = ħ²k²/2m as a function of the quasimomentum q (or, even more conveniently, of the dimensionless parameter qa) rather than of ka.56 Before doing that, we need to recall that the parameter α, defined by the last of Eqs. (78), depends on the wave vector k as well, so if we vary q (and hence k), it is better to characterize the structure by another, k-independent dimensionless parameter, for example

β ≡ α(ka) = W/(ħ²/ma), (2.197)

so our characteristic equation (191b) becomes

56 A more important reason for taking q as the argument is that for a general periodic potential U(x), the particle's momentum ħk is not uniquely related to E, while (according to the Bloch theorem) the quasimomentum q is.

Dirac comb: q vs k

cos qa = cos ka + β (sin ka)/(ka). (2.198)
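With the k-independent parameter β, the allowed bands are simply the ranges of ka where the right-hand side of Eq. (198) has magnitude not exceeding 1. A minimal sketch locating the first band and gap edges for β = 3 (the value used in Fig. 26):

```python
import math

beta = 3.0
f = lambda ka: math.cos(ka) + beta * math.sin(ka) / ka   # RHS of Eq. (198)

def edge(g, lo, hi, tol=1e-10):
    """Bisection for g(x) = 0 on [lo, hi] containing a sign change."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) * glo <= 0.0:
            hi = mid
        else:
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)

# f(0+) = 1 + beta > 1: small ka is forbidden; the first band starts where f = +1
band1_lo = edge(lambda x: f(x) - 1.0, 1e-6, math.pi)
# Just below ka = pi, f > -1 (allowed); just above, f < -1: first gap starts there
gap1_lo = edge(lambda x: f(x) + 1.0, 2.0, 4.0)
print(band1_lo, gap1_lo)
assert abs(gap1_lo - math.pi) < 1e-6   # the gap edge sits exactly at ka = pi
assert 0 < band1_lo < gap1_lo
```

The gap edge at ka = π is consistent with the infinite-well eigenenergies k_n a = nπ mentioned above: they mark the band/gap boundaries of the comb.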

Fig. 26 shows the plots of k and E, following from Eq. (198), as functions of qa, for a particular, moderate value of the parameter β. The first evident feature of the pattern is its 2π-periodicity in the argument qa, which we have already predicted from the general Bloch theorem arguments. Due to this periodicity, the complete band/gap pattern may be studied, for example, on just one interval −π ≤ qa ≤ +π, called the 1st Brillouin zone – the so-called reduced zone picture. For some applications, however, it is more convenient to use the extended zone picture with −∞ < qa < +∞ – see, e.g., the next section.


Fig. 2.26. (a) The "genuine" momentum k of a particle in an infinite Dirac comb (Fig. 24), and (b) its energy E = ħ²k²/2m (in the units of E₀ ≡ ħ²/2ma²), as functions of the normalized quasimomentum, for a particular value (β = 3) of the dimensionless parameter defined by Eq. (197). Arrows in the lower right corner of panel (b) illustrate the definitions of the energy band (ΔEₙ) and energy gap (Δₙ) widths.

However, maybe the most important fact, clearly visible in Fig. 26, is that there is an infinite number of energy bands, with different energies Eₙ(q) for the same value of q. Mathematically, it is evident from Eq. (198) – or alternatively from Fig. 25. Indeed, for each value of qa, there is a solution ka of this equation on each half-period Δ(ka) = π. Each such solution (see Fig. 26a) gives a specific value of the particle's energy E = ħ²k²/2m. A continuous set of similar solutions for various qa forms a particular energy band. Since the energy band picture is one of the most practically important results of quantum mechanics, it is imperative to understand its physics. It is natural to describe this physics in two different ways in two opposite potential strength limits. In parallel, we will use this discussion to obtain simpler expressions for the energy band/gap structure in each limit. An important advantage of this approach is


that both analyses may be carried out for an arbitrary periodic potential U(x) rather than for the particular Dirac comb model shown in Fig. 24.

(i) Tight-binding approximation. This approximation works well when the eigenenergy Eₙ of the states quasi-localized at the energy profile minima is much lower than the height of the potential barriers separating them – see Fig. 27. As should be clear from our discussion in Sec. 6, essentially the only role of coupling between these states (via tunneling through the potential barriers separating the minima) is to establish a certain phase shift Δφ ≡ qa between the adjacent quasi-localized wavefunctions uₙ(x − x_j) and uₙ(x − x_{j+1}).


Fig. 2.27. The tight-binding approximation (schematically).

To describe this effect quantitatively, let us first return to the problem of two coupled wells considered in Sec. 6, and recast the result (180), with the restored eigenstate index n, as

Ψₙ(x, t) = [a_R(t) ψ_R(x) + a_L(t) ψ_L(x)] exp{−iEₙt/ħ}, (2.199)

where the probability amplitudes a_R and a_L oscillate sinusoidally in time:

a_R(t) = cos(δₙt/ħ),    a_L(t) = i sin(δₙt/ħ). (2.200)

This evolution satisfies the following system of two equations whose structure is similar to Eq. (1.61a):

ia R   n a L ,

ia L   n a R .

(2.201)

Eq. (199) may be readily generalized to the case of many similar coupled wells:

Ψₙ(x, t) = [Σ_j a_j(t) uₙ(x − x_j)] exp{−iEₙt/ħ}, (2.202)

where Eₙ are the eigenenergies and uₙ the eigenfunctions of each well. In the tight-binding limit, only the adjacent wells are coupled, so instead of Eq. (201) we should write an infinite system of similar equations

iħ da_j/dt = −δₙ a_{j−1} − δₙ a_{j+1}, (2.203)

for each well number j, where the parameters δₙ describe the coupling between two adjacent potential wells. Repeating the calculation outlined at the end of the last section for our new situation, for a smooth potential we may get an expression essentially similar to the last form of Eq. (188):

Tight-binding limit: coupling energy

δₙ = (ħ²/m) uₙ(x₀) (duₙ/dx)(a − x₀), (2.204)


where x₀ is the distance between the well bottom and the middle of the potential barrier on the right of it – see Fig. 27. The only substantial new feature of this expression in comparison with Eq. (188) is that the sign of δₙ alternates with the level number n: δ₁ > 0, δ₂ < 0, δ₃ > 0, etc. Indeed, the number of zeros (and hence "wiggles") of the eigenfunctions uₙ(x) of any potential well increases with n – see, e.g., Fig. 1.8,57 so the difference of the exponential tails of the functions, sneaking under the left and right barriers limiting the well, also alternates with n. The infinite system of ordinary differential equations (203) enables solutions of many important problems (such as the spread of the wavefunction that was initially localized in one well, etc.), but our task right now is just to find its stationary states, i.e. the solutions proportional to exp{−i(εₙ/ħ)t}, where εₙ is a still unknown, q-dependent addition to the background energy Eₙ of the nth energy level. To satisfy the Bloch theorem (193) as well, such a solution should have the following form:

a_j(t) = a exp{iqx_j − i(εₙ/ħ)t + const}. (2.205)

Plugging this solution into Eq. (203) and canceling the common exponent, we get





Tight-binding limit: energy bands

E = Eₙ + εₙ = Eₙ − δₙ(exp{iqa} + exp{−iqa}) = Eₙ − 2δₙ cos qa, (2.206)

so in this approximation, the energy band width ΔEₙ (see Fig. 26b) equals 4|δₙ|. The relation (206), whose validity is restricted to εₙ

k₀ > 0.

(ii) Calculate ⟨p⟩ for the case when the function |a_k|² is symmetric with respect to some value k₀.

2.4. Calculate the function a_k defined by Eq. (20), for the wave packet with a rectangular spatial envelope:

ψ(x, 0) = { C exp{ik₀x}, for −a/2 ≤ x ≤ +a/2; 0, otherwise. }

Analyze the result in the limit k₀a → ∞.

2.5. Prove Eq. (49) for the 1D propagator of a free quantum particle, starting from Eq. (48).

2.6. Express the 1D propagator defined by Eq. (44) via the eigenfunctions and eigenenergies of a particle moving in an arbitrary stationary potential U(x).

2.7. Calculate the change of a 1D particle's wavefunction, resulting from a short pulse of an external classical force that may be approximated by a delta function: F(t) = Pδ(t).

2.8. Calculate the transparency T of the rectangular potential barrier (68),

U(x) = { 0, for x < −d/2; U₀, for −d/2 < x < +d/2; 0, for d/2 < x, }

for a particle of energy E > U₀. Analyze and interpret the result, taking into account that U₀ may be either positive or negative. (In the latter case, we are speaking about the particle's passage over a rectangular potential well of a finite depth |U₀|.)

2.9. Prove Eq. (117) for the case T_WKB << 1.

2.11. Use the WKB approximation to express the expectation value of the kinetic energy of a 1D particle confined in a soft potential well, in its nth stationary state, via the derivative dEₙ/dn, for n >> 1.

99 See, e.g., MA Eq. (15.1).


2.12. Use the WKB approximation to calculate the transparency T of the following triangular potential barrier:

U(x) = { 0, for x < 0; U₀ − Fx, for x > 0, }

with F, U₀ > 0, as a function of the incident particle's energy E.
Hint: Be careful treating the sharp potential step at x = 0.

2.13. Prove that Eq. (2.67) of the lecture notes is valid even if the potential U(x) changes, sufficiently slowly, on both sides of the potential step, provided that U(x) < E everywhere.

2.14.* Prove that the symmetry of the 1D scattering matrix S describing an arbitrary time-independent scatterer allows its representation in the form (127).

2.15. Prove the universal relations between elements of the 1D transfer matrix T of a stationary (but otherwise arbitrary) scatterer, mentioned in Sec. 5.

2.16.* A Δk-narrow wave packet is incident on a finite-length 1D scatterer. Obtain a general expression for the time of its delay caused by the scatterer, and evaluate the time for the case of a very short but high potential barrier.

2.17. A 1D particle had been localized in a very narrow and deep potential well, with the "weight" ∫U(x)dx equal to −W, where W > 0. Then (say, at t = 0) the well's bottom is suddenly lifted up, so that the particle becomes completely free. Calculate the probability density, w(k), to find the particle in a state with a certain wave number k at t > 0, and the total final energy of the system.

2.18. Calculate the lifetime of the metastable localized state of a 1D particle in the potential

U(x) = −Wδ(x) − Fx, with W > 0,

in the WKB approximation. Formulate the condition of validity of the result.

2.19. Calculate the energy levels and the corresponding eigenfunctions of a 1D particle placed into a flat-bottom potential well of width 2a, with infinitely-high hard walls, and a narrow potential barrier Wδ(x) in the middle – see the figure on the right. Discuss the particle's dynamics in the limit when W is very large but still finite.

2.20.* Consider a symmetric system of two potential wells of the type shown in Fig. 21, but with U(0) = U(∞) = 0 – see the figure on the right. Derive a general expression for the well interaction force due to their sharing a quantum particle of mass m, and determine its sign for the cases when the particle is in:

U ( x) W ( x)

a

0

a

x

U x  0

x

(i) a symmetric localized eigenstate: S(–x) = S(x), and (ii) an antisymmetric localized eigenstate: A(–x) = –A(x).


Use a different approach to verify your conclusions for the particular case of delta-functional wells.

2.21. Derive and analyze the characteristic equation for localized eigenstates of a 1D particle in a rectangular potential well of a finite depth (see the figure on the right):

U(x) = { −U0, for |x| < a/2; 0, otherwise, } with U0 > 0.

[Figure: the well U(x) = −U0 for −a/2 < x < a/2.]

In particular, calculate the number of localized states as a function of the well's width a, and explore the limit U0 → 0.

2.28. A 1D particle is placed into a semi-infinite periodic system of delta-functional barriers at x > 0, with a sharp step to a flat potential plateau at x < 0:

U(x) = { Σj=1..∞ Wδ(x − ja), with W > 0, for x > 0; U0 > 0, for x < 0. }

Prove that the system can have a set of the so-called Tamm states localized near the "surface" x = 0, and calculate their energies in the limit when U0 is very large but finite. (Quantify this condition.)100

2.29. Calculate the transfer matrix of the rectangular potential barrier specified by Eq. (68), for particle energies both below and above U0.

2.30. Use the results of the previous problem to calculate the transfer matrix of one period of the periodic Kronig-Penney potential shown in Fig. 31b.

2.31. Using the results of the previous problem, derive the characteristic equations for a particle's motion in the periodic Kronig-Penney potential, for both E < U0 and E > U0. Try to bring the equations to a form similar to that obtained in Sec. 7 for the delta-functional barriers – see Eq. (198). Use the equations to formulate the conditions of applicability of the tight-binding and weak-potential approximations, in terms of the system's parameters and the particle's energy E.

2.32. For the Kronig-Penney potential, use the tight-binding approximation to calculate the widths of the allowed energy bands. Compare the results with those of the previous problem (in the corresponding limit).

2.33. For the same Kronig-Penney potential, use the weak-potential limit formulas to calculate the energy gap widths. Again, compare the results with those of Problem 31, in the corresponding limit.

2.34. 1D periodic chains of atoms may exhibit what is called the Peierls instability, leading to the Peierls transition to a phase in which atoms are slightly displaced, from the exact periodicity, by equal but sign-alternated shifts δxj = (−1)^j δx, with δx > 0.
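Problems 2.29-2.31 lend themselves to a numerical cross-check. The sketch below (ħ = m = 1; the function names and parameter values are mine, not the text's) builds the transfer matrix of a rectangular barrier by matching plane waves at its edges, and verifies both the universal relations of Problem 2.15 and the standard sub-barrier transparency formula:

```python
import numpy as np

def barrier_transfer_matrix(E, U0, d):
    """Transfer matrix of a rectangular barrier of height U0, width d (hbar = m = 1)."""
    k = np.sqrt(2 * E + 0j)             # wave number outside the barrier
    q = np.sqrt(2 * (E - U0) + 0j)      # wave number (possibly imaginary) inside
    def M(kk, x):                       # [psi; psi'] of the basis exp(+-i*kk*x) at point x
        return np.array([[np.exp(1j * kk * x),           np.exp(-1j * kk * x)],
                         [1j * kk * np.exp(1j * kk * x), -1j * kk * np.exp(-1j * kk * x)]])
    # Match psi and psi' at x = 0 and x = d; T maps amplitudes at x < 0 to x > d
    return np.linalg.solve(M(k, d), M(q, d) @ np.linalg.solve(M(q, 0), M(k, 0)))

E, U0, d = 0.5, 1.0, 2.0                # here k = kappa = 1, kappa*d = 2
T = barrier_transfer_matrix(E, U0, d)
transp = 1 / abs(T[1, 1])**2            # transparency: t = 1/T22 for left incidence

assert abs(np.linalg.det(T) - 1) < 1e-9          # det T = 1
assert abs(T[1, 1] - T[0, 0].conj()) < 1e-9      # T22 = T11* (real k outside)
assert abs(transp - 1 / np.cosh(2.0)**2) < 1e-9  # matches the E < U0 formula
```

Multiplying such matrices over one period and using the standard consequence of det T = 1 (Bloch eigenvalues e^{±iqa}, so cos qa = ½ Tr T) gives the Kronig-Penney dispersion relation of Problems 2.30-2.31.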

2.43. Initially, a 1D harmonic oscillator was in its ground state. At a certain moment of time, its spring constant κ is abruptly increased so that its frequency ω0 = (κ/m)1/2 is increased by a factor of α, and then is kept constant at the new value. Calculate the probability that after the change, the oscillator is still in its ground state.

2.44. A 1D particle is placed into the following potential well:

U(x) = { +∞, for x ≤ 0; mω0²x²/2, for x > 0. }

(i) Find its eigenfunctions and eigenenergies.
(ii) This system had been let to relax into its ground state, and then the potential wall at x < 0 was rapidly removed so that the system was instantly turned into the usual harmonic oscillator (with the same m and ω0). Find the probability for the oscillator to remain in its ground state.

2.45. Prove the following formula for the propagator of the 1D harmonic oscillator:

G(x, t; x0, t0) = ( mω0 / 2πiħ sin[ω0(t − t0)] )^(1/2) exp{ imω0 [(x² + x0²) cos[ω0(t − t0)] − 2x x0] / 2ħ sin[ω0(t − t0)] }.

Discuss the relation between this formula and the propagator of a free 1D particle.

2.46. In the context of the Sturm oscillation theorem mentioned in Sec. 9, prove that the number of eigenfunction's zeros of a particle confined in an arbitrary but finite potential well always increases with the corresponding eigenenergy.
Hint: You may like to use the suitably modified Eq. (186).

2.47.* Use the WKB approximation to calculate the lifetime of the metastable ground state of a 1D particle of mass m in the "pocket" of the potential profile

U(x) = (mω0²/2)(x² − αx³).

Contemplate the significance of this problem.
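For Problem 2.43, the "sudden" change leaves the wavefunction intact, so the survival probability is just the squared overlap of the old and new ground states. A quick numerical sketch (ħ = m = 1; the closed-form value asserted at the end is the standard Gaussian-overlap result, quoted here only for comparison):

```python
import numpy as np

def ground_state(omega, x):
    """Normalized harmonic-oscillator ground state (hbar = m = 1)."""
    return (omega / np.pi)**0.25 * np.exp(-omega * x**2 / 2)

alpha = 3.0                              # frequency ratio: omega_new = alpha * omega_old
x = np.linspace(-20, 20, 200_001)
dx = x[1] - x[0]
overlap = np.sum(ground_state(1.0, x) * ground_state(alpha, x)) * dx
W0 = overlap**2                          # probability to stay in the ground state

assert abs(W0 - 2 * np.sqrt(alpha) / (1 + alpha)) < 1e-8
```

The sum converges essentially to machine precision here because the integrand is smooth and decays rapidly, so a plain Riemann sum is adequate.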


Chapter 3. Higher Dimensionality Effects

The descriptions of the basic quantum-mechanical effects, given in the previous chapter, may be extended to higher dimensions in an obvious way. This is why this chapter is focused on the phenomena (such as the AB effect and the Landau levels) that cannot take place in one dimension due to topological reasons, and also on the key 3D problems (such as the Born approximation in the scattering theory, and the axially- and spherically-symmetric systems) that are important for numerous applications.

3.1. Quantum interference and the AB effect

In the past two chapters, we have already discussed some effects of the de Broglie wave interference. For example, standing waves inside a potential well, or even on the top of a potential barrier, may be considered as a result of interference of incident and reflected waves. However, there are some remarkable new effects made possible by the spatial separation of such waves, and such separation requires a higher (either 2D or 3D) dimensionality. A good example of wave separation is provided by the Young-type experiment (Fig. 1), in which particles, emitted by the same source, are passed through two small holes (or narrow slits) in an otherwise opaque partition.

Fig. 3.1. The scheme of the "two-slit" (Young-type) interference experiment. [Diagram: a particle source, a partition with two slits (points 1 and 2, path lengths l1', l1'', l2', l2''), and a particle detector measuring the counting rate W ∝ w(r).]

According to Eq. (1.22), if particle interactions are negligible (which is always true if the emission rate is sufficiently low), the average rate of particle counting by the detector is proportional to the probability density w(r, t) = Ψ(r, t)Ψ*(r, t) to find a single particle at the detector's location r, where Ψ(r, t) is the solution of the single-particle Schrödinger equation (1.25) for the system. Let us calculate this rate for the case when the incident particles may be represented by virtually-monochromatic waves of energy E (e.g., very long wave packets), so their wavefunction may be taken in the form given by Eqs. (1.57) and (1.62): Ψ(r, t) = ψ(r) exp{−iEt/ħ}. In this case, in the free-space parts of the system, where U(r) = 0, ψ(r) satisfies the stationary Schrödinger equation (1.78a):

−(ħ²/2m)∇²ψ = Eψ.    (3.1a)

With the standard definition k ≡ (2mE)^(1/2)/ħ, it may be rewritten as the 3D Helmholtz equation:

∇²ψ + k²ψ = 0.    (3.1b)

© K. Likharev


The opaque parts of the partition may be well described as classically forbidden regions, so if their size scale a is much larger than the wavefunction penetration depth δ described by Eq. (2.59), we may use on their surface S the same boundary conditions as for the well's walls of infinite height:

ψ|S = 0.    (3.2)

Eqs. (1) and (2) describe the standard boundary problem of the theory of propagation of scalar waves of any nature. For an arbitrary geometry, this problem does not have a simple analytical solution. However, for a conceptual discussion of wave interference, we may use certain natural assumptions that will allow us to find its particular, approximate solution.

First, let us discuss the wave emission, into free space, by a small-size, isotropic source located at the origin of our reference frame. Naturally, the emitted wave should be spherically symmetric: ψ(r) = ψ(r). The well-known expression for the Laplace operator in spherical coordinates1 shows that in this case, Eq. (1) is reduced to the following ordinary differential equation:

(1/r²) d/dr (r² dψ/dr) + k²ψ = 0.    (3.3)

Let us introduce a new function, f(r) ≡ rψ(r). Plugging the reciprocal relation ψ = f/r into Eq. (3), we see that for the function f, it gives the standard 1D wave equation:

d²f/dr² + k²f = 0.    (3.4)

As was discussed in Sec. 2.2, for a fixed k, the general solution of Eq. (4) may be represented in the form of two traveling waves:

f = f₊ e^(ikr) + f₋ e^(−ikr),    (3.5)

so the full solution of Eq. (3) is

ψ(r) = (f₊/r) e^(ikr) + (f₋/r) e^(−ikr), i.e. Ψ(r, t) = (f₊/r) e^(i(kr−ωt)) + (f₋/r) e^(−i(kr+ωt)), with ω ≡ E/ħ = ħk²/2m.    (3.6)

If the source is located at point r′ ≠ 0, the obvious generalization of Eq. (6) is

Ψ(r, t) = (f₊/R) e^(i(kR−ωt)) + (f₋/R) e^(−i(kR+ωt)), with R ≡ |R|, where R ≡ r − r′.    (3.7)

The first term of this solution describes a spherically-symmetric wave propagating from the source outward, while the second one represents a wave converging onto the source point r′ from large distances. Though the latter solution is possible in some very special circumstances (say, when the outgoing wave is reflected back from a perfectly spherical shell), for our current problem, only the outgoing waves are relevant, so we may keep only the first term (proportional to f₊) in Eq. (7). Note that the factor R in the denominator (which was absent in the 1D geometry) has a simple physical sense: it provides the independence of the full probability current I = 4πR²j(R), with j(R) ∝ k|ψ|² ∝ 1/R², of the distance R between the observation point and the source.

1 See, e.g., MA Eq. (10.9) with ∂/∂θ = ∂/∂φ = 0.
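The outgoing-wave solution can also be verified numerically. A small sketch (ħ = m = 1; the grid and tolerance are purely illustrative choices of mine) checks by finite differences that ψ(r) = e^(ikr)/r obeys the radial equation (3.3) away from the origin:

```python
import numpy as np

# Check that psi(r) = exp(i k r)/r satisfies (1/r^2) d/dr (r^2 dpsi/dr) + k^2 psi = 0
k = 2.0
r = np.linspace(1.0, 10.0, 9001)
h = r[1] - r[0]
psi = np.exp(1j * k * r) / r
dpsi = np.gradient(psi, h)                       # second-order central differences
residual = np.gradient(r**2 * dpsi, h) / r**2 + k**2 * psi

# Ignore the grid edges, where one-sided differences are less accurate
assert np.max(np.abs(residual[5:-5])) < 1e-3
```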


Now let us assume that the partition's geometry is not too complicated – for example, it is either planar as shown in Fig. 1, or nearly-planar – and consider the region of the particle detector location far behind the partition (at z >> 1/k), at a relatively small angle to the normal.

5.5. A spin-½ particle with a gyromagnetic ratio γ > 0 had been placed into a time-independent magnetic field B0 = B0 nz and let relax into the lowest-energy state. At t = 0, an additional field B1(t) was turned on; its vector has a constant magnitude but rotates within the [x, y]-plane with an angular velocity ω. Calculate the expectation values of all Cartesian components of the spin at t ≥ 0, and discuss the representation of its dynamics on the Bloch sphere.

5.6.* Analyze statistics of the spacing S ≡ E₊ − E₋ between energy levels of a two-level system, assuming that all elements Hjj' of its Hamiltonian matrix (2) are independent random numbers, with equal and constant probability densities within the energy interval of interest. Compare the result with that for a purely diagonal Hamiltonian matrix, with a similar probability distribution of its random diagonal elements.

5.7. For a periodic motion of a single particle in a confining potential U(r), the virial theorem of non-relativistic classical mechanics53 is reduced to the following equality:

52 For the interested reader, I can recommend either Sec. 17.7 in E. Merzbacher, Quantum Mechanics, 3rd ed., Wiley, 1998, or Sec. 3.10 in J. Sakurai, Modern Quantum Mechanics, Addison-Wesley, 1994.
53 See, e.g., CM Problem 1.12.


1 r  U , 2 where T is the particle’s kinetic energy, and the top bar means averaging over the time period of motion. Prove the following quantum-mechanical version of the theorem for an arbitrary stationary state, in the absence of spin effects: 1 T  r  U , 2 where the angular brackets mean the expectation values of the observables. Hint: Mimicking the proof of the classical virial theorem, consider the time evolution of the following operator: Gˆ  rˆ  pˆ . T 

5.8. A non-relativistic particle moves in the spherically symmetric potential U(r) = C ln(r/R). Prove that:
(i) ⟨v²⟩ is the same in each eigenstate, and
(ii) the spacing between the energy levels is independent of the particle's mass.

5.9. Calculate, in the WKB approximation, the transparency T of the following saddle-shaped potential barrier:

U(x, y) = U0 (1 + xy/a²),

where U0 > 0 and a are real constants, for tunneling of a 2D particle with energy E < U0.

5.10. In the WKB approximation, calculate the so-called Gamow factor54 for the alpha decay of atomic nuclei, i.e. the exponential factor in the transparency of the potential barrier resulting from the following simple model for the alpha-particle's potential energy as a function of its distance from the nuclear center:

U(r) = { U0 < 0, for r < R; ZZ'e²/4πε0 r, for R < r, }

where Ze = 2e > 0 is the charge of the particle, Z'e > 0 is that of the nucleus after the decay, and R is the nucleus' radius.

5.11. Use the WKB approximation to calculate the average time of ionization of a hydrogen atom, initially in its ground state, made metastable by the application of an additional weak, uniform, time-independent electric field E. Formulate the conditions of validity of your result.

5.12. For a 1D harmonic oscillator with mass m and frequency ω0, calculate:
(i) all matrix elements ⟨n|x̂³|n'⟩, and
(ii) the diagonal matrix elements ⟨n|x̂⁴|n⟩,
where n and n' are arbitrary Fock states.

54 Named after G. Gamow, who made this calculation as early as 1928.


5.13. Calculate the sum (over all n > 0) of the so-called oscillator strengths,

fn ≡ (2m/ħ²) (En − E0) |⟨n|x̂|0⟩|²,

(i) for a 1D harmonic oscillator, and
(ii) for a 1D particle confined in an arbitrary stationary potential.

5.14. Prove the so-called Bethe sum rule,

Σn' (En' − En) |⟨n|e^(ikx̂)|n'⟩|² = ħ²k²/2m

(where k is any c-number), valid for a 1D particle moving in an arbitrary time-independent potential U(x), and discuss its relation with the Thomas-Reiche-Kuhn sum rule whose derivation was the subject of the previous problem.
Hint: Calculate the expectation value, in a stationary state n, of the following double commutator,

D̂ ≡ [[Ĥ, e^(ikx̂)], e^(−ikx̂)],

in two ways: first, just by spelling out both commutators, and, second, by using the commutation relations between operators p̂x and e^(ikx̂), and compare the results.

5.15. Spell out the commutator [â, exp{λâ†}], where â† and â are the creation-annihilation operators (5.65), and λ is a c-number.

5.16. Given Eq. (116), prove Eq. (117) by using the hint given in the accompanying footnote.

5.17. Use Eqs. (116)-(117) to simplify the following operators:
(i) exp{iax̂} p̂x exp{−iax̂}, and
(ii) exp{iap̂x} x̂ exp{−iap̂x},
where a is a c-number.

5.18.* Derive the commutation relation between the number operator (5.73) and a reasonably derived quantum-mechanical operator describing the harmonic oscillator's phase. Obtain the uncertainty relation for the corresponding observables, and explore its limit at ⟨N⟩ >> 1.

5.19. At t = 0, a 1D harmonic oscillator was in a state described by the ket-vector

|α⟩ = (1/√2) (|31⟩ + |32⟩),

where |n⟩ are the ket-vectors of the stationary (Fock) states of the oscillator. Calculate:
(i) the expectation value of the oscillator's energy, and
(ii) the time evolution of the expectation values of its coordinate and momentum.
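The oscillator-strength sum of Problem 5.13(i) can be checked with truncated ladder-operator matrices. A sketch (ħ = m = ω0 = 1; the truncation size N is my choice):

```python
import numpy as np

# Thomas-Reiche-Kuhn check for a 1D harmonic oscillator (hbar = m = omega_0 = 1)
N = 60
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator, <n|a|n+1> = sqrt(n+1)
x = (a + a.T) / np.sqrt(2)                     # x = (a + a†)/sqrt(2)
E = np.arange(N) + 0.5                         # E_n = n + 1/2

f = 2 * (E - E[0]) * np.abs(x[:, 0])**2        # f_n = 2 (E_n - E_0) |<n|x|0>|^2
assert abs(f.sum() - 1.0) < 1e-12              # sum of oscillator strengths = 1
assert abs(f[1] - 1.0) < 1e-12                 # only n = 1 contributes here
```

For the oscillator, the single n = 1 term saturates the sum rule, since x̂ couples |0⟩ only to |1⟩.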


5.20.* Re-derive the London dispersion force's potential of the interaction of two isotropic 3D harmonic oscillators (already calculated in Problem 3.16), using the language of mutually-induced polarization.

5.21. An external force pulse F(t), of a finite time duration T, is exerted on a 1D harmonic oscillator, initially in its ground state. Use the Heisenberg-picture equations of motion to calculate the expectation value of the oscillator's energy at the end of the pulse.

5.22. Use Eqs. (144)-(145) to calculate the uncertainties δx and δp for a harmonic oscillator in its squeezed ground state, and in particular, to prove Eqs. (143) for the case θ = 0.

5.23. Calculate the energy of a harmonic oscillator in the squeezed ground state ζ.

5.24.* Prove that the squeezed ground state described by Eqs. (142) and (144)-(145) may be sustained by a sinusoidal modulation of a harmonic oscillator's parameter, and calculate the squeezing factor r as a function of the parameter modulation depth, assuming that the depth is small and the oscillator's damping is negligible.

5.25. Use Eqs. (148) to prove that at negligible spin effects, the operators L̂j and L̂² commute with the Hamiltonian of a particle placed in any central potential field.

5.26. Use Eqs. (149)-(150) and (153) to prove Eqs. (155).

5.27. Derive Eq. (164) by using any of the prior formulas.

5.28. Derive the expression L² = ħ²l(l + 1) from basic statistics, assuming that all (2l + 1) values Lz = ħm of a system with a fixed integer number l have equal probability, and that the system is isotropic. Explain why this statistical picture cannot be used for proof of Eq. (5.164).

5.29. In the basis of common eigenstates of the operators L̂z and L̂², described by kets |l, m⟩:
(i) calculate the matrix elements ⟨l, m₁|L̂x|l, m₂⟩ and ⟨l, m₁|L̂x²|l, m₂⟩,
(ii) spell out your results for diagonal matrix elements (with m₁ = m₂) and their y-axis counterparts, and
(iii) calculate the diagonal matrix elements ⟨l, m|L̂x L̂y|l, m⟩ and ⟨l, m|L̂y L̂x|l, m⟩.

5.30. For the state described by the common eigenket |l, m⟩ of the operators L̂z and L̂² in a reference frame {x, y, z}, calculate the expectation values ⟨Lz'⟩ and ⟨Lz'²⟩ in the reference frame whose z'-axis forms angle θ with the z-axis.

5.31. Write down the matrices of the following angular momentum operators: L̂x, L̂y, L̂z, and L̂±, in the z-basis of the {l, m} states with l = 1.
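The matrices requested in Problem 5.31 can be generated and sanity-checked programmatically. A sketch (ħ = 1; the basis ordering m = +1, 0, −1 is my convention), using the standard ladder-operator matrix elements:

```python
import numpy as np

# l = 1 angular momentum matrices from <l,m+1|L_+|l,m> = sqrt(l(l+1) - m(m+1))
l = 1
m = np.arange(l, -l - 1, -1)                  # basis order: m = +1, 0, -1
Lz = np.diag(m).astype(complex)
Lp = np.zeros((3, 3), dtype=complex)          # raising operator L_+
for i in range(1, 3):                         # couples |m> -> |m+1>
    Lp[i - 1, i] = np.sqrt(l * (l + 1) - m[i] * (m[i] + 1))
Lm = Lp.conj().T                              # lowering operator L_-
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / (2j)

L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)    # [Lx, Ly] = i Lz
assert np.allclose(L2, l * (l + 1) * np.eye(3))   # L^2 = l(l+1) = 2
```

The same construction, with s in place of l, gives the spin-1 matrices requested in Problem 5.52.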

5.32. Calculate the angular factor of the orbital wavefunction of a particle with a definite value of L², equal to 6ħ², and the largest possible value of Lx. What is this value?

5.33. For the state with the wavefunction ψ = Cxy e^(−λr), with a real positive λ, calculate:
(i) the expectation values of the observables Lx, Ly, Lz, and L², and
(ii) the normalization constant C.

5.34. An angular state of a spinless particle is described by the following ket-vector:

|α⟩ = (1/√2) (|l = 3, m = 0⟩ + |l = 3, m = 1⟩).

Calculate the expectation values of the x- and y-components of its angular momentum. Is the result sensitive to a possible phase shift between the component eigenkets?

5.35. A particle is in a quantum state α with the orbital wavefunction proportional to the spherical harmonic Y₁¹(θ, φ). Find the angular dependence of the wavefunctions corresponding to the following ket-vectors:
(i) L̂x|α⟩, (ii) L̂y|α⟩, (iii) L̂z|α⟩, (iv) L̂₊L̂₋|α⟩, and (v) L̂²|α⟩.

5.36. A charged, spinless 2D particle of mass m is trapped in the potential well U(x, y) = mω0²(x² + y²)/2. Calculate its energy spectrum in the presence of a uniform magnetic field B normal to the [x, y]-plane of the particle's motion.

5.37. Solve the previous problem for a spinless 3D particle, placed (in addition to a uniform magnetic field B) into a spherically-symmetric potential well U(r) = mω0²r²/2.

5.38. Calculate the spectrum of rotational energies of an axially-symmetric rigid macroscopic body.

5.39. Simplify the double commutator [[r̂j, L̂²], r̂j'].

5.40. Prove the following commutation relation:

[L̂², [L̂², r̂j]] = 2ħ² (r̂j L̂² + L̂² r̂j).

5.41. Use the commutation relation proved in the previous problem and Eq. (148) to prove the orbital electric-dipole transition selection rules mentioned in Sec. 6.









5.42. Express the commutators listed in Eq. (179), [Ĵ², L̂z] and [Ĵ², Ŝz], via L̂j and Ŝj.

5.43. Find the operator T̂φ describing a quantum state's rotation by angle φ about a certain axis, by using the similarity of this operation with the shift of a Cartesian coordinate, discussed in Sec. 5. Then use this operator to calculate the probabilities of measurements of spin-½ components of particles


with z-polarized spin, by a Stern-Gerlach instrument turned by angle φ within the [z, x] plane, where y is the axis of particle propagation – see Fig. 4.1.55

5.44. The rotation operator T̂φ analyzed in the previous problem and the linear translation operator T̂X discussed in Sec. 5 have a similar structure:

T̂λ = exp{−iĈλ/ħ},

where λ is a real c-number, characterizing the shift, while Ĉ is a Hermitian operator that does not explicitly depend on time.
(i) Prove that such operators are unitary.
(ii) Prove that if the shift by λ, induced by the operator T̂λ, leaves the Hamiltonian of some system unchanged for any λ, then C is a constant of motion for any initial state of the system.
(iii) Discuss what the last conclusion means for the particular operators T̂X and T̂φ.

5.45. A particle with spin s is in a state with definite quantum numbers l and j. Prove that the observable L·S also has a definite value and calculate it.

5.46. For a spin-½ particle in a state with definite quantum numbers l, ml, and ms, calculate the expectation value of the observable J², and the probabilities of all its possible values. Interpret your results in terms of the Clebsch-Gordan coefficients (190).

5.47. Derive general recurrence relations for the Clebsch-Gordan coefficients for a particle with spin s.
Hint: By using the similarity of the commutation relations discussed in Sec. 7, write the relations similar to Eqs. (164) for other components of the angular momentum, and then apply them to Eq. (170).

5.48. Use the recurrence relations derived in the previous problem to prove Eqs. (190) for the spin-½ Clebsch-Gordan coefficients.

5.49. A spin-½ particle is in a state with definite values of L², J², and Jz. Find all possible values of the observables S², Sz, and Lz, the probability of each listed value, and the expectation value for each of these observables.

5.50. Re-solve the Landau-level problem discussed in Sec. 3.2, now for a spin-½ particle. Discuss the result for the particular case of an electron.

5.51. In the Heisenberg picture of quantum dynamics, find an explicit relation between the operators of velocity v̂ ≡ dr̂/dt and acceleration â ≡ dv̂/dt of a spin-½ particle with an electric charge q, moving in an arbitrary external electromagnetic field. Compare the result with the corresponding classical expression.

55 Note that the last task is just a particular case of Problem 4.18 (see also Problem 1).
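The measurement part of Problem 5.43 can be cross-checked numerically. A sketch (ħ = 1) rotates the spin-z "up" state by angle φ about the y-axis (the relevant axis for a Stern-Gerlach instrument turned within the [z, x] plane) and reads off the probabilities; the cos²/sin² values asserted at the end are the standard result, quoted for comparison:

```python
import numpy as np

# T_phi = exp(-i phi S_y) applied to the z-polarized "up" state (hbar = 1)
Sy = np.array([[0, -1j], [1j, 0]]) / 2        # S_y = sigma_y / 2
phi = 0.7

# Matrix exponential via eigendecomposition of the Hermitian S_y
vals, vecs = np.linalg.eigh(Sy)
T_phi = vecs @ np.diag(np.exp(-1j * phi * vals)) @ vecs.conj().T

psi = T_phi @ np.array([1, 0], dtype=complex) # initially z-polarized "up"
W_up, W_down = np.abs(psi)**2                 # measurement probabilities

assert np.isclose(W_up, np.cos(phi / 2)**2)
assert np.isclose(W_down, np.sin(phi / 2)**2)
```

The half-angle dependence is the signature of the spinor (double-valued) character of spin-½ rotations.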


Hint: For the orbital motion's description, you may use Eq. (3.26).

5.52. One of the byproducts of the solution of Problem 47 was the following relation for the spin operators (valid for any spin s):

⟨ms + 1|Ŝ₊|ms⟩ = ħ [(s − ms)(s + ms + 1)]^(1/2).

Use this result to spell out the matrices Sx, Sy, Sz, and S² of a particle with s = 1, in the z-basis – defined as the basis in which the matrix Sz is diagonal.

5.53.* For a particle with an arbitrary spin s, find the ranges of the quantum numbers mj and j that are necessary to describe, in the coupled-representation basis:
(i) all states with a definite quantum number l, and
(ii) a state with definite values of not only l, but also ml and ms.
Give an interpretation of your results in terms of the classical vector diagram – see, e.g., Fig. 13.

5.54. For a particle with spin s, find the range of the quantum numbers j necessary to describe, in the coupled-representation basis, all states with definite quantum numbers l and ml.

5.55. A particle of mass m, with electric charge q and spin s, free to move along a plane circle of a radius R, is placed into a constant uniform magnetic field B directed normally to the circle's plane. Calculate the energy spectrum of the system. Explore and interpret the particular form the result takes when the particle is an electron with the g-factor ge ≈ 2.


Chapter 6. Perturbative Approaches

This chapter discusses several perturbative approaches to problems of quantum mechanics, and their simplest but important applications, starting with the fine structure of atomic energy levels and the effects of external dc and ac electric and magnetic fields on these levels. It continues with a discussion of quantum transitions to the continuous spectrum and the Golden Rule of quantum mechanics, which naturally brings us to the issue of open quantum systems – to be discussed in the next chapter.

6.1. Time-independent perturbations

Unfortunately, only a few problems of quantum mechanics may be solved exactly in an analytical form. Actually, in the previous chapters we have solved a substantial part of such problems for a single particle, while for multiparticle systems, the exactly solvable cases are even rarer. However, most practical problems of physics feature a certain small parameter, and this smallness may be exploited by various approximate analytical methods giving asymptotically correct results – i.e. the results whose error tends to zero at the reduction of the small parameter(s). Earlier in the course, we explored one of them, the WKB approximation, which is adequate for a particle moving through a soft potential profile. In this chapter, we will discuss other techniques that are more suitable for other cases. The historic name for these techniques is the perturbation theory, but it is fairer to speak about perturbative approaches because they are substantially different for different situations.

The simplest version of the perturbation theory addresses the problem of stationary states and energy levels of systems described by time-independent Hamiltonians of the type

Ĥ = Ĥ⁽⁰⁾ + Ĥ⁽¹⁾,    (6.1)

where the operator Ĥ⁽¹⁾, describing the system's "perturbation", is relatively small – in the sense that its addition to the unperturbed operator Ĥ⁽⁰⁾ results in a relatively small change of the eigenenergies En and the corresponding eigenstates of the system. A typical problem of this type is the 1D weakly anharmonic oscillator (Fig. 1), described by the Hamiltonian (1) with

Ĥ⁽⁰⁾ = p̂²/2m + mω0²x̂²/2,  Ĥ⁽¹⁾ = αx̂³ + βx̂⁴ + …,    (6.2)

with sufficiently small coefficients α, β, ….

Fig. 6.1. The simplest application of the perturbation theory: a weakly anharmonic 1D oscillator. (Dashed lines characterize the unperturbed harmonic oscillator.)


I will use this system as our first example, but let me start by describing the perturbative approach to the general time-independent Hamiltonian (1). In the bra-ket formalism, the eigenproblem (4.68) for the perturbed Hamiltonian, i.e. the stationary Schrödinger equation of the system, is

(Ĥ⁽⁰⁾ + Ĥ⁽¹⁾)|n⟩ = En|n⟩.    (6.3)

Let the eigenstates and eigenvalues of the unperturbed Hamiltonian, which satisfy the equation

Ĥ⁽⁰⁾|n⁽⁰⁾⟩ = En⁽⁰⁾|n⁽⁰⁾⟩,    (6.4)

be considered as known. In this case, the solution of problem (3) means finding, first, its perturbed eigenvalues En and, second, the coefficients ⟨n'⁽⁰⁾|n⟩ of the expansion of the perturbed state's vectors |n⟩ in the following series over the unperturbed ones, |n'⁽⁰⁾⟩:

|n⟩ = Σn' |n'⁽⁰⁾⟩⟨n'⁽⁰⁾|n⟩.    (6.5)

Let us plug Eq. (5), with the summation index n' replaced with n" (just to have a more compact notation in our forthcoming result), into both sides of Eq. (3):

Σn" ⟨n"⁽⁰⁾|n⟩ Ĥ⁽⁰⁾|n"⁽⁰⁾⟩ + Σn" ⟨n"⁽⁰⁾|n⟩ Ĥ⁽¹⁾|n"⁽⁰⁾⟩ = Σn" ⟨n"⁽⁰⁾|n⟩ En|n"⁽⁰⁾⟩,    (6.6)

and then inner-multiply all terms by an arbitrary unperturbed bra-vector ⟨n'⁽⁰⁾| of the system. Assuming that the unperturbed eigenstates are orthonormal, ⟨n'⁽⁰⁾|n"⁽⁰⁾⟩ = δn'n", and using Eq. (4) in the first term on the left-hand side, we get the following system of linear equations:

Σn" ⟨n"⁽⁰⁾|n⟩ H⁽¹⁾n'n" = ⟨n'⁽⁰⁾|n⟩ (En − En'⁽⁰⁾),    (6.7)

where the matrix elements of the perturbation are calculated, by definition, in the unperturbed brackets:

H⁽¹⁾n'n" ≡ ⟨n'⁽⁰⁾|Ĥ⁽¹⁾|n"⁽⁰⁾⟩.    (6.8)

The linear equation system (7) is still exact,1 and is frequently used for numerical calculations. (Since the matrix coefficients (8) typically decrease when n' and/or n" become sufficiently large, the sum on the left-hand side of Eq. (7) may usually be truncated, still giving an acceptable accuracy of the solution.) To get analytical results, we need to make approximations. In the simple perturbation theory we are discussing now, this is achieved by the expansion of both the eigenenergies and the expansion coefficients into the Taylor series in a certain small parameter μ of the problem:

En = En⁽⁰⁾ + En⁽¹⁾ + En⁽²⁾ + …,    (6.9)

⟨n'⁽⁰⁾|n⟩ = ⟨n'⁽⁰⁾|n⟩⁽⁰⁾ + ⟨n'⁽⁰⁾|n⟩⁽¹⁾ + ⟨n'⁽⁰⁾|n⟩⁽²⁾ + …,    (6.10)

where

En⁽ᵏ⁾ ∝ ⟨n'⁽⁰⁾|n⟩⁽ᵏ⁾ ∝ μᵏ.    (6.11)

1 Please note the similarity of Eq. (7) to Eq. (2.215) of the 1D band theory. Indeed, the latter equation is just a particular form of Eq. (7) for the 1D wave mechanics, with a specific (periodic) potential U(x) considered as the perturbation Hamiltonian. Moreover, the whole approximate treatment of the weak-potential limit in Sec. 2.7 was essentially a particular case of the perturbation theory we are discussing now (in its 1st order).


In order to explore the 1st-order approximation, which ignores all terms O(μ²) and higher, let us plug only the two first terms of the expansions (9) and (10) into the basic equation (7):

Σn" H⁽¹⁾n'n" (δn"n + ⟨n"⁽⁰⁾|n⟩⁽¹⁾) = (δn'n + ⟨n'⁽⁰⁾|n⟩⁽¹⁾)(En⁽⁰⁾ + En⁽¹⁾ − En'⁽⁰⁾).    (6.12)

Now let us open the parentheses and disregard all the remaining terms O(μ²). The result is

H⁽¹⁾n'n = δn'n En⁽¹⁾ + ⟨n'⁽⁰⁾|n⟩⁽¹⁾ (En⁽⁰⁾ − En'⁽⁰⁾).    (6.13)

This relation is valid for any choice of the indices n and n'; let us start from the case n = n', immediately getting a very simple (and practically, the most important!) result:

En⁽¹⁾ = H⁽¹⁾nn ≡ ⟨n⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩.    (6.14)

For example, let us see what this result gives for the two first perturbation terms in the weakly anharmonic oscillator (2):

En⁽¹⁾ = α⟨n⁽⁰⁾|x̂³|n⁽⁰⁾⟩ + β⟨n⁽⁰⁾|x̂⁴|n⁽⁰⁾⟩.    (6.15)

As the reader knows (or should know :-) from the solution of Problem 5.12, the first bracket equals zero, while the second one yields

En⁽¹⁾ = (3/4) β x0⁴ (2n² + 2n + 1).    (6.16)

Naturally, there should be some non-vanishing contribution to the energies from the (typically, larger) perturbation proportional to α, so for its calculation, we need to explore the 2nd order of the theory. However, before doing that, let us complete our discussion of its 1st order.

For n' ≠ n, Eq. (13) may be used to calculate the eigenstates rather than the eigenvalues:

⟨n'⁽⁰⁾|n⟩⁽¹⁾ = H⁽¹⁾n'n / (En⁽⁰⁾ − En'⁽⁰⁾),  for n' ≠ n.    (6.17)

This means that the eigenket's expansion (5), in the 1st order, may be represented as

|n⁽¹⁾⟩ = C|n⁽⁰⁾⟩ + Σn'≠n [H⁽¹⁾n'n / (En⁽⁰⁾ − En'⁽⁰⁾)] |n'⁽⁰⁾⟩.    (6.18)

The coefficient C ≡ ⟨n⁽⁰⁾|n⁽¹⁾⟩ cannot be found from Eq. (17); however, requiring the final state n to be normalized, we see that other terms may provide only corrections O(λ²), so in the 1st order we should take C = 1. The most important feature of Eq. (18) is its denominators: the closer the unperturbed eigenenergies of two states, the larger their mutual "interaction" due to the perturbation.

This feature also affects the 1st-order's validity condition, which may be quantified using Eq. (17): the magnitudes of the brackets it describes have to be much less than the unperturbed bracket ⟨n⁽⁰⁾|n⁽⁰⁾⟩ = 1, so all elements of the perturbation matrix have to be much less than the difference between the corresponding unperturbed energies. For the anharmonic oscillator's energy corrections (16), this requirement is reduced to E_n⁽¹⁾ << ħω₀.

[…] In the limit A >> ħ|ξ| (for example, at the exact resonance, ξ = 0, i.e. ω_nn' = ω, so E_n = E_n' + ħω), Eqs. (101)-(102) give Ω = A/ħ and (W_n)_max = 1, i.e. describe a periodic, full "repumping" of the system from one level to another and back, with a frequency proportional to the perturbation amplitude.31

[Figure: plots of W_n versus t/(2π/Ω) for several normalized ac amplitudes; curves labeled 0, 0.1, 0.3, 1, and 3.]
Fig. 6.9. The Rabi oscillations for several values of the normalized amplitude of ac perturbation.

This effect is a close analog of the quantum oscillations in two-level systems with time-independent Hamiltonians, which were discussed in Secs. 2.6 and 5.1. Indeed, let us revisit, for a moment, their discussion started at the end of Sec. 1 of this chapter, now paying more attention to the time evolution of the system under a perturbation. As was argued in that section, the most general perturbation Hamiltonian lifting the two-fold degeneracy of an energy level, in an arbitrary basis, has the matrix (28). Let us describe the system's dynamics using, again, the Schrödinger picture, representing the ket-vector of an arbitrary state of the system in the form (5.1), where ↑ and ↓ are the

30 It was derived in 1952 by Isaac Rabi, in the context of his group's pioneering experiments with the ac (practically, microwave) excitation of quantum states, using molecular beams in vacuum.
31 As Eqs. (82), (96), and (99) show, the lowest frequency in the system is ν = ω – ξ/2 + Ω, so at A → 0, ħν → E_n – E_n', while at small but non-zero A, ħν ≈ E_n – E_n' + 2A²/ħξ. This effective shift of the lowest energy level (which may be measured by another "probe" field of a different frequency) is a particular case of the ac Stark effect, which was already mentioned in Sec. 2.

time-independent states of the basis in that Eq. (28) is written (now without any obligation to associate these states with the z-basis of any spin-½). Then the Schrödinger equation (4.158) yields

$$i\hbar\begin{pmatrix}\dot a_\uparrow\\ \dot a_\downarrow\end{pmatrix}=\begin{pmatrix}H_{11}&H_{12}\\ H_{21}&H_{22}\end{pmatrix}\begin{pmatrix}a_\uparrow\\ a_\downarrow\end{pmatrix}=\begin{pmatrix}H_{11}a_\uparrow+H_{12}a_\downarrow\\ H_{21}a_\uparrow+H_{22}a_\downarrow\end{pmatrix}. \tag{6.103}$$

As we know (for example, from the discussion in Sec. 5.1), the average of the diagonal elements of the matrix gives just a common shift of the system's energy; for the purpose of the analysis, it may be absorbed into the energy reference level. Also, the Hamiltonian operator has to be Hermitian, so the off-diagonal elements of its matrix have to be complex-conjugate. With this, Eqs. (103) are reduced to the form

$$i\hbar\dot a_\uparrow=-\frac{\Delta}{2}a_\uparrow+H_{12}a_\downarrow,\qquad i\hbar\dot a_\downarrow=H_{12}^*a_\uparrow+\frac{\Delta}{2}a_\downarrow,\qquad\text{with }\Delta\equiv H_{22}-H_{11}, \tag{6.104}$$

which is absolutely similar to Eqs. (97). In particular, these equations describe the quantum oscillations of the probabilities W↑ = |a↑|² and W↓ = |a↓|² with the frequency32

$$\Omega=\left(\frac{|H_{12}|^2}{\hbar^2}+\frac{\Delta^2}{4\hbar^2}\right)^{1/2}. \tag{6.105}$$

The similarity of Eqs. (97) and (104), and hence of Eqs. (99) and (105), shows that the "usual" quantum oscillations and the Rabi oscillations have essentially the same physical nature, besides that in the latter case the external ac signal quantum ħω bridges the separated energy levels, effectively reducing their difference (E_n – E_n') to a much smaller difference –ħξ ≡ (E_n – E_n') – ħω. Also, since the Hamiltonian (28) is similar to that given by Eq. (5.2), the dynamics of such a system with two ac-coupled energy levels, within the limits (93) of the perturbation theory, is completely similar to that of a time-independent two-level system. In particular, its state may be similarly represented by a point on the Bloch sphere shown in Fig. 5.3, with its dynamics described, in the Heisenberg picture, by Eq. (5.19). This fact is very convenient for the experimental implementation of quantum information processing systems (to be discussed in more detail in Sec. 8.5), because it enables qubit manipulations in a broad variety of physical systems with well-separated energy levels, using external ac (usually either microwave or optical) sources.

Note, however, that according to Eq. (90), if a system has energy levels other than n and n', they also become occupied to some extent. Since the sum of all occupancies equals 1, this means that (W_n)_max may approach 1 only if the other excitation amplitudes are very small, and hence the state manipulation time scale T = 2π/Ω = 2πħ/A is very long. The ultimate limit in this sense is provided by the harmonic oscillator, where all energy levels are equidistant, and the probability repumping between all of them occurs at comparable rates. In particular, in this system the implementation of the full Rabi oscillations is impossible even at the exact resonance.33
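The equivalence just described is easy to verify directly, for example by exact propagation of the 2×2 system (104); in the sketch below, the values Δ = 0.6 and H₁₂ = 0.4 (in units ħ = 1) are arbitrary illustrative choices:

```python
import numpy as np

hbar = 1.0
Delta, H12 = 0.6, 0.4      # illustrative values; any Hermitian matrix works
# Matrix of Eqs. (104): diagonal elements -Delta/2, +Delta/2, coupling H12
H = np.array([[-Delta / 2, H12],
              [np.conj(H12), Delta / 2]], dtype=complex)

E, V = np.linalg.eigh(H)
def evolve(t, a0):          # a(t) = exp(-i H t / hbar) a(0)
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T @ a0

Omega = np.sqrt(abs(H12)**2 / hbar**2 + Delta**2 / (4 * hbar**2))  # Eq. (105)
a0 = np.array([1, 0], dtype=complex)   # system starts in the first basis state

for t in np.linspace(0.0, 20.0, 41):
    W2 = abs(evolve(t, a0)[1])**2      # occupancy of the second state
    predicted = (abs(H12)**2 / (hbar * Omega)**2) * np.sin(Omega * t)**2
    assert abs(W2 - predicted) < 1e-9
```

As in the Rabi case, the oscillation amplitude |H₁₂|²/(ħΩ)² reaches 1 only when the diagonal asymmetry Δ vanishes.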

32 By the way, Eq. (105) gives a natural generalization of the relations obtained for the frequency of such oscillations in Sec. 2.6, where the coupled potential wells were assumed to be exactly similar, so Δ = 0. Moreover, Eqs. (104) give a long-promised proof of Eqs. (2.201), and hence a better justification of Eqs. (2.203).
33 From Sec. 5.5, we already know what happens to the ground state of an oscillator at its external sinusoidal (or any other) excitation: it turns into a Glauber state, i.e. a superposition of all Fock states – see Eq. (5.134).


However, I would not like these quantitative details to obscure from the reader the most important qualitative (OK, maybe semi-quantitative :-) conclusion of this section's analysis: a resonant increase of the interlevel transition intensity at ω ≈ ω_nn'. As will be shown later in the course, in a quantum system coupled to its environment at least slightly (hence in reality, in any quantum system), such increase is accompanied by a sharp increase of the external field's absorption, which may be measured. This increase is used in numerous applications, notably including the magnetic resonance techniques already mentioned in Sec. 5.1.

6.6. Quantum-mechanical Golden Rule

One of the results of the past section, Eq. (102), may be used to derive one of the most important and nontrivial results of quantum mechanics. For that, let us consider the case when the perturbation causes quantum transitions from a discrete energy level E_n' into a group of eigenstates with a very dense (essentially continuous) spectrum E_n – see Fig. 10a.

[Figure: (a) a discrete initial level E_n' coupled to a dense group of final levels E_n; (b) the peaked function under the integral in Eq. (108), plotted for ξ_nn't from –15 to +15.]
Fig. 6.10. Deriving the Golden Rule: (a) the energy level scheme, and (b) the function under the integral in Eq. (108).

If, for all states n of the group, the following condition is satisfied,

$$\left|\frac{A_{nn'}}{\hbar}\right|^2\ll\xi_{nn'}^2,\qquad\text{where }\xi_{nn'}\equiv\omega-\omega_{nn'}, \tag{6.106}$$

then Eq. (102) coincides with the result that would follow from Eq. (90). This means that we may apply Eq. (102), with the indices n and n' duly restored, to any level n of our tight group. As a result, the total probability of having our system transferred from the initial level n' to that group is

$$W_\Sigma(t)\equiv\sum_nW_n(t)=\frac{4}{\hbar^2}\sum_n\frac{|A_{nn'}|^2}{\xi_{nn'}^2}\sin^2\frac{\xi_{nn'}t}{2}. \tag{6.107}$$

Now comes the main, absolutely beautiful trick: let us assume that the summation over n is limited to a tight group of very similar states whose matrix elements A_nn' are virtually similar (we will check the validity of this assumption later on), so we can take |A_nn'|² out of the sum in Eq. (107) and then replace the sum with the corresponding integral:

$$W_\Sigma(t)=\frac{4|A_{nn'}|^2}{\hbar^2}\int\frac{1}{\xi_{nn'}^2}\sin^2\frac{\xi_{nn'}t}{2}\,dn=\frac{4|A_{nn'}|^2\rho_nt}{\hbar}\int\frac{1}{(\xi_{nn'}t)^2}\sin^2\frac{\xi_{nn'}t}{2}\,d(\xi_{nn'}t), \tag{6.108}$$

where ρ_n is the density of the states n on the energy axis:

Density of states
$$\rho_n\equiv\frac{dn}{dE_n}. \tag{6.109}$$


This density and the matrix element A_nn' have to be evaluated at ξ_nn' = 0, i.e. at the energy E_n = E_n' + ħω, and are assumed to be constant within the final state group. At fixed E_n', the function under the integral (108) is even and decreases fast at |ξ_nn't| >> 1 – see Fig. 10b. Hence we may introduce a dimensionless integration variable ζ ≡ ξ_nn't, and extend the integration over it formally from –∞ to +∞. Then the integral in Eq. (108) is reduced to a table one,34 and yields

$$W_\Sigma(t)=\frac{4|A_{nn'}|^2\rho_nt}{\hbar}\int_{-\infty}^{+\infty}\frac{1}{\zeta^2}\sin^2\frac{\zeta}{2}\,d\zeta=\frac{4|A_{nn'}|^2\rho_nt}{\hbar}\,\frac{\pi}{2}\equiv\Gamma t, \tag{6.110}$$

where the constant

Golden Rule
$$\Gamma\equiv\frac{2\pi}{\hbar}|A_{nn'}|^2\rho_n \tag{6.111}$$

is called the transition rate.35 This is one of the most famous and useful results of quantum mechanics, its Golden Rule,36 which deserves much discussion. First of all, let us reproduce the reasoning already used in Sec. 2.5 to show that the meaning of the rate Γ is much deeper than Eq. (110) seems to imply. Indeed, due to the conservation of the total probability, W_n' + W_Σ = 1, we can rewrite that equation as

$$\dot W_{n'}\Big|_{t\to0}=-\Gamma. \tag{6.112}$$

Evidently, this result cannot be true for all times, otherwise the probability W_n' would become negative. The reason for this apparent contradiction is that Eq. (110) was obtained in the assumption that initially, the system was completely on the level n': W_n'(0) = 1. Now, if at the initial moment the value of W_n' is different, the result (110) has to be multiplied by that number, due to the linear relation (88) between da_n/dt and a_n'. Hence, instead of Eq. (112), we get a differential equation similar to Eq. (2.159),

$$\dot W_{n'}=-\Gamma W_{n'}, \tag{6.113}$$

which, for a time-independent Γ, has the evident solution,

Initial occupancy's decay
$$W_{n'}(t)=W_{n'}(0)\,e^{-\Gamma t}, \tag{6.114}$$

describing the exponential decay of the initial state's occupancy, with the time constant τ = 1/Γ.

I am inviting the reader to review this fascinating result again: by the summation of periodic oscillations (102) over many levels n, we have got an exponential decay (114) of the probability. This trick becomes possible because the effective range δE_n of the state energies E_n giving substantial

34 See, e.g., MA Eq. (6.12).
35 In some texts, the density of states in Eq. (111) is replaced with a formal expression δ(E_n – E_n' – ħω). Indeed, applied to a finite energy interval ΔE_n with Δn >> 1 levels, it gives the same result: Δn = (dn/dE_n)ΔE_n ≡ ρ_nΔE_n. Such replacement may be technically useful in some cases, but is incorrect for Δn ~ 1, and hence should be used with the utmost care, so for most applications, the more explicit form (111) is preferable.
36 Sometimes Eq. (111) is called "Fermi's Golden Rule". This is rather unfair, because this result had been developed mostly by the same P. A. M. Dirac in 1927, and Enrico Fermi's role was not much more than advertising it, under the name of "Golden Rule No. 2", in his influential lecture notes on nuclear physics that were published much later, in 1950. (To be fair to Fermi, he has never tried to pose as the Golden Rule's author.)


contributions to the integral (108), shrinks with time: δE_n ~ ħ/t.37 However, since most of the decay (114) takes place within the time interval of the order of τ ≡ 1/Γ, the range of the participating final energies may be estimated as

$$\delta E_n\sim\frac{\hbar}{\tau}=\hbar\Gamma. \tag{6.115}$$

This estimate is very instrumental for the formulation of conditions of the Golden Rule's validity. First, we have assumed that the matrix elements of the perturbation and the density of states are independent of the energy within the interval (115). This gives the following requirement:

$$\delta E_n\sim\hbar\Gamma\ll E_n-E_{n'}\sim\hbar\omega. \tag{6.116}$$

Second, for the transfer from the sum (107) to the integral (108), we need the number of states within that energy interval, ΔN_n = ρ_nδE_n, to be much larger than 1. Merging Eq. (116) with Eq. (92) for all the energy levels n'' ≠ n, n' not participating in the resonant transition, we may summarize all conditions of the Golden Rule's validity as

$$\frac{1}{\rho_n}\ll\hbar\Gamma\ll\hbar\omega,\ \hbar\left|\omega-\omega_{n'n''}\right|. \tag{6.117}$$
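The crucial summation step (107)→(110) may also be checked numerically, with a uniform ladder of final states; in the sketch below, the values of the matrix element A, the density ρ_n, and the observation time are arbitrary, chosen only to respect the conditions (117) (units with ħ = 1):

```python
import numpy as np

hbar = 1.0
A   = 0.01     # constant matrix element |A_nn'| (illustrative value)
rho = 200.0    # density of final states dn/dE_n, Eq. (109) (illustrative)

# Detunings xi_nn' of a dense, uniform ladder of final levels; the
# half-integer grid merely avoids the (harmless) division at xi = 0.
xi = (np.arange(-40000, 40000) + 0.5) / (rho * hbar)

def W_total(t):  # the sum (107) over all levels of the group
    return np.sum(4 * A**2 / (hbar * xi)**2 * np.sin(xi * t / 2)**2)

Gamma = 2 * np.pi * A**2 * rho / hbar   # the Golden Rule, Eq. (111)

t = 2.0   # chosen so that Gamma*t << 1 while many levels fit into the peak
assert abs(W_total(t) / t - Gamma) < 0.02 * Gamma
```

Each term of the sum remains purely oscillatory, and yet the total W_Σ(t) grows linearly, with the slope Γ predicted by Eq. (111).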

(The reader may ask whether I have neglected the condition expressed by the first of Eqs. (106). However, for ξ_nn' ~ δE_n/ħ ~ Γ, this condition is just |A_nn'|² << (ħΓ)², which, by virtue of Eq. (111), is equivalent to the first of the conditions (117).)

[…] so the excitation rate is different from zero only for frequencies

$$\omega>\omega_{\min}=\frac{|E_{n'}|}{\hbar}\equiv\frac{mW^2}{2\hbar^3}. \tag{6.121}$$

The weak sinusoidal force may be described by the following perturbation Hamiltonian,

$$\hat H^{(1)}=-F(t)\hat x=-F_0\hat x\cos\omega t\equiv-\frac{F_0}{2}\hat x\left(e^{i\omega t}+e^{-i\omega t}\right),\qquad\text{for }t>0, \tag{6.122}$$

so according to Eq. (86), which serves as the amplitude operator's definition, in this case

$$\hat A=\hat A^\dagger=-\frac{F_0}{2}\hat x. \tag{6.123}$$

The matrix elements A_nn' that participate in Eq. (111) may be readily calculated in the coordinate representation:

$$A_{nn'}=\int_{-\infty}^{+\infty}\psi_n^*(x)\,\hat A\,\psi_{n'}(x)\,dx=-\frac{F_0}{2}\int_{-\infty}^{+\infty}\psi_n^*(x)\,x\,\psi_{n'}(x)\,dx. \tag{6.124}$$

Since, according to Eq. (120), the initial ψ_n' is a symmetric function of x, any non-vanishing contributions to this integral are given only by antisymmetric functions ψ_n(x), proportional to sin k_nx, with the wave number k_n related to the final energy by the well-familiar equality (1.89):

$$\frac{\hbar^2k_n^2}{2m}=E_n. \tag{6.125}$$


As we know from Sec. 2.6 (see in particular Eq. (2.167) and its discussion), such antisymmetric functions, with ψ_n(0) = 0, are not affected by the zero-centered delta-functional potential (119), so their density ρ_n is the same as that in completely free space, and we could use Eq. (1.93). However, since that relation was derived for traveling waves, it is more prudent to repeat its derivation for standing waves, confining them to an artificial segment [–l/2, +l/2] – long in the sense

$$k_nl,\ \kappa l\gg1, \tag{6.126}$$

so it does not affect the initial localized state and the excitation process. Then the confinement requirement ψ_n(±l/2) = 0 immediately yields the condition k_nl/2 = nπ, so Eq. (1.93) is indeed valid, but only for positive values of k_n, because sin k_nx with k_n → –k_n does not describe an independent standing-wave eigenstate. Hence the final state density is

$$\rho_n\equiv\frac{dn}{dE_n}=\frac{dn}{dk_n}\bigg/\frac{dE_n}{dk_n}=\frac{l}{2\pi}\bigg/\frac{\hbar^2k_n}{m}=\frac{lm}{2\pi\hbar^2k_n}. \tag{6.127}$$
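Eq. (127) may be confirmed by directly counting the standing-wave states k_n = 2πn/l below a given energy; in the sketch below, the segment length l, the probe energy, and the finite-difference step are arbitrary illustrative choices (units with ħ = m = 1):

```python
import numpy as np

hbar = m = 1.0
l = 2000.0                         # artificial segment length (illustrative)

def n_of_E(E):                     # number of states with E_n < E, k_n = 2*pi*n/l
    k = np.sqrt(2 * m * E) / hbar
    return np.floor(k * l / (2 * np.pi))

E, dE = 3.0, 0.5                   # probe energy and finite-difference step
rho_num = (n_of_E(E + dE) - n_of_E(E - dE)) / (2 * dE)

k = np.sqrt(2 * m * E) / hbar
rho_exact = l * m / (2 * np.pi * hbar**2 * k)    # Eq. (6.127)
assert abs(rho_num - rho_exact) < 0.05 * rho_exact
```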

It may look troubling that the density of states depends on the artificial segment's length l, but the same l also participates in the final wavefunctions' normalization factor,39

$$\psi_n=\left(\frac{2}{l}\right)^{1/2}\sin k_nx, \tag{6.128}$$

and hence in the matrix element (124):

$$A_{nn'}=-\frac{F_0}{2}\left(\frac{2}{l}\right)^{1/2}\kappa^{1/2}\int_{-l/2}^{+l/2}\sin k_nx\,e^{-\kappa|x|}\,x\,dx=-\frac{F_0}{2i}\left(\frac{2}{l}\right)^{1/2}\kappa^{1/2}\left[\int_0^{l/2}e^{(ik_n-\kappa)x}x\,dx-\int_0^{l/2}e^{-(ik_n+\kappa)x}x\,dx\right]. \tag{6.129}$$

These two integrals may be readily worked out by parts. Taking into account that due to the condition (126), their upper limits may be extended to ∞, the result is

$$A_{nn'}=-\left(\frac{2}{l}\right)^{1/2}F_0\,\frac{2k_n\kappa^{3/2}}{\left(k_n^2+\kappa^2\right)^2}. \tag{6.130}$$
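The integration by parts behind Eq. (130) reduces to the table integral ∫₀^∞ x e^(–κx) sin(k_nx) dx = 2k_nκ/(k_n² + κ²)², which is easy to confirm numerically (the values of k_n and κ below are arbitrary):

```python
import numpy as np

k, kappa = 1.3, 0.7                          # illustrative wave numbers
x = np.linspace(0.0, 80.0, 400001)           # exp(-kappa*x) is negligible at x = 80
f = x * np.exp(-kappa * x) * np.sin(k * x)

dx = x[1] - x[0]                             # trapezoidal rule, written out by hand
numeric = dx * (f.sum() - 0.5 * (f[0] + f[-1]))

exact = 2 * k * kappa / (k**2 + kappa**2)**2
assert abs(numeric - exact) < 1e-6
```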

Note that the matrix element is a smooth function of k_n (and hence of E_n), so an important condition of the Golden Rule, the virtual constancy of A_nn' on the interval δE_n ~ ħΓ, is satisfied here. […]

42 See, e.g., P. Delsing et al., Phys. Rev. Lett. 63, 1180 (1989).


$$\hat H^{(1)}=-\hat{\mathbf d}\cdot\hat{\boldsymbol{\mathcal E}},\qquad\text{with }\hat{\mathbf d}\equiv\sum_kq_k\hat{\mathbf r}_k, \tag{6.146}$$

where the dipole electric moment d depends only on the positions r_k of the charged particles (numbered with index k) of the atomic system, while the electric field E is a function of only the electromagnetic field's degrees of freedom – to be discussed in Chapter 9 below. Returning to the general situation shown in Fig. 12, if the component system a was initially in an excited state n'_a, the interaction (145), turned on at some moment of time, may bring it into another discrete state n_a of lower energy – for example, the ground state. In the process of this transition, the released energy, in the form of an energy quantum

$$\hbar\omega=E_{n'a}-E_{na}, \tag{6.147}$$

is picked up by the system b:

$$E_{nb}=E_{n'b}+\hbar\omega=E_{n'b}+\left(E_{n'a}-E_{na}\right), \tag{6.148}$$

so the total energy E = E_a + E_b of the system does not change. (If the states n_a and n'_b are the ground states of the two component systems, as they are in most applications of this analysis, and we take the ground state energy E_g = E_na + E_n'b of the composite system for the reference, then Eq. (148) gives merely E_nb = E_n'a.) If the final state n_b of the system b is inside a state group with a quasi-continuous energy spectrum (Fig. 12), the process has the exponential character (114)43 and may be interpreted as the effect of energy relaxation of system a, with the released energy quantum ħω absorbed by the system b. If the relaxation rate Γ is sufficiently low, it may be described by the Golden Rule (137). Since the perturbation (145) does not depend on time explicitly, and the total energy E does not change, this relation, with the account of Eqs. (143) and (145), takes the form

Golden Rule for coupled systems
$$\Gamma=\frac{2\pi}{\hbar}\left|A_{nn'}\right|^2\left|B_{nn'}\right|^2\rho_n,\qquad\text{where }A_{nn'}\equiv\langle n_a|\hat A|n'_a\rangle,\ \text{and }B_{nn'}\equiv\langle n_b|\hat B|n'_b\rangle, \tag{6.149}$$

where n is the density of the final states of the system b at the relevant energy (147). In particular, Eq. (149), with the dipole Hamiltonian (146), enables a very straightforward calculation of the natural linewidth of atomic electric-dipole transitions. However, this calculation has to be postponed until Chapter 9, in which we will discuss the electromagnetic field quantization – i.e., the exact nature of the states nb and n’b for this problem, and hence will be able to calculate Bnn’ and n. Instead, I will now proceed to a general discussion of the effects of quantum systems’ interaction with their environment, toward which the situation shown in Fig. 12 provides a clear conceptual path. Indeed, in this case the transition from the Hamiltonian (and hence reversible) quantum dynamics of the whole composite system a + b to the Golden-Rule-governed (and hence irreversible) dynamics of system a has been achieved essentially by following this component system alone, i.e. ignoring the details of the exact state of system b. (As was argued in the previous section, the quasi-continuous spectrum of the latter system essentially requires it to have a large spatial size, so it may be legitimately called the environment of the “open” system a.) This is exactly the approach that will be pursued in the next chapter.

43 This process is spontaneous: it does not require any external agent and starts as soon as either the interaction (145) has been turned on, or (if it is always on) as soon as the system a is placed into the excited state n'_a.


6.8. Exercise problems

6.1. Use Eq. (6.14) of the lecture notes to prove the following general form of the Hellmann-Feynman theorem:44

$$\frac{\partial E_n}{\partial\lambda}=\left\langle n\left|\frac{\partial\hat H}{\partial\lambda}\right|n\right\rangle,$$

where λ is an arbitrary c-number parameter.

6.2. Establish a relation between Eq. (16) and the result of the classical theory of weakly anharmonic ("nonlinear") oscillations at negligible damping.
Hint: You may like to use N. Bohr's reasoning discussed in Problem 1.1.

6.3. An additional weak time-independent force F is exerted on a 1D particle that had been placed into a hard-wall potential well

$$U(x)=\begin{cases}0,&\text{for }0<x<a,\\ +\infty,&\text{otherwise.}\end{cases}$$

Calculate, sketch, and discuss the 1st-order perturbation of its ground-state wavefunction.

6.4. A time-independent force F = μ(n_x y + n_y x), where μ is a small constant, is applied to a 3D harmonic oscillator of mass m and frequency ω₀, located at the origin. Calculate, in the first order of the perturbation theory, the effect of the force upon the ground-state energy of the oscillator and its lowest excited energy level. How small should the constant μ be for your results to be quantitatively correct?

6.5. A 1D particle of mass m is localized at a narrow potential well that may be approximated with a delta function:

$$U(x)=-W\delta(x),\qquad\text{with }W>0.$$

Calculate the change of its ground state energy by an additional weak time-independent force F, in the first non-vanishing approximation of the perturbation theory. Discuss the limits of validity of this result, taking into account that at F ≠ 0, the localized state of the particle is metastable.

6.6. Use the perturbation theory to calculate the eigenvalues of the operator L̂², in the limit |m| ~ l >> 1, by purely wave-mechanical means.
Hint: Try the following substitution: Θ(θ) = f(θ)/sin^(1/2)θ.

6.7. In the lowest non-vanishing order of the perturbation theory, calculate the shift of the ground-state energy of an electrically charged spherical rotor (i.e. a particle of mass m, free to move over a spherical surface of radius R) due to a weak uniform time-independent electric field E.

6.8. Use the perturbation theory to evaluate the effect of a time-independent uniform electric field E on the ground state energy E_g of a hydrogen atom. In particular:

44 As a reminder, the proof of its wave-mechanics form was the task of Problem 1.7.


(i) calculate the 2nd-order shift of E_g, neglecting the extended unperturbed states with E > 0, and bring the result to the simplest analytical form you can,
(ii) find the lower and the upper bounds on the shift, and
(iii) discuss the simplest experimental manifestation of this quadratic Stark effect.

6.9. A particle of mass m, with electric charge q, is in its ground s-state with a given energy E_g < 0, being localized by a very-short-range, spherically symmetric potential well. Calculate its static electric polarizability.

6.10. In some atoms, the effect of nuclear charge screening by electrons on the motion of each of them may be reasonably well approximated by the replacement of the Coulomb potential (3.190), U = –C/r, with the so-called Hulthén potential

$$U=-\frac{C/a}{\exp\{r/a\}-1}\approx-C\times\begin{cases}1/r,&\text{for }r\ll a,\\ \exp\{-r/a\}/a,&\text{for }a\ll r.\end{cases}$$

Assuming that the effective screening radius a is much larger than r₀ ≡ ħ²/mC, use the perturbation theory to calculate the energy spectrum of a single particle of mass m, moving in this potential, in the lowest order needed to lift the l-degeneracy of the energy levels.

6.11. In the lowest non-vanishing order of the perturbation theory, calculate the correction to energies of the ground state and all lowest excited states of a hydrogen-like atom/ion, due to the electron's penetration into the nucleus, modeling the latter as a spinless, uniformly charged sphere of radius R << r_B.

6.21. In a certain quantum system, distances between the three lowest energy levels are slightly different – see the figure on the right. […]

[…] at n > 0, the Wigner function takes negative values in some regions of the [X, P] plane – see Fig. 3.

21 Please note that in Sec. 5.5, the capital letters X and P mean not the arguments of the Wigner function as in this section, but the Cartesian coordinates of the central point (5.102), i.e. the classical complex amplitude of the oscillations.
22 Spectacular experimental measurements of this function (for n = 0 and n = 1) were carried out recently by E. Bimbard et al., Phys. Rev. Lett. 112, 033601 (2014).


(Such plots were the basis of my, admittedly very imperfect, classical images of the Fock states in Fig. 5.8.)

Fig. 7.3. The Wigner functions W(X, P) of a harmonic oscillator, in a few of its stationary (Fock) states n: (a) n = 0, (b) n = 1; (c) n = 5. Graphics by J. S. Lundeen; adapted from http://en.wikipedia.org/wiki/Wigner_function as a public-domain material.
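The negativity visible in Fig. 7.3 may be reproduced directly from the pure-state form of the Wigner function's definition, Eq. (63), using the first two oscillator eigenfunctions; the sketch below uses units ħ = m = ω₀ = 1, and an arbitrary integration grid:

```python
import numpy as np

hbar = 1.0
u = np.linspace(-10.0, 10.0, 2001)      # integration variable of Eq. (63)
du = u[1] - u[0]

psi0 = lambda q: np.pi**-0.25 * np.exp(-q**2 / 2)                     # n = 0
psi1 = lambda q: np.pi**-0.25 * np.sqrt(2.0) * q * np.exp(-q**2 / 2)  # n = 1

def wigner(psi, X, P):                  # direct quadrature of Eq. (63)
    f = np.conj(psi(X + u / 2)) * psi(X - u / 2) * np.exp(1j * P * u / hbar)
    return (du * f.sum()).real / (2 * np.pi * hbar)

assert abs(wigner(psi0, 0, 0) - 1 / (np.pi * hbar)) < 1e-6  # positive maximum
assert abs(wigner(psi1, 0, 0) + 1 / (np.pi * hbar)) < 1e-6  # negative dip
```

In agreement with Fig. 3, W of the ground state is positive everywhere, while for n = 1 it reaches the value –1/πħ at the origin.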

The same is true for most other quantum systems and their states. Indeed, this fact could be predicted just by looking at the definition (50) applied to a pure quantum state, in which the density function may be factored – see Eq. (31):

$$W(X,P)=\frac{1}{2\pi\hbar}\int\psi^*\!\left(X+\frac{\tilde X}{2}\right)\psi\!\left(X-\frac{\tilde X}{2}\right)\exp\left\{\frac{iP\tilde X}{\hbar}\right\}d\tilde X. \tag{7.63}$$

Changing the argument P (say, at fixed X), we are essentially changing the spatial "frequency" (wave number) of the wavefunction product's Fourier component we are calculating, and we know that such Fourier images typically change sign as the frequency is changed. Hence the wavefunctions should have some high-symmetry properties to avoid this effect. Indeed, the Gaussian functions (describing, for example, the Glauber states, and in their particular case, the ground state of the harmonic oscillator) have such symmetry, but many other functions do not. Hence if the Wigner function were taken seriously as the quantum-mechanical analog of the classical probability density w_cl(X, P), we would need to interpret the negative probability of finding the particle in certain elementary intervals dXdP – which is hard to do. However, the function is still used for semi-quantitative interpretation of mixed states of quantum systems.

7.3. Open system dynamics: Dephasing

So far we have discussed the density operator as something given at a particular time instant. Now let us discuss how it is formed, i.e. its evolution in time, starting from the simplest case when the probabilities W_j participating in Eq. (15) are time-independent – by this or that reason, to be discussed in a moment. In this case, in the Schrödinger picture, we may rewrite Eq. (15) as

$$\hat w(t)=\sum_j|w_j(t)\rangle W_j\langle w_j(t)|. \tag{7.64}$$

Taking a time derivative of both sides of this equation, multiplying them by iħ, and applying Eq. (4.158) to the basis states w_j, with the account of the fact that the Hamiltonian operator is Hermitian, we get



iwˆ  i  w j (t ) W j w j (t )  w j (t ) W j w j (t )



j



  Hˆ w j (t ) W j w j (t )  w j (t ) W j w j (t ) Hˆ



(7.65)

j

 Hˆ  w j (t ) W j w j (t )   w j (t ) W j w j (t ) Hˆ . j

j

Now using Eq. (64) again (twice), we get the so-called von Neumann equation23





iwˆ  Hˆ , wˆ .

(7.66)

von Neumann equation
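The unitary character of the evolution (66) is easy to verify numerically: a brute-force Runge-Kutta integration of the von Neumann equation (for an arbitrary illustrative Hamiltonian and initial mixed state, in units ħ = 1) conserves both Tr ŵ and the purity Tr ŵ², and coincides with the exact result ŵ(t) = Û ŵ(0) Û†:

```python
import numpy as np

hbar = 1.0
H  = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # illustrative Hamiltonian
w0 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a valid mixed state

rhs = lambda w: (H @ w - w @ H) / (1j * hbar)            # Eq. (7.66)

w, dt, steps = w0.copy(), 0.001, 2000                    # integrate to t = 2
for _ in range(steps):                                   # classical RK4 scheme
    k1 = rhs(w); k2 = rhs(w + dt / 2 * k1)
    k3 = rhs(w + dt / 2 * k2); k4 = rhs(w + dt * k3)
    w = w + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = dt * steps                                           # exact unitary result
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T
w_exact = U @ w0 @ U.conj().T

assert np.allclose(w, w_exact, atol=1e-8)
assert abs(np.trace(w).real - 1.0) < 1e-10               # probability conserved
assert abs(np.trace(w @ w) - np.trace(w0 @ w0)) < 1e-8   # purity conserved
```

The conserved purity is exactly what makes Eq. (66) insufficient for describing dephasing: a genuinely open system needs the more general treatment developed below.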

Note that this equation is similar in structure to Eq. (4.199) describing the time evolution of time-independent operators in the Heisenberg picture:

$$i\hbar\dot{\hat A}=\left[\hat A,\hat H\right], \tag{7.67}$$

besides the opposite order of the operators in the commutator – equivalent to the change of sign of the right-hand side. This should not be too surprising, because Eq. (66) belongs to the Schrödinger picture of quantum dynamics, while Eq. (67) belongs to its Heisenberg picture.

The most important case when the von Neumann equation is (approximately) valid is when the "own" Hamiltonian Ĥ_s of the system s of our interest is time-independent, and its interaction with the environment is so small that its effect on the system's evolution during the considered time interval is negligible, but it had lasted so long that it gradually put the system into a non-pure state – for example, but not necessarily, into the classical mixture (24).24 (This is an example of the second case discussed in Sec. 1, when we need the mixed-ensemble description of the system even if its current interaction with the environment is negligible.) If the interaction with the environment is stronger and hence is not negligible at the considered time interval, Eq. (66) is generally not valid,25 because the probabilities W_j may change in time. However, this equation may still be used for a discussion of one major effect of the environment, namely dephasing (also called "decoherence"), within a simple model.

Let us start with the following general model of a system interacting with its environment, which will be used throughout this chapter:

Interaction with environment
$$\hat H=\hat H_s+\hat H_e\{\lambda\}+\hat H_{\rm int}, \tag{7.68}$$

where {λ} denotes the (huge) set of degrees of freedom of the environment.26

23 In some texts, it is called the "Liouville equation", due to its philosophical proximity to the classical Liouville theorem for the classical distribution function w_cl(X, P) – see, e.g., SM Sec. 6.1 and in particular Eq. (6.5).
24 In the last case, the statistical operator is diagonal in the stationary state basis and hence commutes with the Hamiltonian. Hence the right-hand side of Eq. (66) vanishes, and it shows that in this basis, the density matrix is completely time-independent.
25 Very unfortunately, this fact is not explained in some textbooks, which quote the von Neumann equation without proper qualifications.
26 Note that by writing Eq. (68), we are treating the whole system, including the environment, as a Hamiltonian one. This can always be done if the accounted part of the environment is large enough so the processes in the system s of our interest do not depend on the type of boundary between this part and the "external" (even larger) environment; in particular, we may assume the total system to be closed, i.e. Hamiltonian.


Evidently, this model is useful only if we may somehow tame the enormous size of the Hilbert space of these degrees of freedom, and so work out the calculations all the way to a practicably simple result. This turns out to be possible mostly if the elementary act of interaction of the system and its environment is in some sense small. Below, I will describe several cases when this is true; the classical example is the Brownian particle interacting with the molecules of the surrounding gas or fluid.27 (In this example, a single hit by a molecule changes the particle's momentum by a minor fraction.) On the other hand, the model (68) is not very productive for a particle interacting with the environment consisting of similar particles, when a single collision may change its momentum dramatically. In such cases, the methods discussed in the next chapter are more appropriate.

Now let us analyze a very simple model of an open two-level quantum system, with its intrinsic Hamiltonian having the form

$$\hat H_s=c_z\hat\sigma_z, \tag{7.69}$$

similar to the Pauli Hamiltonian (4.163),28 and a factorable, bilinear interaction – cf. Eq. (6.145) and its discussion:

$$\hat H_{\rm int}=\hat f\{\lambda\}\,\hat\sigma_z, \tag{7.70}$$

where f̂ is a Hermitian operator depending only on the set {λ} of environmental degrees of freedom ("coordinates"), defined in their Hilbert space – different from that of the two-level system. As a result, the operators f̂{λ} and Ĥ_e{λ} commute with σ̂_z – and with any other intrinsic operator of the two-level system. Of course, any realistic Ĥ_e{λ} is extremely complex, so how much we will be able to achieve without specifying it, may become a pleasant surprise for the reader.

Before we proceed to the analysis, let us recognize two examples of two-level systems that may be described by this model. The first example is a spin-½ in an external magnetic field of a fixed direction (taken for the z-axis), which includes both an average component ⟨B_z⟩ and a random (fluctuating) component B̃_z(t) induced by the environment. As it follows from Eq. (4.163b), it may be described by the Hamiltonian (68)-(70) with

$$c_z=-\frac{\gamma\hbar}{2}\langle\mathcal B_z\rangle,\qquad\text{and }\hat f=-\frac{\gamma\hbar}{2}\hat{\tilde{\mathcal B}}_z(t). \tag{7.71}$$

Another example is a particle in a symmetric double-well potential U_s (Fig. 4), with a barrier between the wells sufficiently high to be practically impenetrable, and an additional force F(t), exerted by the environment, so the total potential energy is U(x, t) = U_s(x) – F(t)x. If the force, including its static

27

The theory of the Brownian motion, the effect first observed experimentally by biologist Robert Brown in the 1820s, was pioneered by Albert Einstein in 1905 and developed in detail by Marian Smoluchowski in 1906-1907 and Adriaan Fokker in 1913. Due to this historic background, in some older texts, the approach described in the balance of this chapter is called the “quantum theory of the Brownian motion”. Let me, however, emphasize that due to the later progress of experimental techniques, quantum-mechanical behaviors, including the environmental effects in them, have been observed in a rapidly growing number of various quasi-macroscopic systems, for which this approach is quite applicable. In particular, this is true for most systems being explored as possible qubits of prospective quantum computing and encryption systems – see Sec. 8.5 below. 28 As we know from Secs. 4.6 and 5.1, such Hamiltonian is sufficient to lift the energy level degeneracy. Chapter 7

Page 15 of 54

Essential Graduate Physics

QM: Quantum Mechanics

~ part F and fluctuations F t  , is sufficiently weak, we can neglect its effects on the shape of potential wells and hence on the localized wavefunctions L,R, so the force’s effect is reduced to the variation of the difference EL – ER = F(t)x between the eigenenergies. As a result, the system may be described by Eqs. (68)-(70) with ~ˆ c z   F x / 2; fˆ   F t x / 2 . (7.72)

L

R

U s (x)

 F (t ) x 0

x

Fig. 7.4. Dephasing in a double-well system.

Now let us start our general analysis of the model described by Eqs. (68)-(70) by writing the equation of motion for the Heisenberg-picture operator $\hat\sigma_z(t)$:

$$i\hbar\dot{\hat\sigma}_z = \left[\hat\sigma_z, \hat H\right] = \left(c_z + \hat f\right)\left[\hat\sigma_z, \hat\sigma_z\right] = 0, \tag{7.73}$$

showing that in our simple model (68)-(70), the operator $\hat\sigma_z$ does not evolve in time. What does this mean for the observables? For an arbitrary density matrix of any two-level system,

$$\mathrm w = \begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{pmatrix}, \tag{7.74}$$

we can readily calculate the trace of the operator $\hat\sigma_z\hat w$. Indeed, since the operator traces are basis-independent, we may do this in the usual z-basis:

$$\operatorname{Tr}\left(\hat\sigma_z\hat w\right) = \operatorname{Tr}\left(\sigma_z \mathrm w\right) = \operatorname{Tr}\left[\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}w_{11} & w_{12}\\ w_{21} & w_{22}\end{pmatrix}\right] = w_{11} - w_{22} = W_1 - W_2. \tag{7.75}$$

Since, according to Eq. (5), $\hat\sigma_z$ may be considered the operator for the difference of the number of particles in the basis states 1 and 2, in the case (73) the difference W1 – W2 does not depend on time, and since the sum of these probabilities is also fixed, W1 + W2 = 1, both of them are constant. The physics of this simple result is especially clear for the model shown in Fig. 4: since the potential barrier separating the potential wells is so high that tunneling through it is negligible, the interaction with the environment cannot move the system from one well into another one. It may look like nothing interesting may happen in such a simple situation, but in a minute we will see that this is not true. Due to the time independence of W1 and W2, we may use the von Neumann equation (66) to describe the density matrix evolution. In the z-basis:

$$i\hbar\begin{pmatrix}\dot w_{11} & \dot w_{12}\\ \dot w_{21} & \dot w_{22}\end{pmatrix} = \left[\mathrm H, \mathrm w\right] = \left(c_z + \hat f\right)\left[\sigma_z, \mathrm w\right] = \left(c_z + \hat f\right)\left[\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}w_{11}&w_{12}\\w_{21}&w_{22}\end{pmatrix} - \begin{pmatrix}w_{11}&w_{12}\\w_{21}&w_{22}\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix}\right] = \left(c_z + \hat f\right)\begin{pmatrix}0 & 2w_{12}\\ -2w_{21} & 0\end{pmatrix}. \tag{7.76}$$

This result means that while the diagonal elements, i.e., the probabilities of the states, do not evolve in time (as we already know), the off-diagonal elements do change; for example,

$$i\hbar\dot w_{12} = 2\left(c_z + \hat f\right)w_{12}, \tag{7.77}$$

with a similar but complex-conjugate equation for w21. The solution of this linear differential equation is straightforward, and yields

$$w_{12}(t) = w_{12}(0)\,\exp\left\{-i\frac{2c_z}{\hbar}t\right\}\exp\left\{-i\frac{2}{\hbar}\int_0^t \hat f(t')\,dt'\right\}. \tag{7.78}$$

The first exponent is a deterministic c-number factor, while in the second one $\hat f(t) \equiv \hat f\{\lambda(t)\}$ is still an operator in the Hilbert space of the environment, but from the point of view of the two-level system of our interest, it is just a random function of time. The time-average part of this function may be included in cz, so in what follows, we will assume that it equals zero.

Let us start from the limit when the environment behaves classically.29 In this case, the operator in Eq. (78) may be considered a classical random function f(t), provided that we average its effects over a statistical ensemble of many such functions describing many (macroscopically similar) experiments. For a small time interval Δt = dt → 0, we can use the Taylor expansion of the exponent, truncating it after the quadratic term:

$$\left\langle\exp\left\{-i\frac{2}{\hbar}\int_0^{dt}f(t')dt'\right\}\right\rangle = \left\langle 1 - i\frac{2}{\hbar}\int_0^{dt}f(t')dt' + \frac12\left(-i\frac{2}{\hbar}\int_0^{dt}f(t')dt'\right)\left(-i\frac{2}{\hbar}\int_0^{dt}f(t'')dt''\right)\right\rangle$$
$$= 1 - i\frac{2}{\hbar}\int_0^{dt}\left\langle f(t')\right\rangle dt' - \frac{2}{\hbar^2}\int_0^{dt}dt'\int_0^{dt}dt''\left\langle f(t')f(t'')\right\rangle = 1 - \frac{2}{\hbar^2}\int_0^{dt}dt'\int_0^{dt}dt''\,K_f(t'-t''). \tag{7.79}$$

Here we have used the facts that the statistical average of f(t) is equal to zero, while the second average, called the correlation function, in a statistically- (i.e. macroscopically-) stationary state of the environment may only depend on the time difference τ ≡ t' – t'':

Correlation function:
$$\left\langle f(t')f(t'')\right\rangle = K_f(t'-t'') \equiv K_f(\tau). \tag{7.80}$$

If this difference is much larger than some time scale τc, called the correlation time of the environment, the values f(t') and f(t'') are completely independent (uncorrelated), as illustrated in Fig. 5a, so at τ → ∞, the correlation function has to tend to zero. On the other hand, at τ = 0, i.e. t' = t'', the correlation function is just the variance of f:

$$K_f(0) = \left\langle f^2\right\rangle, \tag{7.81}$$

and has to be positive. As a result, the function looks approximately as shown in Fig. 5b. (On the way to zero at τ → ∞, it may or may not change sign.)

Fig. 7.5. (a) A typical random process f(t) and (b) its correlation function ⟨f(t')f(t'')⟩, which decays over the scale τc of the argument τ = t' – t'' – schematically.

29 This assumption is not in contradiction with the quantum treatment of the two-level system s, because a typical environment is large, and hence has a very dense energy spectrum, with small adjacent level distances that may be readily bridged by thermal excitations of minor energy, often making it essentially classical.

Hence, if we are only interested in time differences τ much longer than τc, which is typically very short, we may approximate Kf(τ) well with a delta function of the time difference. Let us take it in the following form, convenient for later discussion:

Phase diffusion coefficient:
$$K_f(\tau) = \hbar^2 D_\varphi\,\delta(\tau), \tag{7.82}$$

where Dφ is a positive constant called the phase diffusion coefficient. The origin of this term stems from the very similar effect of classical diffusion of Brownian particles in a highly viscous medium. Indeed, the particle’s velocity in such a medium is approximately proportional to the external force. Hence, if the random hits of a particle by the medium’s molecules may be described by a force that obeys a law similar to Eq. (82), the velocity (along any Cartesian coordinate) is also delta-correlated:

$$\left\langle v(t)\right\rangle = 0, \qquad \left\langle v(t')v(t'')\right\rangle = 2D\,\delta(t'-t''). \tag{7.83}$$
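The qualitative picture of Fig. 5 – a stationary random process whose correlation function decays over a finite τc – is easy to reproduce numerically. The sketch below uses an Ornstein-Uhlenbeck process (my illustrative choice, with arbitrary parameters, not a model used in the text), whose correlation function is known to be K(τ) = σ²exp{–|τ|/τc}, and estimates ⟨f(t')f(t'')⟩ by averaging over a statistical ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_c, sigma2, dt = 1.0, 1.0, 0.01           # correlation time, variance, time step
n_traj, n_steps = 400, 4000                  # ensemble size, trajectory length

# Ornstein-Uhlenbeck process: simple stationary noise with K(tau) = sigma2*exp(-|tau|/tau_c)
f = np.zeros((n_traj, n_steps))
for k in range(1, n_steps):
    f[:, k] = f[:, k-1]*(1 - dt/tau_c) + np.sqrt(2*sigma2*dt/tau_c)*rng.standard_normal(n_traj)

# estimate K(tau) = <f(t') f(t'')>, averaging over the ensemble and (using stationarity) over t
lags = np.arange(300)
K = np.array([np.mean(f[:, 1000:3000]*f[:, 1000+lag:3000+lag]) for lag in lags])
K_exact = sigma2*np.exp(-lags*dt/tau_c)

print(K[0], K_exact[0])      # the variance, cf. Eq. (81)
print(K[200], K_exact[200])  # decayed by ~e^{-2} at tau = 2*tau_c
```

The estimated K(τ) matches the exponential decay within statistical scatter, illustrating both Eq. (81) (positive variance at τ = 0) and the decay over τc that justifies the delta-function approximation (82) at τ >> τc.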

Now we can integrate the kinematic relation $\dot x = v$ to calculate the particle’s displacement from its initial position during a time interval [0, t], and its variance:

$$x(t) - x(0) = \int_0^t v(t')\,dt', \tag{7.84}$$

$$\left\langle\left[x(t)-x(0)\right]^2\right\rangle = \left\langle\int_0^t v(t')dt'\int_0^t v(t'')dt''\right\rangle = \int_0^t dt'\int_0^t dt''\left\langle v(t')v(t'')\right\rangle = \int_0^t dt'\int_0^t dt''\,2D\,\delta(t'-t'') = 2Dt. \tag{7.85}$$

This is the famous law of diffusion, showing that the r.m.s. deviation of the particle from the initial point grows with time as (2Dt)^{1/2}, where the constant D is called the diffusion coefficient.

Returning to the diffusion of the quantum-mechanical phase, with Eq. (82) the last double integral in Eq. (79) yields 2Dφdt, so the statistical average of Eq. (78) is

$$\left\langle w_{12}(dt)\right\rangle = w_{12}(0)\exp\left\{-i\frac{2c_z}{\hbar}dt\right\}\left(1 - 2D_\varphi\,dt\right). \tag{7.86}$$

Applying this formula to sequential time intervals,

$$\left\langle w_{12}(2\,dt)\right\rangle = \left\langle w_{12}(dt)\right\rangle\exp\left\{-i\frac{2c_z}{\hbar}dt\right\}\left(1-2D_\varphi\,dt\right) = w_{12}(0)\exp\left\{-i\frac{2c_z}{\hbar}2\,dt\right\}\left(1-2D_\varphi\,dt\right)^2, \tag{7.87}$$

etc., so for a finite time t = N dt, in the limit N → ∞ and dt → 0 (at fixed t) we get

$$\left\langle w_{12}(t)\right\rangle = w_{12}(0)\exp\left\{-i\frac{2c_z}{\hbar}t\right\}\lim_{N\to\infty}\left(1 - 2D_\varphi\frac{t}{N}\right)^N. \tag{7.88}$$

By the definition of the natural logarithm base e,30 this limit is just exp{–2Dφt}, so, finally:

Two-level system: dephasing:
$$\left\langle w_{12}(t)\right\rangle = w_{12}(0)\exp\left\{-i\frac{2c_z}{\hbar}t\right\}\exp\left\{-2D_\varphi t\right\} = w_{12}(0)\exp\left\{-i\frac{2c_z}{\hbar}t\right\}\exp\left\{-\frac{t}{T_2}\right\}. \tag{7.89}$$
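The exponential decay (89) may be checked by a direct Monte Carlo experiment: for the white noise (82), the accumulated random phase $\tilde\varphi(t) = (2/\hbar)\int_0^t f(t')dt'$ is a Gaussian random walk with $\langle\tilde\varphi^2\rangle = 4D_\varphi t$, and averaging exp{iφ̃} over the ensemble should reproduce exp{–2Dφt}. A minimal sketch (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
D_phi, dt, n_steps, n_traj = 1.0, 1e-3, 1500, 20000

# random-walk phase: Gaussian increments with variance 4*D_phi*dt, per Eq. (82)
dphi = rng.normal(0.0, np.sqrt(4*D_phi*dt), size=(n_traj, n_steps))
phi = np.cumsum(dphi, axis=1)

t = dt*np.arange(1, n_steps + 1)
coherence = np.abs(np.exp(1j*phi).mean(axis=0))   # |<w12(t)>| / |w12(0)|
print(coherence[-1], np.exp(-2*D_phi*t[-1]))      # both ~ exp(-t/T2), T2 = 1/(2*D_phi)
```

Note that each single realization keeps |w12| = const (the evolution (78) is unitary for a given f(t)); the decay appears only after the ensemble averaging, exactly as in the analytical derivation.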

So, due to the coupling to the environment, the off-diagonal elements of the density matrix decay with some dephasing time T2 ≡ 1/2Dφ,31 providing a natural evolution from the density matrix (22) of a pure state to the diagonal matrix (23), with the same probabilities W1,2, describing a fully dephased (incoherent) classical mixture.32 This simple model offers a very clear look at the nature of decoherence: the random “force” f(t) exerted by the environment “shakes” the energy difference between the two eigenstates of the system, and hence the instantaneous velocity 2(cz + f)/ℏ of their mutual phase shift φ(t) – cf. Eq. (22). Due to the randomness of the force, φ(t) performs a random walk around the trigonometric circle, so the average of its trigonometric functions exp{±iφ} over time gradually tends to zero, killing the off-diagonal elements of the density matrix. Our analysis, however, has left open two important issues:

(i) Is this approach valid for a quantum description of a typical environment?

(ii) If yes, what is physically the Dφ that was formally defined by Eq. (82)?

7.4. Fluctuation-dissipation theorem

Similar questions may be asked about a more general situation, when the Hamiltonian $\hat H_s$ of the system of interest (s), in the composite Hamiltonian (68), is not specified at all, but the interaction between that system and its environment still has the bilinear form similar to Eqs. (70) and (6.130):

$$\hat H_{\rm int} = -\hat F\{\lambda\}\,\hat x, \tag{7.90}$$

where x is some observable of our system s – say, one of its generalized coordinates. It may look incredible that in this very general situation, one still can make a very simple and powerful statement about the statistical properties of the generalized force F, under only two (interrelated) conditions – which are satisfied in a huge number of cases of interest:

30 See, e.g., MA Eq. (1.2a) with the substitution n = –N/2Dφt.

31 In the context of spin magnetic resonance (see below), T2 is frequently called the “spin-spin relaxation time”.

32 Note that this result is valid only if the approximation (82) may be applied at a time interval dt which, in turn, should be much smaller than the T2 in Eq. (88), i.e. if the dephasing time is much longer than the environment’s correlation time τc. This requirement may always be satisfied by making the coupling to the environment sufficiently weak. In addition, in typical environments, τc is very short. For example, in the original Brownian motion experiments with a-few-μm pollen grains in water, it is of the order of the average interval between sequential molecular impacts, of the order of 10^-21 s.


(i) the coupling of the system s of interest to its environment e is not too strong – in the sense that the perturbation theory (see Chapter 6) is applicable, and

(ii) the environment may be considered as staying in thermodynamic equilibrium, with a certain temperature T, regardless of the process in the system of interest.33

This famous statement is called the fluctuation-dissipation theorem (FDT).34 Due to the importance of this fundamental result, let me derive it.35 Since by writing Eq. (68), we treat the whole system (s + e) as a Hamiltonian one, we may use the Heisenberg equation (4.199) to write

$$i\hbar\dot{\hat F} = \left[\hat F, \hat H\right] = \left[\hat F, \hat H_e\right]. \tag{7.91}$$

(The second step uses the fact, discussed in the last section, that $\hat F\{\lambda\}$ commutes with both $\hat H_s$ and $\hat x$.) Generally, very little may be done with this equation, because the time evolution of the environment’s Hamiltonian depends, in turn, on that of the force. This is where the perturbation theory becomes indispensable. Let us decompose the force operator into the following sum:

$$\hat F\{\lambda\} = \langle\hat F\rangle + \hat{\tilde F}(t), \qquad\text{with}\quad \langle\hat{\tilde F}(t)\rangle = 0, \tag{7.92}$$

where (here and on, until further notice) the sign ⟨…⟩ means the statistical averaging over the environment alone, i.e. over an ensemble with absolutely similar evolutions of the system s, but random states of its environment.36 From the point of view of the system s, the first term of the sum (still an operator!) describes the average response of the environment to the system’s dynamics (possibly, including such irreversible effects as friction), and has to be calculated with a proper account of their interaction – as we will do later in this section. On the other hand, the last term in Eq. (92) represents random fluctuations of the environment, which exist even in the absence of the system s. Hence, in the first nonvanishing approximation in the interaction strength, the fluctuation part may be calculated ignoring the interaction, i.e. treating the environment as being in thermodynamic equilibrium:

$$i\hbar\dot{\hat{\tilde F}} = \left[\hat{\tilde F}, \hat H_e^{\rm eq}\right]. \tag{7.93}$$

33 The most frequent example of the violation of this condition is the environment’s overheating by the energy flow from the system s. Let me leave it to the reader to estimate the overheating of a standard physical laboratory room by a typical dissipative quantum process – the emission of an optical photon by an atom. (Hint: it is extremely small.)

34 The FDT was first derived by Herbert Callen and Theodore Allen Welton in 1951, on the background of an earlier derivation of its classical limit by Harry Nyquist in 1928 (for a particular case of electric circuits).

35 The FDT may be proved in several ways that are shorter than the one given below – see, e.g., either the proof in SM Secs. 5.5 and 5.6 (based on H. Nyquist’s arguments), or the original paper by H. Callen and T. Welton, Phys. Rev. 83, 34 (1951) – wonderful in its clarity. The longer approach I will describe here, besides giving the important Green-Kubo formula (109) as a byproduct, is a very useful exercise in operator manipulation and the perturbation theory in its integral form – different from the differential forms used in Chapter 6. The reader not interested in this exercise may skip the derivation and jump straight to the result expressed by Eq. (134), which uses the notions defined by Eqs. (114) and (123).

36 For usual (“ergodic”) environments, without intrinsic long-term memories, this statistical averaging over an ensemble of environments is equivalent to averaging over intermediate times – much longer than the correlation time τc of the environment, but still much shorter than the characteristic time of evolution of the system under analysis, such as the dephasing time T2 and the energy relaxation time T1 – both still to be calculated.

Since in this approximation the environment’s Hamiltonian does not have an explicit dependence on time, the solution of this equation may be written by combining Eqs. (4.190) and (4.175):

$$\hat{\tilde F}(t) = \exp\left\{\frac{i}{\hbar}\hat H_e^{\rm eq}t\right\}\hat{\tilde F}(0)\exp\left\{-\frac{i}{\hbar}\hat H_e^{\rm eq}t\right\}. \tag{7.94}$$

Let us use this relation to calculate the correlation function of the fluctuations $\tilde F(t)$, defined similarly to Eq. (80), but taking care of the order of the time arguments (very soon we will see why):

$$\left\langle\tilde F(t)\tilde F(t')\right\rangle = \left\langle \exp\left\{\frac{i}{\hbar}\hat H_e t\right\}\hat{\tilde F}(0)\exp\left\{-\frac{i}{\hbar}\hat H_e t\right\}\exp\left\{\frac{i}{\hbar}\hat H_e t'\right\}\hat{\tilde F}(0)\exp\left\{-\frac{i}{\hbar}\hat H_e t'\right\}\right\rangle. \tag{7.95}$$

(Here, for the notation brevity, the thermal equilibrium of the environment is just implied.) We may calculate this expectation value in any basis, and the best choice for it is evident: in the environment’s stationary-state basis, the density operator of the environment, its Hamiltonian, and hence the exponents in Eq. (95) are all represented by diagonal matrices. Using Eq. (5), the correlation function becomes

$$\left\langle\tilde F(t)\tilde F(t')\right\rangle = \operatorname{Tr}\left[\hat w\exp\left\{\frac{i}{\hbar}\hat H_e t\right\}\hat{\tilde F}(0)\exp\left\{-\frac{i}{\hbar}\hat H_e t\right\}\exp\left\{\frac{i}{\hbar}\hat H_e t'\right\}\hat{\tilde F}(0)\exp\left\{-\frac{i}{\hbar}\hat H_e t'\right\}\right]$$
$$= \sum_{n,n'} W_n\exp\left\{\frac{i}{\hbar}E_n t\right\}F_{nn'}\exp\left\{-\frac{i}{\hbar}E_{n'}t\right\}\exp\left\{\frac{i}{\hbar}E_{n'}t'\right\}F_{n'n}\exp\left\{-\frac{i}{\hbar}E_n t'\right\} = \sum_{n,n'}W_n\left|F_{nn'}\right|^2\exp\left\{\frac{i}{\hbar}\left(E_n-E_{n'}\right)(t-t')\right\}. \tag{7.96}$$

Here Wn are the Gibbs-distribution probabilities given by Eq. (24), with the environment’s temperature T, and Fnn’ ≡ Fnn’(0) are the Schrödinger-picture matrix elements of the interaction force operator. We see that though the correlator (96) is a function of the difference τ ≡ t – t' only (as it should be for fluctuations in a macroscopically stationary system), it may depend on the order of its arguments. This is why we will mark this particular correlation function with the upper index “+”,

$$K_F^+(\tau) \equiv \left\langle\tilde F(t)\tilde F(t')\right\rangle = \sum_{n,n'}W_n\left|F_{nn'}\right|^2\exp\left\{\frac{i\tilde E\tau}{\hbar}\right\}, \qquad\text{where } \tilde E \equiv E_n - E_{n'}, \tag{7.97}$$

while its counterpart, with the swapped times t and t', with the upper index “–”:

$$K_F^-(\tau) \equiv K_F^+(-\tau) \equiv \left\langle\tilde F(t')\tilde F(t)\right\rangle = \sum_{n,n'}W_n\left|F_{nn'}\right|^2\exp\left\{-\frac{i\tilde E\tau}{\hbar}\right\}. \tag{7.98}$$

So, in contrast with classical processes, in quantum mechanics the correlation function of the fluctuations $\tilde F$ is not necessarily time-symmetric:

$$K_F^+(\tau) - K_F^-(\tau) = \left\langle\tilde F(t)\tilde F(t')\right\rangle - \left\langle\tilde F(t')\tilde F(t)\right\rangle = \left\langle\left[\tilde F(t),\tilde F(t')\right]\right\rangle = 2i\sum_{n,n'}W_n\left|F_{nn'}\right|^2\sin\frac{\tilde E\tau}{\hbar} \neq 0, \tag{7.99}$$


so Fˆ t  gives one more example of a Heisenberg-picture operator whose “values” at different moments generally do not commute – see Footnote 47 in Chapter 4. (A good sanity check here is that at  = 0, i.e. at t = t’, the difference (99) between KF+ and KF– vanishes.) Now let us return to the force operator’s decomposition (92), and calculate its first (average) component. To do that, let us write the formal solution of Eq. (91) as follows: t





1 Fˆ (t )  Fˆ t' , Hˆ e t'  dt' . i 

(7.100)

On the right-hand side of this relation, we still cannot treat the Hamiltonian of the environment as an unperturbed (equilibrium) one, even if the effect of our system (s) on the environment is very weak, because this would give us a zero statistical average of the force F(t). Hence, we should make one more step in our perturbative treatment, taking into account the effect of the force on the environment. To do this, let us use Eqs. (68) and (90) to write the (so far, exact) Heisenberg equation of motion for the environment’s Hamiltonian,

$$i\hbar\dot{\hat H}_e = \left[\hat H_e, \hat H\right] = -\hat x\left[\hat H_e, \hat F\right], \tag{7.101}$$

and its formal solution, similar to Eq. (100), but for time t' rather than t:

$$\hat H_e(t') = -\frac{1}{i\hbar}\int_{-\infty}^{t'}\hat x(t'')\left[\hat H_e(t''), \hat F(t'')\right]dt''. \tag{7.102}$$

Plugging this equality into the right-hand side of Eq. (100), and averaging the result (again, over the environment only!), we get

$$\left\langle\hat F(t)\right\rangle = \frac{1}{\hbar^2}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\,\hat x(t'')\left\langle\left[\hat F(t'), \left[\hat H_e(t''), \hat F(t'')\right]\right]\right\rangle. \tag{7.103}$$

This is still an exact result, but now it is ready for an approximate treatment, implemented by averaging in its right-hand side over the unperturbed (thermal-equilibrium) state of the environment.37 This may be done absolutely similarly to that in Eq. (96), at the last step using Eq. (94):

$$\left\langle\left[\hat F(t'),\left[\hat H_e(t''),\hat F(t'')\right]\right]\right\rangle \to \operatorname{Tr}\left(w\left[F(t'),\left[H_e, F(t'')\right]\right]\right)$$
$$= \operatorname{Tr}\left(w\left\{F(t')H_eF(t'') - F(t')F(t'')H_e - H_eF(t'')F(t') + F(t'')H_eF(t')\right\}\right)$$
$$= \sum_{n,n'}W_n\left\{F_{nn'}(t')E_{n'}F_{n'n}(t'') - F_{nn'}(t')F_{n'n}(t'')E_n - E_nF_{nn'}(t'')F_{n'n}(t') + F_{nn'}(t'')E_{n'}F_{n'n}(t')\right\}$$
$$= -\sum_{n,n'}W_n\tilde E\left|F_{nn'}\right|^2\left(\exp\left\{\frac{i\tilde E(t'-t'')}{\hbar}\right\} + {\rm c.c.}\right). \tag{7.104}$$

Now, if we try to integrate each term of this sum, as Eq. (103) seems to require, we will see that the lower-limit substitution (at t', t'' → –∞) is uncertain because the exponents oscillate without decay. This

37 This is exactly the moment when, in this approach, the reversible quantum dynamics of the formally-Hamiltonian full system (s + e) is replaced with the irreversible dynamics of the system s. At the conditions formulated at the beginning of this section, this replacement is physically justified – see also the discussions in Secs. 2.5, 6.6, and 6.7.

mathematical difficulty may be overcome by the following physical reasoning. As illustrated by the example considered in the previous section, coupling to a disordered environment makes the “memory horizon” of the system of our interest (s) finite: its current state does not depend on its history beyond a certain time scale.38 As a result, the function under the integrals of Eq. (103), i.e. the sum (104), should self-average at a certain finite time. A simplistic technique for expressing this fact mathematically is just dropping the lower-limit substitution; this would give the correct result for Eq. (103). However, a better (mathematically more acceptable) trick is to first multiply the functions under the integrals by, respectively, exp{–ε(t – t')} and exp{–ε(t' – t'')}, where ε is a very small positive constant, then carry out the integration, and after that follow the limit ε → 0. The physical justification of this procedure may be provided by saying that the system’s behavior should not be affected if its interaction with the environment was not kept constant but rather turned on gradually and very slowly – say, exponentially with an infinitesimal rate ε. With this modification, Eq. (103) becomes

$$\left\langle\hat F(t)\right\rangle = -\frac{1}{\hbar^2}\sum_{n,n'}W_n\tilde E\left|F_{nn'}\right|^2\lim_{\varepsilon\to0}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\,\hat x(t'')\left[\exp\left\{\frac{i\tilde E(t'-t'')}{\hbar}\right\}+{\rm c.c.}\right]\exp\left\{\varepsilon(t''-t)\right\}. \tag{7.105}$$

This double integration is over the area shaded in Fig. 6, which makes it obvious that the order of integration may be changed to the opposite one as

$$\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt'' = \int_{-\infty}^{t}dt''\int_{t''}^{t}dt' = \int_{-\infty}^{t}dt''\int_{t''-t}^{0}d(t'-t) = \int_{-\infty}^{t}dt''\int_{0}^{\tau}d\tau', \tag{7.106}$$

where τ' ≡ t – t', and τ ≡ t – t''.

Fig. 7.6. The 2D integration area in Eqs. (105) and (106).

As a result, Eq. (105) may be rewritten as a single integral,

$$\left\langle\hat F(t)\right\rangle = \int_{-\infty}^{t}G(t-t'')\,\hat x(t'')\,dt'' = \int_{0}^{\infty}G(\tau)\,\hat x(t-\tau)\,d\tau, \tag{7.107}$$

whose kernel,

Environment’s response function:
$$G(\tau\geq0) = -\frac{1}{\hbar^2}\sum_{n,n'}W_n\tilde E\left|F_{nn'}\right|^2\lim_{\varepsilon\to0}\int_{0}^{\tau}\left[\exp\left\{\frac{i\tilde E(\tau-\tau')}{\hbar}\right\}+{\rm c.c.}\right]e^{-\varepsilon\tau}\,d\tau'$$
$$= -\frac{2}{\hbar}\lim_{\varepsilon\to0}e^{-\varepsilon\tau}\sum_{n,n'}W_n\left|F_{nn'}\right|^2\sin\frac{\tilde E\tau}{\hbar} = -\frac{2}{\hbar}\sum_{n,n'}W_n\left|F_{nn'}\right|^2\sin\frac{\tilde E\tau}{\hbar}, \tag{7.108}$$

38 Indeed, this is true for virtually any real physical system – in contrast to idealized models such as a dissipation-free oscillator that swings for ever and ever with the same amplitude and phase, thus “remembering” the initial conditions.

does not depend on the particular law of evolution of the system (s) under study, i.e. provides a general characterization of its coupling to the environment. In Eq. (107) we may readily recognize the most general form of the linear response of a system (in our case, the environment), taking into account the causality principle, where G(τ) is the response function (also called the “temporal Green’s function”) of the environment. Now comparing Eq. (108) with Eq. (99), we get a wonderfully simple universal relation,

Green-Kubo formula:
$$\left\langle\left[\hat{\tilde F}(\tau), \hat{\tilde F}(0)\right]\right\rangle = -i\hbar\,G(\tau), \tag{7.109}$$

that emphasizes once again the quantum nature of the correlation function’s time asymmetry. (This relation, called the Green-Kubo (or just “Kubo”) formula after the works by Melville Green in 1954 and Ryogo Kubo in 1957, does not come up in the easier derivations of the FDT, mentioned in the beginning of this section.) However, for us the relation between the function G(τ) and the force’s anti-commutator,

$$\left\langle\left\{\hat{\tilde F}(t+\tau), \hat{\tilde F}(t)\right\}\right\rangle \equiv \left\langle\hat{\tilde F}(t+\tau)\hat{\tilde F}(t) + \hat{\tilde F}(t)\hat{\tilde F}(t+\tau)\right\rangle = K_F^+(\tau) + K_F^-(\tau), \tag{7.110}$$

is much more important, because of the following reason. Eqs. (97)-(98) show that the so-called symmetrized correlation function,

Symmetrized correlation function:
$$K_F(\tau) \equiv \frac{K_F^+(\tau)+K_F^-(\tau)}{2} \equiv \frac12\left\langle\left\{\hat{\tilde F}(\tau), \hat{\tilde F}(0)\right\}\right\rangle = \lim_{\varepsilon\to0}\,e^{-\varepsilon|\tau|}\sum_{n,n'}W_n\left|F_{nn'}\right|^2\cos\frac{\tilde E\tau}{\hbar} \to \sum_{n,n'}W_n\left|F_{nn'}\right|^2\cos\frac{\tilde E\tau}{\hbar}, \tag{7.111}$$

which is an even function of the time difference τ, looks very similar to the response function (108), “only” with another trigonometric function under the sum, and a different constant front factor.39 This similarity may be used to obtain a direct algebraic relation between the Fourier images of these two functions of τ. Indeed, the function (111) may be represented as the Fourier integral40

$$K_F(\tau) = \int_{-\infty}^{+\infty}S_F(\omega)\,e^{-i\omega\tau}d\omega = 2\int_{0}^{\infty}S_F(\omega)\cos\omega\tau\,d\omega, \tag{7.112}$$

with the reciprocal transform

$$S_F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}K_F(\tau)\,e^{i\omega\tau}d\tau = \frac{1}{\pi}\int_{0}^{\infty}K_F(\tau)\cos\omega\tau\,d\tau, \tag{7.113}$$

of the symmetrized spectral density of the variable F, defined as

Symmetrized spectral density:
$$S_F(\omega)\,\delta(\omega-\omega') \equiv \frac12\left\langle\hat F_\omega\hat F_{\omega'}^\dagger + \hat F_{\omega'}^\dagger\hat F_\omega\right\rangle \equiv \frac12\left\langle\left\{\hat F_\omega, \hat F_{\omega'}^\dagger\right\}\right\rangle, \tag{7.114}$$

39 For the heroic reader who has suffered through the calculations up to this point: our conceptual work is done! What remains is just some simple math to bring the relation between Eqs. (108) and (111) to an explicit form.

40 Due to their practical importance, and certain mathematical issues of their justification for random functions, Eqs. (112)-(113) have their own grand name, the Wiener-Khinchin theorem, though the math rigor aside, they are just a straightforward corollary of the standard Fourier integral transform (115).

where the function $\hat F_\omega$ (also a Heisenberg-picture operator rather than a c-number!) is defined as

$$\hat F_\omega \equiv \frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{\tilde F}(t)\,e^{i\omega t}dt, \qquad\text{so that}\quad \hat{\tilde F}(t) = \int_{-\infty}^{+\infty}\hat F_\omega\,e^{-i\omega t}d\omega. \tag{7.115}$$

The physical meaning of the function SF(ω) becomes clear if we write Eq. (112) for the particular case τ = 0:

$$K_F(0) = \left\langle\hat{\tilde F}^2\right\rangle = \int_{-\infty}^{+\infty}S_F(\omega)\,d\omega = 2\int_{0}^{\infty}S_F(\omega)\,d\omega. \tag{7.116}$$

0

This formula infers that if we pass the function F(t) through a linear filter cutting from its frequency spectrum a narrow band dω of physical (positive) frequencies, then the variance Ff2 of the filtered signal Ff(t) would be equal to 2SF(ω)dω – hence the name “spectral density”.41 ~ Let us use Eqs. (111) and (113) to calculate the spectral density of fluctuations F t  in our model, using the same -trick as at the deviation of Eq. (108), to quench the upper-limit substitution: ~  E   i 1 S F     Wn Fnn' e e d lim 0  cos 2  n ,n '     iE~    i 1 2 Wn Fnn' lim  0  exp    c.c. e e d  2 n ,n '     0   2



1 2

W n,n '

n

(7.117)

  1 1 2 Fnn' lim  0  ~  . ~ i E /      i  E /      









Now it is a convenient time to recall that each of the two summations here is over the eigenenergies of the environment, whose spectrum is virtually continuous because of its large size, so we may transform each sum into an integral – just as this was done in Sec. 6.6:

$$\sum_n \ldots \;\to\; \int \ldots\, dn = \int \ldots\, \rho(E_n)\,dE_n, \tag{7.118}$$

where ρ(E) ≡ dn/dE is the environment’s density of states at a given energy. This transformation yields

$$S_F(\omega) = \frac{1}{2\pi}\lim_{\varepsilon\to0}\int dE_n\,W(E_n)\,\rho(E_n)\int dE_{n'}\,\rho(E_{n'})\left|F_{nn'}\right|^2\left[\frac{1}{i(\tilde E/\hbar+\omega)+\varepsilon}+\frac{1}{-i(\tilde E/\hbar-\omega)+\varepsilon}\right]. \tag{7.119}$$

Since the expression inside the square brackets depends only on a specific linear combination of the two energies, namely on $\tilde E \equiv E_n - E_{n'}$, it is convenient to introduce another, linearly-independent combination of the energies, for example, the average energy $E \equiv (E_n + E_{n'})/2$, so the state energies may be represented as

$$E_n = E + \frac{\tilde E}{2}, \qquad E_{n'} = E - \frac{\tilde E}{2}. \tag{7.120}$$

41 An alternative popular measure of the spectral density of a process F(t) is $S_F(\nu) \equiv \langle\tilde F_f^2\rangle/d\nu = 4\pi S_F(\omega)$, where ν = ω/2π is the “cyclic” frequency (measured in Hz).

With this notation, Eq. (119) becomes

$$S_F(\omega) = \frac{1}{2\pi}\lim_{\varepsilon\to0}\int dE\left\{\int d\tilde E\,W\!\left(E+\frac{\tilde E}{2}\right)\rho\!\left(E+\frac{\tilde E}{2}\right)\rho\!\left(E-\frac{\tilde E}{2}\right)\left|F_{nn'}\right|^2\frac{1}{i(\tilde E/\hbar+\omega)+\varepsilon}\right.$$
$$\left. + \int d\tilde E\,W\!\left(E+\frac{\tilde E}{2}\right)\rho\!\left(E+\frac{\tilde E}{2}\right)\rho\!\left(E-\frac{\tilde E}{2}\right)\left|F_{nn'}\right|^2\frac{1}{-i(\tilde E/\hbar-\omega)+\varepsilon}\right\}. \tag{7.121}$$

Due to the smallness of the parameter ℏε (which should be much smaller than all genuine energies of the problem, including kBT, ℏω, En, and En’), each of the internal integrals in Eq. (121) is dominated by an infinitesimal vicinity of one point, Ẽ = ∓ℏω. In these vicinities, the state densities, the matrix elements, and the Gibbs probabilities do not change considerably and may be taken out of the integral, which may be then worked out explicitly:42

$$S_F(\omega) = \frac{1}{2\pi}\lim_{\varepsilon\to0}\int dE\left[\left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_-\int\frac{d\tilde E}{i(\tilde E/\hbar+\omega)+\varepsilon} + \left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_+\int\frac{d\tilde E}{-i(\tilde E/\hbar-\omega)+\varepsilon}\right]$$
$$= \frac{1}{2\pi}\lim_{\varepsilon\to0}\int dE\left[\left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_-\int\frac{\hbar^2\varepsilon\,d\tilde E}{(\tilde E+\hbar\omega)^2+(\hbar\varepsilon)^2} + \left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_+\int\frac{\hbar^2\varepsilon\,d\tilde E}{(\tilde E-\hbar\omega)^2+(\hbar\varepsilon)^2}\right]$$
$$= \frac{\hbar}{2}\int\left[\left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_+ + \left(W\rho_n\rho_{n'}\left|F_{nn'}\right|^2\right)_-\right]dE, \tag{7.122}$$

where the indices ± mark the functions’ values at the special points Ẽ = ±ℏω, i.e. En = En’ ± ℏω. The physics of these points becomes simple if we interpret the state n, for which the equilibrium Gibbs distribution function equals Wn, as the initial state of the environment, and n’ as its final state. Then the top-sign point corresponds to En’ = En – ℏω, i.e. to the result of emission of one energy quantum ℏω of the “observation” frequency ω by the environment to the system s of our interest, while the bottom-sign point En’ = En + ℏω corresponds to the absorption of such a quantum by the environment. As Eq. (122) shows, both processes give similar, positive contributions to the force fluctuations.

The situation is different for the Fourier image of the response function G(τ),43

Generalized susceptibility:
$$\chi(\omega) \equiv \int_0^\infty G(\tau)\,e^{i\omega\tau}d\tau, \tag{7.123}$$

that is usually called the generalized susceptibility – in our case, of the environment. Its physical meaning is that according to Eq. (107), the complex function χ(ω) ≡ χ'(ω) + iχ''(ω) relates the Fourier amplitudes of the generalized coordinate and the generalized force:44

$$\hat F_\omega = \chi(\omega)\,\hat x_\omega. \tag{7.124}$$

42 Using, e.g., MA Eq. (6.5a). (The imaginary parts of the integrals vanish, because the integration in infinite limits may always be re-centered to the finite points ∓ℏω.) A math-enlightened reader may have noticed that the integrals might be taken without the introduction of small ε, by using the Cauchy theorem – see MA Eq. (15.1).

43 The integration in Eq. (123) may be extended to the whole time axis, –∞ < τ < +∞, if we complement the definition (107) of the function G(τ) for τ > 0 with its definition as G(τ) = 0 for τ < 0, in correspondence with the causality principle.
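The chain (107) → (123) → (124) can be illustrated numerically. The sketch below assumes a toy causal kernel G(τ) = g₀exp{–τ/τ₀} (my illustrative choice only, not a result of the theory above), drives it with x(t) = x₀cos ωt via the convolution (107), and recovers χ'(ω) and χ''(ω) from the cosine and sine quadratures of the resulting steady-state force:

```python
import numpy as np

g0, tau0, omega, x0, dt = 1.0, 0.7, 2.0, 1.0, 2e-3

tau = np.arange(0.0, 20*tau0, dt)
G = g0*np.exp(-tau/tau0)                      # toy causal response function

t = np.arange(0.0, 40.0, dt)
x = x0*np.cos(omega*t)

# causal convolution of Eq. (107): F(t) = integral of G(tau)*x(t - tau) over tau > 0
F = np.convolve(x, G)[:t.size]*dt

mask = t > 15.0                               # skip the switch-on transient
chi_p = 2*np.mean(F[mask]*np.cos(omega*t[mask]))/x0    # chi'(omega)
chi_pp = 2*np.mean(F[mask]*np.sin(omega*t[mask]))/x0   # chi''(omega)

chi = g0/(1/tau0 - 1j*omega)                  # Eq. (123) for this kernel, analytically
print(chi_p, chi.real)
print(chi_pp, chi.imag)
```

The numerically extracted quadratures agree with the analytical Fourier transform of the kernel, confirming that the convolution (107) acts, for each frequency, as the multiplication (124).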

The physics of its imaginary part χ''(ω) is especially clear. Indeed, if x represents a sinusoidal classical oscillation waveform, say

$$x(t) = x_0\cos\omega t = \frac{x_0}{2}e^{-i\omega t} + \frac{x_0}{2}e^{+i\omega t}, \qquad\text{i.e.}\quad x_\omega = x_{-\omega} = \frac{x_0}{2}, \tag{7.125}$$

then, in accordance with the correspondence principle, Eq. (124) should hold for the c-number complex amplitudes Fω and xω, enabling us to calculate the time dependence of the force as

$$F(t) = F_\omega e^{-i\omega t} + F_{-\omega}e^{+i\omega t} = \chi(\omega)x_\omega e^{-i\omega t} + \chi(-\omega)x_{-\omega}e^{+i\omega t} = \frac{x_0}{2}\left[\chi(\omega)e^{-i\omega t} + \chi^*(\omega)e^{+i\omega t}\right]$$
$$= \frac{x_0}{2}\left[\left(\chi'+i\chi''\right)e^{-i\omega t} + \left(\chi'-i\chi''\right)e^{+i\omega t}\right] = x_0\left[\chi'(\omega)\cos\omega t + \chi''(\omega)\sin\omega t\right], \tag{7.126}$$

where we have used the relation χ(–ω) = χ*(ω), which follows from the definition (123) and the reality of G(τ).

We see that χ''(ω) weighs the force's part (frequently called quadrature) that is π/2-shifted from the coordinate x, i.e. is in phase with its velocity, and hence characterizes the time-averaged power flow from the system into its environment, i.e. the energy dissipation rate: 45

P̄ = −⟨F(t) ẋ(t)⟩ = −⟨x₀[χ'(ω) cos ωt + χ''(ω) sin ωt] × (−x₀ω sin ωt)⟩ = (x₀²/2) ω χ''(ω).  (7.127)
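The time averaging in Eq. (127) is easy to verify numerically. In the sketch below, all parameter values (x₀, ω, χ', χ'') are arbitrary illustrative assumptions, not numbers from the text; the instantaneous power −F(t)ẋ(t) is averaged over one period and compared with (x₀²/2)ωχ''.

```python
import numpy as np

# Arbitrary illustrative values (assumptions, not from the text)
x0, omega = 1.3, 2.0          # oscillation amplitude and frequency
chi_p, chi_pp = 0.7, 0.4      # chi'(omega) and chi''(omega)

# One full period, sampled uniformly (endpoint excluded so the mean is exact)
t = np.linspace(0.0, 2*np.pi/omega, 4096, endpoint=False)

x_dot = -x0*omega*np.sin(omega*t)                        # velocity of x = x0*cos(wt)
F = x0*(chi_p*np.cos(omega*t) + chi_pp*np.sin(omega*t))  # force, Eq. (126)

P_avg = -(F*x_dot).mean()              # time-averaged power flow into the environment
print(P_avg, 0.5*x0**2*omega*chi_pp)   # the two values coincide, as in Eq. (127)
```

Changing χ' has no effect on P_avg: only the quadrature component χ'' survives the averaging, confirming that dissipation is controlled by the imaginary part of the susceptibility alone.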

Let us calculate this function from Eqs. (108) and (123), just as we have done for the spectral density of fluctuations:

χ''(ω) ≡ Im ∫₀^∞ G(τ) e^{iωτ} dτ = Σ_{n,n'} W_n |F_nn'|² lim_{δ→0} Im (i/ħ) ∫₀^∞ [exp{i(ω + Ẽ/ħ)τ} − exp{i(ω − Ẽ/ħ)τ}] e^{−δτ} dτ
  = Σ_{n,n'} W_n |F_nn'|² lim_{δ→0} (1/ħ) [δ/((ω + Ẽ/ħ)² + δ²) − δ/((ω − Ẽ/ħ)² + δ²)]
  = Σ_{n,n'} W_n |F_nn'|² lim_{δ→0} [ħδ/((Ẽ + ħω)² + (ħδ)²) − ħδ/((Ẽ − ħω)² + (ħδ)²)].  (7.128)

Making the transfer (118) from the double sum to the double integral, and then the integration variable transfer (120), we get

44 To prove this relation, it is sufficient to plug the expression x̂_s = x̂_ω e^{−iωt}, or any sum of such exponents, into Eq. (107) and then use the definition (123). This (simple) exercise is highly recommended to the reader.
45 The minus sign in Eq. (127) is due to the fact that according to Eq. (90), F is the force exerted on our system (s) by the environment, so the force exerted by our system on the environment is −F. With this sign clarification, the expression P̄ = −⟨Fẋ⟩ = −⟨Fv⟩ for the instant power flow is evident if x is the usual Cartesian coordinate of a 1D particle. However, according to analytical mechanics (see, e.g., CM Chapters 2 and 10), it is also valid for any {generalized coordinate, generalized force} pair that forms the interaction Hamiltonian (90).


χ''(ω) = lim_{δ→0} { ∫dE ∫dẼ W(E + Ẽ/2) ρ(E + Ẽ/2) ρ(E − Ẽ/2) |F_nn'|² ħδ/[(Ẽ + ħω)² + (ħδ)²]
  − ∫dE ∫dẼ W(E + Ẽ/2) ρ(E + Ẽ/2) ρ(E − Ẽ/2) |F_nn'|² ħδ/[(Ẽ − ħω)² + (ħδ)²] }.  (7.129)



Now using the same argument about the smallness of the parameter δ as above, we may take the state densities, the matrix elements of the force, and the Gibbs probabilities out of the integrals over Ẽ, and work out the remaining integrals, getting a result very similar to Eq. (122):

χ''(ω) = π ∫ ρ↑ρ↓ (W↓ |F↓|² − W↑ |F↑|²) dE,  (7.130)

where, as before, the indices ↑ and ↓ mark the values of the functions at the two special points Ẽ = ±ħω, i.e. at the environment energies E↑↓ = E ± ħω/2.

To relate these results, it is sufficient to notice that according to Eq. (24), the Gibbs probabilities W↑↓ are related by a coefficient depending only on the temperature T and the observation frequency ω:

W↑↓ = W(E ± ħω/2) = (1/Z) exp{−(E ± ħω/2)/k_BT} = W(E) exp{∓ħω/2k_BT},  (7.131)

so both the spectral density (122) and the dissipative part (130) of the generalized susceptibility may be expressed via the same integral over the environment energies:

S_F(ω) = (ħ/2) cosh(ħω/2k_BT) ∫ ρ↑ρ↓ W(E) (|F↑|² + |F↓|²) dE,  (7.132)

χ''(ω) = π sinh(ħω/2k_BT) ∫ ρ↑ρ↓ W(E) (|F↑|² + |F↓|²) dE,  (7.133)

and hence are universally related as

S F ( ) 

  . " ( ) coth 2 2 k BT

(7.134)

This is, finally, the much-celebrated Callen-Welton fluctuation-dissipation theorem (FDT). It reveals a fundamental, intimate relationship between these two effects of the environment ("no dissipation without fluctuation") – hence the name. A curious feature of the FDT is that Eq. (134) includes the same function of temperature as the average energy (26) of a quantum oscillator of frequency ω, though, as the reader could witness, the notion of the oscillator was by no means used in its derivation. As we will see in the next section, this fact leads to rather interesting consequences and even conceptual opportunities. In the classical limit, ħω/k_BT << 1, the hyperbolic cotangent tends to 2k_BT/ħω, and Eq. (134) is reduced to the classical relation S_F(ω) = (k_BT/πω) χ''(ω).
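Within the conventions used above, the temperature dependence described by Eq. (134) is easy to explore numerically. The sketch below assumes, purely for illustration, an Ohmic environment with χ''(ω) = ηω (the Ohmic model and the value of η are my assumptions, not results of the text), and checks the two limits of the coth factor.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI values of hbar and k_B

# Ohmic-environment model chi''(omega) = eta*omega: an illustrative
# assumption, not a result derived in the text
eta = 1.0e-12

def S_F(omega, T):
    """Force spectral density from the FDT, Eq. (134)."""
    chi_pp = eta*omega
    return hbar/(2*np.pi)*chi_pp/np.tanh(hbar*omega/(2*kB*T))

# Classical limit (hbar*omega << kB*T): S_F -> chi''*kB*T/(pi*omega) = eta*kB*T/pi
print(S_F(1.0e6, 300.0), eta*kB*300.0/np.pi)

# Low-temperature limit (hbar*omega >> kB*T): coth -> 1, leaving the
# temperature-independent quantum (zero-point) noise (hbar/2pi)*chi''
print(S_F(1.0e15, 0.1), hbar/(2*np.pi)*eta*1.0e15)
```

The second limit illustrates the "no dissipation without fluctuation" statement: even at T → 0, a dissipative (χ'' ≠ 0) environment produces force fluctuations.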

82 See SM Secs. 5.6-5.7. Some examples of quantum effects in dissipative systems with continuous spectra, mostly for particular models of the environment, may be found, e.g., in the monographs by U. Weiss, Quantum Dissipative Systems, 2nd ed., World Scientific, 1999, and H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, Oxford U. Press, 2007, and in references therein.
83 This state is frequently used to discuss the well-known Schrödinger cat paradox – see Sec. 10.1 below.

7.5.* A harmonic oscillator is weakly coupled to an Ohmic environment that is in thermal equilibrium at temperature T.
(i) Use the rotating-wave approximation to write the reduced equations of motion for the Heisenberg operators of the complex amplitude of oscillations.
(ii) Calculate the expectation values of the correlators of the fluctuation force operators participating in these equations, and express them via the average number n_e of thermally-induced excitations in equilibrium, given by the second of Eqs. (26b).

7.6. Calculate the average potential energy of the long-range electrostatic interaction between two similar isotropic 3D harmonic oscillators, each with the electric dipole moment d = qs, where s is the oscillator's displacement from its equilibrium position, at arbitrary temperature T.

7.7. A semi-infinite string with mass μ per unit length is attached to a wall and stretched with a constant force (tension) 𝒯. Calculate the spectral density of the transverse force exerted on the wall, in thermal equilibrium at temperature T.

7.8.* Calculate the low-frequency spectral density of small fluctuations of the voltage V across a Josephson junction shunted with an Ohmic conductor and biased with a dc external current I > I_c.
Hint: You may use Eqs. (1.73)-(1.74) to describe the junction's dynamics, and assume that the shunting conductor remains in thermal equilibrium.

7.9. Prove that in the interaction picture of quantum dynamics, the expectation value of an arbitrary observable A may be indeed calculated using Eq. (167).

7.10. Show that the quantum-mechanical Golden Rule (6.149) and the master equation (196) give the same results for the rate of spontaneous quantum transitions n' → n in a system with a discrete energy spectrum, which is weakly coupled to a low-temperature heat bath (with k_BT << ħω, where ω is the transition frequency).

7.11. A spin-½ with gyromagnetic ratio γ, weakly coupled to a dissipative environment in thermal equilibrium at temperature T, was placed into a magnetic field of a magnitude B >> k_BT/ħγ, and let relax into its ground state. Then the direction of the field was suddenly changed by π/2 and kept constant after that.
(i) Calculate the time evolution of the spin's density matrix, in any basis you like.
(ii) Calculate the time evolution of the expectation value ⟨S⟩ of the spin vector, and sketch its trajectory.

7.12. A spin-½ with gyromagnetic ratio γ is placed into the magnetic field B(t) = B₀ + B̃(t) with an arbitrary but relatively small time-dependent component, and is also weakly coupled to a dissipative environment in thermal equilibrium at temperature T. Derive the differential equations describing the time evolution of the expectation values of the spin's Cartesian components.


7.13. Use the Bloch equations derived in the previous problem to analyze the magnetic resonance84 of a spin-½ which is weakly connected to a dissipative environment in thermal equilibrium. Use the result for a semi-quantitative discussion of the environmental broadening of arbitrary quantum transitions in systems with discrete energy spectra.
Hint: You may use the same rotating-field model as in Problem 5.5.

7.14. Use the Bloch equations (see the solution of Problem 12) to analyze the dynamics of a spin-½ with gyromagnetic ratio γ, under the effect of an external ac magnetic field with a relatively low frequency ω and large amplitude B_max (so that γB_max >> ω, 1/T_{1,2}), assuming that the constants T_{1,2} are field-independent.

7.15. Derive Eq. (7.220) of the lecture notes from Eq. (7.222).

7.16. For a harmonic oscillator with weak Ohmic dissipation, use Eqs. (223)-(224) to find the time evolution of the expectation value ⟨E⟩ of the oscillator's energy for an arbitrary initial state, and compare the result with that following from the Heisenberg-Langevin approach.

7.17. Derive Eq. (234) in an alternative way – by using an expression dual to Eq. (4.244).

84 See the discussion in Sec. 5.2 and the solution of Problem 5.5.


Chapter 8. Multiparticle Systems

This chapter provides a brief introduction to the quantum mechanics of systems of similar particles, with special attention to the case when they are indistinguishable. For such systems, the theory predicts (and experiment confirms) very specific effects even in the case of negligible explicit ("direct") interactions between the particles – in particular, the Bose-Einstein condensation of bosons and the exchange interaction of fermions. In contrast, the last section of the chapter is devoted to quite a different topic – the quantum entanglement of distinguishable systems, and attempts to use this effect for high-performance processing of information.

8.1. Distinguishable and indistinguishable particles

The importance of quantum systems of many similar particles is probably self-evident; just the very fact that most atoms include several/many electrons is sufficient to attract our attention. There are also important systems where the total number of electrons is much higher than in one atom; for example, a cubic centimeter of a typical metal houses ~10²³ conduction electrons that cannot be attributed to particular atoms, and have to be considered common parts of the system as a whole. Though quantum mechanics offers virtually no exact analytical results for systems of substantially interacting particles,1 it reveals very important new quantum effects even in the simplest cases when the particles do not interact, at least explicitly (directly).

If non-interacting particles are either different from each other by their nature, or physically similar but still distinguishable because of other reasons, everything is simple – at least, conceptually. Then, as was already discussed in Sec. 6.7, a system of two particles, 1 and 2, each in a pure quantum state, may be described by a state vector

Distinguishable particles
Ψ = |α⟩₁ ⊗ |α'⟩₂,  (8.1a)

which is a direct product of single-particle vectors describing their states α and α', defined in different Hilbert spaces. (Below, I will frequently use, for this direct product, the following convenient shorthand:

Ψ = |αα'⟩,  (8.1b)

in which the particle's number is coded by the state symbol's position.) Hence the permuted state

P̂|αα'⟩ = |α'α⟩,  (8.2)

where P̂ is the permutation operator (which is defined by Eq. (2) itself), is different from the initial one.

The permutation operator may also be used for states of systems of identical particles. In physics, the last term may be used to describe:

1 As was already noted in Sec. 7.3, for such systems of similar particles, the powerful methods discussed in the last chapter do not work well, because of the absence of a clear difference between some "system of interest" and elementary parts of its "environment".

© K. Likharev


(i) the "really elementary" particles like electrons, which (at least at this stage of development of physics) are considered as structureless entities, and hence are all identical;
(ii) any objects (e.g., hadrons or mesons) that may be considered as systems of "more elementary" particles (e.g., quarks and gluons), but are placed in the same internal quantum state – most simply, though not necessarily, in the ground state.2

It is important to note that identical particles still may be distinguishable – say, by their clear spatial separation. Such systems of similar but distinguishable particles (or subsystems) are broadly discussed nowadays in the context of quantum computing and encryption – see Sec. 5 below. This is why it is insufficient to use the term "identical particles" if we want to say that they are genuinely indistinguishable, so below I will use the latter term, despite it being rather unpleasant grammatically.

It turns out that for a quantitative description of systems of indistinguishable particles, we need to use, instead of direct products of the type (1), linear combinations of such direct products – in the above example, of |αα'⟩ and |α'α⟩.3 To see that, let us discuss the properties of the permutation operator defined by Eq. (2). Consider an observable A, and a system of eigenstates/eigenvalues of its operator:

Â|a_j⟩ = A_j|a_j⟩.  (8.3)

If the particles are indistinguishable, the observable's expectation value should not be affected by their permutation. Hence the operators Â and P̂ have to commute and share their eigenstates. This is why the eigenstates of the operator P̂ are so important: in particular, they include the eigenstates of the Hamiltonian, i.e. the stationary states of the system.

Let us have a look at the action, on an elementary direct product, of the permutation operator squared:

P̂²|αα'⟩ = P̂(P̂|αα'⟩) = P̂|α'α⟩ = |αα'⟩,  (8.4)

i.e. P̂² brings the state back to its original form. Since any pure state of a two-particle system may be represented as a linear combination of such products, this result does not depend on the state, and may be represented as the following operator relation:

P̂² = Î.  (8.5)

Now let us find the possible eigenvalues P_j of the permutation operator. Acting by both sides of Eq. (5) on any of the eigenstates ψ_j of the permutation operator, we get a very simple equation for its eigenvalues:

2 Note that from this point of view, even complex atoms or molecules, in the same internal quantum state, may be considered on the same footing as the "really elementary" particles. For example, the already mentioned recent spectacular interference experiments by R. Lopes et al., which require particle identity, were carried out with couples of ⁴He atoms in the same internal quantum state.
3 A very legitimate question is why, in this situation, we need to introduce the particles' numbers to start with. A partial answer is that in this approach, it is much simpler to derive (or guess) the system's Hamiltonians from the correspondence principle – see, e.g., Eq. (27) below. Later in this chapter, we will discuss an alternative approach (the so-called "second quantization"), in which particle numbering is avoided. While that approach is more logical, writing adequate Hamiltonians (which, in particular, would avoid spurious self-interaction of the particles) within it is more challenging – see Sec. 3 below.

P_j² = 1,  (8.6)

with two possible solutions:

P_j = ±1.  (8.7)

Let us find the eigenstates of the permutation operator in the simplest case when each of the two particles can be only in one of two single-particle states – say, |α⟩ and |α'⟩. Evidently, none of the simple products |αα'⟩ and |α'α⟩, taken alone, qualifies for such an eigenstate – unless the states α and α' are identical. This is why let us try their linear combination

ψ_j = a|αα'⟩ + b|α'α⟩,  (8.8)

giving

P̂ψ_j = P_jψ_j = a|α'α⟩ + b|αα'⟩.  (8.9)

For the case P_j = +1, we have to require the states (8) and (9) to be the same, so a = b, giving the so-called symmetric eigenstate4

Symmetric entangled eigenstate
ψ₊ = (1/√2)(|αα'⟩ + |α'α⟩),  (8.10)

where the front coefficient guarantees the orthonormality of the two-particle state vectors, provided that the single-particle vectors are orthonormal. Similarly, for P_j = −1 we get a = −b, i.e. an antisymmetric eigenstate

Antisymmetric entangled eigenstate
ψ₋ = (1/√2)(|αα'⟩ − |α'α⟩).  (8.11)

These are the simplest (two-particle, two-state) examples of entangled states, defined as multiparticle system states whose vectors cannot be factored into direct products of single-particle vectors.

So far, our math does not preclude either sign of P_j, in particular the possibility that the sign would depend on the state (i.e. on the index j). Here, however, comes a crucial fact: all indistinguishable particles fall into two groups:5
(i) bosons, particles with integer spin s, for whose states P_j = +1, and
(ii) fermions, particles with half-integer spin, with P_j = −1.

This fundamental connection between the particle's spin and parity ("statistics") can be proved using quantum field theory.6 In the non-relativistic quantum mechanics we are discussing now, it is usually considered an experimental fact; however, our discussion of spin in Chapter 5 enables the following interpretation of this connection. In free space, the permutation of particles 1 and 2 may be viewed as a result of their pair's common rotation by the angle Δφ = π about an arbitrary z-axis. As we have seen in Sec. 5.7, at the rotation by this angle, the state vector of a particle with a definite quantum number m_s acquires an extra factor exp{iπm_s}. As a result, for bosons, i.e. the particles with integer s, m_s can take only integer values, so exp{iπm_s} = ±1, and the product of two such factors in the state vector |αα'⟩ is equal to (±1)² = +1. On the contrary, for the fermions with their half-integer s, all m_s are half-integer as well, so exp{iπm_s} = ±i, and the product of two such factors in the state vector |αα'⟩ is equal to (±i)² = −1.7

4 As in many situations we have met before, the kets given by Eqs. (10) and (11) may be multiplied by the common factor exp{iφ} with an arbitrary real phase φ. However, until we discuss coherent superpositions of various states α, there is no good motivation for taking φ different from 0; that would only clutter the notation.
5 This fact is often described as two different "statistics": the Bose-Einstein statistics of bosons and the Fermi-Dirac statistics of fermions, because their statistical distributions in thermal equilibrium are indeed different – see, e.g., SM Sec. 2.8. However, this difference is actually deeper: we are dealing with two different quantum mechanics.
6 Such proofs were first offered by M. Fierz in 1939 and W. Pauli in 1940, and later refined by others.

The most impressive corollaries of Eqs. (10) and (11) are for the case when the partial states of the two particles are the same: α = α'. The corresponding Bose state ψ₊ defined by Eq. (10) is possible; in particular, at sufficiently low temperatures, a set of many non-interacting Bose particles may be in the same ground state – the so-called Bose-Einstein condensate ("BEC").8 The most fascinating feature of the condensates is that their dynamics is governed by quantum-mechanical laws, which may show up in the behavior of their observables with virtually no quantum uncertainties9 – see, e.g., Eqs. (1.73)-(1.74).

On the other hand, if we take α = α' in Eq. (11), we see that the Fermi state ψ₋ becomes the null state, i.e. cannot exist at all. This is the mathematical expression of the Pauli exclusion principle:10 two indistinguishable fermions cannot be placed into the same quantum state. (As will be discussed below, this is true for systems with more than two fermions as well.) Perhaps, the key importance of this principle is obvious: if it were not valid for electrons (which are fermions), all electrons of each atom would condense in their ground (1s-like) state, and all the usual chemistry (and biochemistry, and biology, including dear us!) would not exist.
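The statements of Eqs. (5)-(11), including the Pauli-principle corollary, are easy to verify with a small numerical model. In the sketch below, the two-dimensional single-particle space and the basis ordering are my illustrative choices; the SWAP matrix plays the role of the permutation operator P̂.

```python
import numpy as np

# Two-dimensional single-particle toy space; |alpha>, |alpha'> are orthonormal
alpha = np.array([1.0, 0.0])
alphap = np.array([0.0, 1.0])

# Permutation operator P on the 4D direct-product space: P |a>|b> = |b>|a>
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2*j + i, 2*i + j] = 1.0

# Eq. (5): P^2 = I, so the eigenvalues obey Eq. (7): P_j = +1 or -1
assert np.allclose(P @ P, np.eye(4))
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [-1.0, 1.0, 1.0, 1.0])

# Eqs. (10)-(11): symmetric and antisymmetric entangled eigenstates
psi_plus = (np.kron(alpha, alphap) + np.kron(alphap, alpha))/np.sqrt(2)
psi_minus = (np.kron(alpha, alphap) - np.kron(alphap, alpha))/np.sqrt(2)
assert np.allclose(P @ psi_plus, +psi_plus)    # bosonic sign, P_j = +1
assert np.allclose(P @ psi_minus, -psi_minus)  # fermionic sign, P_j = -1

# Pauli principle: the antisymmetric combination of equal states is null
null = np.kron(alpha, alpha) - np.kron(alpha, alpha)
assert np.allclose(null, 0.0)
print("permutation-symmetry checks passed")
```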
Thus, the Pauli principle makes fermions interact indirectly even if they do not interact directly, in the usual sense of the word "interaction".

8.2. Singlets, triplets, and the exchange interaction

Now let us discuss possible approaches to quantitative analyses of identical particles, starting from the simple case of two spin-½ particles (say, electrons), whose explicit interaction with each other and the external world does not involve spin. The description of such a system may be based on factorable states with ket-vectors

Ψ = |o₁₂⟩ ⊗ |s₁₂⟩,  (8.12)

with the orbital state vector |o₁₂⟩ and the spin vector |s₁₂⟩ belonging to different Hilbert spaces. It is frequently convenient to use the coordinate representation of such a state, sometimes called the spinor:

⟨r₁, r₂|Ψ⟩ = ⟨r₁, r₂|o₁₂⟩ ⊗ |s₁₂⟩ = ψ(r₁, r₂)|s₁₂⟩.  (8.13)

2-particle spinor

7 Unfortunately, the simple generalization of these arguments to an arbitrary quantum state runs into problems, so they cannot serve as a strict proof of the universal relation between s and P_j.
8 For a quantitative discussion of the Bose-Einstein condensation, see, e.g., SM Sec. 3.4. Examples of such condensates include superfluids like helium, Cooper-pair condensates in superconductors, and BECs of weakly interacting atoms.
9 For example, for a coherent condensate of N >> 1 particles, Heisenberg's uncertainty relation takes the form δx·δp = δx(Nmδv) ≥ ħ/2, so its coordinate x and velocity v may be measured simultaneously with much higher precision than those of a single particle.
10 It was first formulated for electrons by Wolfgang Pauli in 1925, on the background of less general rules suggested by Gilbert Lewis (1916), Irving Langmuir (1919), Niels Bohr (1922), and Edmund Stoner (1924) for the explanation of experimental spectroscopic data.

Since spin-½ particles are fermions, the particle permutation has to change the spinor's sign:

P̂ ψ(r₁, r₂)|s₁₂⟩ = ψ(r₂, r₁)|s₂₁⟩ = −ψ(r₁, r₂)|s₁₂⟩,  (8.14)

i.e. to change the sign of either its orbital factor or its spin factor. In particular, in the case of a symmetric orbital factor,

ψ(r₂, r₁) = ψ(r₁, r₂),  (8.15)

the spin factor has to obey the relation

|s₂₁⟩ = −|s₁₂⟩.  (8.16)

Let us use the ordinary z-basis (where z, in the absence of an external magnetic field, is an arbitrary spatial axis) for both spins. In this basis, the ket-vector of any two spins-½ may be represented as a linear combination of the following four basis vectors:

|↑↑⟩,  |↓↓⟩,  |↑↓⟩, and |↓↑⟩.  (8.17)

The first two kets evidently do not satisfy Eq. (16), and cannot participate in the state. Applying to the remaining kets the same argumentation as has resulted in Eq. (11), we get

Singlet state
|s₁₂⟩ = |s₋⟩ ≡ (1/√2)(|↑↓⟩ − |↓↑⟩).  (8.18)

Such an orbital-symmetric and spin-antisymmetric state is called the singlet. The origin of this term becomes clear from the analysis of the opposite (orbital-antisymmetric and spin-symmetric) case:

ψ(r₂, r₁) = −ψ(r₁, r₂),  |s₂₁⟩ = +|s₁₂⟩.  (8.19)

For the composition of such a symmetric spin state, the first two kets of Eq. (17) are completely acceptable (with arbitrary weights), and so is the entangled spin state that is the symmetric combination of the two last kets, similar to Eq. (10):

|s₊⟩ ≡ (1/√2)(|↑↓⟩ + |↓↑⟩),  (8.20)

so the general spin state is a triplet:

Triplet state
|s₁₂⟩ = c₊|↑↑⟩ + c₋|↓↓⟩ + c₀ (1/√2)(|↑↓⟩ + |↓↑⟩).  (8.21)
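The spin-swap parities of the states (18) and (20), and the net S_z values of the three triplet components of Eq. (21), can be checked directly in the four-dimensional spin space. In the sketch below, the basis ordering |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩ and the unit ħ = 1 are my choices.

```python
import numpy as np

hbar = 1.0                                 # work in units of hbar
sz = hbar/2*np.array([[1.0, 0.0], [0.0, -1.0]])   # single spin-1/2 S_z matrix
I2 = np.eye(2)
Sz_total = np.kron(sz, I2) + np.kron(I2, sz)      # z-component of the net spin

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
uu, ud, du, dd = (np.kron(a, b) for a, b in [(up, up), (up, dn), (dn, up), (dn, dn)])

s_minus = (ud - du)/np.sqrt(2)   # singlet, Eq. (18)
s_plus = (ud + du)/np.sqrt(2)    # entangled triplet component, Eq. (20)

# S_z eigenvalues of the three triplet states: +hbar, -hbar, and 0
assert np.allclose(Sz_total @ uu, hbar*uu)
assert np.allclose(Sz_total @ dd, -hbar*dd)
assert np.allclose(Sz_total @ s_plus, 0.0*s_plus)

# The spin-swap operator: the singlet flips sign, the triplet does not
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
assert np.allclose(SWAP @ s_minus, -s_minus)
assert np.allclose(SWAP @ s_plus, +s_plus)
print("singlet/triplet checks passed")
```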

Note that any such state (with any values of the coefficients c satisfying the normalization condition) corresponds to the same orbital wavefunction and hence the same energy. However, each of these three states has a specific value of the z-component of the net spin – evidently equal to, respectively, +ħ, −ħ, and 0. Because of this, even a small external magnetic field lifts their degeneracy, splitting the energy level into three; hence the term "triplet".

In the particular case when the particles do not interact directly, for example

Ĥ = ĥ₁ + ĥ₂,  ĥ_k = p̂_k²/2m + û(r_k),  with k = 1, 2,  (8.22)


the two-particle Schrödinger equation for the symmetrical orbital wavefunction (15) is obviously satisfied by the direct products

ψ(r₁, r₂) = ψ_n(r₁) ψ_n'(r₂),  (8.23)

of single-particle eigenfunctions, with arbitrary sets n, n' of quantum numbers. For the particular but very important case n = n', this means that the eigenenergy of the (only acceptable) singlet state,

(1/√2)(|↑↓⟩ − |↓↑⟩) ψ_n(r₁) ψ_n(r₂),  (8.24)

is just 2ε_n, where ε_n is the single-particle energy level.11 In particular, for the ground state of the system, such a singlet spin state gives the lowest energy E_g = 2ε_g, while any triplet spin state (19) would require one of the particles to be in a different orbital state, i.e. in a state of higher energy, so the total energy of the system would be higher as well.

Now moving to the systems in which two indistinguishable spin-½ particles do interact, let us consider, as the simplest but important12 example, the lower energy states of a neutral atom13 of helium – more exactly, ⁴He. Such an atom consists of a nucleus with two protons and two neutrons, with the total electric charge q = +2e, and two electrons "rotating" about the nucleus. Neglecting the small relativistic effects that were discussed in Sec. 6.3, the Hamiltonian describing the electron motion may be expressed as

Ĥ = ĥ₁ + ĥ₂ + Û_int,  ĥ_k = p̂_k²/2m − 2e²/4πε₀r_k,  Û_int = e²/4πε₀|r₁ − r₂|.  (8.25)

As with most problems of multiparticle quantum mechanics, the eigenvalue/eigenstate problem for this Hamiltonian does not have an exact analytical solution, so let us carry out its approximate analysis considering the electron-electron interaction U_int as a perturbation. As was discussed in Chapter 6, we have to start with the "0th-order" approximation in which the perturbation is ignored, so the Hamiltonian is reduced to the sum (22). In this approximation, the ground state of the atom is the singlet (24), with the orbital factor

ψ_g(r₁, r₂) = ψ₁₀₀(r₁) ψ₁₀₀(r₂),  (8.26)

and energy E_g = 2ε_g. Here each factor ψ₁₀₀(r) is the single-particle wavefunction of the ground (1s) state of the hydrogen-like atom with Z = 2, with quantum numbers n = 1, l = 0, and m = 0 – hence the wavefunctions' indices. According to Eqs. (3.174) and (3.208),

 100 (r )  Y00 ( ,  )R 1, 0 (r ) 

1

2

4 r

3/ 2 0

e

 r / r0

, with r0 

rB rB  , Z 2

(8.27)

and according to Eqs. (3.191) and (3.201), in this approximation the total ground-state energy is

E_g^{(0)} = 2ε_g^{(0)} = 2 × (−Z²E_H/2n²)|_{n=1, Z=2} = −4E_H ≈ −109 eV.  (8.28)

11 In this chapter, I try to use lowercase letters for all single-particle observables (in particular, ε for their energies), in order to distinguish them as clearly as possible from the system's observables (including the total energy E of the system), which are typeset in uppercase (capital) letters.
12 Indeed, helium makes up more than 20% of all "ordinary" matter of our Universe.
13 Note that the positive ion He⁺¹ of this atom, with just one electron, is fully described by the hydrogen-like atom theory with Z = 2, whose ground-state energy, according to Eq. (3.191), is −Z²E_H/2 = −2E_H ≈ −55.4 eV.
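A quick numerical sanity check of Eqs. (27)-(28) is straightforward; in the sketch below, the integration grid and the value E_H ≈ 27.21 eV (the Hartree energy) are my inputs, not quantities defined in the text.

```python
import numpy as np

Z = 2
r0 = 1.0/Z                        # r_B/Z, in units of the Bohr radius r_B
dr = 1.0e-4
r = np.arange(dr/2, 30.0*r0, dr)  # midpoint grid covering 0 < r < 30 r0
R10 = 2.0/r0**1.5*np.exp(-r/r0)   # radial factor of Eq. (27)

# The angular factor Y00 = 1/sqrt(4pi) integrates to 1 over the full solid angle,
# so the radial integral alone should give 1 for a normalized psi_100
norm = np.sum(R10**2 * r**2)*dr
print(norm)                       # close to 1

E_H = 27.211                      # Hartree energy, eV (my input)
Eg0 = 2*(-Z**2*E_H/2)             # Eq. (28): two electrons, n = 1, Z = 2
print(Eg0)                        # about -109 eV
```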

This is still somewhat far (though not terribly far!) from the experimental value E_g ≈ −78.8 eV – see the bottom level in Fig. 1a.

[Figure: panel (a) shows the measured levels ΔE (in eV, from 0 down to about −25 eV) of the 1s ground state and of the excited 2s, 2p, 3s, 3p, and 3d states, in two columns: the singlet states ("parahelium") and the triplet states ("orthohelium"); panel (b) shows the splitting, by E_dir and ±E_ex, of an excited level with the orbital factor built of ψ₁₀₀ and ψ_nlm.]

Fig. 8.1. The lower energy levels of a helium atom: (a) experimental data and (b) a schematic structure of an excited state in the first order of the perturbation theory. On panel (a), all energies are referred to that (−2E_H ≈ −55.4 eV) of the ground state of the positive ion He⁺¹, so their magnitudes are the (readily measurable) energies of the atom's single ionization starting from the corresponding state of the neutral atom. Note that the "spin direction" nomenclature on panel (b) is rather crude: it does not reflect the difference between the entangled states s₊ and s₋.

Making a minor (but very useful) detour from our main topic, let us note that we can get a much better agreement with experiment by accounting for the electron interaction energy in the 1st order of the perturbation theory. Indeed, in application to our system, Eq. (6.14) reads

E_g^{(1)} = ⟨g|Û_int|g⟩ = ∫d³r₁ ∫d³r₂ ψ_g*(r₁, r₂) U_int(r₁, r₂) ψ_g(r₁, r₂).  (8.29)

Plugging in Eqs. (25)-(27), we get

E_g^{(1)} = [(1/4π)(4/r₀³)]² ∫d³r₁ ∫d³r₂ exp{−2(r₁ + r₂)/r₀} e²/4πε₀|r₁ − r₂|.  (8.30)

As may be readily evaluated analytically (this exercise is left for the reader), this expression equals (5/4)E_H, so the corrected ground-state energy,

E_g = E_g^{(0)} + E_g^{(1)} = −(4 − 5/4)E_H ≈ −74.8 eV,  (8.31)

is much closer to experiment.

There is still room here for a ready improvement by using the variational method discussed in Sec. 2.9. For our particular case of the ⁴He atom, we may try to use, as the trial state, the orbital wavefunction given by Eqs. (26)-(27), but with the atomic number Z considered as an adjustable

parameter Z_ef < Z = 2 rather than a fixed number. The physics behind this approach is that the electric charge density ρ(r) = −e|ψ(r)|² of each electron forms a negatively charged "cloud" that reduces the effective charge of the nucleus, as seen by the other electron, to Z_ef e, with some Z_ef < 2. As a result, the single-particle wavefunction spreads further in space (with the scale r₀ = r_B/Z_ef > r_B/Z), while keeping its functional form (27) nearly intact. Since the kinetic energy T in the system's Hamiltonian (25) is proportional to r₀⁻² ∝ Z_ef², while the potential energy is proportional to r₀⁻¹ ∝ Z_ef¹, we can write

E_g(Z_ef) = (Z_ef/2)² ⟨T⟩_g|_{Z=2} + (Z_ef/2) ⟨U⟩_g|_{Z=2}.  (8.32)

Now we can use the fact that according to Eq. (3.212), for any stationary state of a hydrogen-like atom (just as for the classical circular motion in the Coulomb potential), ⟨U⟩ = 2E, and hence ⟨T⟩ = E − ⟨U⟩ = −E. Using Eq. (30), and adding the correction (31) to the potential energy, we get

E_g(Z_ef) = {4[(Z_ef/2)² − 2(Z_ef/2)] + (5/4)(Z_ef/2)} E_H.  (8.33)

This expression allows an elementary calculation of the optimal value of Z_ef, and the corresponding minimum of the function E_g(Z_ef):

(Z_ef)_opt = 2(1 − 5/32) = 1.6875,  (E_g)_min ≈ −2.85E_H ≈ −77.5 eV.  (8.34)
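The elementary minimization leading to Eq. (34) can be double-checked by a brute-force scan of Eq. (33); in the sketch below, the scan range and E_H ≈ 27.21 eV are my inputs.

```python
import numpy as np

E_H = 27.211   # Hartree energy, eV (my input)

def Eg(Zef):
    """Trial-state energy of Eq. (33) as a function of the effective charge."""
    z = Zef/2.0
    return (4.0*z**2 - 8.0*z + 1.25*z)*E_H   # in eV

Zef = np.linspace(1.0, 2.0, 1_000_001)
E = Eg(Zef)
i = np.argmin(E)
print(Zef[i])     # 1.6875 = 2*(1 - 5/32), as in Eq. (34)
print(E[i]/E_H)   # about -2.85, i.e. E_g of about -77.5 eV
```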

Given the trial state's crudeness, this number is in surprisingly good agreement with the experimental value cited above, with a difference of the order of 1%.

Now let us return to the main topic of this section – the effects of the particle (in this case, electron) indistinguishability. As we have just seen, the ground-level energy of the helium atom is not affected directly by this fact; the situation is different for its excited states – even the lowest ones. The reasonably good precision of the perturbation theory, which we have seen for the ground state, tells us that we can base our analysis of the wavefunctions (ψ_e) of the lowest excited-state orbitals on products like ψ₁₀₀(r_k)ψ_nlm(r_k'), with n > 1. To satisfy the fermion permutation rule, P_j = −1, we have to take the orbital factor of the state in either the symmetric or the antisymmetric form:

ψ_e(r₁, r₂) = (1/√2)[ψ₁₀₀(r₁)ψ_nlm(r₂) ± ψ_nlm(r₁)ψ₁₀₀(r₂)],  (8.35)

with the proper total permutation asymmetry provided by the corresponding spin factor (18) or (21), so the upper/lower sign in Eq. (35) corresponds to the singlet/triplet spin state.

Let us calculate the expectation values of the total energy of the system in the first order of the perturbation theory. Plugging Eq. (35) into the 0th-order expression

E_e^{(0)} = ∫d³r₁ ∫d³r₂ ψ_e*(r₁, r₂)(ĥ₁ + ĥ₂)ψ_e(r₁, r₂),  (8.36)

we get two groups of similar terms that differ only by the particle index. We can merge the terms of each pair by changing the notation as (r₁ → r, r₂ → r') in one of them, and (r₁ → r', r₂ → r) in the counterpart term. Using Eq. (25), and the mutual orthogonality of the wavefunctions ψ₁₀₀(r) and ψ_nlm(r), we get the following result:

Chapter 8

Page 8 of 52

Orthohelium and parahelium: orbital wavefunctions

Essential Graduate Physics

QM: Quantum Mechanics

$$\langle E_e \rangle^{(0)} = \int \psi_{100}^*(\mathbf r)\left[-\frac{\hbar^2}{2m}\nabla^2 - \frac{2e^2}{4\pi\varepsilon_0 r}\right]\psi_{100}(\mathbf r)\,d^3r + \int \psi_{nlm}^*(\mathbf r')\left[-\frac{\hbar^2}{2m}\nabla'^2 - \frac{2e^2}{4\pi\varepsilon_0 r'}\right]\psi_{nlm}(\mathbf r')\,d^3r' \equiv \varepsilon_{100} + \varepsilon_{nlm}, \quad\text{with } n > 1. \tag{8.37}$$

It may be interpreted as the sum of eigenenergies of two separate single particles, one in the ground state 100, and another in the excited state nlm – although actually the electron states are entangled. Thus, in the 0th order of the perturbation theory, the electrons' entanglement does not affect their total energy. However, the potential energy of the system also includes the interaction term U_int, which does not allow such separation. Indeed, in the 1st approximation of the perturbation theory, the total energy E_e of the system may be expressed as ε_100 + ε_nlm + E_int^(1), with

$$E_{\rm int}^{(1)} = \langle U_{\rm int} \rangle = \int d^3r_1 \int d^3r_2\; \psi_e^*(\mathbf r_1, \mathbf r_2)\, U_{\rm int}(\mathbf r_1, \mathbf r_2)\, \psi_e(\mathbf r_1, \mathbf r_2). \tag{8.38}$$

Plugging Eq. (35) into this result, using the symmetry of the function U_int with respect to the particle number permutation, and the same particle coordinate re-numbering as above, we get

$$E_{\rm int}^{(1)} = E_{\rm dir} \pm E_{\rm ex}, \tag{8.39}$$

with the following, deceivingly similar expressions for the two components of this sum/difference:

Direct interaction energy:

$$E_{\rm dir} = \int d^3r \int d^3r'\; \psi_{100}^*(\mathbf r)\, \psi_{nlm}^*(\mathbf r')\, U_{\rm int}(\mathbf r, \mathbf r')\, \psi_{100}(\mathbf r)\, \psi_{nlm}(\mathbf r'), \tag{8.40}$$

Exchange interaction energy:

$$E_{\rm ex} = \int d^3r \int d^3r'\; \psi_{100}^*(\mathbf r)\, \psi_{nlm}^*(\mathbf r')\, U_{\rm int}(\mathbf r, \mathbf r')\, \psi_{nlm}(\mathbf r)\, \psi_{100}(\mathbf r'). \tag{8.41}$$

Since the single-particle orbitals can always be made real, both components are positive – or at least non-negative. However, their physics and magnitude are different. The integral (40), called the direct interaction energy, allows a simple semi-classical interpretation as the Coulomb energy of interacting electrons, each distributed in space with the electric charge density ρ(r) = −eψ*(r)ψ(r):¹⁴

$$E_{\rm dir} = \int d^3r \int d^3r'\; \frac{\rho_{100}(\mathbf r)\, \rho_{nlm}(\mathbf r')}{4\pi\varepsilon_0 |\mathbf r - \mathbf r'|} = \int \rho_{100}(\mathbf r)\, \phi_{nlm}(\mathbf r)\, d^3r = \int \rho_{nlm}(\mathbf r)\, \phi_{100}(\mathbf r)\, d^3r, \tag{8.42}$$

where (r) are the electrostatic potentials created by the electron “charge clouds”:15

100 (r ) 

1 4 0

d

3

r'

100 (r' ) r  r'

,

 nlm (r ) 

1 4 0

d

3

r'

 nlm (r' ) r  r'

.

(8.43)

However, the integral (41), called the exchange interaction energy, evades a classical interpretation, and (as is clear from its derivation) is a direct corollary of the electrons' indistinguishability. The magnitude of E_ex is also very much different from E_dir, because the function under the integral (41) disappears in the regions where the single-particle wavefunctions ψ_100(r) and ψ_nlm(r) do not overlap. This is in full agreement with the discussion in Sec. 1: if two particles are identical but well separated, i.e. their wavefunctions do not overlap, the exchange interaction disappears,

¹⁴ See, e.g., EM Sec. 1.3, in particular Eq. (1.54).

¹⁵ Note that the result for E_dir correctly reflects the basic fact that a charged particle does not interact with itself, even if its wavefunction is quantum-mechanically spread over a finite space volume. Unfortunately, this is not true for some popular approximate theories of multiparticle systems – see Sec. 4 below.

i.e. measurable effects of particle indistinguishability vanish. (In contrast, the integral (40) decreases with the growing separation of the electrons only slowly, due to their long-range Coulomb interaction.) Figure 1b shows the structure of an excited energy level, with certain quantum numbers n > 1, l, and m, given by Eqs. (39)-(41). The upper, so-called parahelium¹⁶ level, with the energy

$$E_{\rm para} = \varepsilon_{100} + \varepsilon_{nlm} + E_{\rm dir} + E_{\rm ex} > \varepsilon_{100} + \varepsilon_{nlm}, \tag{8.44}$$

corresponds to the symmetric orbital state and hence to the singlet spin state (18), while the lower, orthohelium level, with

$$E_{\rm orth} = \varepsilon_{100} + \varepsilon_{nlm} + E_{\rm dir} - E_{\rm ex} < E_{\rm para}, \tag{8.45}$$

corresponds to the degenerate triplet spin state (21). This degeneracy may be lifted by an external magnetic field, whose effect on the electron spins¹⁷ is described by the following evident generalization of the Pauli Hamiltonian (4.163):

$$\hat H_{\rm field} = -\gamma_{\rm e}\,\hat{\mathbf s}_1 \cdot \boldsymbol{\mathcal B} - \gamma_{\rm e}\,\hat{\mathbf s}_2 \cdot \boldsymbol{\mathcal B} \equiv -\gamma_{\rm e}\,\hat{\mathbf S} \cdot \boldsymbol{\mathcal B}, \quad\text{with } \gamma_{\rm e} \approx -\frac{e}{m_{\rm e}} = -\frac{2\mu_{\rm B}}{\hbar}, \tag{8.46}$$

where

$$\hat{\mathbf S} \equiv \hat{\mathbf s}_1 + \hat{\mathbf s}_2 \tag{8.47}$$

is the operator of the (vector) sum of the system of two spins.¹⁸ To analyze this effect, we need first to make one more detour, to address the general issue of spin addition. The main rule¹⁹ here is that, in a full analogy with the net spin of a single particle, defined by Eq. (5.170), the net spin operator (47) of any system of two spins, and its component Ŝ_z along the (arbitrarily selected) z-axis, obey the same commutation relations (5.168) as the component operators, and hence have the properties similar to those expressed by Eqs. (5.169) and (5.175):

$$\hat S^2\, |S, M_S\rangle = \hbar^2 S(S+1)\, |S, M_S\rangle, \qquad \hat S_z\, |S, M_S\rangle = \hbar M_S\, |S, M_S\rangle, \quad\text{with } -S \le M_S \le +S, \tag{8.48}$$

where the ket vectors correspond to the coupled basis of joint eigenstates of the operators of S² and S_z (but not necessarily of all component operators – see again the Venn diagram shown in Fig. 5.12 and its discussion, with the replacements S, L → s₁,₂ and J → S). Repeating the discussion of Sec. 5.7 with these replacements, we see that in both the coupled and the uncoupled bases, the net magnetic quantum number M_S is simply expressed via those of the components:

$$M_S = (m_s)_1 + (m_s)_2. \tag{8.49}$$

¹⁶ This terminology reflects the historic fact that the observation of two different hydrogen-like spectra, corresponding to the opposite signs in Eq. (39), was first taken as evidence for two different species of ⁴He, which were called, respectively, the "orthohelium" and the "parahelium".

¹⁷ As we know from Sec. 6.4, the field also affects the orbital motion of the electrons, so the simple analysis based on Eq. (46) is strictly valid only for the s excited states (l = 0, and hence m = 0). However, the orbital effects of a weak magnetic field do not affect the triplet-level splitting we are analyzing now.

¹⁸ Note that, similarly to Eqs. (22) and (25), here the uppercase notation of the component spins is replaced with their lowercase notation, to avoid any possibility of their confusion with the total spin of the system.

¹⁹ Since we already know that the spin of a particle is physically nothing more than some (if specific) part of its angular momentum, the similarity of the properties (48) of the sum (47) of spins of different particles to those of the sum (5.170) of different spin components of the same particle is very natural, but it still has to be considered as a new fact – confirmed by a vast body of experimental data.

However, the net spin quantum number S (in contrast to the Nature-given spins s₁,₂ of its elementary components) is not universally definite, and we may immediately say only that it has to obey the following analog of the relation |l − s| ≤ j ≤ (l + s) discussed in Sec. 5.7:

$$|s_1 - s_2| \le S \le s_1 + s_2. \tag{8.50}$$

What exactly S is (within these limits) depends on the spin state of the system. For the simplest case of two spin-½ components, each with s = ½ and m_s = ±½, Eq. (49) gives three possible values of M_S, equal to 0 and ±1, while Eq. (50) limits the possible values of S to just either 0 or 1. Using the last of Eqs. (48), we see that the possible combinations of the quantum numbers are

$$\begin{cases} S = 0, \\ M_S = 0, \end{cases} \qquad\text{and}\qquad \begin{cases} S = 1, \\ M_S = 0, \pm 1. \end{cases} \tag{8.51}$$

It is virtually evident that the singlet spin state s₋ belongs to the first class, while the simple (separable) triplet states ↑↑ and ↓↓ belong to the second class, with M_S = +1 and M_S = −1, respectively. However, for the entangled triplet state s₊, evidently with M_S = 0, the value of S is less obvious. Perhaps the easiest way to recover it²⁰ is to use the "rectangular diagram", similar to that shown in Fig. 5.14, but redrawn for our case of two spins, i.e. with the replacements m_l → (m_s)₁ = ±½, m_s → (m_s)₂ = ±½ – see Fig. 2.

m s 2 ½



s S  0, 1 ½

 S 1

0 ½

½

 S 1

m s 1 

Fig. 8.2. The "rectangular diagram" showing the relation between the uncoupled-representation states (dots) and the coupled-representation states (straight lines) of a system of two spins-½ – cf. Fig. 5.14.

Just as at the addition of various angular momenta of a single particle, the top-right and bottom-left corners of this diagram correspond to the factorable triplet states ↑↑ and ↓↓, which participate in both the uncoupled-representation and coupled-representation bases, and have the largest value of S, i.e. 1. However, the entangled states s±, which are linear combinations of the uncoupled-representation states ↑↓ and ↓↑, cannot have the same value of S, so for the triplet state s₊, S has to take the value different from that (0) of the singlet state, i.e. 1. With that, the first of Eqs. (48) gives the following expectation values for the square of the net spin operator:

$$\langle S^2 \rangle = \begin{cases} 2\hbar^2, & \text{for each triplet state}, \\ 0, & \text{for the singlet state}. \end{cases} \tag{8.52}$$
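The result (52) is easy to verify by a direct matrix calculation in the 4-dimensional two-spin space; here is a Python sketch with ℏ = 1, the spin operators built from the Pauli matrices:

```python
import numpy as np

# Direct check of Eq. (52): <S^2> = 2*hbar^2 for the triplet states, 0 for the singlet.
# The two-spin operators are built from the Pauli matrices; hbar = 1 here.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Net-spin components (47): S = s1 + s2, acting in the two-spin space
S = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(Sa @ Sa for Sa in S)

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
uu = np.kron(up, up)                                        # factorable triplet
s_plus = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)   # entangled triplet
s_minus = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)  # singlet

for psi in (uu, s_plus, s_minus):
    print(np.real(psi.conj() @ S2 @ psi))  # 2, 2, and 0 (up to rounding)
```

This is exactly the "more prudent way" mentioned in the footnote: the expectation values of Ŝ², compared with the first of Eqs. (48), pin down S = 1 for all three triplet states and S = 0 for the singlet.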

²⁰ Another, a bit longer but perhaps more prudent way is to directly calculate the expectation values of Ŝ² for the states s±, and then find S by comparing the results with the first of Eqs. (48); it is highly recommended to the reader as a useful exercise.


Note that for the entangled triplet state s₊, whose ket-vector (20) is a linear superposition of two kets of states with opposite spins, this result is highly counter-intuitive, and shows how careful we should be in interpreting entangled quantum states. (As will be discussed in Chapter 10, quantum entanglement brings even more surprises for measurements.)

Now we may return to the particular issue of the magnetic field effect on the triplet state of the ⁴He atom. Directing the z-axis along the field, we may reduce Eq. (46) to

$$\hat H_{\rm field} = -\gamma_{\rm e}\, \hat S_z \mathcal B = \frac{2\mu_{\rm B} \mathcal B}{\hbar}\, \hat S_z. \tag{8.53}$$

Since all three triplet states (21) are eigenstates, in particular, of the operator Ŝ_z, and hence of the Hamiltonian (53), we may use the second of Eqs. (48) to calculate their energy change simply as

$$\Delta E_{\rm field} = 2\mu_{\rm B} \mathcal B M_S = 2\mu_{\rm B} \mathcal B \times \begin{cases} +1, & \text{for the factorable triplet state } \uparrow\uparrow, \\ 0, & \text{for the entangled triplet state } s_+, \\ -1, & \text{for the factorable triplet state } \downarrow\downarrow. \end{cases} \tag{8.54}$$
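The scale of the splitting (54) may be illustrated numerically, for example for ℬ = 1 T, with the Bohr magneton taken in units of eV/T:

```python
# Scale of the triplet-level splitting (54), Delta E = 2*mu_B*B*M_S,
# evaluated for B = 1 T (mu_B is the Bohr magneton, here in eV/T).
mu_B = 5.788381806e-5  # eV/T

B = 1.0  # tesla
for M_S in (+1, 0, -1):
    print(M_S, 2 * mu_B * B * M_S * 1e3, "meV")  # about +/-0.116 meV for M_S = +/-1
```

So even at a 1-tesla field, the Zeeman splitting (~0.1 meV) is much smaller than the exchange splitting between the ortho- and parahelium levels.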

This splitting of the "orthohelium" level is schematically shown in Fig. 1b.²¹

8.3. Multiparticle systems

Leaving several other problems on two-particle systems for the reader's exercise, let me proceed to the discussion of systems with N > 2 indistinguishable particles, whose list notably includes atoms, molecules, and condensed-matter systems. In this case, Eq. (7) for fermions is generalized as

$$\hat P_{kk'}\, |\psi\rangle = -|\psi\rangle, \quad\text{for all } k, k' = 1, 2, \ldots, N, \tag{8.55}$$

where the operator P̂_kk′ permutes particles with numbers k and k′. As a result, for systems with non-directly-interacting fermions, the Pauli principle forbids any state in which any two particles have similar single-particle wavefunctions. Nevertheless, it permits two fermions to have similar orbital wavefunctions, provided that their spins are in the singlet state (18), because this satisfies the permutation requirement (55). This fact is of paramount importance for the ground state of the systems whose Hamiltonians do not depend on spin, because it allows the fermions to be in their orbital single-particle ground states, with two electrons of the spin singlet sharing the same orbital state. Hence, for the limited (but very important!) goal of finding ground-state energies of multi-fermion systems with negligible direct interaction, we may ignore the actual singlet spin structure, and reduce the Pauli exclusion principle to the rudimentary picture of single-particle orbital energy levels, each "occupied with two fermions".

²¹ It is interesting that another very important two-electron system, the hydrogen (H₂) molecule, which was briefly discussed in Sec. 2.6, also has two similarly named forms, parahydrogen and orthohydrogen. However, their difference is due to two possible (respectively, singlet and triplet) states of the system of two spins of the two hydrogen nuclei – protons, which are also spin-½ particles. The resulting ground-state energy of the parahydrogen is lower than that of the orthohydrogen by only ~15 meV per molecule – the difference lower than k_BT at room temperature (~26 meV). As a result, at very low temperatures, hydrogen at equilibrium is dominated by parahydrogen, but at ambient conditions, the orthohydrogen is nearly three times more abundant, due to its triple nuclear spin degeneracy. Curiously, the theoretical prediction of this effect by W. Heisenberg (together with F. Hund) in 1927 was cited in his 1932 Nobel Prize award as the most noteworthy application of quantum theory.

As a very simple example, let us find the ground energy of five fermions confined in a hard-wall, cubic-shaped 3D volume of side a, ignoring their direct interaction. From Sec. 1.7, we know the single-particle energy spectrum of the system:

$$\varepsilon_{n_x, n_y, n_z} = \varepsilon_0 \left( n_x^2 + n_y^2 + n_z^2 \right), \quad\text{with } \varepsilon_0 = \frac{\pi^2 \hbar^2}{2ma^2}, \text{ and } n_x, n_y, n_z = 1, 2, \ldots, \tag{8.56}$$

so the lowest-energy states are:

– one ground state with {n_x, n_y, n_z} = {1, 1, 1}, and energy ε₁₁₁ = (1² + 1² + 1²)ε₀ = 3ε₀, and

– three excited states, with {n_x, n_y, n_z} equal to either {2, 1, 1}, or {1, 2, 1}, or {1, 1, 2}, with equal energies ε₂₁₁ = ε₁₂₁ = ε₁₁₂ = (2² + 1² + 1²)ε₀ = 6ε₀.

According to the above simple formulation of the Pauli principle, each of these orbital energy levels can accommodate up to two fermions. Hence the lowest-energy (ground) state of the five-fermion system is achieved by placing two of them on the ground level ε₁₁₁ = 3ε₀, and the remaining three particles, in any of the degenerate "excited" states of energy 6ε₀, so the ground-state energy of the system is

$$E_{\rm g} = 2 \times 3\varepsilon_0 + 3 \times 6\varepsilon_0 = 24\varepsilon_0 = \frac{12\pi^2\hbar^2}{ma^2}. \tag{8.57}$$
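The level-filling argument above is easily automated; the following Python sketch enumerates the box levels of Eq. (56), in the units of ε₀, and fills them with up to two fermions per orbital level:

```python
from itertools import product

# Ground state of five non-interacting spin-1/2 fermions in a cubic hard-wall box:
# level energies in units of eps_0 = pi^2*hbar^2/(2*m*a^2) - cf. Eqs. (56)-(57).
levels = sorted(nx**2 + ny**2 + nz**2 for nx, ny, nz in product(range(1, 5), repeat=3))

N = 5          # number of fermions
E, placed = 0, 0
for eps in levels:            # fill each orbital level with up to two fermions
    take = min(2, N - placed)
    E += take * eps
    placed += take
    if placed == N:
        break
print(E)  # 2*3 + 3*6 = 24, as in Eq. (57)
```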

Moreover, in many cases, relatively weak interaction between fermions does not blow up such a simple quantum state classification scheme qualitatively, and the Pauli principle allows tracing the order of single-particle state filling. This is exactly the simple approach that was used in our discussion of atoms in Sec. 3.7. Unfortunately, it does not allow for a more specific characterization of the ground states of most atoms, in particular the evaluation of the corresponding values of the quantum numbers S, L, and J that characterize the net angular momenta of the atom, and hence its response to an external magnetic field. These numbers are defined by relations similar to Eqs. (48), each for the corresponding vector operator of the net angular momenta:

$$\hat{\mathbf S} = \sum_{k=1}^{N} \hat{\mathbf s}_k, \qquad \hat{\mathbf L} = \sum_{k=1}^{N} \hat{\mathbf l}_k, \qquad \hat{\mathbf J} = \sum_{k=1}^{N} \hat{\mathbf j}_k; \tag{8.58}$$

note that these definitions are consistent with Eq. (5.170) applied both to the angular momenta s_k, l_k, and j_k of each particle, and to the full vectors S, L, and J. When the numbers S, L, and J for a state are known, they are traditionally recorded in the form of the so-called Russell-Saunders symbols:²²

$$^{2S+1}L_J, \tag{8.59}$$

where S and J are the corresponding values of these quantum numbers, while L is a capital letter, encoding the quantum number L – via the same spectroscopic notation as for single particles (see Sec. 3.6): L = S for L = 0, L = P for L = 1, L = D for L = 2, etc. (The reason why the front superscript of the Russell-Saunders symbol lists 2S + 1 rather than just S is that, according to the last of Eqs. (48), it

²² Named after Henry Russell and Frederick Saunders, whose pioneering (circa 1925) processing of experimental spectral-line data has established the very idea of the vector addition of the electron spins, described by the first of Eqs. (58).

shows the number of possible values of the quantum number M_S, which characterizes the state's spin degeneracy, and is called its multiplicity.) For example, for the simplest, hydrogen atom (Z = 1), with its single electron in the ground 1s state, L = l = 0, S = s = ½, and J = S = ½, so its Russell-Saunders symbol is ²S₁/₂. Next, the discussion of the helium atom (Z = 2) in the previous section has shown that in its ground state L = 0 (because of the 1s orbital state of both electrons), and S = 0 (because of the singlet spin state), so the total angular momentum also vanishes: J = 0. As a result, the Russell-Saunders symbol for this state is ¹S₀. The structure of the next atom, lithium (Z = 3), is also easy to predict, because, as was discussed in Sec. 3.7, its ground-state electron configuration is 1s²2s¹, i.e. includes two electrons in the "helium shell", i.e. on the 1s orbitals (now we know that they are actually in an entangled singlet spin state), and one electron in the 2s state, of higher energy, also with zero orbital momentum, l = 0. As a result, the total L in this state is evidently equal to 0, and S is equal to ½, so J = ½, meaning that the Russell-Saunders symbol of the lithium's ground state is ²S₁/₂. Even in the next atom, beryllium (Z = 4), with the ground-state configuration 1s²2s², the symbol is readily predictable, because none of its electrons has non-zero orbital momentum, giving L = 0. Also, each electron pair is in the singlet spin state, i.e. we have S = 0, so J = 0 – the quantum number set described by the Russell-Saunders symbol ¹S₀ – just as for helium. However, for the next, boron atom (Z = 5), with its ground-state electron configuration 1s²2s²2p¹ (see, e.g., Fig. 3.24), there is no obvious way to predict the result. Indeed, this atom has two pairs of electrons, with opposite spins, on its two lowest s-orbitals, giving zero contributions to the net S, L, and J.
Hence these total quantum numbers may be contributed only by the last, fifth electron, with s = ½ and l = 1, giving S = ½, L = 1. As was discussed in Sec. 5.7 for the single-particle case, the vector addition of the angular momenta S and L enables two values of the quantum number J: either L + S = 3/2 or L − S = ½. Experiment shows that the difference between the energies of these two states of boron is very small (~2 meV), so at room temperature (with k_BT ≈ 26 meV) they are both partly occupied, with the genuine ground state having J = ½, so its Russell-Saunders symbol is ²P₁/₂. Such energy differences, which become larger for heavier atoms, are determined both by the Coulomb and spin-orbit²³ interactions between the electrons. Their quantitative analysis is rather involved (see below), but the results tend to follow simple phenomenological Hund rules, with the following hierarchy:

Rule 1. For a given electron configuration, the ground state has the largest possible S, and hence the largest possible multiplicity 2S + 1.

Rule 2. For a given S, the ground state has the largest possible L.

Rule 3. For given S and L, J has its smallest possible value, |L − S|, if the given sub-shell {n, l} is filled not more than by half, while in the opposite case, J has its largest possible value, L + S.

²³ In light atoms, the spin-orbit interaction is so weak that it may be reasonably well described as an interaction of the total momenta L and S of the system – the so-called LS (or "Russell-Saunders") coupling. On the other hand, in very heavy atoms, the interaction is effectively between the net momenta j_k = l_k + s_k of the individual electrons – the so-called jj coupling. This is the reason why in such atoms the Hund Rule 3 may be violated.

Let us see how these rules work for the boron atom we have just discussed. For it, the Hund Rules 1 and 2 are satisfied automatically, while the sub-shell {n = 2, l = 1}, which can house up to 2(2l + 1) = 6 electrons, is filled with just one 2p electron, i.e. by less than a half of the maximum value. As a result, Rule 3 predicts the ground state's value J = ½, in agreement with experiment. Generally, for

lighter atoms, the Hund rules are well obeyed. However, the lower down the Hund rule hierarchy, the less "powerful" the rules are, i.e. the more often they are violated in heavier atoms.

Now let us discuss possible approaches to a quantitative theory of multiparticle systems – not only atoms. As was discussed in Sec. 1, if fermions do not interact directly, the stationary states of the system have to be the antisymmetric eigenstates of the permutation operator, i.e. to satisfy Eq. (55). To understand how such states may be formed from the single-electron ones, let us return for a minute to the case of two electrons, and rewrite Eq. (11) in the following compact form:

$$\psi_- = \frac{1}{\sqrt 2}\Big( |\beta\rangle|\beta'\rangle - |\beta'\rangle|\beta\rangle \Big) \equiv \frac{1}{\sqrt 2} \begin{vmatrix} |\beta\rangle & |\beta'\rangle \\ |\beta\rangle & |\beta'\rangle \end{vmatrix} \begin{matrix} \leftarrow \text{particle number 1} \\ \leftarrow \text{particle number 2} \end{matrix}, \tag{8.60a}$$

with the two columns of the determinant listing the single-particle states (state 1 and state 2),

where, in the last form, the direct product signs are just implied. In this way, the Pauli principle is mapped on the well-known property of matrix determinants: if any two columns of a matrix coincide, its determinant vanishes. This Slater determinant approach²⁴ may be readily generalized to N fermions occupying any N (not necessarily the lowest-energy) single-particle states β, β′, β″, etc.:

Slater determinant:

$$\psi_- = \frac{1}{(N!)^{1/2}} \begin{vmatrix} |\beta\rangle & |\beta'\rangle & |\beta''\rangle & \cdots \\ |\beta\rangle & |\beta'\rangle & |\beta''\rangle & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{vmatrix}, \tag{8.60b}$$

with the N columns running over the state list, and the N rows over the particle list.
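The construction (60b) is straightforward to implement numerically. The sketch below builds a Slater-determinant wavefunction from sample 1D hard-wall-box orbitals (chosen here only for illustration), and checks its two defining properties:

```python
import numpy as np
from math import factorial, sqrt

# A Slater determinant psi(x_1, ..., x_N) = det[phi_j(x_k)]/sqrt(N!), built here from
# sample 1D hard-wall-box orbitals on (0, 1) - an illustration only, cf. Eq. (60b).
def phi(n, x):
    return sqrt(2.0) * np.sin(n * np.pi * x)

def slater(states, xs):
    # rows number the particles (coordinates), columns the single-particle states
    M = np.array([[phi(n, x) for n in states] for x in xs])
    return np.linalg.det(M) / sqrt(factorial(len(states)))

xs = [0.21, 0.45, 0.78]
psi = slater([1, 2, 3], xs)
print(np.isclose(slater([1, 2, 3], [xs[1], xs[0], xs[2]]), -psi))  # True: antisymmetry
print(np.isclose(slater([1, 1, 3], xs), 0.0))  # True: two equal states (Pauli principle)
```

Swapping two particle coordinates swaps two rows of the matrix, flipping the sign of the determinant; making two single-particle states equal makes two columns coincide, so the wavefunction vanishes identically.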

The Slater determinant form is extremely nice and compact – in comparison with direct writing of a sum of N! products, each of N ket factors. However, there are two major problems with using it for practical calculations:

(i) For the calculation of any bra-ket product (say, within the perturbation theory) we still need to spell out each bra- and ket-vector as a sum of component terms. Even for a limited number of electrons (say, N ~ 10² in a typical atom), the number N! ~ 10¹⁶⁰ of terms in such a sum is impracticably large for any analytical or numerical calculation.

(ii) In the case of interacting fermions, the Slater determinant does not describe the eigenvectors of the system; rather the stationary state is a superposition of such basis functions, i.e. of the Slater determinants – each for a specific selection of N states from the full set of single-particle states – that is generally larger than N.

For atoms and simple molecules, whose filled-shell electrons may be excluded from an explicit analysis (by describing their effects, approximately, with effective pseudo-potentials), the effective number N may be reduced to a smaller number N_ef of the order of 10, so N_ef! < 10⁶, and the Slater

²⁴ It was suggested in 1929 by John C. Slater.

determinants may be used for numerical calculations – for example, in the Hartree-Fock theory – see the next section. However, for condensed-matter systems, such as metals and semiconductors, where the number of free electrons is of the order of 10²³ per cm³, this approach is generally unacceptable, though with some smart tricks (such as using the crystal's periodicity) it may still be used for some approximate (also mostly numerical) calculations.

These challenges make the development of a more general theory that would not use particle numbers (which are superficial for indistinguishable particles to start with) a must for getting any final analytical results for multiparticle systems. The most effective formalism for this purpose, which avoids particle numbering at all, is called the second quantization.²⁵ Actually, we have already discussed a particular version of this formalism, for the case of the 1D harmonic oscillator, in Sec. 5.4. As a reminder, after the definition (5.65) of the "creation" and "annihilation" operators via those of the particle's coordinate and momentum, we have derived their key properties (5.89),

$$\hat a\, |n\rangle = n^{1/2}\, |n - 1\rangle, \qquad \hat a^\dagger |n\rangle = (n + 1)^{1/2}\, |n + 1\rangle, \tag{8.61}$$

where |n⟩ are the stationary (Fock) states of the oscillator. This property allows an interpretation of the operators' actions as the creation/annihilation of a single excitation with the energy ℏω₀ – thus justifying the operator names. In the next chapter, we will show that such excitation of an electromagnetic field mode may be interpreted as a massless boson with s = 1, called the photon. In order to generalize this approach to arbitrary bosons, not appealing to a specific system, we may use relations similar to Eq. (61) to define the creation and annihilation operators. The definitions look simple in the language of the so-called Dirac states, described by ket-vectors

Dirac state:

$$|N_1, N_2, \ldots, N_j, \ldots\rangle, \tag{8.62}$$

where N_j is the state occupancy, i.e. the number of bosons in the single-particle state j. Let me emphasize that here the indices 1, 2, …, j, … number single-particle states (including their spin parts) rather than particles. Thus the very notion of an individual particle's number is completely (and for indistinguishable particles, very relevantly) absent from this formalism. Generally, the set of single-particle states participating in the Dirac state may be selected arbitrarily, provided that it is full and orthonormal in the sense

$$\langle N'_1, N'_2, \ldots, N'_j, \ldots \,|\, N_1, N_2, \ldots, N_j, \ldots \rangle = \delta_{N_1 N'_1}\, \delta_{N_2 N'_2} \cdots \delta_{N_j N'_j} \cdots, \tag{8.63}$$

though for systems of non- (or weakly) interacting bosons, using the stationary states of individual particles in the system under analysis is almost always the best choice. Now we can define the particle annihilation operator as follows:

Boson annihilation operator:

$$\hat a_j\, |N_1, N_2, \ldots, N_j, \ldots\rangle = N_j^{1/2}\, |N_1, N_2, \ldots, N_j - 1, \ldots\rangle. \tag{8.64}$$

Note that the pre-ket coefficient, similar to that in the first of Eqs. (61), guarantees that any attempt to annihilate a particle in an initially unpopulated state gives the non-existing ("null") state:

$$\hat a_j\, |N_1, N_2, \ldots, 0_j, \ldots\rangle = 0, \tag{8.65}$$

²⁵ It was invented (first for photons and then for arbitrary bosons) by P. Dirac in 1927, and then (in 1928) adjusted for fermions by E. Wigner and P. Jordan. Note that the term "second quantization" is rather misleading for the non-relativistic case discussed here, but finds certain justification in the quantum field theory.

where the symbol 0_j means zero occupancy of the jth state. According to Eq. (63), an equivalent way to write Eq. (64) is

$$\langle N'_1, N'_2, \ldots, N'_j, \ldots |\, \hat a_j \,| N_1, N_2, \ldots, N_j, \ldots \rangle = N_j^{1/2}\, \delta_{N_1 N'_1}\, \delta_{N_2 N'_2} \cdots \delta_{N'_j, N_j - 1} \cdots. \tag{8.66}$$

According to the general Eq. (4.65), the matrix element of the Hermitian-conjugate operator â_j† is

$$\begin{aligned} \langle N'_1, N'_2, \ldots, N'_j, \ldots |\, \hat a_j^\dagger \,| N_1, N_2, \ldots, N_j, \ldots \rangle &= \langle N_1, N_2, \ldots, N_j, \ldots |\, \hat a_j \,| N'_1, N'_2, \ldots, N'_j, \ldots \rangle^* \\ &= \big(N'_j\big)^{1/2}\, \delta_{N_1 N'_1}\, \delta_{N_2 N'_2} \cdots \delta_{N_j, N'_j - 1} \cdots = \big(N_j + 1\big)^{1/2}\, \delta_{N_1 N'_1}\, \delta_{N_2 N'_2} \cdots \delta_{N_j + 1, N'_j} \cdots, \end{aligned} \tag{8.67}$$

meaning that

Boson creation operator:

$$\hat a_j^\dagger\, |N_1, N_2, \ldots, N_j, \ldots\rangle = \big(N_j + 1\big)^{1/2}\, |N_1, N_2, \ldots, N_j + 1, \ldots\rangle, \tag{8.68}$$

in total compliance with the second of Eqs. (61). In particular, this particle creation operator allows a description of the generation of a single particle from the vacuum (not null!) state |0, 0, …⟩:

$$\hat a_j^\dagger\, |0, 0, \ldots, 0_j, \ldots, 0\rangle = |0, 0, \ldots, 1_j, \ldots, 0\rangle, \tag{8.69}$$

and hence a product of such operators may create, from vacuum, a multiparticle state with an arbitrary set of occupancies:²⁶

$$\underbrace{\hat a_1^\dagger \hat a_1^\dagger \cdots \hat a_1^\dagger}_{N_1\ \text{times}}\; \underbrace{\hat a_2^\dagger \hat a_2^\dagger \cdots \hat a_2^\dagger}_{N_2\ \text{times}} \cdots\, |0, 0, \ldots\rangle = \big(N_1!\, N_2! \cdots\big)^{1/2}\, |N_1, N_2, \ldots\rangle. \tag{8.70}$$

Next, combining Eqs. (64) and (68), we get

$$\hat a_j^\dagger \hat a_j\, |N_1, N_2, \ldots, N_j, \ldots\rangle = N_j\, |N_1, N_2, \ldots, N_j, \ldots\rangle, \tag{8.71}$$

so, just as for the particular case of the harmonic-oscillator excitations, the operator

Number-counting operator:

$$\hat N_j \equiv \hat a_j^\dagger \hat a_j \tag{8.72}$$

"counts" the number of particles in the jth single-particle state, while preserving the whole multiparticle state. Acting on a state by the creation-annihilation operators in the reverse order, we get

$$\hat a_j \hat a_j^\dagger\, |N_1, N_2, \ldots, N_j, \ldots\rangle = \big(N_j + 1\big)\, |N_1, N_2, \ldots, N_j, \ldots\rangle. \tag{8.73}$$

Eqs. (71) and (73) show that for any state of a multiparticle system (which may be represented as a linear superposition of Dirac states with all possible sets of numbers N_j), we may write

$$\hat a_j \hat a_j^\dagger - \hat a_j^\dagger \hat a_j \equiv \big[\hat a_j, \hat a_j^\dagger\big] = \hat I, \tag{8.74}$$

²⁶ The resulting Dirac state is not an eigenstate of every multiparticle Hamiltonian. However, we will see below that for a set of non-interacting particles it is a stationary state, so the full set of such states may be used as a good basis in perturbation theories of systems of weakly interacting particles.

again in agreement with what we had for the 1D oscillator – cf. Eq. (5.68). According to Eqs. (63), (64), and (68), the creation and annihilation operators corresponding to different single-particle states do commute, so Eq. (74) may be generalized as

$$\big[\hat a_j, \hat a_{j'}^\dagger\big] = \hat I\, \delta_{jj'}, \tag{8.75}$$

while similar operators commute, regardless of which states they act upon:

Bosonic operators: commutation relations

$$\big[\hat a_j^\dagger, \hat a_{j'}^\dagger\big] = \big[\hat a_j, \hat a_{j'}\big] = \hat 0. \tag{8.76}$$
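These properties are easy to verify with truncated matrix representations of the operators; here is a Python sketch (note that the inevitable truncation of the Fock space spoils Eq. (74) only in the last retained Fock state):

```python
import numpy as np

# Truncated matrix representation of the bosonic operators (64) and (68):
# a|n> = sqrt(n)|n-1>, so a has sqrt(1), ..., sqrt(n_max-1) on its upper sub-diagonal.
n_max = 8
a = np.diag(np.sqrt(np.arange(1.0, n_max)), k=1)
a_dag = a.T.conj()

N_op = a_dag @ a                     # the number-counting operator (72)
print(np.allclose(np.diag(N_op), np.arange(n_max)))    # True

comm = a @ a_dag - a_dag @ a         # the commutator of Eq. (74)
# The truncation spoils the relation only in the last retained Fock state:
print(np.allclose(comm[:-1, :-1], np.eye(n_max - 1)))  # True
```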

As was mentioned earlier, a major challenge in the Dirac approach is to rewrite the Hamiltonian of a multiparticle system, which naturally carries particle numbers k (see, e.g., Eq. (22) for k = 1, 2), in the second-quantization language, in which there are no such numbers. Let us start with single-particle components of such Hamiltonians, i.e. operators of the type

Single-particle operator:

$$\hat F = \sum_{k=1}^{N} \hat f_k, \tag{8.77}$$

where all N operators f̂_k are similar, besides that each of them acts on one specific (kth) particle, and N is the total number of particles in the system, which is evidently equal to the sum of single-particle state occupancies:

$$N = \sum_j N_j. \tag{8.78}$$

The most important examples of such operators are the kinetic energy of N similar single particles and their potential energy in an external field:

$$\hat T = \sum_{k=1}^{N} \frac{\hat p_k^2}{2m}, \qquad \hat U = \sum_{k=1}^{N} \hat u(\mathbf r_k). \tag{8.79}$$

For bosons, instead of the Slater determinant (60), we have to write a similar expression, but without the sign alternation at permutations:

$$|N_1, \ldots, N_j, \ldots\rangle = \left(\frac{N_1! \cdots N_j! \cdots}{N!}\right)^{1/2} \sum_P \underbrace{|\beta\rangle |\beta'\rangle |\beta''\rangle \cdots}_{N\ \text{operands}}, \tag{8.80}$$

sometimes called the permanent. Note again that the left-hand side of this relation is written in the Dirac notation (which does not use particle numbering), while on its right-hand side, just as in the formulas of Secs. 1 and 2, the particle numbers are coded with the positions of the single-particle states inside the state vectors, and the summation is over all different permutations of the states in the ket – cf. Eq. (10). (According to the basic combinatorics,²⁷ there are N!/(N₁!⋯N_j!⋯) such permutations, so the front coefficient in Eq. (80) ensures the normalization of the Dirac state, provided that the single-particle

²⁷ See, e.g., MA Eq. (2.3).

states , ’, …are normalized.) Let us use Eq. (80) to spell out the following matrix element for a system with (N –1) particles: ...N j ,...N j'  1,... Fˆ ...N j  1,...N j' ,... 

N 1!...( N j  1)!...(N j '  1)!... ( N  1)!

N

j N j' 

 

1/ 2

P N 1

P N 1

N 1

... ' "...  fˆk ... '"... ,

(8.81)

k 1

where all non-specified occupation numbers in the corresponding positions of the bra- and ket-vectors are equal to each other. Each single-particle operator f̂_k participating in the operator sum acts on the bra- and ket-vectors of states only in one (the kth) position, giving the following result, independent of the position number:

$$\langle \beta_j |_{\text{in } k\text{th position}}\; \hat f_k\; |\beta_{j'}\rangle_{\text{in } k\text{th position}} = \langle \beta_j |\,\hat f\,| \beta_{j'}\rangle \equiv f_{jj'}. \tag{8.82}$$

Since in both permutation sets participating in Eq. (81), with (N – 1) state vectors each, all positions are equivalent, we can fix the position (say, take the first one) and replace the sum over k with the multiplication of the bracket by (N – 1). The fraction of permutations with the necessary bra-vector (with number j) in that position is N_j/(N – 1), while that with the necessary ket-vector (with number j′) in the same position is N_{j′}/(N – 1). As a result, the permutation sum in Eq. (81) reduces to

$$ (N-1)\,\frac{N_j}{N-1}\,\frac{N_{j'}}{N-1}\, f_{jj'} \sum_{P_{N-2}} \sum_{P'_{N-2}} \langle \ldots\beta'\beta''\ldots \,|\, \ldots\beta'\beta''\ldots \rangle, \tag{8.83} $$

where our specific position k is now excluded from both the bra- and ket-vector permutations. Each of these permutations now includes only (N_j – 1) states j and (N_{j′} – 1) states j′, so using the state orthonormality, we finally arrive at a very simple result:

$$ \langle \ldots N_j,\ldots, N_{j'}-1,\ldots |\hat{F}| \ldots N_j-1,\ldots, N_{j'},\ldots \rangle = \frac{N_1!\cdots(N_j-1)!\cdots(N_{j'}-1)!\cdots}{(N-1)!}\,\big(N_j N_{j'}\big)^{1/2}\,(N-1)\,\frac{N_j}{N-1}\,\frac{N_{j'}}{N-1}\, f_{jj'}\,\frac{(N-2)!}{N_1!\cdots(N_j-1)!\cdots(N_{j'}-1)!\cdots} = \big(N_j N_{j'}\big)^{1/2}\, f_{jj'}. \tag{8.84} $$

On the other hand, let us calculate the matrix elements of the following operator:

$$ \sum_{j,j'} f_{jj'}\, \hat{a}_j^\dagger \hat{a}_{j'}. \tag{8.85} $$

A direct application of Eqs. (64) and (68) shows that the only non-vanishing elements are

$$ \langle \ldots N_j,\ldots, N_{j'}-1,\ldots |\, f_{jj'}\,\hat{a}_j^\dagger \hat{a}_{j'} \,| \ldots N_j-1,\ldots, N_{j'},\ldots \rangle = \big(N_j N_{j'}\big)^{1/2} f_{jj'}. \tag{8.86} $$

But this is exactly the last form of Eq. (84), so in the basis of Dirac states, the operator (77) may be represented as

$$ \hat{F} = \sum_{j,j'} f_{jj'}\, \hat{a}_j^\dagger \hat{a}_{j'} \qquad \text{(single-particle operator in the Dirac representation).} \tag{8.87} $$


This beautifully simple relation is the key formula of the second quantization theory, and is essentially the Dirac-representation analog of Eq. (4.59) of single-particle quantum mechanics. Each term of the sum (87) may be described by a very simple mnemonic rule: for each pair of single-particle states j and j′, first annihilate a particle in the state j′, then create one in the state j, and finally weigh the result with the corresponding single-particle matrix element. One of the corollaries of Eq. (87) is that the expectation value of an operator whose eigenstates coincide with the Dirac states is

$$ \langle F \rangle = \langle \ldots N_j,\ldots |\hat{F}| \ldots N_j,\ldots\rangle = \sum_j f_{jj}\, N_j, \tag{8.88} $$

with an evident physical interpretation: it is the sum of single-particle expectation values over all states, weighed by the occupancy of each state.

Proceeding to fermions, which have to obey the Pauli principle, we immediately notice that any occupation number N_j may only take two values, 0 or 1. To account for that, and also to make the key relation (87) valid for fermions as well, the creation-annihilation operators are defined by the following relations (fermion creation-annihilation operators):

$$ \hat{a}_j |N_1, N_2,\ldots,0_j,\ldots\rangle = 0, \qquad \hat{a}_j |N_1, N_2,\ldots,1_j,\ldots\rangle = (-1)^{\Sigma(1,\,j-1)} |N_1, N_2,\ldots,0_j,\ldots\rangle, \tag{8.89} $$

$$ \hat{a}_j^\dagger |N_1, N_2,\ldots,0_j,\ldots\rangle = (-1)^{\Sigma(1,\,j-1)} |N_1, N_2,\ldots,1_j,\ldots\rangle, \qquad \hat{a}_j^\dagger |N_1, N_2,\ldots,1_j,\ldots\rangle = 0, \tag{8.90} $$

where the symbol Σ(J, J′) means the sum of all occupancy numbers in the states with numbers from J to J′, including the border points:

$$ \Sigma(J, J') \equiv \sum_{j=J}^{J'} N_j, \tag{8.91} $$

so the sum participating in Eqs. (89)-(90) is the total occupancy of all states with numbers below j. (The states are supposed to be numbered in a fixed albeit arbitrary order.) As a result, these relations may be conveniently summarized in the following verbal form: if an operator replaces the jth state's occupancy with the opposite one (either 1 with 0, or vice versa), it also changes the sign of the result if (and only if) the total number of particles in the states with j′ < j is odd.

Let us use this (perhaps somewhat counter-intuitive) sign alternation rule to spell out the ket-vector |1,1⟩ of a completely filled two-state system, formed from the vacuum state |0,0⟩ in two different ways. If we start by creating a fermion in state 1, we get

$$ \hat{a}_1^\dagger |0,0\rangle = (-1)^0 |1,0\rangle = |1,0\rangle, \qquad \hat{a}_2^\dagger \hat{a}_1^\dagger |0,0\rangle = \hat{a}_2^\dagger |1,0\rangle = (-1)^1 |1,1\rangle = -|1,1\rangle, \tag{8.92a} $$

while if the operator order is different, the result is

$$ \hat{a}_2^\dagger |0,0\rangle = (-1)^0 |0,1\rangle = |0,1\rangle, \qquad \hat{a}_1^\dagger \hat{a}_2^\dagger |0,0\rangle = \hat{a}_1^\dagger |0,1\rangle = (-1)^0 |1,1\rangle = |1,1\rangle, \tag{8.92b} $$

so

$$ \left(\hat{a}_1^\dagger \hat{a}_2^\dagger + \hat{a}_2^\dagger \hat{a}_1^\dagger\right) |0,0\rangle = 0. \tag{8.93} $$

Since the action of any of these operator products on any initial state other than the vacuum one also gives the null ket, we may write the following operator equality:


$$ \hat{a}_1^\dagger \hat{a}_2^\dagger + \hat{a}_2^\dagger \hat{a}_1^\dagger \equiv \left\{\hat{a}_1^\dagger, \hat{a}_2^\dagger\right\} = \hat{0}. \tag{8.94} $$

It is straightforward to check that this result is valid for Dirac vectors of an arbitrary length, and does not depend on the occupancy of other states, so we may generalize it as

$$ \left\{\hat{a}_j^\dagger, \hat{a}_{j'}^\dagger\right\} = \left\{\hat{a}_j, \hat{a}_{j'}\right\} = \hat{0}; \tag{8.95} $$

these equalities hold for j = j′ as well. On the other hand, an absolutely similar calculation shows that the mixed creation-annihilation anticommutators do depend on whether the states are different or not (fermionic operators: commutation relations):28

$$ \left\{\hat{a}_j, \hat{a}_{j'}^\dagger\right\} = \hat{I}\,\delta_{jj'}. \tag{8.96} $$
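These sign-factor definitions are easy to verify numerically. The sketch below (an illustrative construction, not part of the original text) represents the operators of a two-state fermionic system as 4×4 matrices built exactly by the rule of Eqs. (89)-(90), and checks Eqs. (93) and (95)-(96):

```python
import numpy as np

# Two-state fermionic operators built by the sign rule of Eqs. (89)-(90).
sp = np.array([[0., 1.], [0., 0.]])     # |0><1| : annihilation within one state
Z, I2 = np.diag([1., -1.]), np.eye(2)
a1 = np.kron(sp, I2)                    # j = 1: no occupied states to its left
a2 = np.kron(Z, sp)                     # j = 2: sign factor (-1)^{N_1}
vac = np.array([1., 0., 0., 0.])        # the vacuum state |0,0>

def anti(A, B):                         # the anticommutator {A, B}
    return A @ B + B @ A

# Eq. (93): the two orders of filling the system give opposite signs
assert np.allclose(a2.T @ a1.T @ vac, -(a1.T @ a2.T @ vac))
# Eqs. (95)-(96): the anticommutation relations
for A in (a1, a2):
    for B in (a1, a2):
        assert np.allclose(anti(A, B), np.zeros((4, 4)))
        target = np.eye(4) if A is B else np.zeros((4, 4))
        assert np.allclose(anti(A, B.T), target)
print("all fermionic sign-rule checks passed")
```

The same construction scales to any number of states by extending the string of sign factors Z to all states with j′ < j.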

These equations look very much like Eqs. (75)-(76) for bosons, "only" with the replacement of the commutators with anticommutators. Since the core laws of quantum mechanics, including the operator compatibility (Sec. 4.5) and the Heisenberg equation (4.199) of operator evolution in time, involve commutators rather than anticommutators, one might think that the behavior of bosonic and fermionic multiparticle systems should be dramatically different. However, the difference is not as big as one could expect; indeed, a straightforward check shows that the sign factors in Eqs. (89)-(90) just compensate those in the Slater determinant, and thus make the key relation (87) valid for fermions as well. (Indeed, this is the very goal of the introduction of these factors.)

To illustrate this fact on the simplest example, let us examine what the second quantization formalism says about the dynamics of non-interacting particles in the system whose single-particle properties we have discussed repeatedly: two nearly-similar potential wells, coupled by tunneling through the separating potential barrier – see, e.g., Figs. 2.21 or 7.4. If the coupling is so small that the states localized in the wells are only weakly perturbed, then in the basis of these states, the single-particle Hamiltonian of the system may be represented by the 2×2 matrix (5.3). With the energy reference selected in the middle between the energies of the unperturbed states, the coefficient b vanishes, and this matrix is reduced to

$$ \mathrm{h} = \mathbf{c}\cdot\boldsymbol{\sigma} = \begin{pmatrix} c_z & c_- \\ c_+ & -c_z \end{pmatrix}, \qquad \text{with } c_\pm \equiv c_x \pm i c_y, \tag{8.97} $$

and its eigenvalues to

$$ \varepsilon_\pm = \pm c, \qquad \text{with } c \equiv |\mathbf{c}| = \left(c_x^2 + c_y^2 + c_z^2\right)^{1/2}. \tag{8.98} $$

Using the key relation (87) together with Eq. (97), we may represent the Hamiltonian of the whole system of particles in terms of the creation-annihilation operators:

$$ \hat{H} = c_z \hat{a}_1^\dagger \hat{a}_1 + c_- \hat{a}_1^\dagger \hat{a}_2 + c_+ \hat{a}_2^\dagger \hat{a}_1 - c_z \hat{a}_2^\dagger \hat{a}_2, \tag{8.99} $$

where â†₁,₂ and â₁,₂ are the operators of creation and annihilation of a particle in the corresponding potential well. (Again, in the second quantization approach the particles are not numbered at all!) As

28 A by-product of this calculation is proof that the operator defined by Eq. (72) counts the number of particles N_j (now equal to either 1 or 0), just as it does for bosons.


Eq. (72) shows, the first and the last terms of the right-hand side of Eq. (99) describe the particle energies ε₁,₂ = ±c_z in the uncoupled wells,

$$ c_z \hat{a}_1^\dagger \hat{a}_1 = c_z \hat{N}_1 \equiv \varepsilon_1 \hat{N}_1, \qquad -c_z \hat{a}_2^\dagger \hat{a}_2 = -c_z \hat{N}_2 \equiv \varepsilon_2 \hat{N}_2, \tag{8.100} $$

while the sum of the middle two terms is the second-quantization description of tunneling between the wells. Now we can use the general Eq. (4.199) of the Heisenberg picture to spell out the equations of motion of the creation-annihilation operators. For example,

$$ i\hbar \dot{\hat{a}}_1 = \left[\hat{a}_1, \hat{H}\right] = c_z\left[\hat{a}_1, \hat{a}_1^\dagger \hat{a}_1\right] + c_-\left[\hat{a}_1, \hat{a}_1^\dagger \hat{a}_2\right] + c_+\left[\hat{a}_1, \hat{a}_2^\dagger \hat{a}_1\right] - c_z\left[\hat{a}_1, \hat{a}_2^\dagger \hat{a}_2\right]. \tag{8.101} $$

Since the Bose and Fermi operators satisfy different commutation relations, one could expect the right-hand side of this equation to be different for bosons and fermions. However, it is not so! Indeed, all the commutators on the right-hand side of Eq. (101) have the following form:

$$ \left[\hat{a}_j, \hat{a}_{j'}^\dagger \hat{a}_{j''}\right] = \hat{a}_j \hat{a}_{j'}^\dagger \hat{a}_{j''} - \hat{a}_{j'}^\dagger \hat{a}_{j''} \hat{a}_j. \tag{8.102} $$

As Eqs. (74) and (94) show, the first pair product of operators on the right-hand side may be recast as

$$ \hat{a}_j \hat{a}_{j'}^\dagger = \hat{I}\,\delta_{jj'} \pm \hat{a}_{j'}^\dagger \hat{a}_j, \tag{8.103} $$

where the upper sign pertains to bosons and the lower one to fermions, while according to Eqs. (76) and (95), the very last pair product in Eq. (102) is

$$ \hat{a}_{j''} \hat{a}_j = \pm \hat{a}_j \hat{a}_{j''}, \tag{8.104} $$

with the same sign convention. Plugging these expressions into Eq. (102), we see that regardless of the particle type, there is a universal (and generally very useful) commutation relation,

$$ \left[\hat{a}_j, \hat{a}_{j'}^\dagger \hat{a}_{j''}\right] = \hat{a}_{j''}\,\delta_{jj'}, \tag{8.105} $$

valid for both bosons and fermions. As a result, the Heisenberg equation of motion for the operator â₁, and the equation for â₂ (which may be obtained absolutely similarly), are also universal:29

$$ i\hbar \dot{\hat{a}}_1 = c_z \hat{a}_1 + c_- \hat{a}_2, \qquad i\hbar \dot{\hat{a}}_2 = c_+ \hat{a}_1 - c_z \hat{a}_2. \tag{8.106} $$
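The universal relation (105) may also be checked numerically for both statistics. In the illustrative sketch below (not from the original text), the bosonic modes are necessarily truncated at n_max quanta, so the check is restricted to basis states whose occupancies cannot reach the truncation boundary under the three-operator products:

```python
import numpy as np

# Check [a_j, a_j'^dag a_j''] = a_j'' delta_jj' , Eq. (105), for two modes.
def kron_all(factors):
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

n_max = 6
ab = np.diag(np.sqrt(np.arange(1.0, n_max)), 1)   # bosonic a: a|n> = sqrt(n)|n-1>
sp, Z = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.])

bosons = [kron_all([ab, np.eye(n_max)]), kron_all([np.eye(n_max), ab])]
fermions = [kron_all([sp, np.eye(2)]), kron_all([Z, sp])]   # Jordan-Wigner signs

# Basis states with all occupancies <= n_max - 2 never touch the truncation:
safe = [n1 * n_max + n2 for n1 in range(n_max - 1) for n2 in range(n_max - 1)]

def violation(ops, cols):
    err = 0.0
    for j, aj in enumerate(ops):
        for jp, ajp in enumerate(ops):
            for jpp, ajpp in enumerate(ops):
                lhs = aj @ ajp.T @ ajpp - ajp.T @ ajpp @ aj
                rhs = (j == jp) * ajpp
                err = max(err, np.abs((lhs - rhs)[:, cols]).max())
    return err

print(violation(bosons, safe), violation(fermions, list(range(4))))
```

Both printed numbers vanish to machine precision, confirming that the same relation holds for commuting and anticommuting operators alike.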

This is a system of two coupled linear differential equations, which is similar to the equations for the c-number probability amplitudes of single-particle wavefunctions of a two-level system – see, e.g., Eq. (2.201) and the model solution of Problem 4.25. Their general solution is a linear superposition

$$ \hat{a}_{1,2}(t) = \sum_\pm \hat{\alpha}_{1,2}^{(\pm)} \exp\{\lambda_\pm t\}. \tag{8.107} $$



29 The equations of motion for the creation operators â†₁,₂ are just the Hermitian conjugates of Eqs. (106), and do not add any new information about the system's dynamics.

As usual, in order to find the exponents λ, it is sufficient to plug a particular solution â₁,₂(t) = α̂₁,₂ exp{λt} into Eq. (106) and require that the determinant of the resulting linear system for the "coefficients" (actually, time-independent operators) α̂₁,₂ equals zero. This gives us the following characteristic equation:

$$ \begin{vmatrix} c_z - i\hbar\lambda & c_- \\ c_+ & -c_z - i\hbar\lambda \end{vmatrix} = 0, \tag{8.108} $$

with two roots λ± = ±iΩ/2, where Ω ≡ 2c/ħ – cf. Eq. (5.20). Now plugging each of the roots, one by one, into the system of equations for α̂₁,₂, we can find these operators, and hence the general solution of the system (106) for arbitrary initial conditions. Let us consider the simple case c_y = c_z = 0 (meaning in particular that the wells are exactly aligned – see Fig. 2.21), so ħΩ/2 ≡ c = c_x; then the solution of Eq. (106) is

$$ \hat{a}_1(t) = \hat{a}_1(0)\cos\frac{\Omega t}{2} - i\,\hat{a}_2(0)\sin\frac{\Omega t}{2}, \qquad \hat{a}_2(t) = -i\,\hat{a}_1(0)\sin\frac{\Omega t}{2} + \hat{a}_2(0)\cos\frac{\Omega t}{2} \qquad \text{(quantum oscillations: second-quantization form).} \tag{8.109} $$

Multiplying the first of these relations by its Hermitian conjugate, and ensemble-averaging the result, we get

$$ \langle N_1 \rangle = \left\langle \hat{a}_1^\dagger(t)\hat{a}_1(t)\right\rangle = \left\langle \hat{a}_1^\dagger(0)\hat{a}_1(0)\right\rangle \cos^2\frac{\Omega t}{2} + \left\langle \hat{a}_2^\dagger(0)\hat{a}_2(0)\right\rangle \sin^2\frac{\Omega t}{2} - i\left\langle \hat{a}_1^\dagger(0)\hat{a}_2(0) - \hat{a}_2^\dagger(0)\hat{a}_1(0)\right\rangle \sin\frac{\Omega t}{2}\cos\frac{\Omega t}{2}. \tag{8.110} $$

Let the initial state of the system be a single Dirac state, i.e. have a definite number of particles in each well; in this case, only the first two terms on the right-hand side of Eq. (110) are different from zero, giving:30

$$ \langle N_1 \rangle = N_1(0)\cos^2\frac{\Omega t}{2} + N_2(0)\sin^2\frac{\Omega t}{2}. \tag{8.111} $$

For one particle, initially placed in either well, this gives us our old result (2.181), describing the usual quantum oscillations of the particle between the two wells with the frequency Ω. However, Eq. (111) is valid for any set of initial occupancies; let us use this fact. For example, starting with two particles, one in each well, we get ⟨N₁⟩ = 1 regardless of time. So, the occupancies do not oscillate, and no experiment may detect the quantum oscillations, though their frequency Ω is still formally present in the time evolution equations. This fact may be interpreted as the simultaneous quantum oscillations of the two particles between the wells, exactly in anti-phase. For bosons, we can go on to even larger occupancies by preparing the system, for example, in the state with N₁(0) = N, N₂(0) = 0. In this case, Eq. (111) shows that the quantum oscillation amplitude increases N-fold; this is a particular manifestation of the general fact that bosons can be (and in time, stay) in the same quantum state. On the other hand, for fermions we cannot increase the initial occupancies beyond 1, so the largest oscillation amplitude we can get is if we initially fill just one well.

30 For the second well's occupancy, the result is complementary: ⟨N₂⟩(t) = N₁(0) sin²(Ωt/2) + N₂(0) cos²(Ωt/2), giving a good sanity check: ⟨N₁⟩(t) + ⟨N₂⟩(t) = N₁(0) + N₂(0) = const.
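The predictions of Eqs. (109)-(111) are easy to reproduce numerically. The following sketch (with illustrative assumptions: ħ = 1, c_z = c_y = 0, and the two-site fermionic operators built by the rule of Eqs. (89)-(90)) propagates the system exactly and confirms both the single-particle oscillations and the absence of occupancy oscillations in the doubly-filled state:

```python
import numpy as np

# Two tunnel-coupled sites, Eq. (99) with c_z = c_y = 0, c_x = c, Omega = 2c.
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation on one site
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
a1, a2 = np.kron(sp, I2), np.kron(Z, sp)         # Jordan-Wigner sign string
c = 1.0
H = c * (a1.conj().T @ a2 + a2.conj().T @ a1)    # the tunneling Hamiltonian

vals, vecs = np.linalg.eigh(H)
def evolve(state, t):                            # exact evolution exp(-iHt)|state>
    return vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ state))

vac = np.array([1, 0, 0, 0], dtype=complex)
one = a1.conj().T @ vac                          # |1,0>: a single particle
two = a2.conj().T @ a1.conj().T @ vac            # the filled state, -|1,1>

N1 = a1.conj().T @ a1
t, Omega = 0.7, 2 * c
n1_single = np.vdot(evolve(one, t), N1 @ evolve(one, t)).real
n1_double = np.vdot(evolve(two, t), N1 @ evolve(two, t)).real
print(n1_single, np.cos(Omega * t / 2)**2)       # the two values coincide
print(n1_double)                                 # stays equal to 1 at all times
```

The single-particle occupancy follows cos²(Ωt/2), while for two fermions the oscillations of the two particles cancel exactly, as stated above.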


The Dirac approach may be readily generalized to more complex systems. For example, Eq. (99) implies that an arbitrary system of potential wells with weak tunneling coupling between adjacent wells may be described by the Hamiltonian

$$ \hat{H} = \sum_j \varepsilon_j \hat{a}_j^\dagger \hat{a}_j + \sum_{\{j,j'\}} \left(t_{jj'}\, \hat{a}_j^\dagger \hat{a}_{j'} + \text{h.c.}\right), \tag{8.112} $$

where the symbol {j, j′} means that the second sum is restricted to pairs of next-neighbor wells – see, e.g., Eq. (2.203) and its discussion. Note that this Hamiltonian is still a quadratic form of the creation-annihilation operators, so the Heisenberg-picture equations of motion of these operators are still linear, and their exact solutions, though possibly cumbersome, may be studied in detail. Due to this fact, the Hamiltonian (112) is widely used for the study of some phenomena, for example, the very interesting Anderson localization effects, in which a random distribution of the localized-site energies ε_j prevents tunneling particles, within a certain energy range, from spreading to unlimited distances.31

8.4. Perturbative approaches

The situation becomes much more difficult if we need to account for explicit interactions between the particles. Let us assume that the interaction may be reduced to that between their pairs (as in the case of the Coulomb forces and most other interactions32), so it may be described by the following "pair-interaction" Hamiltonian:

$$ \hat{U}_{\rm int} = \frac{1}{2} \sum_{\substack{k,k'=1 \\ k\neq k'}}^{N} \hat{u}_{\rm int}(\mathbf{r}_k, \mathbf{r}_{k'}), \tag{8.113} $$

with the front factor of ½ compensating the double-counting of each particle pair by this double sum. The translation of this operator to the second-quantization form may be done absolutely similarly to the derivation of Eq. (87), and gives a similar (though naturally more involved) result – the second of the pair-interaction Hamiltonian's two forms:

$$ \hat{U}_{\rm int} = \frac{1}{2} \sum_{j,j',l,l'} u_{jj'll'}\, \hat{a}_j^\dagger \hat{a}_{j'}^\dagger \hat{a}_{l'} \hat{a}_l, \tag{8.114} $$

where the two-particle matrix elements are defined similarly to Eq. (82):

$$ u_{jj'll'} \equiv \langle \beta_j \beta_{j'} |\,\hat{u}_{\rm int}\,| \beta_l \beta_{l'} \rangle. \tag{8.115} $$

The only new feature of Eq. (114) is the specific order of the indices of the creation operators. Note the mnemonic rule for writing this expression, similar to that for Eq. (87): each term corresponds to moving a pair of particles from states l and l′ to states j′ and j (in this order!), weighed by the corresponding two-particle matrix element (115).

However, when this term is taken into account, the resulting Heisenberg equations of the time evolution of the creation-annihilation operators become nonlinear, so solving them and calculating observables from the results is usually impossible, at least analytically. The only case when some general results

31 For a review of the 1D version of this problem, see, e.g., J. Pendry, Adv. Phys. 43, 461 (1994).
32 A simple but important example from the condensed matter theory is the so-called Hubbard model, in which particle repulsion limits their number on each of the localized sites to either 0, or 1, or 2, with negligible interaction of the particles on different sites – though the next-neighbor sites are still connected by tunneling, as in Eq. (112).


may be obtained is the weak interaction limit. In this case, the unperturbed Hamiltonian contains only single-particle terms such as (79), and we can always (at least as a matter of principle :-) find such a basis of orthonormal single-particle states β_j in which that Hamiltonian is diagonal in the Dirac representation:

$$ \hat{H}^{(0)} = \sum_j \varepsilon_j^{(0)} \hat{a}_j^\dagger \hat{a}_j. \tag{8.116} $$

Now we can use Eq. (6.14), in this basis, to calculate the interaction energy as a first-order perturbation:

$$ E_{\rm int}^{(1)} = \langle N_1, N_2,\ldots |\,\hat{U}_{\rm int}\,| N_1, N_2,\ldots\rangle = \frac{1}{2} \sum_{j,j',l,l'} u_{jj'll'}\, \langle N_1, N_2,\ldots |\,\hat{a}_j^\dagger \hat{a}_{j'}^\dagger \hat{a}_{l'} \hat{a}_l\,| N_1, N_2,\ldots\rangle. \tag{8.117} $$

Since, according to Eq. (63), the Dirac states with different occupancies are orthogonal, the last long bracket is different from zero only for three particular subsets of its indices:

(i) j ≠ j′, l = j, and l′ = j′. In this case, the four-operator product in Eq. (117) is equal to â_j†â_{j′}†â_{j′}â_j, and applying the proper commutation rules twice, we can bring it to the so-called normal ordering, with each creation operator standing to the left of the corresponding annihilation operator, thus forming the particle number operators (72):

$$ \hat{a}_j^\dagger \hat{a}_{j'}^\dagger \hat{a}_{j'} \hat{a}_j = \pm \hat{a}_j^\dagger \hat{a}_{j'}^\dagger \hat{a}_j \hat{a}_{j'} = \pm \hat{a}_j^\dagger \left(\pm \hat{a}_j \hat{a}_{j'}^\dagger\right) \hat{a}_{j'} = \hat{a}_j^\dagger \hat{a}_j\, \hat{a}_{j'}^\dagger \hat{a}_{j'} = \hat{N}_j \hat{N}_{j'}, \tag{8.118} $$

with a similar (plus) sign of the final result for bosons and fermions.

(ii) j ≠ j′, l = j′, and l′ = j. In this case, the four-operator product is equal to â_j†â_{j′}†â_jâ_{j′}, and bringing it to the form N̂_jN̂_{j′} requires only one commutation:

$$ \hat{a}_j^\dagger \hat{a}_{j'}^\dagger \hat{a}_j \hat{a}_{j'} = \hat{a}_j^\dagger \left(\pm \hat{a}_j \hat{a}_{j'}^\dagger\right) \hat{a}_{j'} = \pm \hat{a}_j^\dagger \hat{a}_j\, \hat{a}_{j'}^\dagger \hat{a}_{j'} = \pm \hat{N}_j \hat{N}_{j'}, \tag{8.119} $$

with the upper sign for bosons and the lower sign for fermions.

(iii) All indices are equal to each other, giving â_j†â_{j′}†â_{l′}â_l = â_j†â_j†â_jâ_j. For fermions, such an operator (that "tries" to create or to kill two particles in a row, in the same state) immediately gives the null vector. In the case of bosons, we may use Eq. (74) to commute the internal pair of operators, getting

$$ \hat{a}_j^\dagger \hat{a}_j^\dagger \hat{a}_j \hat{a}_j = \hat{a}_j^\dagger \left(\hat{a}_j \hat{a}_j^\dagger - \hat{I}\right) \hat{a}_j = \hat{N}_j\left(\hat{N}_j - \hat{I}\right). \tag{8.120} $$

Note, however, that this expression formally covers the fermion case as well (always giving zero). As a result, Eq. (117) may be rewritten in the following universal form (particle interaction: energy correction):

$$ E_{\rm int}^{(1)} = \frac{1}{2} \sum_{\substack{j,j' \\ j\neq j'}} N_j N_{j'} \left(u_{jj'jj'} \pm u_{jj'j'j}\right) + \frac{1}{2} \sum_j N_j \left(N_j - 1\right) u_{jjjj}, \tag{8.121} $$

with, as before, the upper sign for bosons and the lower sign for fermions.

The corollaries of this important result are very different for bosons and fermions. In the former case, the last term usually dominates, because the matrix elements (115) are typically the largest when

all basis functions coincide. Note that this term allows a very simple interpretation: the number of the diagonal matrix elements it sums up for each state (j) is just the number of interacting particle pairs residing in that state.

In contrast, for fermions the last term is zero, and the interaction energy is proportional to the difference between the two terms inside the first parentheses. To spell them out, let us consider the case when there is no direct spin-orbit interaction. Then the vectors β_j of the single-particle state basis may be represented as direct products |o_j⟩⊗|m_j⟩ of their orbital and spin-orientation parts. (Here, for the brevity of notation, I am using m instead of m_s.) For spin-½ particles, including electrons, m_j may equal only either +½ or –½; in this case, the spin part of the first matrix element, proportional to u_{jj′jj′}, equals

$$ \langle m|\otimes\langle m'|\;\,|m\rangle\otimes|m'\rangle, \tag{8.122} $$

where, as in the general Eq. (115), the position of a particular state vector in each direct product encodes the particle's number. Since the spins of different particles are defined in different Hilbert spaces, we may swap their state vectors to get

$$ \langle m|\otimes\langle m'|\;\,|m\rangle\otimes|m'\rangle = \langle m|m\rangle_1\, \langle m'|m'\rangle_2 = 1, \tag{8.123} $$

for any pair of j and j′. On the other hand, the second matrix element, u_{jj′j′j}, is factored as

$$ \langle m|\otimes\langle m'|\;\,|m'\rangle\otimes|m\rangle = \langle m|m'\rangle_1\, \langle m'|m\rangle_2 = \delta_{mm'}. \tag{8.124} $$

In this case, it is convenient to rewrite Eq. (121) in the coordinate representation, by using single-particle wavefunctions called spin-orbitals:

$$ \psi_j(\mathbf{r}) \equiv \langle \mathbf{r}|\beta_j\rangle = \langle \mathbf{r}|o_j\rangle\,|m_j\rangle \qquad \text{(spin-orbital).} \tag{8.125} $$

They differ from the spatial parts of the usual orbital wavefunctions of the type (4.233) only in that their index j should be understood as the set of the orbital-state and spin-orientation indices.33 Also, due to the Pauli-principle restriction of the numbers N_j to either 0 or 1, Eq. (121) may be rewritten without the explicit occupancy numbers, with the understanding that the summation is extended only over the pairs of occupied states. As a result, it becomes (the energy correction due to fermion interaction):

$$ E_{\rm int}^{(1)} = \frac{1}{2} \sum_{\substack{j,j' \\ j\neq j'}} \int d^3r \int d^3r' \left[\psi_j^*(\mathbf{r})\,\psi_{j'}^*(\mathbf{r}')\,u_{\rm int}(\mathbf{r},\mathbf{r}')\,\psi_j(\mathbf{r})\,\psi_{j'}(\mathbf{r}') - \psi_j^*(\mathbf{r})\,\psi_{j'}^*(\mathbf{r}')\,u_{\rm int}(\mathbf{r},\mathbf{r}')\,\psi_{j'}(\mathbf{r})\,\psi_j(\mathbf{r}')\right]. \tag{8.126} $$
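To make the relative size of the direct and exchange terms in Eq. (126) concrete, here is a small numerical sketch; the 1D Gaussian orbitals and the soft-core form of u_int used below are illustrative assumptions only, not taken from the text:

```python
import numpy as np

# Direct vs. exchange integrals of Eq. (126) on a 1D grid.
x = np.linspace(-8, 8, 401)
dx = x[1] - x[0]

def orbital(center):
    """A normalized Gaussian orbital centered at `center` (illustrative)."""
    psi = np.exp(-0.5 * (x - center)**2)
    return psi / np.sqrt(np.sum(psi**2) * dx)

psi1, psi2 = orbital(-1.0), orbital(+1.0)
u = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-core u_int(x, x')

rho1, rho2 = psi1**2, psi2**2
E_dir = rho1 @ u @ rho2 * dx**2          # |psi_1(x)|^2 u |psi_2(x')|^2 term
overlap = psi1 * psi2
E_ex = overlap @ u @ overlap * dx**2     # psi_1 psi_2 (x) u psi_2 psi_1 (x') term

print(f"direct: {E_dir:.4f}, exchange: {E_ex:.4f}")
```

For these well-separated orbitals, the exchange integral is positive but noticeably smaller than the direct one, since it is suppressed by the square of the orbital overlap.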

In particular, for a system of two electrons, we may limit the summation to just two states (j, j’ = 1, 2). As a result, we return to Eqs. (39)-(41), with the bottom (minus) sign in Eq. (39), corresponding to the triplet spin states. Hence, Eq. (126) may be considered as the generalization of the direct and exchange interaction picture to an arbitrary number of orbitals and an arbitrary total number N of electrons. Note, however, that this formula cannot correctly describe the energy of the singlet spin

33 The spin-orbitals (125) are also close to the spinors (13), except that the former definition takes into account that the spin s of a single particle is fixed, so the spin-orbital may be indexed by the spin's orientation m ≡ m_s only. Also, if an orbital index is used, it should be clearly distinguished from j, i.e. the set of the orbital and spin indices. This is why I believe that the frequently met notation of spin-orbitals as ψ_{j,s}(r) may lead to confusion.


states, corresponding to the plus sign in Eq. (39), and also of the entangled triplet states.34 The reason is that the description of entangled spin states, given in particular by Eqs. (18) and (20), requires linear superpositions of different Dirac states. (Proof of this fact is left for the reader's exercise.)

Now comes a very important fact: the approximate result (126), added to the sum of unperturbed energies ε_j^{(0)}, equals the sum of exact eigenenergies of the so-called Hartree-Fock equation:35

$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + u(\mathbf{r})\right]\psi_j(\mathbf{r}) + \sum_{j'\neq j}\int \left[\psi_{j'}^*(\mathbf{r}')\,u_{\rm int}(\mathbf{r},\mathbf{r}')\,\psi_{j'}(\mathbf{r}')\,\psi_j(\mathbf{r}) - \psi_{j'}^*(\mathbf{r}')\,u_{\rm int}(\mathbf{r},\mathbf{r}')\,\psi_j(\mathbf{r}')\,\psi_{j'}(\mathbf{r})\right] d^3r' = \varepsilon_j\,\psi_j(\mathbf{r}) \qquad \text{(Hartree-Fock equation),} \tag{8.127} $$

where u(r) is the external-field potential acting on each particle separately – see the second of Eqs. (79). An advantage of this equation in comparison with Eq. (126) is that it allows the (approximate) calculation of not only the energy spectrum of the system but also the corresponding spin-orbitals, with their electron-electron interaction taken into account. Of course, Eq. (127) is an integro-differential rather than just a differential equation. There are, however, efficient methods of numerical solution of such equations, typically based on iterative approaches. One more important practical trick is the exclusion of the filled internal electron shells (see Sec. 3.7) from the explicit calculations, because the shell states are virtually unperturbed by the valence electrons involved in typical atomic phenomena and chemical reactions. In this approach, the Coulomb field of the shells, described by fixed pre-calculated pseudo-potentials, is added to that of the nuclei. This approach dramatically cuts the computing resources necessary for systems of relatively heavy atoms, enabling a rather accurate simulation of the electronic and chemical properties of complex molecules, with thousands of electrons.36 As a result, the Hartree-Fock approximation has become the de-facto baseline of all so-called ab initio ("first-principle") calculations in the very important field of quantum chemistry.37 In departures from this baseline, there are two opposite trends. For higher accuracy (and typically smaller systems), several more complex "post-Hartree-Fock methods", notably including the configuration interaction method,38 have been developed. There is also a strong opposite trend of extending such ab initio methods to larger systems while sacrificing some of the results' accuracy and reliability.
The ultimate limit of this trend is applicable when the single-particle wavefunction overlaps are small and hence the exchange interaction is negligible. In this limit, the last term in the square brackets in Eq. (127) may be ignored, and the

34 Indeed, due to the condition j′ ≠ j and Eq. (124), the calculated negative exchange interaction is limited to electron state pairs with the same spin direction – such as the factorable triplet states (↑↑ and ↓↓) of a two-electron system, in which the contribution of E_ex, given by Eq. (41), to the total energy is also negative.
35 This equation was suggested in 1929 by Douglas Hartree for the direct interaction and extended to the exchange interaction by Vladimir Fock in 1930. To verify its compliance with Eq. (126), it is sufficient to multiply all terms of Eq. (127) by ψ_j*(r), integrate them over all r-space (so the right-hand side would give ε_j), and then sum the single-particle energies over all occupied states j.
36 For condensed-matter systems, this and other computational methods are applied to single elementary spatial cells, with a limited number of electrons in them, using cyclic boundary conditions.
37 See, e.g., A. Szabo and N. Ostlund, Modern Quantum Chemistry, Revised ed., Dover, 1996.
38 That method, in particular, allows the calculation of proper linear superpositions of the Dirac states (such as the entangled states for N = 2, discussed above) which are missing in the generic Hartree-Fock approach – see, e.g., the just-cited monograph by Szabo and Ostlund.


multiplier ψ_j(r) taken out of the integral, which is thus reduced to a differential equation – formally just the Schrödinger equation for a single particle in the following self-consistent effective potential:

$$ u_{\rm ef}(\mathbf{r}) = u(\mathbf{r}) + u_{\rm dir}(\mathbf{r}), \qquad u_{\rm dir}(\mathbf{r}) = \sum_{j'\neq j}\int \psi_{j'}^*(\mathbf{r}')\,u_{\rm int}(\mathbf{r},\mathbf{r}')\,\psi_{j'}(\mathbf{r}')\,d^3r' \qquad \text{(Hartree approximation).} \tag{8.128} $$

This is the so-called Hartree approximation, which gives reasonable results for some systems,39 especially those with low electron density. However, in dense electron systems (such as typical atoms, molecules, and condensed matter) the exchange interaction, described by the second term in the square brackets of Eqs. (126)-(127), may be as large as ~30% of the direct interaction, and frequently cannot be ignored. The currently dominant way of taking this interaction into account in the simplest possible form is the so-called Density Functional Theory,40 universally known by its acronym DFT. In this approach, the equation solved for each eigenfunction ψ_j(r) is a differential, Schrödinger-like Kohn-Sham equation:

$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + u(\mathbf{r}) + u_{\rm dir}^{\rm KS}(\mathbf{r}) + u_{\rm xc}(\mathbf{r})\right]\psi_j(\mathbf{r}) = \varepsilon_j\,\psi_j(\mathbf{r}) \qquad \text{(Kohn-Sham equation),} \tag{8.129} $$

where

$$ u_{\rm dir}^{\rm KS}(\mathbf{r}) = -e\phi(\mathbf{r}), \qquad \phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\,\frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}, \qquad \rho(\mathbf{r}) = -e\,n(\mathbf{r}), \tag{8.130} $$

and n(r) is the total electron density at a particular point, calculated as

$$ n(\mathbf{r}) = \sum_j \psi_j^*(\mathbf{r})\,\psi_j(\mathbf{r}). \tag{8.131} $$
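Equations such as (128) or the Kohn-Sham system (129)-(131) are solved by self-consistent iteration: the potential determines the orbitals, which in turn determine the potential. A minimal sketch of such a loop for the Hartree case, under purely illustrative assumptions (one dimension, ħ = m = 1, a harmonic external potential, a soft-core interaction, and two opposite-spin electrons sharing one orbital):

```python
import numpy as np

# Minimal 1D Hartree self-consistency loop (illustrative assumptions only).
N, L = 201, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Kinetic energy via second-order finite differences; u(x) = x^2/2.
T = (np.diag(np.full(N, 1.0)) - np.diag(np.full(N - 1, 0.5), 1)
     - np.diag(np.full(N - 1, 0.5), -1)) / dx**2
u_ext = 0.5 * x**2
u_int = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-core u_int

density = np.zeros(N)          # start from the non-interacting limit
eps = None
for _ in range(100):
    u_dir = u_int @ density * dx          # Eq. (128): potential of the partner
    vals, vecs = np.linalg.eigh(T + np.diag(u_ext + u_dir))
    psi = vecs[:, 0] / np.sqrt(dx)        # lowest orbital, grid-normalized
    if eps is not None and abs(vals[0] - eps) < 1e-10:
        break                             # self-consistency reached
    eps = vals[0]
    density = 0.5 * density + 0.5 * psi**2    # simple mixing for stability

print(f"converged orbital energy: {vals[0]:.6f}")
```

The converged orbital energy lies above the non-interacting value of 1/2 by roughly the expectation value of the direct (Hartree) repulsion, as the first-order estimate (121) would suggest.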

The most important feature of the Kohn-Sham Hamiltonian is the simplified description of the exchange and correlation effects by the effective exchange-correlation potential u_xc(r). This potential is calculated in various approximations, most of them valid only in the limit when the number of electrons in the system is very high. The simplest of them (proposed by Kohn et al. in the 1960s) is the Local Density Approximation (LDA), in which the effective exchange potential at each point r is a function only of the electron density n at the same point, taken from the theory of a uniform gas of free electrons.41 However, for many tasks of quantum chemistry, the accuracy given by the LDA is insufficient, because inside molecules the density n typically changes very fast. As a result, the DFT became widely accepted in that field only after the introduction, in the 1980s, of more accurate though more cumbersome models for u_xc(r), notably the so-called Generalized Gradient Approximations (GGAs).

Due to its relative simplicity, the DFT enables the calculation, with the same computing resources and reasonable precision, of some properties of much larger systems than the methods based on the

An extreme example of the Hartree approximation is the Thomas-Fermi model of heavy atoms (with Z >> 1), in which the atom’s electrons, at each distance r from the nucleus, are treated as an ideal, uniform Fermi gas, with a certain density n(r) corresponding to the local value uef(r), but a global value of their highest full single-particle energy,  = 0, to ensure the equilibrium. (The analysis of this model is left for the reader’s exercise.) 40 It had been developed by Walter Kohn and his associates (notably Pierre Hohenberg) in 1965-66, and eventually (in 1998) was marked with a Nobel Prize in Chemistry for W. Kohn. 41 Just for the reader’s reference: for a uniform, degenerate Fermi-gas of electrons (with the Fermi energy  >> F kBT), the most important, exchange part ux of uxc may be calculated analytically: ux = –(3/4)e2kF/40, where the Fermi momentum kF = (2meF)1/2/ is defined by the electron density: n = 2(4/3)kF3/(2)3  kF3/32. Chapter 8

Page 28 of 52

KohnSham equation

Essential Graduate Physics

QM: Quantum Mechanics

Hartree-Fock theory. As the result, it has become a very popular tool for ab initio calculations. This popularity is enhanced by the availability of several advanced DFT software packages, some of them in the public domain. Please note, however, that despite this undisputable success, this approach has its problems. From my personal point of view, the most offensive of them is the implicit assumption of unphysical Coulomb interaction of an electron with itself – by dropping, on the way from Eq. (128) to Eq. (130), KS ). As a result, all the available DFT packages I am aware of the condition j’  j at the calculation of u dir are either unable to account for some charge transfer effects or require substantial artificial tinkering.42 Unfortunately, because of a lack of time/space, for details I have to refer the interested reader to specialized literature.43 8.5. Quantum computation and cryptography44 Now I have to review the emerging fields of quantum computation and cryptography.45 These fields are currently the subject of intensive research and development efforts, which have already brought, besides much hype, some results of general importance. My coverage will focus on these results, referring the reader interested in details to special literature.46 Because of the very active stage of the field, the style of this section is closer to a brief literature review than to a textbook’s section. Presently, most work on quantum computation and encryption is based on systems of spatially separated (and hence distinguishable) two-level systems – in this context, universally called qubits.47 Due to this distinguishability, the issues that were the focus of the first sections of this chapter, including the second quantization approach, are irrelevant here. On the other hand, systems of qubits have some interesting properties that have not been discussed in this course yet. 
First of all, a system of N >> 1 qubits may contain much more information than the same number N of classical bits. Indeed, according to the discussions in Chapter 4 and Sec. 5.1, an arbitrary pure state of a single qubit may be represented by its ket vector (4.37) – see also Eq. (5.1):48

$|\alpha\rangle_{N=1} = \alpha_1 |u_1\rangle + \alpha_2 |u_2\rangle$,    (8.132)

42 For just a few examples, see N. Simonian et al., J. Appl. Phys. 113, 044504 (2013); M. Medvedev et al., Science 355, 49 (2017); A. Hutama et al., J. Phys. Chem. C 121, 14888 (2017).
43 See, e.g., either the monograph by R. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules, Oxford U. Press, 1994, or the later textbook J. A. Steckel and D. Sholl, Density Functional Theory: Practical Introduction, Wiley, 2009. For a popular review and references to more recent work in this still-developing field, see A. Zangwill, Phys. Today 68, 34 (July 2015).
44 Due to contractual obligations, the much-needed substantial update of this section is postponed until mid-2024.
45 Since these fields are much related, they are often referred to under the common title of "quantum information science", though this term is rather misleading, de-emphasizing physical aspects of the topic.
46 Despite the recent flood of new books on the field, one of its first surveys, by M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge U. Press, 2000, is perhaps still the best one.
47 In some texts, the term qubit (or "Qbit", or "Q-bit") is used instead for the information contents of a two-level system – very much like the classical bit of information (in this context, frequently called "Cbit" or "C-bit") describes the information contents of a classical bistable system – see, e.g., SM Sec. 2.2.
48 It is natural and common to employ, as |u_j⟩, the eigenstates of the observable that is eventually measured in the particular physical implementation of the qubit.


where {|u_j⟩} is any orthonormal two-state basis. It is common to write the kets of these base states as |0⟩ and |1⟩, so Eq. (132) takes the form

$|\alpha\rangle_{N=1} = a_0 |0\rangle + a_1 |1\rangle \equiv \sum_j a_j |j\rangle$.    (8.133)

(Here, and in the balance of this section, the letter j is used to denote an integer equal to either 0 or 1.) According to this relation, any state α of a qubit is completely defined by two complex c-numbers a_j, i.e. by 4 real numbers. Moreover, due to the normalization condition $|a_0|^2 + |a_1|^2 = 1$, we need just 3 independent real numbers – say, the Bloch sphere coordinates θ and φ (see Fig. 5.3), plus the common phase γ, which becomes important only when we consider states of a several-qubit system.

This is a good time to note that a qubit is very much different from any classical bistable system used to store single bits of information – such as the two possible voltage states of the usual SRAM cell (essentially, a positive-feedback loop of two transistor-based inverters). Namely, the stationary states of a classical bistable system, due to its nonlinearity, are stable with respect to small perturbations, so they may be very robust to unintentional interactions with their environment. In contrast, the qubit's state may be disturbed (i.e. its representation point on the Bloch sphere shifted) by even minor perturbations, because it does not have such an internal state-stabilization mechanism.49 For this reason, qubit-based systems are rather vulnerable to environment-induced drifts, including the dephasing and relaxation discussed in the previous chapter, creating major experimental challenges – see below.

Now, if we have a system of two qubits, the vector of its arbitrary pure state may be represented as a sum of 2² = 4 terms,50

$|\alpha\rangle_{N=2} = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle \equiv \sum_{j_1 j_2} a_{j_1 j_2} |j_1 j_2\rangle$,    (8.134)

with four complex coefficients, i.e. eight real numbers, subject to just one normalization condition that follows from the requirement ⟨α|α⟩ = 1:

$\sum_{j_1 j_2} \left| a_{j_1 j_2} \right|^2 = 1$.    (8.135)
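This parameter counting is easy to illustrate numerically; in the following sketch (not part of the original text), a random normalized two-qubit state is built and Eq. (135) is checked:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four complex amplitudes a_{j1 j2}, i.e. eight real numbers...
a = rng.normal(size=4) + 1j * rng.normal(size=4)

# ...subject to the single normalization condition (135)
a /= np.linalg.norm(a)
assert np.isclose(np.sum(np.abs(a) ** 2), 1.0)

# Measurement probabilities of the four basis states |00>, |01>, |10>, |11>
W = np.abs(a) ** 2
print({f"{j:02b}": round(float(p), 3) for j, p in zip(range(4), W)})
```

The four probabilities always add up to 1, whatever the (random) amplitudes.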

The evident generalization of Eqs. (133)-(134) to an arbitrary pure state of an N-qubit system is a sum of 2^N terms:

$|\alpha\rangle_{N} = \sum_{j_1, j_2, \ldots, j_N} a_{j_1 j_2 \ldots j_N} |j_1 j_2 \ldots j_N\rangle$,    (8.136)

including all possible combinations of 0s and 1s for the N indices j, so the state is fully described by 2^N complex numbers, i.e. 2×2^N ≡ 2^(N+1) real numbers, with only one constraint, similar to Eq. (135), imposed by the normalization condition. Let me emphasize that this exponential growth of the information contents would not be possible without the qubit state entanglement. Indeed, in the particular case when the qubit states are not entangled, i.e. are factorable:

49 In this aspect as well, the information processing systems based on qubits are much closer to classical analog computers (which were popular once, but nowadays are used for a few special applications only) than to classical digital ones.
50 Here and in most instances below I use the same shorthand notation as was used at the beginning of this chapter – cf. Eq. (1b). In this short form, the qubit's number is coded by the order of its state index inside a full ket vector, while in the long form, such as in Eq. (137), by the order of its single-qubit vector in a full direct product.



$|\alpha\rangle_{N} = |\alpha_1\rangle \otimes |\alpha_2\rangle \otimes \ldots \otimes |\alpha_N\rangle$,    (8.137)

where each |α_n⟩ is described by an equality similar to Eq. (133) with its individual expansion coefficients, the system state description requires only 3N – 1 real numbers – e.g., N sets {θ, φ, γ} less one common phase.

However, it would be wrong to project this exponential growth of information contents directly on the capabilities of quantum computation, because this process has to include the output information readout, i.e. qubit state measurements. Due to the fundamental intrinsic uncertainty of quantum systems, the measurement of a single qubit, even in a pure state (133), generally may give either of two results, with probabilities $W_0 = |a_0|^2$ and $W_1 = |a_1|^2$. To comply with the general notion of computation, any quantum computer has to provide certain (or virtually certain) results, and hence the probabilities W_j have to be very close to either 0 or 1, so before the measurement, each measured qubit has to be in one of the basis states – either |0⟩ or |1⟩. This means that the computational system with N output qubits, just before their final readout, has to be in one of the factorable states

$|\alpha\rangle_{N} = |j_1\rangle \otimes |j_2\rangle \otimes \ldots \otimes |j_N\rangle \equiv |j_1 j_2 \ldots j_N\rangle$,    (8.138)

which is a very small subset even of the set of all unentangled states (137), and whose maximum information contents is just N classical bits. Now the reader may start thinking that this constraint strips quantum computations of any advantages over their classical counterparts, but such a view is also superficial. To show that, let us consider the scheme of the "baseline" type of quantum computation, shown in Fig. 3.51

[Fig. 8.3. The baseline scheme of quantum computation: the classical bits of the input number determine the prepared initial qubit states |j₁⟩in, |j₂⟩in, …, |j_N⟩in; a unitary transform U acts on all N qubits; the measurement of their final states |j₁⟩out, …, |j_N⟩out yields the classical bits of the output number.]

Here each horizontal line (sometimes called a "wire"52) corresponds to a single qubit, tracing its time evolution in the same direction as in the usual plots of time functions: from left to right. This means

51 Numerous modifications of this scheme have been suggested, for example with the number of output qubits different from that of input qubits, etc. Some of these options are discussed at the end of this section.
52 The notion of "wires" stems from the similarity between such quantum schemes and the schematics of classical computation circuits – see, e.g., Fig. 4a below. In the classical case, the lines may be indeed understood as physical wires connecting physical devices: logic gates and/or memory cells. In this context, note that classical computer components also have non-zero time delays, so even in that case, the left-to-right device ordering is useful to indicate the timing of (and frequently the causal relation between) the signals.


that the left column of ket vectors describes the initial state of the qubits,53 while the right column describes their final (but pre-measurement) state. The box labeled U represents the qubit evolution in time due to their specially arranged interactions with each other and/or external drive "forces". These forces are assumed to be noise-free, and the system, during this evolution, is supposed to be ideally isolated from any dephasing and energy-dissipating environment, so the process may be described by a unitary operator defined in the 2^N-dimensional Hilbert space of N qubits:

$|\alpha\rangle_{\rm out} = \hat U \, |\alpha\rangle_{\rm in}$.    (8.139)

With the condition that the input and output states have the simple form (138), this equality reads

$|j_1\rangle_{\rm out} |j_2\rangle_{\rm out} \ldots |j_N\rangle_{\rm out} = \hat U \; |j_1\rangle_{\rm in} |j_2\rangle_{\rm in} \ldots |j_N\rangle_{\rm in}$.    (8.140)

The art of quantum computer design consists of selecting such unitary operators Û that would:
– satisfy Eq. (140),
– be physically implementable, and
– enable substantial performance advantages of the quantum computation over its classical counterparts with similar functionality, at least for some digital functions (algorithms).
I will have time/space to demonstrate the possibility of such advantages on just one, perhaps the simplest example – the so-called Deutsch problem,54 on the way discussing several common notions and issues of this field.

Let us consider the family of single-bit classical Boolean functions j_out = f(j_in). Since both j are Boolean variables, i.e. may take only the values 0 and 1, there are evidently only 2² = 4 such functions – see the first four columns of the following table:

 f      f(0)   f(1)   class      F    f(1) – f(0)
 f1      0      0     constant   0        0
 f2      0      1     balanced   1       +1
 f3      1      0     balanced   1       –1
 f4      1      1     constant   0        0
                                                   (8.141)

Of them, the functions f1 and f4, whose values are independent of their arguments, are called constants, while the functions f2 (called "YES" or "IDENTITY") and f3 ("NOT" or "INVERSION") are called balanced. The Deutsch problem is to determine the class of a single-bit function, implemented in a "black box", as being either constant or balanced, using just one experiment.
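The content of table (141) may be spelled out in a few lines of code (an illustration only, not part of the original text):

```python
# The four single-bit Boolean functions f(j), as in table (8.141)
fs = {
    "f1": lambda j: 0,       # constant
    "f2": lambda j: j,       # YES / IDENTITY
    "f3": lambda j: 1 - j,   # NOT / INVERSION
    "f4": lambda j: 1,       # constant
}

for name, f in fs.items():
    # F = f(0) XOR f(1): it equals 0 for the constant and 1 for the balanced functions
    F = f(0) ^ f(1)
    cls = "balanced" if F else "constant"
    print(name, f(0), f(1), cls, F, f(1) - f(0))
```

The printed rows reproduce the table: f1 and f4 are constant (F = 0), f2 and f3 are balanced (F = 1).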

53 As was discussed in Chapter 7, the preparation of a pure state (133) is (conceptually :-) straightforward: keeping the qubit in a weak contact with a sufficiently cold environment, for a time much longer than its relaxation time, leaves it in the ground state.

Quantum gate circuits may combine many (N >> 1) qubits and, for some tasks, provide a dramatic performance increase – for example, reducing the necessary circuit component number from O(2^N) to O(N^p), where p is a finite (and not very big) number. However, this efficiency comes at a high price. Indeed, let us discuss the possible physical implementation of quantum gates, starting from the single-qubit case, on the example of the Hadamard gate (146). With the linearity requirement, its action on the arbitrary state (133) should be

$\hat H |\alpha\rangle = a_0 \hat H|0\rangle + a_1 \hat H|1\rangle = a_0 \frac{|0\rangle + |1\rangle}{\sqrt 2} + a_1 \frac{|0\rangle - |1\rangle}{\sqrt 2} \equiv \frac{a_0 + a_1}{\sqrt 2}\,|0\rangle + \frac{a_0 - a_1}{\sqrt 2}\,|1\rangle$,    (8.153)

meaning that the state probability amplitudes in the end (t = T) and in the beginning (t = 0) of the qubit evolution in time have to be related as

$a_0(T) = \frac{a_0(0) + a_1(0)}{\sqrt 2}, \qquad a_1(T) = \frac{a_0(0) - a_1(0)}{\sqrt 2}$.    (8.154)

This task may be again performed using the Rabi oscillations, which were discussed in Sec. 6.5, i.e. by applying to the qubit (a two-level system), for a limited time period T, a weak sinusoidal external signal of frequency ω equal to the intrinsic quantum oscillation frequency ω_nn' defined by Eq. (6.85). The analysis of the Rabi oscillations was carried out in Sec. 6.5, even for a non-vanishing (though small) detuning Δ = ω – ω_nn', but only for the particular initial conditions when at t = 0 the system was fully in one of the basis states (there labeled n'), i.e. the counterpart state (there labeled n) was empty. For our current purposes, we need to find the amplitudes a_{0,1}(t) for arbitrary initial conditions a_{0,1}(0), subject only to the time-independent normalization condition $|a_0|^2 + |a_1|^2 = 1$. For the case of exact tuning, Δ = 0, the solution of the system (6.94) is elementary,61 and gives the following solution:62

$a_0(t) = a_0(0)\cos\Omega t + i\,a_1(0)\,e^{-i\varphi}\sin\Omega t, \qquad a_1(t) = a_1(0)\cos\Omega t + i\,a_0(0)\,e^{+i\varphi}\sin\Omega t$,    (8.155)

where Ω is the Rabi oscillation frequency (6.99), in the exact-tuning case proportional to the amplitude |A| of the external ac drive A = |A| exp{iφ} – see Eq. (6.86). Comparing these expressions with Eqs. (154), we see that for t = T = π/4Ω and φ = π/2 they "almost" coincide, besides the opposite sign of a₁(T). Conceptually, the simplest way to correct this deficiency is to follow the just-discussed ac "π/4-pulse" by a short dc "π-pulse" of duration T' = πℏ/δ, which temporarily creates a small additional energy difference δ between the basis states |0⟩ and |1⟩. According to the basic Eq. (1.62), such a difference creates an additional phase difference δT'/ℏ between the states, equal to π for the "π-pulse".

Another way (which may also be useful for two-qubit operations) is to use another, auxiliary state with energy E₂, whose distances from the basic levels E₁ and E₀ are significantly different from the difference (E₁ – E₀) – see Fig. 6a. In this case, a weak external ac field tuned to any of the three potential quantum transition frequencies ω_nn' ≡ (E_n – E_n')/ℏ initiates such transitions between the corresponding states only, with a negligible perturbation of the third state. (Such transitions may be

61 An alternative way to analyze the qubit evolution is to use the Bloch equation (5.21), with an appropriate function of time describing the control field.
62 To comply with our current notation, the coefficients a_n' and a_n of Sec. 6.5 are replaced with a_0 and a_1.


again described by Eqs. (155), with the appropriate index changes.) For the Hadamard transform implementation, it is sufficient to apply (after the already discussed π/4-pulse of frequency ω₁₀, and with the initially empty level E₂) an additional π-pulse of frequency ω₂₀, with any phase φ. Indeed, according to the first of Eqs. (155), with the due replacement a₁(0) → a₂(0) = 0, such a pulse flips the sign of the amplitude a₀(t), while the amplitude a₁(t), not involved in this additional transition, remains unchanged.

[Fig. 8.6. Energy-level schemes used for unitary transformations of (a) single qubits (three levels E₀, E₁, E₂ with the transition frequencies ω₁₀, ω₂₀, ω₂₁) and (b, c) two-qubit systems (total-energy levels of the states |00⟩, |01⟩, |10⟩, |11⟩, for equal and different qubit energies, respectively).]
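The pulse sequence described above may be checked with 2×2 matrices. The following sketch (my illustration, not part of the original text) assumes the sign convention adopted in Eq. (155), and emulates the auxiliary-level π-pulse by its net effect, a sign flip of a₀; the result equals the Hadamard transform up to an inconsequential common phase:

```python
import numpy as np

def rabi(omega_t, phi):
    """Rabi evolution matrix from Eq. (155): (a0, a1) -> M @ (a0, a1)."""
    c, s = np.cos(omega_t), np.sin(omega_t)
    return np.array([[c, 1j * np.exp(-1j * phi) * s],
                     [1j * np.exp(1j * phi) * s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard transform (146)

# The "pi/4-pulse": Omega*T = pi/4, with drive phase phi = pi/2
M = rabi(np.pi / 4, np.pi / 2)

# Net effect of the auxiliary-level pi-pulse: sign flip of a0 only
flip = np.diag([-1.0, 1.0])
U = flip @ M

# U equals the Hadamard matrix up to a common (global) phase factor of -1
assert np.allclose(U, -H)
print(np.round(U.real, 3))
```

The global phase is physically inconsequential, so U implements the Hadamard gate.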

Now let me describe the conceptually simplest (though, for some qubit types, not the most practically convenient) scheme for the implementation of two-qubit gates, on the example of the CNOT gate whose operation is described by Eq. (145). For that, evidently, the involved qubits have to interact for some time T. As was repeatedly discussed in the last two chapters, in most cases such interaction of two subsystems is factorable – see Eq. (6.145). For qubits, i.e. two-level systems, each of the component operators may be represented by a 2×2 matrix in the basis of states |0⟩ and |1⟩. According to Eq. (4.106), such a matrix may be always expressed as a linear combination $(b\hat I + \mathbf{c}\cdot\hat{\boldsymbol\sigma})$, where b and the three Cartesian components of the vector c are c-numbers. Let us consider the simplest form of such a factorable interaction Hamiltonian:

$\hat H_{\rm int}(t) = \begin{cases} \kappa\,\hat\sigma_z^{(1)}\hat\sigma_z^{(2)}, & {\rm for}\ 0 < t < T, \\ 0, & {\rm otherwise}, \end{cases}$    (8.156)

where the upper index is the qubit number and κ is a c-number constant.63 According to Eq. (4.175), by the end of the interaction period, this Hamiltonian produces the following unitary transform:

$\hat U_{\rm int} = \exp\left\{-\frac{i}{\hbar}\hat H_{\rm int} T\right\} = \exp\left\{-\frac{i}{\hbar}\kappa\,\hat\sigma_z^{(1)}\hat\sigma_z^{(2)}\,T\right\}$.    (8.157)

Since the product operator $\hat\sigma_z^{(1)}\hat\sigma_z^{(2)}$ is diagonal in the basis of the unperturbed two-bit states |j₁j₂⟩, so is the unitary operator (157), with the following action on these states:

63 The assumption of simultaneous time independence of the basis state vectors and the interaction operator (within the time interval 0 < t < T) is possible only if the basis state energy difference of both qubits is exactly the same. In this case, the simple Eq. (156) follows from Fig. 6b, which shows the spectrum of the total energy E = E₁ + E₂ of the two-bit system. In the absence of interaction (Fig. 6b), the energies of the two basis states |01⟩ and |10⟩ are equal, enabling even a weak qubit interaction to cause their substantial evolution in time – see Sec. 6.7. If the qubit energies are different (Fig. 6c), the interaction may still be reduced, in the rotating-wave approximation, to Eq. (156), by compensating the qubit energy difference ℏ(ω₁ – ω₂) with an external ac signal of frequency ω = ω₁ – ω₂ – see Sec. 6.5.


$\hat U_{\rm int}\,|j_1 j_2\rangle = \exp\{i\phi\,\sigma_z^{(1)}\sigma_z^{(2)}\}\,|j_1 j_2\rangle$,    (8.158)

where φ ≡ –κT/ℏ, and σ_z are the eigenvalues of the Pauli matrix σ_z for the basis states of the corresponding qubit: σ_z = +1 for j = 0, and σ_z = –1 for j = 1. Let me, for clarity, spell out Eq. (158) for the particular case φ = –π/4 (corresponding to the qubit coupling time T = πℏ/4κ):

$\hat U_{\rm int}|00\rangle = e^{-i\pi/4}|00\rangle, \quad \hat U_{\rm int}|01\rangle = e^{+i\pi/4}|01\rangle, \quad \hat U_{\rm int}|10\rangle = e^{+i\pi/4}|10\rangle, \quad \hat U_{\rm int}|11\rangle = e^{-i\pi/4}|11\rangle$.    (8.159)

In order to compensate for the undesirable parts of this joint phase shift of the basis states, let us now apply similar individual "rotations" of each qubit by the angle φ' = +π/4, using the following product of two independent operators, plus (just for the result's clarity) a common, and hence inconsequential, phase shift φ'' = –π/4:64

$\hat U_{\rm com} = \exp\left\{i\varphi'\left(\hat\sigma_z^{(1)} + \hat\sigma_z^{(2)}\right) + i\varphi''\right\} = \exp\left\{i\frac{\pi}{4}\hat\sigma_z^{(1)}\right\} \exp\left\{i\frac{\pi}{4}\hat\sigma_z^{(2)}\right\} e^{-i\pi/4}$.    (8.160)

Since this operator is also diagonal in the |j₁j₂⟩ basis, it is easy to calculate the change of the basis states by the total unitary operator $\hat U_{\rm tot} \equiv \hat U_{\rm com}\hat U_{\rm int}$:

$\hat U_{\rm tot}|00\rangle = |00\rangle, \quad \hat U_{\rm tot}|01\rangle = |01\rangle, \quad \hat U_{\rm tot}|10\rangle = |10\rangle, \quad \hat U_{\rm tot}|11\rangle = -|11\rangle$.    (8.161)
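The phase bookkeeping of Eqs. (157)-(161) is easy to verify numerically; a sketch (not part of the original text):

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli sigma_z: +1 for |0>, -1 for |1>
I2 = np.eye(2)

phi = -np.pi / 4                            # interaction phase, phi = -kappa*T/hbar

# Eq. (157): U_int = exp{i*phi * sz(1) x sz(2)}, diagonal in the |j1 j2> basis
zz = np.kron(sz, sz)
U_int = np.diag(np.exp(1j * phi * np.diag(zz)))

# Eq. (160): individual rotations by phi' = +pi/4, times a common phase phi'' = -pi/4
rot = np.diag(np.exp(1j * np.pi / 4 * np.diag(sz)))
U_com = np.kron(rot, I2) @ np.kron(I2, rot) * np.exp(-1j * np.pi / 4)

U_tot = U_com @ U_int

# Eq. (161): |00>, |01>, |10> are left intact, while |11> changes sign
assert np.allclose(U_tot, np.diag([1, 1, 1, -1]))
print(np.round(U_tot.real))
```

This confirms that the two-qubit interaction plus the single-qubit phase corrections produce exactly the sign-flip (controlled-Z-type) transform of Eq. (161).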

This result already shows the main "miracle action" of two-qubit gates, such as the one shown in Fig. 4b: the source qubit is left intact (only if it is in one of the basis states!), while the state of the target qubit is altered. True, this change (of the sign) is still different from the CNOT operator's action (145), but may be readily used for its implementation by sandwiching the transform Û_tot between two Hadamard transforms of the target qubit alone:

$\hat C = \hat H^{(2)}\,\hat U_{\rm tot}\,\hat H^{(2)}$.    (8.162)

So, we have spent quite a bit of time on the discussion of the very simple CNOT gate,65 and now I can reward the reader for their effort with a bit of good news: it has been proved that an arbitrary unitary transform that satisfies Eq. (140), i.e. may be used to implement the general scheme outlined in Fig. 3, may be decomposed into a set of CNOT gates, possibly augmented with simpler single-qubit gates.66 Unfortunately, I have no time for a detailed discussion of more complex circuits.67 The most

As Eq. (4.175) shows, each of the component unitary transforms exp{i 'ˆ z } may be created by applying to

each qubit, for time interval T’ = ’/’, a constant external field described by Hamiltonian Hˆ  'ˆ z . We already know that for a charged, spin-½ particle, such Hamiltonian may be created by applying a z-oriented external dc magnetic field – see Eq. (4.163). For most other physical implementations of qubits, the organization of such a Hamiltonian is also straightforward – see, e.g., Fig. 7.4 and its discussion. 65 As was discussed above, this gate is identical to the two-qubit gate shown in Fig. 5a for f = f , i.e. f(j) = j. The 3 implementation of the gate of f for 3 other possible functions f requires straightforward modifications, whose analysis is left for the reader’s exercise. 66 This fundamental importance of the CNOT gate was perhaps a major reason why David Wineland, the leader of the NIST group that had demonstrated its first experimental implementation in 1995 (following the theoretical suggestion by J. Cirac and P. Zoller), was awarded the 2012 Nobel Prize in Physics – shared with Serge Haroche, the leader of another group working towards quantum computation. Chapter 8


famous of them is the scheme for integer number factoring, suggested in 1994 by Peter Shor.68 Due to its potential practical importance for breaking the broadly used communication encryption schemes such as the RSA code,69 this opportunity has incited much enthusiasm and triggered experimental efforts to implement quantum gates and circuits using a broad variety of two-level quantum systems. By now, the following experimental options have given the most significant results:70

(i) Trapped ions. The first experimental demonstrations of quantum state manipulation (including the already mentioned first CNOT gate) were carried out using deeply cooled atoms in optical traps, similar to those used in frequency and time standards. Their total spins are natural qubits, whose states may be manipulated using the Rabi transfers excited by suitably tuned lasers. The spin interactions with the environment may be very weak, resulting in large dephasing times T₂ – up to a few seconds. Since the distances between ions in the traps are relatively large (of the order of a micron), their direct spin-spin interaction is even weaker, but the ions may be made effectively interacting either via their mechanical oscillations about the potential minima of the trapping field or via photons in external electromagnetic resonators ("cavities").71 Perhaps the main challenge of using this approach to quantum computation is poor "scalability", i.e. the enormous experimental difficulty of creating and managing large ordered systems of individually addressable qubits. So far, only few-qubit systems have been demonstrated.72

(ii) Nuclear spins are also typically very weakly coupled to their environment, with dephasing times T₂ exceeding 10 seconds in some cases. Their eigenenergies E₀ and E₁ may be split by external dc magnetic fields (typically, of the order of 10 T), while the interstate Rabi transfers may be readily achieved by using the nuclear magnetic resonance, i.e.
the application of external ac fields with frequencies ω = (E₁ – E₀)/ℏ – typically, of a few hundred MHz. The challenges of this option include the weakness of the spin-spin interactions (typically mediated through molecular electrons), resulting in a very slow spin evolution, whose time scale ℏ/κ may become comparable with T₂, and also the very small level separations E₁ – E₀, corresponding to a few mK, i.e. much smaller than the thermal energy at room temperature, creating a challenge for qubit state preparation.73 Despite these challenges, the nuclear spin option was used for the first implementation of the Shor algorithm for factoring a small number (15 = 3×5) as early as 2001.74 However, the extension of this success to larger systems, beyond the set of spins inside one molecule, is extremely challenging.
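A back-of-the-envelope check of the level-splitting scale quoted above (the 500 MHz transition frequency is an assumed typical NMR value, not taken from the text):

```python
h   = 6.62607015e-34   # Planck constant, J*s
k_B = 1.380649e-23     # Boltzmann constant, J/K

f = 500e6                    # assumed NMR transition frequency, Hz
T_equiv = h * f / k_B        # (E1 - E0)/k_B, expressed in kelvin

# The splitting corresponds to a few tens of mK, i.e. is tiny compared to 300 K
print(f"(E1 - E0)/kB = {T_equiv * 1e3:.1f} mK")
```

This confirms that at room temperature (300 K) the thermal energy exceeds the splitting by about four orders of magnitude, making the equilibrium polarization of the spins, and hence the qubit state preparation, very challenging.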

67 For that, the reader may be referred to either the monographs by Nielsen-Chuang and Rieffel-Polak, cited above, or to a shorter (but much more formal) textbook by N. Mermin, Quantum Computer Science, Cambridge U. Press, 2007.
68 A clear description of this algorithm may be found in several accessible sources, including Wikipedia – see the article Shor's Algorithm.
69 Named after R. Rivest, A. Shamir, and L. Adleman, the authors of the first open publication of the code in 1977, but actually invented earlier (in 1973) by C. Cocks.
70 For a discussion of other possible implementations (such as quantum dots and dopants in crystals) see, e.g., T. Ladd et al., Nature 464, 45 (2010), and references therein.
71 A brief discussion of such interactions (so-called Cavity QED) will be given in Sec. 9.4 below.
72 See, e.g., S. Debnath et al., Nature 536, 63 (2016). Note also the related work on arrays of trapped, optically-coupled neutral atoms – see, e.g., J. Perczel et al., Phys. Rev. Lett. 119, 023603 (2017) and references therein.
73 This challenge may be partly mitigated using ingenious spin manipulation techniques such as refocusing – see, e.g., either Sec. 7.7 in Nielsen and Chuang, or J. Keeler's monograph cited at the end of Sec. 6.5.
74 B. Lanyon et al., Phys. Rev. Lett. 99, 250505 (2007).


(iii) Josephson-junction devices. Much better scalability may be achieved with solid-state devices, especially superconductor integrated circuits including weak contacts – Josephson junctions – see their brief discussion in Sec. 1.6. The qubits of this type are based on the fact that the energy U of such a junction is a highly nonlinear function of the Josephson phase difference φ – see Sec. 1.6. Indeed, combining Eqs. (1.73) and (1.74), we can readily calculate U(φ) as the work W of an external circuit increasing the phase from, say, zero to some value φ:

' 

' 

2eI c 2eI c d' 1  cos   . sin ' U    U 0   dW   IVdt  dt     '0  dt ' 0 ' 0

(8.163)
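Eq. (163) sets the Josephson energy scale E_J = ℏI_c/2e; a quick numerical sketch (my illustration, with an assumed, hypothetical critical current of the order used in qubit-scale junctions):

```python
import math

hbar = 1.054571817e-34   # J*s
e    = 1.602176634e-19   # C
k_B  = 1.380649e-23      # J/K

I_c = 30e-9              # assumed critical current, 30 nA

# Josephson coupling energy E_J = hbar * I_c / (2e), the scale of U(phi) in Eq. (163)
E_J = hbar * I_c / (2 * e)

def U(phi):
    """Junction energy U(phi) - U(0) = E_J * (1 - cos(phi)), per Eq. (163)."""
    return E_J * (1 - math.cos(phi))

# The potential is 2*pi-periodic and strongly nonlinear in phi
print(f"E_J/kB = {E_J / k_B * 1e3:.0f} mK")
print(f"U(pi)  = {U(math.pi):.3e} J (the maximum, equal to 2*E_J)")
```

For the assumed I_c, E_J/k_B comes out well below 1 K, which is why such circuits are operated in dilution refrigerators at millikelvin temperatures.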

There are several options for using this nonlinearity to create qubits;75 currently, the leading option, called the phase qubit, uses the two lowest eigenstates localized in one of the potential wells of the periodic potential (163). A major problem with such qubits is that at the very bottom of this well the potential U(φ) is almost quadratic, so the energy levels are nearly equidistant – cf. Eqs. (2.262), (6.16), and (6.23). This is even more true for the so-called "transmons" (and "Xmons", and "Gatemons", and several other very similar devices76) – the currently used versions of the phase qubit, where a Josephson junction is made a part of an external electromagnetic oscillator, making its relative total nonlinearity (anharmonicity) even smaller. As a result, the external rf drive of frequency ω = (E₁ – E₀)/ℏ, used to arrange the state transforms described by Eq. (155), may induce simultaneous undesirable transitions to (and between) higher energy levels. This effect may be mitigated by a reduction of the ac drive amplitude, but at the price of a proportional increase of the operation time and hence of dephasing effects – see below. (I am leaving a quantitative estimate of such an increase for the reader's exercise.)

Since the coupling of Josephson-junction qubits may be most readily controlled (and, very importantly, kept stable if so desired), they have been used to demonstrate the largest prototype quantum computing systems to date, despite quite modest dephasing times T₂ – for purely integrated circuits, in the tens of microseconds at best, even at operating temperatures in the tens of mK.
By the time of this writing (mid-2019), several groups have announced chips with a few dozen such qubits, but to the best of my knowledge, only their smaller subsets could be used for high-fidelity quantum operations.77

(iv) Optical systems, attractive because of their inherently enormous bandwidth, pose a special challenge for quantum computation: due to the virtual linearity of most electromagnetic media at reasonable power, the implementation of qubits (i.e. two-level systems), and of interaction Hamiltonians such as the one given by Eq. (156), is problematic. In 2001, a very smart way around this hurdle was

The “most quantum” option in this technology is to use Josephson junctions very weakly coupled to their dissipative environment (so the effective resistance shunting the junction is much higher than the quantum 2 resistance unit RQ  (/2) /e ~ 104 ). In this case, the Josephson phase variable  behaves as a coordinate of a 1D quantum particle, moving in the 2-periodic potential (163), forming the energy band structure E(q) similar to those discussed in Sec. 2.7. Both theory and experiment show that in this case, the quantum states in adjacent Brillouin zones differ by the charge of one Cooper pair 2e. (This is exactly the effect responsible for the Bloch oscillations of frequency (2.252).) These two states may be used as the basis states of charge qubits. Unfortunately, such qubits are rather sensitive to charged impurities, randomly located in the junction’s vicinity, causing uncontrollable changes of its parameters, so currently, to the best of my knowledge, this option is not actively pursued. 76 For a recent review of these devices see, e.g., G. Wendin, Repts. Progr. Phys. 80, 106001 (2017), and references therein. 77 See, e.g., C. Song et al., Phys. Rev. Lett. 119, 180511 (2017) and references therein. Chapter 8


invented.78 In this KLM scheme (also called "linear optical quantum computing"), nonlinear elements are not needed at all, and quantum gates may be composed just of linear devices (such as optical waveguides, mirrors, and beam splitters), plus single-photon sources and detectors. However, estimates show that this approach requires a much larger number of physical components than those using nonlinear quantum systems such as usual qubits,79 so right now it is not very popular.

So, despite nearly three decades of large-scale (multi-billion-dollar) efforts, the progress of quantum computing development has been rather gradual. The main culprit here is the unintentional coupling of qubits to their environment, leading most importantly to their state dephasing, and eventually to errors. Let me discuss this major issue in detail.

Of course, some error probability exists in classical digital logic gates and memory cells as well.80 However, in this case, there is no conceptual problem with the device state measurement, so the error may be detected and corrected in many ways. Conceptually,81 the simplest of them is the so-called majority voting logic – using several similar logic circuits operating in parallel and fed with identical input data. Evidently, two such devices can detect a single error in one of them, while three devices in parallel may correct such an error, by taking two coinciding output signals for the genuine one.

For quantum computation, the general idea of using several devices (say, qubits) for coding the same information remains valid; however, there are two major complications. First, as we know from Chapter 7, the environment's dephasing effect may be described as a slow random drift of the probability amplitudes a_j, leading to the deviation of the final output state from the required form (140), and hence to a non-vanishing probability of wrong qubit state readout – see Fig. 3.
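The classical majority-voting logic mentioned above is easy to simulate (an illustration only, not part of the original text; the bit-flip probability is an arbitrary assumed value):

```python
import random

random.seed(3)

def noisy_copy(bit, p_err):
    """One redundant copy of a classical bit, flipped with probability p_err."""
    return bit ^ (random.random() < p_err)

def majority_vote(bit, p_err, n_copies=3):
    """Store the bit in n_copies noisy devices and take the majority readout."""
    copies = [noisy_copy(bit, p_err) for _ in range(n_copies)]
    return int(sum(copies) > n_copies // 2)

# With p_err = 0.1, triple redundancy fails only when >= 2 copies flip:
# P_fail = 3*p^2*(1-p) + p^3 = 0.028, noticeably less than the bare 0.1
trials = 100_000
fails = sum(majority_vote(1, 0.1) != 1 for _ in range(trials))
print(fails / trials)
```

The observed failure rate comes out close to the analytical value 0.028, illustrating how redundancy suppresses classical readout errors; the quantum case, as discussed next, is harder.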
Hence the quantum error correction has to protect the result not against possible random state flips 0 ↔ 1, as in classical digital computers, but against these "creeping" analog errors.

Second, the qubit state is impossible to copy exactly (clone) without disturbing it, as follows from the following simple calculation.82 Cloning some state |α⟩ of one qubit to another qubit that is initially in an independent state (say, the basis state |0⟩), without any change of |α⟩, means the following transformation of the two-qubit ket: |α⟩|0⟩ → |α⟩|α⟩. If we want such a transform to be performed by a real quantum system, whose evolution is described by a unitary operator û, and to be correct for an arbitrary state |α⟩, it has to work not only for both basis states of the qubit:

$\hat u\,|00\rangle = |00\rangle, \qquad \hat u\,|10\rangle = |11\rangle$,    (8.164)

but also for their arbitrary linear combination (133). Since the operator û has to be linear, we may use that relation, and then Eq. (164), to write

78 E. Knill et al., Nature 409, 46 (2001).
79 See, e.g., Y. Li et al., Phys. Rev. X 5, 041007 (2015).
80 In modern integrated circuits, such "soft" (runtime) errors are created mostly by the high-energy neutrons of cosmic rays, and also by the α-particles emitted by radioactive impurities in silicon chips and their packaging.
81 Practically, the majority voting logic increases circuit complexity and power consumption, so it is used only at the most critical points. Since in modern digital integrated circuits the bit error rate is very small (< 10⁻⁵), in most of them less radical but also less penalizing schemes are used – if used at all.
82 Amazingly, this simple no-cloning theorem was discovered as late as 1982 (to the best of my knowledge, independently by W. Wootters and W. Zurek, and by D. Dieks), in the context of work toward quantum cryptography – see below.

Chapter 8

Page 41 of 52

Essential Graduate Physics

QM: Quantum Mechanics

$$\hat u\left(|\alpha\rangle\otimes|0\rangle\right) = \hat u\,(a_0|0\rangle + a_1|1\rangle)\otimes|0\rangle = a_0\hat u\,|00\rangle + a_1\hat u\,|10\rangle = a_0|00\rangle + a_1|11\rangle. \tag{8.165}$$

On the other hand, the desired result of state cloning is

$$|\alpha\rangle\otimes|\alpha\rangle = \left(a_0|0\rangle + a_1|1\rangle\right)\left(a_0|0\rangle + a_1|1\rangle\right) = a_0^2|00\rangle + a_0 a_1\left(|10\rangle + |01\rangle\right) + a_1^2|11\rangle, \tag{8.166}$$

i.e. is evidently different. Hence, for an arbitrary state $|\alpha\rangle$ and an arbitrary unitary operator $\hat u$,

$$\hat u\left(|\alpha\rangle\otimes|0\rangle\right) \neq |\alpha\rangle\otimes|\alpha\rangle \tag{8.167}$$

(the no-cloning theorem),

meaning that the qubit state cloning is indeed impossible.83 This problem may be, however, indirectly circumvented – for example, in the way shown in Fig. 7a.

Fig. 8.7. (a) Quasi-cloning, and (b) detection and correction of dephasing errors in a single qubit.
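The contradiction between Eqs. (165) and (166) is easy to check numerically. In the minimal NumPy sketch below (with an arbitrary, randomly chosen state |α⟩), the 4×4 matrix is the CNOT gate of Eq. (145) – exactly the linear unitary extension of the basis-state rules (164); its output reproduces Eq. (165) but differs from the "cloned" state (166):

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11>; the first qubit is the source.
# CNOT is the linear unitary extension of Eq. (164): |00> -> |00>, |10> -> |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

rng = np.random.default_rng(1)
a = rng.normal(size=2) + 1j * rng.normal(size=2)
a /= np.linalg.norm(a)                  # generic state a0|0> + a1|1>
ket0 = np.array([1, 0], dtype=complex)

out = CNOT @ np.kron(a, ket0)           # Eq. (165): a0|00> + a1|11>
clone = np.kron(a, a)                   # Eq. (166): the desired |alpha>|alpha>

assert np.allclose(out, [a[0], 0, 0, a[1]])   # Eq. (165) reproduced exactly

# ...but the result differs from genuine cloning unless a0*a1 = 0:
fidelity = abs(np.vdot(clone, out)) ** 2
print(f"cloning fidelity = {fidelity:.3f}")   # below 1 for a generic state
assert fidelity < 1.0
```

For a basis state (a₀ = 0 or a₁ = 0) the fidelity would be exactly 1, in agreement with Eq. (164); for any superposition it is strictly smaller.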

Here the CNOT gate, whose action is described by Eq. (145), entangles an arbitrary input state (133) of the source qubit with a basis initial state of an ancillary target qubit – frequently called the ancilla. Using Eq. (145), we can readily calculate the output two-qubit state's vector:

$$\hat C\,(a_0|0\rangle + a_1|1\rangle)\otimes|0\rangle = a_0\hat C\,|00\rangle + a_1\hat C\,|10\rangle = a_0|00\rangle + a_1|11\rangle. \tag{8.168}$$

We see that this circuit does perform the desired operation (165), i.e. gives the initial source qubit's probability amplitudes a₀ and a₁ equally to two qubits, i.e. duplicates the input information. However, in contrast with "genuine" cloning, it changes the state of the source qubit as well, making it entangled with the target (ancilla) qubit. Such "quasi-cloning" is the key element of most suggested quantum error correction techniques.

Consider, for example, the three-qubit "circuit" shown in Fig. 7b, which uses two ancilla qubits – see the two lower "wires". At its first two stages, the double application of the quasi-cloning produces an intermediate state A with the following ket-vector:

$$|A\rangle = a_0|000\rangle + a_1|111\rangle, \tag{8.169}$$

which is an evident generalization of Eq. (168).84 Next, subjecting the source qubit to the Hadamard transform (146), we get the three-qubit state B represented by the state vector

83 Note that this does not mean that two (or several) qubits cannot be put into the same, arbitrary quantum state – theoretically, with arbitrary precision. Indeed, they may be first set into their lowest-energy stationary states, and then driven into the same arbitrary state (133) by exerting on them similar classical external fields. So, the no-cloning theorem pertains only to qubits in unknown states $|\alpha\rangle$ – but this is exactly what we need for error correction – see below.


$$|B\rangle = a_0\frac{1}{\sqrt 2}\left(|0\rangle + |1\rangle\right)|00\rangle + a_1\frac{1}{\sqrt 2}\left(|0\rangle - |1\rangle\right)|11\rangle. \tag{8.170}$$

Now let us assume that at this stage, the source qubit comes into contact with a dephasing environment – in Fig. 7b, symbolized by the single-qubit "gate" φ. As we know from Chapter 7 (see Eq. (7.22) and its discussion, and also Sec. 7.3), its effect may be described by a random shift of the relative phase of the two states:85

$$|0\rangle \to e^{i\varphi}|0\rangle, \qquad |1\rangle \to e^{-i\varphi}|1\rangle. \tag{8.171}$$

As a result, for the intermediate state C (see Fig. 7b) we may write

$$|C\rangle = a_0\frac{1}{\sqrt 2}\left(e^{i\varphi}|0\rangle + e^{-i\varphi}|1\rangle\right)|00\rangle + a_1\frac{1}{\sqrt 2}\left(e^{i\varphi}|0\rangle - e^{-i\varphi}|1\rangle\right)|11\rangle. \tag{8.172}$$

At this stage of this simple theoretical model, the coupling with the environment is completely stopped (ah, if only this were possible – we might have quantum computers by now! :-), and the source qubit is fed into one more Hadamard gate. Using Eq. (146) again, for state D after this gate we get

$$|D\rangle = a_0\left(\cos\varphi\,|0\rangle + i\sin\varphi\,|1\rangle\right)|00\rangle + a_1\left(i\sin\varphi\,|0\rangle + \cos\varphi\,|1\rangle\right)|11\rangle. \tag{8.173}$$

Next, the qubits are passed through the second, similar pair of CNOT gates – see Fig. 7b. Using Eq. (145), for the resulting state E we readily get the following expression:

$$|E\rangle = a_0\cos\varphi\,|000\rangle + a_0 i\sin\varphi\,|111\rangle + a_1 i\sin\varphi\,|011\rangle + a_1\cos\varphi\,|100\rangle, \tag{8.174a}$$

whose right-hand side may be evidently re-grouped as

$$|E\rangle = \left(a_0|0\rangle + a_1|1\rangle\right)\cos\varphi\,|00\rangle + \left(a_1|0\rangle + a_0|1\rangle\right)i\sin\varphi\,|11\rangle. \tag{8.174b}$$

This is already a rather remarkable result. It shows that if we measured the ancilla qubits at stage E, and both results corresponded to states $|0\rangle$, we might be 100% sure that the source qubit (which is not affected by these measurements!) is in its initial state even after the interaction with the environment. The only result of an increase of this unintentional interaction (as quantified by the r.m.s. magnitude of the random phase shift φ) is the growth of the probability,

$$W = \sin^2\varphi, \tag{8.175}$$

of getting the opposite result, which would signal a dephasing-induced error in the source qubit. Such implicit measurement, without disturbing the source qubit, is called quantum error detection. An even more impressive result may be achieved by the last component of the circuit, the so-called Toffoli (or "CCNOT") gate, denoted by the rightmost symbol in Fig. 7b. This three-qubit gate is conceptually similar to the CNOT gate discussed above, besides that it flips the basis state of its target qubit only if both source qubits are in state 1. (In the circuit shown in Fig. 7b, the former role is played by the two ancilla qubits.)
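All the transformations (168)–(175), as well as the action of the final Toffoli gate, can be verified by a brute-force simulation of the three-qubit circuit of Fig. 7b. The NumPy sketch below (with arbitrary sample values of a₀, a₁, and φ) uses the standard matrix forms of the Hadamard, CNOT, and Toffoli gates; its last assertion confirms the expected net result of the correction: the source qubit is returned to its initial state |α⟩, disentangled from the ancillas:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(single, which, n=3):
    """Embed a single-qubit operator on qubit `which` of an n-qubit register."""
    return reduce(np.kron, [single if k == which else I2 for k in range(n)])

def permute_gate(flip_condition, target, n=3):
    """Build an n-qubit permutation matrix that flips `target` when the
    bit pattern satisfies `flip_condition` (qubit 0 = most significant)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if flip_condition(bits):
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

def cnot(control, target):
    return permute_gate(lambda b: b[control] == 1, target)

def toffoli(c1, c2, target):
    return permute_gate(lambda b: b[c1] == 1 and b[c2] == 1, target)

a0, a1 = 0.6, 0.8j                # arbitrary normalized amplitudes
phi = 0.7                         # a sample dephasing angle of Eq. (171)
E_phase = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

psi = np.kron(np.array([a0, a1]), np.kron([1, 0], [1, 0]))  # |alpha>|00>
psi = cnot(0, 2) @ cnot(0, 1) @ psi       # state A = a0|000> + a1|111>, Eq. (169)
psi = op(H, 0) @ psi                      # state B, Eq. (170)
psi = op(E_phase, 0) @ psi                # state C: dephasing error, Eq. (172)
psi = op(H, 0) @ psi                      # state D, Eq. (173)
psi = cnot(0, 2) @ cnot(0, 1) @ psi       # state E, Eq. (174)

# Probability of reading both ancillas as |1> (error detection), Eq. (175):
P_err = sum(abs(psi[i]) ** 2 for i in range(8) if (i & 0b011) == 0b011)
assert np.isclose(P_err, np.sin(phi) ** 2)

psi = toffoli(1, 2, 0) @ psi              # state F: the Toffoli correction
# The source qubit is restored and disentangled from the ancillas:
expected = np.kron(np.array([a0, a1]),
                   np.cos(phi) * np.kron([1, 0], [1, 0])
                   + 1j * np.sin(phi) * np.kron([0, 1], [0, 1]))
assert np.allclose(psi, expected)
```

Note that both assertions hold for any φ, i.e. in this idealized model the correction works whatever the magnitude of the dephasing error.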

84 This state is also the three-qubit example of the so-called Greenberger-Horne-Zeilinger (GHZ) states, which are frequently called the "most entangled" states of a system of N > 2 qubits.
85 Let me emphasize again that Eq. (171) is strictly valid only if the interaction with the environment is a pure dephasing, i.e. does not include the energy relaxation of the qubit or its thermal activation to the higher-energy eigenstate; however, it is a reasonable description of errors in the frequent case when T2 << T1.

19 […] it coincides with kBTN, where TN is a more popular measure of electronics' noise, called the noise temperature.
20 See, e.g., H. Haus and J. Mullen, Phys. Rev. 128, 2407 (1962).
21 See, e.g., the spectacular experiments by B. Yurke et al., Phys. Rev. Lett. 60, 764 (1988). Note also that the squeezed ground states of light are now used to improve the sensitivity of interferometers in gravitational-wave detectors – see, e.g., the review by R. Schnabel, Phys. Repts. 684, 1 (2017), and the later paper by F. Acernese et al., Phys. Rev. Lett. 123, 231108 (2019).


level (i.e. spin-½-like) systems.22 Such measurements may be an important tool for the further progress of quantum computation and cryptography.23 Finally, let me mention that composite systems, consisting of a quantum subsystem and a classical subsystem that performs its continuous weakly-perturbing measurement and uses the results to provide specially crafted feedback to the quantum subsystem, may have some curious properties – in particular, may mock a quantum system detached from the environment.24

10.3. Hidden variables and local reality

Now we are ready to proceed to the discussion of the last, hardest group (iii) of the questions posed in Sec. 1, namely on the state of a quantum system just before its measurement. After a very important but inconclusive discussion of this issue by Albert Einstein and his collaborators on one side, and Niels Bohr on the other side, in the mid-1930s, such discussions resumed in the 1950s.25 They led to a key contribution by John Stewart Bell in the early 1960s, summarized as the so-called Bell inequalities, and then to experimental work on better and better verification of these inequalities. (Besides that continuing work, the recent progress, in my humble view, has been rather marginal.)

The central question may be formulated as follows: what had been the "real" state of a quantum-mechanical system just before a virtually perfect single-shot measurement was performed on it and gave a certain documented outcome? To be specific, let us focus again on the example of Stern-Gerlach measurements of spin-½ particles – because of their conceptual simplicity.26 For a single-component system (in this case, a single spin-½), the answer to the posed question may look evident. Indeed, as we know, if the spin is in a pure (least-uncertain) state, its ket-vector may be expressed in a form similar to Eq. (4):

$$|\alpha\rangle = \alpha_\uparrow|\uparrow\rangle + \alpha_\downarrow|\downarrow\rangle, \tag{10.23}$$

where, as usual, $|\uparrow\rangle$ and $|\downarrow\rangle$ denote the states with definite spin orientations along the z-axis, and the probabilities of the corresponding outcomes of the z-oriented Stern-Gerlach experiment are $W_\uparrow = |\alpha_\uparrow|^2$ and $W_\downarrow = |\alpha_\downarrow|^2$. Then it looks natural to suggest that if a particular experiment gave the outcome corresponding to the state $|\uparrow\rangle$, the spin had been in that state just before the experiment. For a classical system, such an answer would be certainly correct, and the fact that the probability $W_\uparrow$, defined for the statistical ensemble of all experiments (regardless of their outcome), may be less than 1 would merely reflect our ignorance about the real state of this particular system before the measurement – which the measurement just reveals.

However, as was first argued in the famous EPR paper published in 1935 by A. Einstein, B. Podolsky, and N. Rosen, such an answer becomes impossible in the case of an entangled quantum system, if only one of its components is measured with an instrument. The original EPR paper discussed

See, e.g., D. Averin, Phys. Rev. Lett. 88, 207901 (2002). See, e.g., G. Jaeger, Quantum Information: An Overview, Springer, 2006. 24 See, e.g., the monograph by H. Wiseman and G. Milburn, Quantum Measurement and Control, Cambridge U. Press (2009), more recent experiments by R. Vijay et al., Nature 490, 77 (2012), and references therein. 25 See, e.g., J. Wheeler and W. Zurek (eds.), Quantum Theory and Measurement, Princeton U. Press, 1983. 26 As was discussed in Sec. 1, the Stern-Gerlach-type experiments may be readily made virtually perfect, provided that we do not care about the evolution of the system after the single-shot measurement. 23


thought experiments with a pair of 1D particles prepared in a quantum state in which both the sum of their momenta and the difference of their coordinates simultaneously have definite values: p1 + p2 = 0, x1 – x2 = a.27 However, this discussion is usually recast into the equivalent Stern-Gerlach experiment shown in Fig. 4a.28 A source emits rare pairs of spin-½ particles propagating in opposite directions. The particle spin states are random, but the net spin of each pair is definitely equal to zero. After the spatial separation of the particles has become sufficiently large (see below), the spin state of each of them is measured with a Stern-Gerlach detector, with one of them (in Fig. 4, SG 1) somewhat closer to the particle source, so it makes its measurement first, at a time t1 < t2.

Fig. 10.4. (a) The general scheme of two-particle Stern-Gerlach experiments, and (b) the orientation of the detectors, assumed at Wigner's derivation of Bell's inequality (36).

First, let the detectors be oriented along the same direction, say the z-axis. Evidently, the probability of each detector giving any of the values $S_z = \pm\hbar/2$ is 50%. However, if the first detector has given the result $S_z = -\hbar/2$, then even before the second detector's measurement, we know that it will give the result $S_z = +\hbar/2$ with 100% probability. So far, this situation still allows for a classical interpretation, just as for the single-particle measurements: we may fancy that the second particle has a definite spin before the measurement, and the first measurement just removes our ignorance about that reality. In other words, the change of the probability of the outcome $S_z = +\hbar/2$ at the second detection, from 50% to 100%, is due to the statistical-ensemble re-definition: the 50% probability of this detection belongs to the ensemble of all experiments, while the 100% probability belongs to the sub-ensemble of experiments with the $S_z = -\hbar/2$ outcome of the first experiment.

However, let the source generate the spin pairs in the singlet state (8.18):

$$|s_{12}\rangle = \frac{1}{\sqrt 2}\left(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\right). \tag{10.24}$$

As was discussed in Sec. 8.2, this state satisfies the above assumptions: the probability of each value of $S_z$ of any particle is 50%, and the sum of both $S_z$ is definitely zero, so if the first detector's result is $S_z = -\hbar/2$, then the state of the remaining particle is $|\uparrow\rangle$, with zero uncertainty.29 Now let us use Eqs. (4.123) to represent the same state (24) in a different form:

27 This is possible because the corresponding operators commute: $[\hat p_1 + \hat p_2,\,\hat x_1 - \hat x_2] = [\hat p_1,\hat x_1] - [\hat p_2,\hat x_2] = 0$.
28 This version was first proposed by D. Bohm in 1951. Another equivalent and experimentally more convenient (and as a result, frequently used) technique is the degenerate parametric excitation of entangled optical photon pairs – see, e.g., the publications cited at the end of this section.
29 Here we assume that both detectors are perfect in the sense of their readout fidelity. As was discussed in Sec. 1, this condition may be closely approached in practical SG experiments.


$$|s_{12}\rangle = \frac{1}{\sqrt 2}\left[\frac{1}{\sqrt 2}\left(|\rightarrow\rangle + |\leftarrow\rangle\right)\otimes\frac{1}{\sqrt 2}\left(|\rightarrow\rangle - |\leftarrow\rangle\right) - \frac{1}{\sqrt 2}\left(|\rightarrow\rangle - |\leftarrow\rangle\right)\otimes\frac{1}{\sqrt 2}\left(|\rightarrow\rangle + |\leftarrow\rangle\right)\right]. \tag{10.25}$$

Opening the parentheses (carefully, without swapping the ket-vector order, which encodes the particle numbers!), we get an expression similar to Eq. (24), but now for the x-basis:

$$|s_{12}\rangle = \frac{1}{\sqrt 2}\left(|\leftarrow\rightarrow\rangle - |\rightarrow\leftarrow\rangle\right). \tag{10.26}$$

Hence if we use the first detector (closest to the particle source) to measure $S_x$ rather than $S_z$, then after it has given a certain result (say, $S_x = -\hbar/2$), we know for sure, before the second particle spin's measurement, that its $S_x$ component definitely equals $+\hbar/2$.

So, depending on the experiment performed on the first particle, the second particle, before its measurement, may be in one of two states – either with a definite component $S_z$ or with a definite component $S_x$, in each case with zero uncertainty. Evidently, this situation cannot be interpreted in classical terms – if the particles do not interact during the measurements. A. Einstein was deeply unhappy with this situation because it did not satisfy what, in his view, was a general requirement of any theory, which is nowadays called local reality. His definition of this requirement was as follows: "The real factual situation of system 2 is independent of what is done with system 1 that is spatially separated from the former". (Here the term "spatially separated" is not defined, but from the context it is clear that Einstein meant the detector separation by a superluminal interval, i.e. by the distance

$$|\mathbf{r}_1 - \mathbf{r}_2| > c\,|t_1 - t_2|, \tag{10.27}$$

where the measurement time difference on the right-hand side includes the measurement duration.) In Einstein's view, since quantum mechanics did not satisfy the local reality condition, it could not be considered a complete theory of Nature.

This situation naturally raises the question of whether something (usually called hidden variables) may be added to the quantum-mechanical description to enable it to satisfy the local reality requirement. The first definite statement in this regard was John von Neumann's "proof"30 (first famous, then infamous :-) that such variables cannot be introduced; for a while, his work satisfied the quantum mechanics practitioners, who apparently did not pay it much attention.31 A major new contribution to the problem was made only in the 1960s by John Bell.32 First of all, he found an elementary (in his words, "foolish") error in von Neumann's logic, which voids his "proof". Second, he demonstrated that Einstein's local reality condition is incompatible with the conclusions of quantum mechanics, which had been, by that time, confirmed by too many experiments to be seriously questioned.

Let me describe a particular version of Bell's result (suggested by E. Wigner), using the same EPR pair experiment (Fig. 4a), where each SG detector may be oriented in any of three directions: a, b, or c – see Fig. 4b. As we already know from Chapter 4, if a fully-polarized beam of spin-½ particles is

30 In his very early book J. von Neumann, Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics], Springer, 1932. (The first English translation was published only in 1955.)
31 Perhaps it would not have satisfied A. Einstein, but reportedly he did not know about von Neumann's publication before signing the EPR paper.
32 See, e.g., either J. Bell, Rev. Mod. Phys. 38, 447 (1966) or J. Bell, Foundations of Physics 12, 158 (1982).


passed through a Stern-Gerlach apparatus forming angle θ with the polarization axis, the probabilities of the two alternative outcomes of the experiment are

$$W(+\theta) = \cos^2\frac{\theta}{2}, \qquad W(-\theta) = \sin^2\frac{\theta}{2}. \tag{10.28}$$

Let us use this formula to calculate all joint probabilities of measurement outcomes, starting from the detectors 1 and 2 oriented, respectively, in the directions a and c. Since the angle between the negative direction of the a-axis and the positive direction of the c-axis is $\theta_{a-,c+} = \pi - \theta$ (see the dashed arrow in Fig. 4b), we get

$$W(a_-\cap c_+) = W(a_-)\,W(c_+\,|\,a_-) = W(a_-)\,W(+\theta_{a-,c+}) = \frac{1}{2}\cos^2\frac{\pi-\theta}{2} = \frac{1}{2}\sin^2\frac{\theta}{2}, \tag{10.29}$$

where W(x ∩ y) is the joint probability of both outcomes x and y, while W(x | y) is the conditional probability of the outcome x, provided that the outcome y has happened.33 Absolutely similarly,

$$W(c_-\cap b_+) = W(c_-)\,W(b_+\,|\,c_-) = \frac{1}{2}\sin^2\frac{\theta}{2}, \tag{10.30}$$

$$W(a_-\cap b_+) = W(a_-)\,W(b_+\,|\,a_-) = \frac{1}{2}\cos^2\frac{\pi-2\theta}{2} = \frac{1}{2}\sin^2\theta. \tag{10.31}$$

Now note that for any angle θ smaller than π/2 (as in the case shown in Fig. 4b), trigonometry gives

$$\frac{1}{2}\sin^2\theta > \frac{1}{2}\sin^2\frac{\theta}{2} + \frac{1}{2}\sin^2\frac{\theta}{2}. \tag{10.32}$$

(For example, for θ → 0, the left-hand side of this inequality tends to θ²/2, while the right-hand side tends to θ²/4.) Hence the quantum-mechanical result gives, in particular,

$$W(a_-\cap b_+) > W(a_-\cap c_+) + W(c_-\cap b_+), \qquad\text{for } \theta < \pi/2. \tag{10.33}$$
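The probabilities (29)–(31), and hence the inequality (33), may be verified by a direct numerical calculation with the singlet state (24). Note that, as its value ½sin²(θ/2) shows, W(a₋∩c₊) is, for the singlet, just the probability that both detectors give the "+" result along axes making the angle θ (the label a₋ referring to the a-component of the second particle's spin, as inferred from detector 1's a₊ reading). A sketch, with an arbitrary sample value of θ:

```python
import numpy as np

def spin_plus(angle):
    """Spin-up spinor along an axis at `angle` to the z-axis (axes in one plane)."""
    return np.array([np.cos(angle / 2), np.sin(angle / 2)])

def W_plus_plus(angle1, angle2, state):
    """Probability that both detectors, with axes at angle1 and angle2,
    give the '+' result for a given two-particle spin state."""
    proj = np.kron(spin_plus(angle1), spin_plus(angle2))
    return abs(np.vdot(proj, state)) ** 2

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)   # Eq. (24)

# Axes of Fig. 4b: a at 0, c at theta, b at 2*theta, with 0 < theta < pi/2.
theta = 0.9
W_ac = W_plus_plus(0.0, theta, singlet)          # W(a- ∩ c+) of Eq. (29)
W_cb = W_plus_plus(theta, 2 * theta, singlet)    # W(c- ∩ b+) of Eq. (30)
W_ab = W_plus_plus(0.0, 2 * theta, singlet)      # W(a- ∩ b+) of Eq. (31)

assert np.isclose(W_ac, 0.5 * np.sin(theta / 2) ** 2)
assert np.isclose(W_cb, 0.5 * np.sin(theta / 2) ** 2)
assert np.isclose(W_ab, 0.5 * np.sin(theta) ** 2)
assert W_ab > W_ac + W_cb          # the quantum-mechanical result, Eq. (33)
```

The last assertion holds for any 0 < θ < π/2, in agreement with Eq. (32).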

On the other hand, we can get a different inequality for these probabilities without calculating them from any particular theory, but by using just the local reality assumption. For that, let us prescribe some probability to each of the 2³ = 8 possible outcomes of a set of three spin measurements. (Due to the zero net spin of the particle pairs, the probabilities of the sets shown in the two columns of the table below have to be equal.)

Detector 1      Detector 2      Probability
a+ b+ c+        a– b– c–        W1
a+ b+ c–        a– b– c+        W2
a+ b– c+        a– b+ c–        W3
a+ b– c–        a– b+ c+        W4
a– b+ c+        a+ b– c–        W5
a– b+ c–        a+ b– c+        W6
a– b– c+        a+ b+ c–        W7
a– b– c–        a+ b+ c+        W8

33 The first equality in Eq. (29) is the well-known identity of the basic probability theory.


From the local reality point of view, these measurement options are independent, so, reading off the corresponding rows of the table, we may write:

$$W(a_-\cap c_+) = W_2 + W_4, \qquad W(c_-\cap b_+) = W_3 + W_7, \qquad W(a_-\cap b_+) = W_3 + W_4. \tag{10.34}$$

On the other hand, since no probability may be negative (by its very definition), we may always write

$$W_3 + W_4 \le \left(W_2 + W_4\right) + \left(W_3 + W_7\right). \tag{10.35}$$

Plugging into this inequality the values of the two parentheses, given by Eq. (34), we get

$$W(a_-\cap b_+) \le W(a_-\cap c_+) + W(c_-\cap b_+). \tag{10.36}$$
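This bound is easy to confirm numerically: for any assignment of nonnegative probabilities W₁…W₈ to the rows of the table, the right-hand side of Eq. (36) exceeds the left-hand side by exactly W₂ + W₇ ≥ 0, so no such assignment can violate the inequality. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    # Any nonnegative row probabilities W1...W8 (normalization is irrelevant):
    W1, W2, W3, W4, W5, W6, W7, W8 = rng.random(8)
    W_ac = W2 + W4               # Eq. (34)
    W_cb = W3 + W7
    W_ab = W3 + W4
    # Eq. (36): the excess of the right-hand side is exactly W2 + W7 >= 0.
    assert W_ab <= W_ac + W_cb + 1e-12
```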

This is Bell’s inequality, which has to be satisfied by any local-reality theory; it directly contradicts the quantum-mechanical result (33) – opening the issue to direct experimental testing. Such tests were started in the late 1960s, but the first results were vulnerable to two criticisms: (i) The detectors were not fast enough and not far enough to have the relation (27) satisfied. This is why, as a matter of principle, there was a chance that information on the first measurement outcome had been transferred (by some, mostly implausible) means to particles before the second measurement – the so-called locality loophole. (ii) The particle/photon detection efficiencies were too low to have sufficiently small error bars for both parts of the inequality – the detection loophole. Gradually, these loopholes have been closed.34 As expected, substantial violations of the Bell inequalities (36) (or their equivalent forms) have been proved, essentially rejecting any possibility to reconcile quantum mechanics with Einstein’s local reality requirement. 10.4. Interpretations of quantum mechanics The fact that quantum mechanics is incompatible with local reality, makes its reconciliation with our (classically bred) “common sense” rather challenging. Here is a brief list of the major interpretations of quantum mechanics, that try to provide at least a partial reconciliation of this kind. (i) The so-called Copenhagen interpretation – to which most physicists adhere. This “interpretation” does not really interpret anything; it just accepts the intrinsic stochasticity of measurement results in quantum mechanics and the absence of local reality, essentially saying: “Do not worry; this is just how it is; live with it”. I generally subscribe to this school of thought, with the following qualification. While the Copenhagen interpretation implies statistical ensembles (otherwise, how would you define the probability? – see Sec. 
1.3), its most frequently stated formulations35 do not put sufficient emphasis on their role, in particular on the ensemble re-definition as the only point of the human observer's involvement in a nearly-perfect measurement process – see Sec. 1 above. The most

34 Important milestones along that way were the experiments by A. Aspect et al., Phys. Rev. Lett. 49, 91 (1982) and M. Rowe et al., Nature 409, 791 (2001). Detailed reviews of the experimental situation were given, for example, by M. Genovese, Phys. Repts. 413, 319 (2005) and A. Aspect, Physics 8, 123 (2015); see also the later paper by J. Handsteiner et al., Phys. Rev. Lett. 118, 060401 (2017). Presently, a high-fidelity demonstration of the Bell inequality violation has become a standard test in virtually every experiment with entangled qubits used for quantum encryption research – see Sec. 8.5 and, in particular, the paper by J. Lin cited there.
35 With certain pleasant exceptions – see, e.g., L. Ballentine, Rev. Mod. Phys. 42, 358 (1970).


famous objection to the Copenhagen interpretation belongs to A. Einstein: "God does not play dice." OK, when Einstein speaks, we all should listen, but perhaps when God speaks (through random results of the same experiment), we have to pay even more attention.

(ii) Non-local reality. After the dismissal of J. von Neumann's "proof" by J. Bell, to the best of my knowledge, there has been no proof that hidden parameters could not be introduced, provided that they do not imply local reality. Of the constructive approaches, perhaps the most notable contribution was made by David Bohm,36 who developed Louis de Broglie's initial interpretation of the wavefunction as a "pilot wave", making it quantitative. In the wave-mechanics version of this concept, the wavefunction governed by the Schrödinger equation just guides a "real", point-like classical particle whose coordinates serve as hidden variables. However, this concept does not satisfy the notion of local reality. For example, the measurement of the particle's coordinate at a certain point r1 has to instantly change the wavefunction everywhere in space, including the points r2 in the superluminal range (27). After A. Einstein's private criticism, D. Bohm essentially abandoned his theory.

(iii) The many-worlds interpretation, introduced in 1957 by Hugh Everett and popularized in the 1960s and 1970s by Bryce DeWitt. In this interpretation, all possible measurement outcomes do happen, splitting the Universe into the corresponding number of "parallel multiverses", so that from one of them, other multiverses and hence other outcomes cannot be observed.
Let me leave to the reader an estimate of the rate at which the parallel multiverses have to be constantly generated (say, per second), taking into account that such generation should take place not only in explicit lab experiments but in every irreversible process – such as the fission of every atomic nucleus or the absorption/emission of every photon, everywhere in each multiverse – whether its result is formally recorded or not. Nicolaas van Kampen has called this interpretation a "mind-boggling fantasy".37 Even its main proponent, B. DeWitt, has confessed: "The idea is not easy to reconcile with common sense." I agree.

To summarize, as far as I know, none of these interpretations has yet provided a suggestion on how it might be tested experimentally to exclude the other ones. On the other hand, quantum mechanics makes correct (if sometimes probabilistic) predictions, which do not contradict any reliable experimental results we are aware of. Maybe this is not that bad for a scientific theory.38

10.5. Exercise problem

10.1.* The original (circa 1964) J. Bell's inequality was derived for the results of SG measurements performed on two non-interacting particles with zero net spin, by using the following local-reality-based assumption: the result of each single-particle measurement is uniquely determined (besides the experimental setup) by some c-number hidden parameter λ that may be random, i.e. change from experiment to experiment. Derive such an inequality for the experiment shown in Fig. 4 and compare it with the corresponding quantum-mechanical result for the singlet state (24).

36 D. Bohm, Phys. Rev. 85, 165; 180 (1952).
37 N. van Kampen, Physica A 153, 97 (1988). By the way, I highly recommend the very reasonable summary of the quantum measurement issues given in this paper, though I believe that the quantitative theory of dephasing, discussed in Chapter 7 of this course, might give additional clarity to some of van Kampen's statements.
38 For the reader who is not satisfied with this "positivistic" approach and wants to improve the situation, my earnest advice is to start not from square one but from reading what other (including some very clever!) people thought about it. The review collection by J. Wheeler and W. Zurek, cited above, may be a good starting point.


Konstantin K. Likharev

Essential Graduate Physics
Lecture Notes and Problems
Beta version

Open online access at http://commons.library.stonybrook.edu/egp/ and https://sites.google.com/site/likharevegp/

Part SM: Statistical Mechanics

Last corrections: September 27, 2023

A version of this material was published in 2019 under the title

Statistical Mechanics: Lecture Notes
IOPP, Essential Advanced Physics – Volume 7, ISBN 978-0-7503-1416-6,

with the model solutions of the exercise problems published under the title

Statistical Mechanics: Problems with Solutions
IOPP, Essential Advanced Physics – Volume 8, ISBN 978-0-7503-1420-6.

However, by now, this online version of the lecture notes, and the problem solutions available from the author, have been better corrected.

About the author: https://you.stonybrook.edu/likharev/

© K. Likharev

Essential Graduate Physics

SM: Statistical Mechanics

Table of Contents

Chapter 1. Review of Thermodynamics (24 pp.)
 1.1. Introduction: Statistical physics and thermodynamics
 1.2. The 2nd law of thermodynamics, entropy, and temperature
 1.3. The 1st and 3rd laws of thermodynamics, and heat capacity
 1.4. Thermodynamic potentials
 1.5. Systems with a variable number of particles
 1.6. Thermal machines
 1.7. Exercise problems (18)

Chapter 2. Principles of Physical Statistics (44 pp.)
 2.1. Statistical ensemble and probability
 2.2. Microcanonical ensemble and distribution
 2.3. Maxwell's Demon, information, and computing
 2.4. Canonical ensemble and the Gibbs distribution
 2.5. Harmonic oscillator statistics
 2.6. Two important applications
 2.7. Grand canonical ensemble and distribution
 2.8. Systems of independent particles
 2.9. Exercise problems (36)

Chapter 3. Ideal and Not-so-Ideal Gases (34 pp.)
 3.1. Ideal classical gas
 3.2. Calculating μ
 3.3. Degenerate Fermi gas
 3.4. The Bose-Einstein condensation
 3.5. Gases of weakly interacting particles
 3.6. Exercise problems (30)

Chapter 4. Phase Transitions (36 pp.)
 4.1. First-order phase transitions
 4.2. Continuous phase transitions
 4.3. Landau's mean-field theory
 4.4. Ising model: Weiss' molecular-field approximation
 4.5. Ising model: Exact and numerical results
 4.6. Exercise problems (24)

Chapter 5. Fluctuations (44 pp.)
 5.1. Characterization of fluctuations
 5.2. Energy and the number of particles
 5.3. Volume and temperature
 5.4. Fluctuations as functions of time
 5.5. Fluctuations and dissipation


 5.6. The Kramers problem and the Smoluchowski equation
 5.7. The Fokker-Planck equation
 5.8. Back to the correlation function
 5.9. Exercise problems (30)

Chapter 6. Elements of Kinetics (38 pp.)
 6.1. The Liouville theorem and the Boltzmann equation
 6.2. The Ohm law and the Drude formula
 6.3. Electrochemical potential and drift-diffusion equation
 6.4. Charge carriers in semiconductors: Statics and kinetics
 6.5. Thermoelectric effects
 6.6. Exercise problems (18)

***

Additional file (available from the author upon request):
Exercise and Test Problems with Model Solutions (156 + 26 = 182 problems; 272 pp.)

***

Author's Remarks

This graduate-level course in statistical mechanics (including an introduction to thermodynamics in Chapter 1) has a more or less traditional structure, with two notable exceptions. First, because of the growing interest in nanoscale systems and ultrasensitive physical measurements, large attention is paid to fluctuations of various physical variables. Namely, their discussion (in Chapter 5) includes not only the traditional topic of variances in thermal equilibrium but also the characterization of their time dependence (including the correlation function and the spectral density), as well as the Smoluchowski and Fokker–Planck equations for Brownian systems. A part of this chapter, including the discussion of the most important fluctuation-dissipation theorem (FDT), has an unavoidable overlap with Chapter 7 of the QM part of this series, devoted to open quantum systems. I tried to keep this overlap to a minimum, for example by using a different (and much shorter) derivation of the FDT in this course.

The second deviation from tradition is Chapter 6 on physical kinetics, reflecting my belief that such key theoretical tools as the Boltzmann kinetic equation, and such key notions as the difference between the electrostatic and electrochemical potentials in conductors and band bending in semiconductors, have to be in the arsenal of every educated scientist.


This page is intentionally left blank

Table of Contents

Page 4 of 4

Essential Graduate Physics

SM: Statistical Mechanics

Chapter 1. Review of Thermodynamics

This chapter starts with a brief discussion of the subject of statistical physics and thermodynamics, and the relation between these two disciplines. Then I proceed to review the basic notions and relations of thermodynamics. Most of this material is supposed to be known to the reader from their undergraduate studies,1 so the discussion is rather brief.

1.1. Introduction: Statistical physics and thermodynamics

Statistical physics (alternatively called “statistical mechanics”) and thermodynamics are two different but related approaches to the same goal: an approximate description of the “internal”2 properties of large physical systems, notably those consisting of N >> 1 identical particles – or other components. The traditional example of such a system is a human-scale portion of gas, with the number N of atoms/molecules3 of the order of the Avogadro number NA ~ 10²⁴ (see Sec. 4 below). The motivation for the statistical approach to such systems is straightforward: even if the laws governing the dynamics of each particle and their interactions were exactly known, and we had infinite computing resources at our disposal, calculating the exact evolution of the system in time would be impossible, at least because it is completely impracticable to measure the exact initial state of each component – in the classical case, the initial position and velocity of each particle. The situation is further exacerbated by the phenomena of chaos and turbulence,4 and the quantum-mechanical uncertainty, which do not allow the exact calculation of positions and velocities of the component particles even if their initial state is known with the best possible precision. As a result, in most situations, only statistical predictions about the behavior of such systems may be made, with probability theory becoming a major tool of the mathematical arsenal. However, the statistical approach is not as bad as it may look.
Indeed, it is almost self-evident that any measurable macroscopic variable characterizing a stationary system of N >> 1 particles as a whole (think, e.g., about the stationary pressure P of the gas contained in a fixed volume V) is nearly constant in time. Indeed, as we will see below, besides certain exotic exceptions, the relative magnitude of fluctuations – either in time, or among many macroscopically similar systems – of such a variable is of the order of 1/N^{1/2}, and for N ~ NA is extremely small. As a result, the average values of appropriate macroscopic variables may characterize the state of the system quite well – satisfactory for nearly all practical purposes. The calculation of relations between such average values is the only task of thermodynamics and the main task of statistical physics. (Fluctuations may be important, but due to their smallness, in most cases their analysis may be based on perturbative approaches – see Chapter 5.)

1 For remedial reading, I can recommend, for example (in the alphabetical order): C. Kittel and H. Kroemer, Thermal Physics, 2nd ed., W. H. Freeman (1980); F. Reif, Fundamentals of Statistical and Thermal Physics, Waveland (2008); D. V. Schroeder, Introduction to Thermal Physics, Addison Wesley (1999).
2 Here “internal” is an (admittedly loose) term meaning all the physics unrelated to the motion of the system as a whole. The most important example of internal dynamics is the thermal motion of atoms and molecules.
3 This is perhaps my best chance to reverently mention Democritus (circa 460-370 BC) – the Ancient Greek genius who was apparently the first one to conjecture the atomic structure of matter.
4 See, e.g., CM Chapters 8 and 9.

© K. Likharev

Now let us have a fast look at typical macroscopic variables that statistical physics and thermodynamics should operate with. Since I have already mentioned pressure P and volume V, let us start with this famous pair of variables. First of all, note that volume is an extensive variable, i.e. a variable whose value for a system consisting of several non-interacting parts is the sum of those of its parts. On the other hand, pressure is an example of an intensive variable whose value is the same for different parts of a system – if they are in equilibrium. To understand why P and V form a natural pair of variables, let us consider the classical playground of thermodynamics: a portion of a gas contained in a cylinder closed with a movable piston of area A (Fig. 1).

Fig. 1.1. Compressing gas: a portion of gas (pressure P, volume V) in a cylinder, compressed by an external force F applied to a piston of area A.

Neglecting the friction between the walls and the piston, and assuming that it is being moved so slowly that the pressure P is virtually the same for all parts of the volume at any instant,5 the elementary work of the external force F = PA compressing the gas, at a small piston’s displacement dx = –dV/A, is

$$ dW = F\,dx = \frac{F}{A}\,(A\,dx) = -P\,dV. \tag{1.1} $$
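Equation (1) defines only the elementary work; the total work of a slow compression is obtained by integrating –P dV along the process. The short sketch below illustrates this numerically; the ideal-gas equation of state P = NT/V used in it (with T in energy units, as in this course) is an assumption made purely for illustration, since no specific model of the gas has been introduced at this point.

```python
import math

def work_on_gas(P, V1, V2, steps=100_000):
    """Numerically integrate dW = -P(V) dV along a slow (quasistatic) process."""
    dV = (V2 - V1) / steps
    W = 0.0
    for i in range(steps):
        V = V1 + (i + 0.5) * dV  # midpoint rule
        W += -P(V) * dV
    return W

# Illustrative assumption: an ideal gas, P(V) = N*T/V, compressed
# isothermally from V1 down to V2 < V1.
N_T = 1.0            # the product N*T, in energy units
V1, V2 = 2.0, 1.0
W_numeric = work_on_gas(lambda V: N_T / V, V1, V2)
W_exact = N_T * math.log(V1 / V2)   # closed form: NT ln(V1/V2)
print(W_numeric, W_exact)
```

For a compression (V2 < V1) the work done on the gas comes out positive, in agreement with the sign convention of Eq. (1).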

The last expression is more general than the model shown in Fig. 1, and does not depend on the particular shape of the system’s surface. (Note that in the notation of Eq. (1), which will be used throughout this course, the elementary work done by the gas on its environment equals –dW.) From the point of view of analytical mechanics,6 V and (–P) are just one of many possible canonical pairs of generalized coordinates qj and generalized forces Fj, whose products dWj = Fjdqj give independent contributions to the total work of the environment on the system under analysis. For example, the reader familiar with the basics of electrostatics knows that if the spatial distribution E(r) of an external electric field does not depend on the electric polarization P(r) of a dielectric medium placed into this field, its elementary work on the medium is

$$ dW = \int \mathbf{E}(\mathbf{r})\cdot d\mathbf{P}(\mathbf{r})\, d^3 r = \sum_{j=1}^{3}\int E_j(\mathbf{r})\, dP_j(\mathbf{r})\, d^3 r. \tag{1.2a} $$

The most important cases when this condition is fulfilled (and hence Eq. (2a) is valid) are, first, long cylindrical samples in a parallel external field (see, e.g., EM Fig. 3.13) and, second, the polarization of a sample (of any shape) due to that of discrete electric dipoles pk, whose electric interaction is negligible. In the latter case, Eq. (2a) may be also rewritten as the sum over the single dipoles located at points rk:7

5 Such slow processes are called quasistatic; in the absence of static friction, they are (macroscopically) reversible: at their inversion, the system runs back through the same sequence of its macroscopic states.
6 See, e.g., CM Chapters 2 and 10.
7 Some of my SBU students needed an effort to reconcile the positive signs in Eq. (2b) with the negative sign in the well-known relation dUk = –E(rk)·dpk for the potential energy of a dipole in an external electric field – see, e.g.,


$$ dW = \sum_k dW_k, \quad \text{with}\quad dW_k = \mathbf{E}(\mathbf{r}_k)\cdot d\mathbf{p}_k. \tag{1.2b} $$

Very similarly, and at similar conditions on an external magnetic field H(r), its elementary work on a magnetic medium may be also represented in either of two forms:8

$$ dW = \mu_0 \int \mathbf{H}(\mathbf{r})\cdot d\mathbf{M}(\mathbf{r})\, d^3 r = \mu_0 \sum_{j=1}^{3}\int H_j(\mathbf{r})\, dM_j(\mathbf{r})\, d^3 r, \tag{1.3a} $$

$$ dW = \sum_k dW_k, \quad \text{with}\quad dW_k = \mu_0\, \mathbf{H}(\mathbf{r}_k)\cdot d\mathbf{m}_k, \tag{1.3b} $$

where M and mk are the vectors of, respectively, the medium’s magnetization and the magnetic moment of a single dipole. Formulas (2) and (3) show that the roles of generalized coordinates may be played by Cartesian components of the vectors P (or p) and M (or m), with the components of the electric and magnetic fields playing the roles of the corresponding generalized forces. This list may be extended to other interactions (such as gravitation, surface tension in fluids, etc.). Following tradition, I will use the {–P, V} pair in almost all the formulas below, but the reader should remember that they all are valid for any other pair {Fj, qj}.9 Again, the specific relations between the variables of each pair listed above may depend on the statistical properties of the system under analysis, but their definitions are not based on statistics.

The situation is very different for a very specific pair of variables, temperature T and entropy S, although these “sister variables” participate in many formulas of thermodynamics exactly as if they were just one more canonical pair {Fj, qj}. However, the very existence of these two notions is due to statistics. Namely, temperature T is an intensive variable that characterizes the degree of thermal “agitation” of the system’s components. On the contrary, the entropy S is an extensive variable that in most cases evades immediate perception by human senses; it is a quantitative measure of the disorder of the system, i.e. the degree of our ignorance about its exact microscopic state.10

The reason for the appearance of the {T, S} pair of variables in formulas of thermodynamics and statistical mechanics is that the statistical approach to large systems of particles brings some qualitatively new results, most notably the possibility of irreversible time evolution of collective (macroscopic) variables describing the system. On one hand, irreversibility looks absolutely natural in such phenomena as the diffusion of an ink drop in a glass of water. In the beginning, the ink molecules are located in a certain small part of the system’s volume, i.e. are to some extent ordered, while at the late stages of diffusion, the position of each molecule in the glass is essentially random. However, on

EM Eqs. (3.15). The resolution of this paradox is simple: each term of Eq. (2b) describes the work dWk of the electric field on the internal degrees of freedom of the kth dipole, changing its internal energy Ek: dEk = dWk. This energy change may be viewed as coming from the dipole’s potential energy in the field: dEk = –dUk.
8 Here, as in all my series, I am using the SI units; for their conversion to the Gaussian units, I have to refer the reader to the EM part of the series.
9 Note that in systems of discrete particles, most generalized forces, including the fields E and H, differ from the mechanical pressure P in the sense that their work may be explicitly partitioned into single-particle components – see Eqs. (2b) and (3b). This fact gives some discretion for the approaches based on thermodynamic potentials – see Sec. 4 below.
10 The notion of entropy was introduced into thermodynamics in 1865 by Rudolf Julius Emanuel Clausius on a purely phenomenological basis. In the absence of a clear understanding of entropy’s microscopic origin (which had to wait for the works by L. Boltzmann and J. Maxwell), this was an amazing intellectual achievement.


second thought, the irreversibility is rather surprising, taking into account that the “microscopic” laws governing the motion of the system’s components are time-reversible – such as Newton’s laws or the basic laws of quantum mechanics.11 Indeed, if at a late stage of the diffusion process, we reversed the velocities of all molecules exactly and simultaneously, the ink molecules would again gather (for a moment) into the original spot.12 The problem is that getting the information necessary for the exact velocity reversal is not practicable.

A quantitative discussion of the reversibility-irreversibility dilemma requires a strict definition of the basic notion of statistical mechanics (and indeed of the probability theory): the statistical ensemble, and I would like to postpone it until the beginning of Chapter 2. In particular, in that chapter, we will see that the basic law of irreversible behavior is a growth or constancy of the entropy S in any closed system. Thus, statistical mechanics, without defying the “microscopic” laws governing the evolution of the system’s components, introduces on top of them some new “macroscopic” laws, intrinsically related to information, i.e. the depth of our knowledge of the microscopic state of the system.

To conclude this brief discussion of variables, let me mention that, as in all fields of physics, a very special role in statistical mechanics is played by the energy E. To emphasize the commitment to disregard the motion of the system as a whole in this subfield of physics, the E considered in thermodynamics is frequently called the internal energy, though just for brevity, I will skip this adjective in most cases. The simplest example of such E is the sum of kinetic energies of molecules in a dilute gas at their thermal motion, but in general, the internal energy also includes not only the individual energies of the system’s components but also their interactions with each other.
Besides a few “pathological” cases of very-long-range interactions, the interactions may be treated as local; in this case, the internal energy is proportional to N, i.e. is an extensive variable. As will be shown below, other extensive variables with the dimension of energy are often very useful as well, including the (Helmholtz) free energy F, the Gibbs energy G, the enthalpy H, and the grand potential Ω. (The collective name for such variables is thermodynamic potentials.)

Now, we are ready for a brief discussion of the relationship between statistical physics and thermodynamics. While the task of statistical physics is to calculate the macroscopic variables discussed above13 for various microscopic models of the system, the main role of thermodynamics is to derive some general relations between the average values of the macroscopic variables (also called thermodynamic variables) that do not depend on specific models. Surprisingly, it is possible to accomplish such a feat using just a few either evident or very plausible general assumptions (sometimes called the laws of thermodynamics), which find their proofs in statistical physics.14 Such general relations allow for a substantial reduction of the number of calculations we have to do in statistical physics: in most cases, it is sufficient to calculate from the statistics just one or two variables, and then

11 Because of that, the possibility of the irreversible macroscopic behavior of microscopically reversible systems was questioned by some serious scientists as recently as in the late 19th century – notably by J. Loschmidt in 1876.
12 While quantum-mechanical effects, with their intrinsic uncertainty, may be quantitatively important in this example, our qualitative discussion does not depend on them.
13 Several other important quantities, for example the heat capacity C, may be calculated as partial derivatives of the basic variables discussed above. Also, at certain conditions, the number of particles N in a system cannot be fixed and should also be considered as an (extensive) variable – see Sec. 5 below.
14 Admittedly, some of these proofs are based on other plausible but deeper postulates, for example, the central statistical hypothesis (see Sec. 2.2 below), whose best proof, to my knowledge, is just the whole body of experimental data.


use thermodynamic relations to get all other properties of interest. Thus the science of thermodynamics, sometimes snubbed as a phenomenology, deserves every respect not only as a useful theoretical tool but also as a discipline more general than any particular statistical model. This is why the balance of this chapter is devoted to a brief review of thermodynamics.

1.2. The 2nd law of thermodynamics, entropy, and temperature

Thermodynamics accepts a phenomenological approach to the entropy S, postulating that there is such a unique extensive measure of the aggregate disorder, and that in a closed system (defined as a system completely isolated from its environment, i.e. the system with its internal energy fixed) it may only grow in time, reaching its constant (maximum) value at equilibrium:15

$$ dS \ge 0. \tag{1.4} $$

This postulate is called the 2nd law of thermodynamics – arguably its only substantial new law.16,17 This law, together with the additivity of S (as an extensive variable) in composite systems of non-interacting parts, is sufficient for a formal definition of temperature, and a derivation of its basic properties that comply with our everyday notion of this key variable.

Indeed, let us consider a closed system consisting of two fixed-volume subsystems (Fig. 2) whose internal relaxation is very fast in comparison with the rate of the thermal flow (i.e. the energy and entropy exchange) between the parts. In this case, on the latter time scale, each part is always in a quasistatic state, which may be described by a unique relation E(S) between its energy and entropy.18

Fig. 1.2. A composite thermodynamic system: two subsystems with energies E1,2 and entropies S1,2, exchanging energy dE and entropy dS.

Neglecting the energy of interaction between the parts (which is always possible at N >> 1, and in the absence of very-long-range interactions), we may use the extensive character of the variables E and S to write

$$ E = E_1(S_1) + E_2(S_2), \qquad S = S_1 + S_2, \tag{1.5} $$

for the full energy and entropy of the system. Now let us use them to calculate the following derivative:

15 Implicitly, this statement also postulates the existence, in a closed system, of thermodynamic equilibrium, an asymptotically reached state in which all macroscopic variables, including entropy, remain constant. Sometimes this postulate is called the 0th law of thermodynamics.
16 Two initial formulations of this law, later proved equivalent, were put forward independently by Lord Kelvin (born William Thomson) in 1851 and by Rudolf Clausius in 1854.
17 Note that according to Eq. (4), a macroscopically reversible process is possible only when the net entropy (of the system under consideration plus its environment involved in the process) does not change.
18 Here we strongly depend on a very important (and possibly the least intuitive) aspect of the 2nd law, namely that entropy is a unique macroscopic measure of disorder.


$$ \frac{dS}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_2}\,\frac{dE_2}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_2}\,\frac{d(E - E_1)}{dE_1}. \tag{1.6} $$

Since the total energy E of the closed system is fixed and hence independent of its re-distribution between the subsystems, we have to take dE/dE1 = 0, and Eq. (6) yields

$$ \frac{dS}{dE_1} = \frac{dS_1}{dE_1} - \frac{dS_2}{dE_2}. \tag{1.7} $$

According to the 2nd law of thermodynamics, when the two parts have reached the thermodynamic equilibrium, the total entropy S reaches its maximum, so dS/dE1 = 0, and Eq. (7) yields

$$ \frac{dS_1}{dE_1} = \frac{dS_2}{dE_2}. \tag{1.8} $$
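The maximum-entropy argument leading to Eq. (8) is easy to verify numerically. In the sketch below, the model entropies S_i = C_i ln E_i (so that dS_i/dE_i = C_i/E_i) are an assumption made only for this illustration:

```python
import math

# Two weakly coupled parts share a fixed total energy E; the entropy
# S1(E1) + S2(E - E1) should be maximal where dS1/dE1 = dS2/dE2.
# Model entropies S_i = C_i * ln(E_i) are an illustrative assumption.
C1, C2, E = 1.0, 3.0, 4.0

def total_entropy(E1):
    return C1 * math.log(E1) + C2 * math.log(E - E1)

# Brute-force scan for the maximum over the allowed range 0 < E1 < E.
E1_best = max((i / 1000 for i in range(1, 4000)), key=total_entropy)

# For this model, dS_i/dE_i = C_i/E_i, so the equilibrium condition
# C1/E1 = C2/(E - E1) gives E1 = E*C1/(C1 + C2) = 1.0 here.
print(E1_best)
print(E1_best / C1, (E - E1_best) / C2)  # the two ratios E_i/C_i coincide
```

The two ratios E_i/C_i printed at the end are equal at the entropy maximum – a foretaste of the definition of temperature given just below in the text.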

This equality shows that if a thermodynamic system may be partitioned into weakly interacting macroscopic parts, their derivatives dS/dE should be equal in equilibrium. The reciprocal of this derivative is called temperature. Taking into account that our analysis pertains to the situation (Fig. 2) when both volumes V1,2 are fixed, we may write this definition as

$$ \left(\frac{\partial E}{\partial S}\right)_V \equiv T, \tag{1.9} $$

the subscript V meaning that the volume is kept constant at the differentiation. (Such notation is common and very useful in thermodynamics, with its broad range of variables.) Note that according to Eq. (9), if the temperature is measured in energy units19 (as I will do in this course for the brevity of notation), then S is dimensionless. The transfer to the SI or Gaussian units, i.e. to the temperature TK measured in kelvins (not “Kelvins”, and not “degrees Kelvin”, please!), is given by the relation T = kBTK, where the Boltzmann constant kB ≈ 1.38×10⁻²³ J/K = 1.38×10⁻¹⁶ erg/K.20 In those units, the entropy becomes dimensional: SK = kBS.

The definition of temperature given by Eq. (9) is of course in sharp contrast with the popular notion of T as a measure of the average energy of one particle. However, as we will repeatedly see below, in many cases these two notions may be reconciled, with Eq. (9) being more general. In particular, the so-defined T is in agreement with our everyday notion of temperature:21

(i) according to Eq. (9), the temperature is an intensive variable (since both E and S are extensive), i.e., in a system of similar particles, it is independent of the particle number N;

19 Here I have to mention a traditional unit of thermal energy, the calorie, still being used in some applied fields. In the most common modern definition (as the so-called thermochemical calorie) it equals exactly 4.184 J.
20 For the more exact values of this and other constants, see appendix UCA: Selected Units and Constants. Note that both T and TK define the natural absolute (also called “thermodynamic”) scale of temperature, vanishing at the same point – in contrast to such artificial scales as the degrees Celsius (“centigrades”), defined as TC ≡ TK – 273.15, or the degrees Fahrenheit: TF ≡ (9/5)TC + 32.
21 Historically, this notion was initially only qualitative – just as something distinguishing “hot” from “cold”. After the invention of thermometers (the first one by Galileo Galilei in 1592), mostly based on the thermal expansion of fluids, this notion had become quantitative but not very deep: being understood as just “what thermometers measure” – until its physical sense as a measure of thermal motion’s intensity was revealed in the 19th century.


(ii) temperatures of all parts of a system are equal at equilibrium – see Eq. (8);

(iii) in a closed system whose parts are not in equilibrium, thermal energy (heat) always flows from the warmer part (with a higher T) to the colder part.

In order to prove the last property, let us revisit the closed composite system shown in Fig. 2, and consider another derivative:

$$ \frac{dS}{dt} = \frac{dS_1}{dt} + \frac{dS_2}{dt} = \frac{dS_1}{dE_1}\frac{dE_1}{dt} + \frac{dS_2}{dE_2}\frac{dE_2}{dt}. \tag{1.10} $$

If the internal state of each part is very close to equilibrium (as was assumed from the very beginning) at each moment of time, we can use Eq. (9) to replace the derivatives dS1,2/dE1,2 with 1/T1,2, getting

$$ \frac{dS}{dt} = \frac{1}{T_1}\frac{dE_1}{dt} + \frac{1}{T_2}\frac{dE_2}{dt}. \tag{1.11} $$

Since in a closed system E = E1 + E2 = const, these time derivatives are related as dE2/dt = –dE1/dt, and Eq. (11) yields

$$ \frac{dS}{dt} = \left(\frac{1}{T_1} - \frac{1}{T_2}\right)\frac{dE_1}{dt}. \tag{1.12} $$

But according to the 2nd law of thermodynamics, this derivative cannot be negative: dS/dt ≥ 0. Hence,

$$ \left(\frac{1}{T_1} - \frac{1}{T_2}\right)\frac{dE_1}{dt} \ge 0. \tag{1.13} $$
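Equations (11)–(13) may be illustrated with a small simulation. The relaxation law dE1/dt = k(1/T1 – 1/T2) and the caloric equations of state E_i = C_iT_i used below are assumptions made only for this sketch; the qualitative conclusions (heat flows from hot to cold, and dS/dt ≥ 0) do not depend on them:

```python
# Two bodies exchanging heat. The kinetic law dE1/dt = k*(1/T1 - 1/T2) and
# the constant heat capacities E_i = C_i*T_i are illustrative assumptions.
C1 = C2 = 1.0            # model heat capacities
T1, T2 = 4.0, 1.0        # initial temperatures: part 1 is warmer
k, dt = 0.5, 1e-3        # kinetic coefficient and time step (illustrative)

S_rate_min = float("inf")
for _ in range(100_000):
    flux = k * (1.0 / T1 - 1.0 / T2)       # dE1/dt, negative while T1 > T2
    S_rate = (1.0 / T1 - 1.0 / T2) * flux  # dS/dt of Eq. (12): k*(...)**2 >= 0
    S_rate_min = min(S_rate_min, S_rate)
    T1 += flux * dt / C1                   # dE1 = C1*dT1
    T2 -= flux * dt / C2                   # energy conservation: dE2 = -dE1
print(round(T1, 3), round(T2, 3))          # both approach the common value 2.5
print(S_rate_min >= 0.0)                   # the entropy production never went negative
```

Whatever the (positive) kinetic coefficient k, the entropy production rate in Eq. (12) is a square times k, i.e. non-negative, in agreement with Eq. (13).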

For example, if T1 > T2, then dE1/dt ≤ 0, i.e. the warmer part gives energy to its colder counterpart. Note also that such a heat exchange, at fixed volumes V1,2 and T1 ≠ T2, increases the total system’s entropy without performing any “useful” mechanical work – see Eq. (1).

1.3. The 1st and 3rd laws of thermodynamics, and heat capacity

Now let us consider a thermally insulated system whose volume V may be changed by force – see, for example, Fig. 1. Such a system is different from the fully closed one, because its energy E may be changed by the external force’s work – see Eq. (1):

$$ dE = dW = -P\,dV. \tag{1.14} $$

Let the volume change be not only quasistatic but also static-friction-free (reversible), so that the system is virtually at equilibrium at any instant. Such a reversible process, in the particular case of a thermally insulated system, is also called adiabatic. If the pressure P (or any generalized external force Fj) is deterministic, i.e. is a predetermined function of time, independent of the state of the system under analysis, it may be considered as coming from a fully ordered system, i.e. the one with zero entropy, with the aggregate system (consisting of the system under our analysis plus the source of the force) completely closed. Since the entropy of the total closed system should stay constant (see the second of Eqs. (5) above), the S of the system under analysis should stay constant on its own. Thus we arrive at a very important conclusion: at an adiabatic process in a system, its entropy cannot change. (Sometimes such a process is called isentropic.) This means that we may use Eq. (14) to write

$$ P = -\left(\frac{\partial E}{\partial V}\right)_S. \tag{1.15} $$
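Equation (15) may be tried on any model fundamental relation E(S, V). The relation used below, E = V^(–2/3) exp(2S/3), is an illustrative assumption (it mimics a monatomic ideal gas with all constants set to one); for it, Eq. (15) gives P ∝ V^(–5/3) at constant S, i.e. the familiar adiabat PV^(5/3) = const:

```python
import math

# For the assumed model E(S, V) = V**(-2/3) * exp(2*S/3), Eq. (15) gives
# P = -(∂E/∂V)_S = (2/3) * V**(-5/3) * exp(2*S/3), so along an adiabat
# (S = const) the product P * V**(5/3) must stay constant.
def P(S, V, d=1e-6):
    E = lambda V: V ** (-2.0 / 3.0) * math.exp(2.0 * S / 3.0)
    return -(E(V + d) - E(V - d)) / (2 * d)   # central difference for -(∂E/∂V)_S

S0 = 1.0
products = [P(S0, V) * V ** (5.0 / 3.0) for V in (0.5, 1.0, 2.0, 4.0)]
print(products)   # all (nearly) equal: the adiabat P*V**(5/3) = const
```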

Now let us consider a more general thermodynamic system that may also exchange thermal energy (“heat”) with its environment (Fig. 3).

Fig. 1.3. An example of the thermodynamic process involving both the mechanical work dW by the environment and the heat exchange dQ with it, for a system with energy E(S, V).

For such a system, our previous conclusion about the entropy’s constancy is not valid, so in equilibrium, S may be a function of not only the system’s energy E but also of its volume: S = S(E, V). Let us consider this relation resolved for energy: E = E(S, V), and write the general mathematical expression for the full differential of E as a function of these two independent arguments:

$$ dE = \left(\frac{\partial E}{\partial S}\right)_V dS + \left(\frac{\partial E}{\partial V}\right)_S dV. \tag{1.16} $$

This formula, based on the stationary relation E = E(S, V), is evidently valid not only in equilibrium but also for all reversible22 processes. Now, using Eqs. (9) and (15), we may rewrite Eq. (16) as

$$ dE = T\,dS - P\,dV. \tag{1.17} $$

According to Eq. (1), the second term on the right-hand side of this equation is just the work of the external force, so due to the conservation of energy,23 the first term has to be equal to the heat dQ transferred from the environment to the system (see Fig. 3):

$$ dE = dQ + dW, \tag{1.18} $$

$$ dQ = T\,dS. \tag{1.19} $$

The last relation, divided by T and then integrated along an arbitrary (but reversible!) process,

$$ S = \int \frac{dQ}{T} + \text{const}, \tag{1.20} $$

is sometimes used as an alternative definition of entropy S – provided that temperature is defined not by Eq. (9), but in some independent way. It is useful to recognize that entropy (like energy) may be defined
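As a simple application of Eq. (20): if heat has to be supplied in proportion to the temperature change, dQ = C dT with a temperature-independent coefficient C (an illustrative assumption), then heating from Ta to Tb changes the entropy by ΔS = ∫C dT/T = C ln(Tb/Ta). A quick numeric check:

```python
import math

# Illustrative assumption: a temperature-independent heat capacity C, so that
# dQ = C dT and, by Eq. (20), S(Tb) - S(Ta) = ∫ C dT/T = C ln(Tb/Ta).
C = 2.0
Ta, Tb = 1.0, 3.0

steps = 100_000
dT = (Tb - Ta) / steps
# midpoint-rule integration of dQ/T = C dT/T from Ta to Tb
dS_numeric = sum(C * dT / (Ta + (i + 0.5) * dT) for i in range(steps))
dS_exact = C * math.log(Tb / Ta)
print(dS_numeric, dS_exact)   # the two values agree
```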

22 Let me emphasize again that any adiabatic process is reversible, but not vice versa.
23 Such conservation, expressed by Eqs. (18)-(19), is commonly called the 1st law of thermodynamics. While it (in contrast with the 2nd law) does not present any new law of nature, and in particular was already used de-facto to write the first of Eqs. (5) and also Eq. (14), such a grand name was absolutely justified in the early 19th century when the mechanical nature of internal energy (including the motion of atoms and molecules) was not at all clear. In this context, the names of at least three great scientists: Benjamin Thompson (who gave, in 1799, convincing arguments that heat cannot be anything but a form of particle motion), Julius Robert von Mayer (who conjectured the conservation of the sum of the thermal and macroscopic mechanical energies in 1841), and James Prescott Joule (who proved this conservation experimentally two years later), have to be reverently mentioned.


only to an arbitrary constant, which does not affect any other thermodynamic observables. The common convention is to take

$$ S \to 0 \quad \text{at } T \to 0. \tag{1.21} $$

This condition is sometimes called the “3rd law of thermodynamics”, but it is important to realize that this is just a convention rather than a real law.24 Indeed, the convention corresponds well to the notion of the full order at T = 0 in some systems (e.g., separate atoms or perfect crystals), but creates ambiguity for other systems, e.g., amorphous solids (like the usual glasses) that may remain highly disordered for “astronomic” times, even at T → 0.

Now let us discuss the notion of heat capacity that, by definition, is the ratio dQ/dT, where dQ is the amount of heat that should be given to a system to raise its temperature by a small amount dT.25 (This notion is important because the heat capacity may be most readily measured experimentally.) The heat capacity depends, naturally, on whether the heat dQ goes only into an increase of the internal energy dE of the system (as it does if its volume V is constant), or also into the mechanical work (–dW) performed by the system at its expansion – as it happens, for example, if the pressure P, rather than the volume V, is fixed (the so-called isobaric process – see Fig. 4).

Fig. 1.4. The simplest example of the isobaric process: a gas under a piston of area A loaded with weight Mg, so that P = Mg/A = const, with the heat dQ being supplied to the gas.

Hence we should discuss at least two different quantities,26 the heat capacity at fixed volume,

$$ C_V \equiv \left(\frac{\partial Q}{\partial T}\right)_V, \tag{1.22} $$

and the heat capacity at fixed pressure,

$$ C_P \equiv \left(\frac{\partial Q}{\partial T}\right)_P, \tag{1.23} $$

24 Actually, the 3rd law (also called the Nernst theorem), as postulated by Walther Hermann Nernst in 1912, was different – and really meaningful: “It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps.” I will discuss this theorem at the end of Sec. 6.
25 By this definition, the full heat capacity of a system is an extensive variable, but it may be used to form such intensive variables as the heat capacity per particle, called the specific heat capacity, or just the specific heat. (Please note that the last terms are rather ambiguous: they are also used for the heat capacity per unit mass and per unit volume, so some caution is in order.)
26 Dividing both sides of Eq. (19) by dT, we get the general relation dQ/dT = TdS/dT, which may be used to rewrite the definitions (22) and (23) in the following forms:

$$ C_V = T\left(\frac{\partial S}{\partial T}\right)_V, \qquad C_P = T\left(\frac{\partial S}{\partial T}\right)_P, $$

which are more convenient for some applications.


and expect that for all “normal” (mechanically stable) systems, CP ≥ CV. The difference between CP and CV is minor for most liquids and solids, but may be very substantial for gases – see the next section.

1.4. Thermodynamic potentials

Since at a fixed volume dW = –PdV = 0, Eq. (18) yields dQ = dE, so we may rewrite Eq. (22) in another convenient form

$$ C_V = \left(\frac{\partial E}{\partial T}\right)_V, \tag{1.24} $$

so to calculate CV from a certain statistical-physics model, we only need to calculate E as a function of temperature and volume. If we want to obtain a similarly convenient expression for CP, the best way is to introduce a new notion of so-called thermodynamic potentials – whose introduction and effective use is perhaps one of the most impressive techniques of thermodynamics. For that, let us combine Eqs. (1) and (18) to write the 1st law of thermodynamics in its most common form

$$ dQ = dE + P\,dV. \tag{1.25} $$

At an isobaric process (Fig. 4), i.e. at P = const, this expression is reduced to

dQ P

 dE P  d ( PV ) P  d ( E  PV ) P .

(1.26)

Thus, if we introduce a new function with the dimensionality of energy:27

$$ H \equiv E + PV, \tag{1.27} $$

called enthalpy (or, sometimes, the “heat function” or the “heat contents”),28 we may rewrite Eq. (23) as

$$ C_P = \left(\frac{\partial H}{\partial T}\right)_P. \tag{1.28} $$
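Definitions (24) and (28) may be tried out on a toy model. The classical monatomic ideal gas assumed below (E = (3/2)NT and PV = NT, so that H = E + PV = (5/2)NT, with T in energy units) is not derived at this point of the course; it serves only to illustrate how CV and CP follow from E(T) and H(T):

```python
# Illustrative assumption: a classical monatomic ideal gas, for which
# E = (3/2) N T, PV = N T, and hence H = E + PV = (5/2) N T.
N = 1000.0       # number of particles (illustrative)

def E(T):        # internal energy, with T in energy units
    return 1.5 * N * T

def H(T):        # enthalpy of the same gas
    return E(T) + N * T

T0, dT = 2.0, 1e-6
C_V = (E(T0 + dT) - E(T0 - dT)) / (2 * dT)   # (dE/dT)_V, Eq. (24)
C_P = (H(T0 + dT) - H(T0 - dT)) / (2 * dT)   # (dH/dT)_P, Eq. (28)
print(C_V, C_P, C_P - C_V)                   # (3/2)N, (5/2)N, and N
```

For this model the difference CP – CV equals N (i.e. NkB in the SI units) – a substantial fraction of either heat capacity, illustrating the remark above that the difference may be large for gases.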

Comparing Eqs. (28) and (24), we see that for the heat capacity, the enthalpy H plays the same role at fixed pressure as the internal energy E plays at fixed volume.

Now let us explore the enthalpy’s properties at an arbitrary reversible process, lifting the restriction P = const, but keeping the definition (27). Differentiating this equality, we get

$$ dH = dE + P\,dV + V\,dP. \tag{1.29} $$

Plugging into this relation Eq. (17) for dE, we see that the terms PdV cancel, yielding a very simple expression

$$ dH = T\,dS + V\,dP, \tag{1.30} $$

whose right-hand side differs from Eq. (17) only by the swap of P and V in the second term, with the simultaneous change of its sign. Formula (30) shows that if H has been found (say, experimentally

27 From the point of view of mathematics, Eq. (27) is a particular case of the so-called Legendre transformations.
28 This function (as well as the Gibbs free energy G, see below) was introduced in 1875 by J. Gibbs, though the term “enthalpy” was coined (much later) by H. Onnes.

Chapter 1

Page 10 of 24


Essential Graduate Physics

SM: Statistical Mechanics

measured or calculated for a certain microscopic model) as a function of the entropy S and the pressure P of a system, we can calculate its temperature T and volume V by simple partial differentiation:

$$ T = \left(\frac{\partial H}{\partial S}\right)_P, \qquad V = \left(\frac{\partial H}{\partial P}\right)_S. \qquad (1.31) $$

The comparison of the first of these relations with Eq. (9) shows that not only for the heat capacity but for the temperature as well, at fixed pressure, enthalpy plays the same role as played by internal energy at fixed volume.

This success immediately raises the question of whether we could develop this idea further on, by defining other useful thermodynamic potentials – the variables with the dimensionality of energy that would have similar properties – first of all, a potential that would enable a similar swap of T and S in its full differential, in comparison with Eq. (30). We already know that an adiabatic process is a reversible process with constant entropy, inviting analysis of a reversible process with constant temperature. Such an isothermal process may be implemented, for example, by placing the system under consideration into thermal contact with a much larger system (called either the heat bath, or “heat reservoir”, or “thermostat”) that remains in thermodynamic equilibrium at all times – see Fig. 5.

[Figure: a system in contact with a large heat bath at temperature T, exchanging heat dQ.]
Fig. 1.5. The simplest example of the isothermal process.

Due to its very large size, the heat bath’s temperature T does not depend on what is being done with our system. If the change is being done sufficiently slowly (i.e. reversibly), this temperature is also the temperature of our system – see Eq. (8) and its discussion. Let us calculate the elementary mechanical work dW (1) at such a reversible isothermal process. According to the general Eq. (18), dW = dE – dQ. Plugging dQ from Eq. (19) into this equality, for T = const we get

$$ dW|_T = dE - T\,dS = d(E - TS) \equiv dF, \qquad (1.32) $$

where the following combination,

$$ F \equiv E - TS, \qquad (1.33) $$

Free energy: definition

is called the free energy (or the “Helmholtz free energy”, or just the “Helmholtz energy”29). Just as we have done for the enthalpy, let us establish properties of this new thermodynamic potential for an arbitrarily small, reversible (now not necessarily isothermal!) variation of variables, while keeping the definition (33). Differentiating this relation and then using Eq. (17), we get

$$ dF = -S\,dT - P\,dV. \qquad (1.34) $$

Free energy: differential

29 It was named after Hermann von Helmholtz (1821-1894). The last of the listed terms for F was recommended by the most recent (1988) IUPAC decision, but I will use the first term, which prevails in the physics literature. The origin of the adjective “free” stems from Eq. (32): F may be interpreted as the part of the internal energy that is “free” to be transferred to mechanical work, at the (very common) isothermal process.


Thus, if we know the function F(T, V), we can calculate S and P by simple differentiation:

$$ S = -\left(\frac{\partial F}{\partial T}\right)_V, \qquad P = -\left(\frac{\partial F}{\partial V}\right)_T. \qquad (1.35) $$

Now we may notice that the system of all partial derivatives may be made full and symmetric if we introduce one more thermodynamic potential. Indeed, we have already seen that each of the three already introduced thermodynamic potentials (E, H, and F) has an especially simple full differential if it is considered as a function of its two canonical arguments: one of the “thermal variables” (either S or T) and one of the “mechanical variables” (either P or V):30

$$ E = E(S, V); \qquad H = H(S, P); \qquad F = F(T, V). \qquad (1.36) $$

In this list of pairs of four arguments, only one pair is missing: {T, P}. The thermodynamic function of this pair, which gives the two remaining variables (S and V) by simple differentiation, is called the Gibbs energy (or sometimes the “Gibbs free energy”): G = G(T, P). The way to define it in a symmetric fashion is evident from the so-called circular diagram shown in Fig. 6.

[Figure: panel (a) shows the circular diagram, with the potentials E, H, G, and F on the sides of a square, the variables S, P, T, and V in its corners, and transformation arrows marked +PV and –TS; panel (b) shows an example of its use.]
Fig. 1.6. (a) The circular diagram and (b) an example of its use for variable calculation. The thermodynamic potentials are typeset in red, each flanked with its two canonical arguments.

In this diagram, each thermodynamic potential is placed between its two canonical arguments – see Eq. (36). The left two arrows in Fig. 6a show the way the potentials H and F have been obtained from energy E – see Eqs. (27) and (33). This diagram hints that G has to be defined as shown by either of the two right arrows on that panel, i.e. as

$$ G \equiv F + PV = H - TS = E - TS + PV. \qquad (1.37) $$

Gibbs energy: definition

In order to verify this idea, let us calculate the full differential of this new thermodynamic potential, using, e.g., the first form of Eq. (37) together with Eq. (34):

$$ dG = dF + d(PV) = (-S\,dT - P\,dV) + (P\,dV + V\,dP) = -S\,dT + V\,dP, \qquad (1.38) $$

Gibbs energy: differential

so if we know the function G(T, P), we can indeed readily calculate both entropy and volume:

$$ S = -\left(\frac{\partial G}{\partial T}\right)_P, \qquad V = \left(\frac{\partial G}{\partial P}\right)_T. \qquad (1.39) $$
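Relations like Eqs. (35) and (39), and the Maxwell relations mentioned in footnote 31 below, stem from nothing but the symmetry of mixed second derivatives, so they can be checked symbolically. A minimal sketch, assuming the third-party sympy library (an illustration only, not part of these notes):

```python
import sympy as sp

T, P = sp.symbols('T P', positive=True)
G = sp.Function('G')(T, P)   # an arbitrary (smooth) Gibbs energy G(T, P)

# Eq. (39): entropy and volume as partial derivatives of G
S = -sp.diff(G, T)
V = sp.diff(G, P)

# Maxwell relation (dS/dP)_T = -(dV/dT)_P: both sides reduce to the
# same mixed second derivative of G, so their difference vanishes
lhs = sp.diff(S, P)
rhs = -sp.diff(V, T)
assert sp.simplify(lhs - rhs) == 0
```

The same one-line check works for any of the four potentials listed in Eq. (36).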

30 Note the similarity of this situation with that in analytical mechanics (see, e.g., CM Chapters 2 and 10): the Lagrangian function may be used for a simple derivation of the equations of motion if it is expressed as a function of generalized coordinates and their velocities, while to use the Hamiltonian function in a similar way, it has to be expressed as a function of the generalized coordinates and the corresponding momenta.


The circular diagram completed in this way is a good mnemonic tool for recalling Eqs. (9), (15), (31), (35), and (39), which express thermodynamic variables as partial derivatives of the thermodynamic potentials. Indeed, the variable in any corner of the diagram may be found as a partial derivative of any of the two potentials that are not its direct neighbors, over the variable in the opposite corner. For example, the green line in Fig. 6b corresponds to the second of Eqs. (39), while the blue line, to the second of Eqs. (31). At this procedure, all the derivatives giving the variables of the upper row (S and P) have to be taken with negative signs, while those giving the variables of the bottom row (V and T), with positive signs.31

Now I have to justify the collective name “thermodynamic potentials” used for E, H, F, and G. For that, let us consider a macroscopically irreversible process, for example, direct thermal contact of two bodies with different initial temperatures. As was discussed in Sec. 2, at such a process, the entropy may grow even without the external heat flow: dS ≥ 0 at dQ = 0 – see Eq. (12). This means that at a more general process with dQ ≠ 0, the entropy may grow faster than predicted by Eq. (19), which has been derived for a reversible process, so

$$ dS \ge \frac{dQ}{T}, \qquad (1.40) $$

with the equality approached in the reversible limit. Plugging Eq. (40) into Eq. (18) (which, being just the energy conservation law, remains valid for irreversible processes as well), we get

$$ dE \le T\,dS - P\,dV. \qquad (1.41) $$

We can use this relation to have a look at the behavior of other thermodynamic potentials in irreversible situations, still keeping their definitions given by Eqs. (27), (33), and (37). Let us start from the (very common) case when both the temperature T and the volume V of a system are kept constant. If the process is reversible, then according to Eq. (34), the full time derivative of the free energy F would equal zero. Eq. (41) says that in an irreversible process, this is not necessarily so: if dT = dV = 0, then

$$ \frac{dF}{dt} = \frac{d}{dt}(E - TS) = \frac{dE}{dt} - T\frac{dS}{dt} \le 0. \qquad (1.42) $$

Hence, in the general (irreversible) situation, F can only decrease, but not increase in time. This means that F eventually approaches its minimum value F(T, V) given by the equations of reversible thermodynamics. To re-phrase this important conclusion, in the case T = const and V = const, the free energy F, i.e. the difference E – TS, plays the role of the potential energy in the classical mechanics of dissipative processes: its minimum corresponds to the (in the case of F, thermodynamic) equilibrium of the system. This is one of the key results of thermodynamics, and I invite the reader to give it some thought. One possible handwaving interpretation of this fact is that the heat bath with fixed T > 0, i.e. with a substantial thermal agitation of its components, “wants” to impose thermal disorder in the system immersed into it, by “rewarding” it with lower F for any increase of the disorder.

31 There is also a wealth of other relations between thermodynamic variables that may be represented as second derivatives of the thermodynamic potentials, including four Maxwell relations such as (∂S/∂V)_T = (∂P/∂T)_V, etc. (They may be readily recovered from the well-known property of a function of two independent arguments, say, f(x, y): ∂(∂f/∂x)/∂y = ∂(∂f/∂y)/∂x.) In this chapter, I will list only the thermodynamic relations that will be used later in the course; a more complete list may be found, e.g., in Sec. 16 of the book by L. Landau and E. Lifshitz, Statistical Physics, Part 1, 3rd ed., Pergamon, 1980 (and its later re-printings).


Repeating the calculation for a different case, T = const, P = const, it is easy to see that in this case the same role is played by the Gibbs energy:

$$ \frac{dG}{dt} = \frac{d}{dt}(E - TS + PV) = \frac{dE}{dt} - T\frac{dS}{dt} + P\frac{dV}{dt} \le \left(T\frac{dS}{dt} - P\frac{dV}{dt}\right) - T\frac{dS}{dt} + P\frac{dV}{dt} = 0, \qquad (1.43) $$

so the thermal equilibrium now corresponds to the minimum of G rather than F. For the two remaining thermodynamic potentials, E and H, calculations similar to Eqs. (42) and (43) are possible but make less sense because that would require keeping S = const (with V = const for E, and P = const for H) for an irreversible process, but it is usually hard to prevent the entropy from growing if initially it had been lower than its equilibrium value, at least on the long-term basis.32 Thus physically, the circular diagram is not so symmetric after all: G and F are somewhat more useful for most practical calculations than E and H.

Note that the difference G – F = PV between the two “more useful” potentials has very little to do with thermodynamics at all because this difference exists (although is not much advertised) in classical mechanics as well.33 Indeed, the difference may be generalized as G – F = –ℱq, where q is a generalized coordinate, and ℱ is the corresponding generalized force. The minimum of F corresponds to the equilibrium of an autonomous system (with ℱ = 0), while the equilibrium position of the same system under the action of an external force ℱ is given by the minimum of G. Thus the external force “wants” the system to yield to its effect, “rewarding” it with lower G.

Moreover, the difference between F and G becomes a bit ambiguous (approach-dependent) when the product ℱq may be partitioned into single-particle components – just as it is done in Eqs. (2b) and (3b) for the electric and magnetic fields. Here the applied field may be taken into account on the microscopic level, by including its effect directly into the energy ε_k of each particle. In this case, the field contributes to the total internal energy E directly, and hence the thermodynamic equilibrium (at T = const) is described as the minimum of F. (We may say that in this case F = G, unless a difference between these thermodynamic potentials is created by the actual mechanical pressure P.) However, in some cases, typically for condensed systems with their strong interparticle interactions, the easier (and sometimes the only practicable34) way to account for the field is on the macroscopic level, by taking G = F – ℱq. In this case, the same equilibrium state is described as the minimum of G. (Several examples of this dichotomy will be given later in this course.) Whatever the choice, one should mind not to take the same field effect into account twice.

There are a few practicable systems, notably including the so-called adiabatic magnetic refrigerators (to be discussed in Chapter 2), where the unintentional growth of S is so slow that the condition S = const may be closely approached during a finite but substantial time interval. 33 It is convenient to describe it as the difference between the “usual” (internal) potential energy U of the system to its “Gibbs potential energy” UG – see CM Sec. 1.4. For the readers who skipped that discussion: my pet example is the usual elastic spring with U = kx2/2, under the effect of an external force F, whose equilibrium position (x0 = F/k) evidently corresponds to the minimum of UG = U – Fx, rather than just U. 34 An example of such an extreme situation is the case when an external magnetic field H is applied to a macroscopic sample of a type-1 superconductor in its so-called intermediate state, in which the sample partitions into domains of the “normal” phase with B = 0H, and the superconducting phase with B = 0. (For more on this topic see, e.g., EM Secs. 6.4-6.5.) In this case, the field is effectively applied to the interfaces between the domains, very similarly to the mechanical pressure applied to a gas portion via a piston – see Fig. 1 again. Chapter 1


One more important conceptual question I would like to discuss here is why statistical physics usually pursues the calculation of thermodynamic potentials rather than just of a relation between P, V, and T. (Such a relation is called the equation of state of the system.) Let us explore this issue on the particular but very important example of an ideal classical gas in thermodynamic equilibrium, for which the equation of state should be well known to the reader from undergraduate physics:35

$$ PV = NT, \qquad (1.44) $$

Ideal gas: equation of state

where N is the number of particles in volume V. (In Chapter 3, we will derive Eq. (44) from statistics.) Let us try to use it for the calculation of all thermodynamic potentials, and all other thermodynamic variables discussed above. We may start, for example, from the calculation of the free energy F. Indeed, integrating the second of Eqs. (35) with the pressure calculated from Eq. (44), P = NT/V, we get

$$ F = -\int P\,dV\,\Big|_{T=\mathrm{const}} = -NT\int\frac{dV}{V} = -NT\int\frac{d(V/N)}{(V/N)} = -NT\ln\frac{V}{N} + Nf(T), \qquad (1.45) $$

where V has been divided by N in both instances just to represent F as a manifestly extensive variable, in this uniform system proportional to N. The integration “constant” f(T) is some function of temperature, which cannot be recovered from the equation of state. This function affects all other thermodynamic potentials, and the entropy as well. Indeed, using the first of Eqs. (35) together with Eq. (45), we get

$$ S = -\left(\frac{\partial F}{\partial T}\right)_V = N\ln\frac{V}{N} - N\frac{df(T)}{dT}, \qquad (1.46) $$

and now may combine Eqs. (33) with (46) to calculate the (internal) energy of the gas,36

$$ E = F + TS = \left(-NT\ln\frac{V}{N} + Nf(T)\right) + T\left(N\ln\frac{V}{N} - N\frac{df(T)}{dT}\right) = N\left(f(T) - T\frac{df(T)}{dT}\right). \qquad (1.47) $$

From here, we may use Eqs. (27), (44), and (47) to calculate the gas’ enthalpy,

$$ H = E + PV = E + NT = N\left(f(T) - T\frac{df(T)}{dT} + T\right), \qquad (1.48) $$

and, finally, plug Eqs. (44) and (45) into Eq. (37) to calculate its Gibbs energy

$$ G = F + PV = N\left(-T\ln\frac{V}{N} + f(T) + T\right). \qquad (1.49) $$
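The whole chain (44)–(49) can be verified symbolically, keeping f(T) as an arbitrary function. A sketch assuming the third-party sympy library (an illustration only, not part of the course material):

```python
import sympy as sp

T, V, N = sp.symbols('T V N', positive=True)
f = sp.Function('f')                       # the arbitrary f(T) of Eq. (45)

F = -N*T*sp.log(V/N) + N*f(T)              # Eq. (45)

P = -sp.diff(F, V)                         # second of Eqs. (35)
assert sp.simplify(P*V - N*T) == 0         # recovers the equation of state (44)

S = -sp.diff(F, T)                         # first of Eqs. (35), i.e. Eq. (46)
E = sp.expand(F + T*S)                     # Eq. (33) inverted
H = sp.expand(E + P*V)                     # Eq. (27)
G = sp.expand(F + P*V)                     # Eq. (37)

fp = sp.diff(f(T), T)
assert sp.simplify(E - N*(f(T) - T*fp)) == 0       # Eq. (47)
assert sp.simplify(H - N*(f(T) - T*fp + T)) == 0   # Eq. (48)
assert sp.simplify(G - (F + N*T)) == 0             # Eq. (49)
```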

35 The long history of the gradual discovery of this relation includes the very early (circa 1662) work by R. Boyle and R. Townely, followed by contributions from H. Power, E. Mariotte, J. Charles, J. Dalton, and J. Gay-Lussac. It was fully formulated by Benoît Paul Émile Clapeyron in 1834, in the form PV = nRT_K, where T_K is the temperature in kelvins, n is the number of moles in the gas sample, and R ≈ 8.31 J/(mole·K) is the so-called gas constant. This form is equivalent to Eq. (44), taking into account that R = k_B N_A, where N_A = 6.022 140 76×10²³ mole⁻¹ is the Avogadro number, i.e. the number of molecules per mole. (By the mole’s definition, N_A is just the reciprocal mass, in grams, of the 1/12th part of the ¹²C atom, which is close to the masses of one proton or one neutron – see Appendix UCA: Selected Units and Constants.) Historically, this equation of state was the main argument for the introduction of the absolute temperature T, because only with it, the equation acquires the spectacularly simple form (44).

36 Note that Eq. (47), in particular, describes a very important property of the ideal classical gas: its energy depends only on temperature (and the number of particles), but not on volume or pressure.


One might ask whether the function f(T) is physically significant, or it is something like the inconsequential, arbitrary constant – like the one that may be always added to the potential energy in non-relativistic mechanics. In order to address this issue, let us calculate, from Eqs. (24) and (28), both heat capacities, which are evidently measurable quantities:

$$ C_V = \left(\frac{\partial E}{\partial T}\right)_V = -NT\frac{d^2 f}{dT^2}, \qquad (1.50) $$

$$ C_P = \left(\frac{\partial H}{\partial T}\right)_P = N\left(-T\frac{d^2 f}{dT^2} + 1\right) = C_V + N. \qquad (1.51) $$

We see that the function f(T), or at least its second derivative, is measurable.37 (In Chapter 3, we will calculate this function for two simple “microscopic” models of the ideal classical gas.) The meaning of this function is evident from the physical picture of the ideal gas: the pressure P exerted on the walls of the containing volume is produced only by the translational motion of the gas molecules, while their internal energy E (and hence other thermodynamic potentials) may be also contributed by the internal dynamics of the molecules – their rotation, vibration, etc. Thus, the equation of state does not give us the full thermodynamic description of a system, while the thermodynamic potentials do.

1.5. Systems with a variable number of particles

Now we have to consider one more important case: when the number N of particles in a system is not rigidly fixed, but may change as a result of a thermodynamic process. A typical example of such a system is a gas sample separated from the environment by a penetrable partition – see Fig. 7.38

[Figure: a system exchanging particles (dN) with its environment through a penetrable partition.]
Fig. 1.7. An example of a system with a variable number of particles.

Let us analyze this situation in the simplest case when all the particles are similar. (In Sec. 4.1, this analysis will be extended to systems with particles of several sorts.) In this case, we may consider N as an independent thermodynamic variable whose variation may change the energy E of the system, so Eq. (17) (valid for a slow, reversible process) should now be generalized as

$$ dE = T\,dS - P\,dV + \mu\,dN, \qquad (1.52) $$

Chemical potential: definition

37 Note, however, that the difference C_P – C_V = N is independent of f(T). (If the temperature is measured in kelvins, this relation takes a more familiar form C_P – C_V = nR.) It is straightforward (and hence left for the reader’s exercise) to prove that the difference C_P – C_V of any system is fully determined by its equation of state.

38 Another important example is a gas in contact with an open-surface liquid or solid of similar molecules.
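The claim of footnote 37, that C_P – C_V = N for the ideal classical gas regardless of f(T), is straightforward to confirm symbolically. A sketch assuming the third-party sympy library (not part of the text):

```python
import sympy as sp

T, V, N = sp.symbols('T V N', positive=True)
f = sp.Function('f')

F = -N*T*sp.log(V/N) + N*f(T)          # Eq. (45)
S = -sp.diff(F, T)
E = sp.expand(F + T*S)                 # Eq. (47): E = N(f - T f')
H = sp.expand(E + N*T)                 # Eq. (48), using PV = NT

C_V = sp.diff(E, T)                    # Eq. (50)
C_P = sp.diff(H, T)                    # Eq. (51)

assert sp.simplify(C_V + N*T*sp.diff(f(T), T, 2)) == 0   # C_V = -NT f''
assert sp.simplify(C_P - C_V - N) == 0                   # independent of f(T)
```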


where μ is some new function of state, called the chemical potential.39 Keeping the definitions of other thermodynamic potentials, given by Eqs. (27), (33), and (37), intact, we see that the expressions for their differentials should be generalized as

$$ dH = T\,dS + V\,dP + \mu\,dN, \qquad (1.53a) $$
$$ dF = -S\,dT - P\,dV + \mu\,dN, \qquad (1.53b) $$
$$ dG = -S\,dT + V\,dP + \mu\,dN, \qquad (1.53c) $$

so the chemical potential may be calculated as either of the following partial derivatives:40

$$ \mu = \left(\frac{\partial E}{\partial N}\right)_{S,V} = \left(\frac{\partial H}{\partial N}\right)_{S,P} = \left(\frac{\partial F}{\partial N}\right)_{T,V} = \left(\frac{\partial G}{\partial N}\right)_{T,P}. \qquad (1.54) $$

Despite the formal similarity of all Eqs. (54), one of them is more consequential than the others. Indeed, the Gibbs energy G is the only thermodynamic potential that is a function of two intensive parameters, T and P. However, just as all thermodynamic potentials, G has to be extensive, so in a system of similar particles it has to be proportional to N:

$$ G = Ng, \qquad (1.55) $$

where g is some function of T and P. Plugging this expression into the last of Eqs. (54), we see that μ equals exactly this function, so

$$ \mu = \frac{G}{N}, \qquad (1.56) $$

μ as Gibbs energy

i.e. the chemical potential is just the Gibbs energy per particle.

In order to demonstrate how vital the notion of chemical potential may be, let us consider the situation (parallel to that shown in Fig. 2) when a system consists of two parts, with equal pressure and temperature, that can exchange particles at a relatively slow rate (much slower than the speed of the internal relaxation of each part). Then we can write two equations similar to Eqs. (5):

$$ N = N_1 + N_2, \qquad G = G_1 + G_2, \qquad (1.57) $$

where N = const, and Eq. (56) may be used to describe each component of G:

$$ G = \mu_1 N_1 + \mu_2 N_2. \qquad (1.58) $$

Plugging N_2 expressed from the first of Eqs. (57), N_2 = N – N_1, into Eq. (58), we see that

$$ \frac{dG}{dN_1} = \mu_1 - \mu_2, \qquad (1.59) $$

so the minimum of G is achieved at μ₁ = μ₂. Hence, in the conditions of fixed temperature and pressure, i.e. when G is the appropriate thermodynamic potential, the chemical potentials of the system parts should be equal – the so-called chemical equilibrium.

39 This name, of a historic origin, is misleading: as evident from Eq. (52), μ has a clear physical sense of the average energy cost of adding one more particle to a system with N >> 1.

40 Note that, strictly speaking, Eqs. (9), (15), (31), (35), and (39) should now be generalized by adding another lower index, N, to the corresponding derivatives; I will just imply them being calculated at constant N.
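The equilibrium condition μ₁ = μ₂ may be illustrated numerically. The sketch below (plain Python, with arbitrary illustrative numbers) uses two ideal-gas parts with fixed volumes at a common temperature, so that the minimized potential is the free energy F rather than G; by Eq. (53b), the equilibrium condition μ₁ = μ₂ is the same. With f(T) = 0 in Eq. (45), μ_i = (∂F_i/∂N_i)_{T,V} = T[ln(N_i/V_i) + 1], so equal chemical potentials mean equal densities:

```python
import math

# Two ideal-gas parts at a common temperature T, with fixed volumes V1, V2,
# exchanging particles with N1 + N2 = N = const (illustrative numbers)
T, V1, V2, N = 300.0, 1.0, 3.0, 1000.0

def F_total(N1):
    # F_i = -N_i*T*ln(V_i/N_i): Eq. (45) with f(T) = 0
    N2 = N - N1
    return N1*T*math.log(N1/V1) + N2*T*math.log(N2/V2)

# Brute-force minimization of the total free energy over a grid of N1
N1_eq = min((N*k/10000 for k in range(1, 10000)), key=F_total)

# mu_1 = mu_2 gives equal densities N1/V1 = N2/V2, i.e. N1 = N*V1/(V1 + V2)
assert abs(N1_eq - N*V1/(V1 + V2)) < 0.5     # N1_eq ≈ 250
```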


Finally, later in the course, we will also run into several cases when the volume V of a system, its temperature T, and the chemical potential μ are all fixed. (The last condition may be readily implemented by allowing the system of our interest to exchange particles with an environment so large that its μ stays constant.) The thermodynamic potential appropriate for this case may be obtained by subtraction of the product μN from the free energy F, resulting in the so-called grand thermodynamic (or “Landau”) potential:

$$ \Omega \equiv F - \mu N = F - \frac{G}{N}N = F - G = -PV. \qquad (1.60) $$

Grand potential: definition

Indeed, for a reversible process, the full differential of this potential is

$$ d\Omega = dF - d(\mu N) = (-S\,dT - P\,dV + \mu\,dN) - (\mu\,dN + N\,d\mu) = -S\,dT - P\,dV - N\,d\mu, \qquad (1.61) $$

Grand potential: differential

so if Ω has been calculated as a function of T, V, and μ, other thermodynamic variables may be found as

$$ S = -\left(\frac{\partial \Omega}{\partial T}\right)_{V,\mu}, \qquad P = -\left(\frac{\partial \Omega}{\partial V}\right)_{T,\mu}, \qquad N = -\left(\frac{\partial \Omega}{\partial \mu}\right)_{T,V}. \qquad (1.62) $$
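For the ideal classical gas, this chain may be checked end-to-end: Eq. (49) gives μ = G/N = –T ln(V/N) + f(T) + T; inverting this relation for N and using Ω = –PV = –NT, the last of Eqs. (62) should return the same N. A symbolic sketch assuming the third-party sympy library (an illustration, not part of the text):

```python
import sympy as sp

T, V, mu, N = sp.symbols('T V mu N', positive=True)
f = sp.Function('f')

# Chemical potential of the ideal classical gas, mu = G/N from Eq. (49)
mu_expr = -T*sp.log(V/N) + f(T) + T

# Invert for N(T, V, mu); then Omega = -PV = -NT by Eqs. (60) and (44)
N_of_mu = sp.solve(sp.Eq(mu, mu_expr), N)[0]
Omega = -N_of_mu*T

# Last of Eqs. (62): N = -(dOmega/dmu)_{T,V}
assert sp.simplify(-sp.diff(Omega, mu) - N_of_mu) == 0
```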

Now acting exactly as we have done for other potentials, it is straightforward to prove that an irreversible process with fixed T, V, and μ provides dΩ/dt ≤ 0, so the system’s equilibrium indeed corresponds to the minimum of its grand potential Ω. We will repeatedly use this fact in this course.

1.6. Thermal machines

In order to complete this brief review of thermodynamics, I cannot completely pass the topic of thermal machines – not because it will be used much in this course, but mostly because of its practical and historic significance.41 Figure 8a shows the generic scheme of a thermal machine that may perform mechanical work on its environment (in our notation, equal to –W) during each cycle of the expansion/compression of some “working gas”, by transferring different amounts of heat from a high-temperature heat bath (Q_H) and to the low-temperature bath (Q_L).

[Figure: panel (a) shows the “working gas” between heat baths at T_H and T_L, with heat flows Q_H and Q_L and work W; panel (b) shows the cycle on the P–V plane.]
Fig. 1.8. (a) The simplest implementation of a thermal machine, and (b) the graphic representation of the mechanical work it performs. On panel (b), the solid arrow indicates the heat engine cycle direction, while the dashed arrow, the refrigerator cycle direction.

41 The whole field of thermodynamics was spurred by the famous 1824 work by Nicolas Léonard Sadi Carnot, in which he, in particular, gave an alternative, indirect form of the 2nd law of thermodynamics – see below.


One relation between the three amounts Q_H, Q_L, and W is immediately given by the energy conservation (i.e. by the 1st law of thermodynamics):

$$ Q_H - Q_L = -W. \qquad (1.63) $$

From Eq. (1), the mechanical work during the cycle may be calculated as

$$ -W = \oint P\,dV, \qquad (1.64) $$

and hence represented by the area circumvented by the state-representing point on the [P, V] plane – see Fig. 8b. Note that the sign of this circular integral depends on the direction of the point’s rotation; in particular, the work (–W) done by the working gas is positive at its clockwise rotation (pertinent to heat engines) and negative in the opposite case (implemented in refrigerators and heat pumps – see below). Evidently, the work depends on the exact form of the cycle, which in turn may depend not only on T_H and T_L, but also on the working gas’ properties.

An exception to this rule is the famous Carnot cycle, consisting of two isothermal and two adiabatic processes (all reversible!). In its heat engine’s form, the cycle may start, for example, from an isothermic expansion of the working gas in contact with the hot bath (i.e. at T = T_H). It is followed by its additional adiabatic expansion (with the gas being disconnected from both heat baths) until its temperature drops to T_L. Then an isothermal compression of the gas is performed in its contact with the cold bath (at T = T_L), followed by its additional adiabatic compression to raise T to T_H again, after which the cycle is repeated again and again. Note that during this specific cycle, the working gas is never in contact with both heat baths simultaneously, thus avoiding the irreversible heat transfer between them. The cycle’s shape on the [V, P] plane (Fig. 9a) depends on the exact properties of the working gas and may be rather complicated. However, since the system’s entropy is constant at any adiabatic process, the Carnot cycle’s shape on the [S, T] plane is always rectangular – see Fig. 9b.

[Figure: the Carnot cycle (a) on the [V, P] plane, bounded by the isotherms T = T_H, T = T_L and the adiabats S = S_1, S = S_2, and (b) on the [S, T] plane, where it is a rectangle spanning S_1 ≤ S ≤ S_2, T_L ≤ T ≤ T_H.]
Fig. 1.9. Representations of the Carnot cycle: (a) on the [V, P] plane (schematically), and (b) on the [S, T] plane. The meaning of the arrows is the same as in Fig. 8.

Since during each isotherm, the working gas is brought into thermal contact only with the corresponding heat bath, i.e. its temperature is constant, the relation (19), dQ = TdS, may be immediately integrated to yield

$$ Q_H = T_H (S_2 - S_1), \qquad Q_L = T_L (S_2 - S_1). \qquad (1.65) $$

Hence the ratio of these two heat flows is completely determined by their temperature ratio:

$$ \frac{Q_H}{Q_L} = \frac{T_H}{T_L}, \qquad (1.66) $$


regardless of the working gas properties. Formulas (63) and (66) are sufficient to find the ratio of the work (–W) to any of Q_H and Q_L. For example, the main figure-of-merit of a thermal machine used as a heat engine (Q_H > 0, Q_L > 0, –W = |W| > 0) is its efficiency

$$ \eta \equiv \frac{|W|}{Q_H} = \frac{Q_H - Q_L}{Q_H} = 1 - \frac{Q_L}{Q_H} \le 1. \qquad (1.67) $$

Heat engine’s efficiency: definition

For the Carnot cycle, this definition, together with Eq. (66), immediately yields the famous relation

$$ \eta_{Carnot} = 1 - \frac{T_L}{T_H}, \qquad (1.68) $$

Carnot cycle’s efficiency

which shows that at a given T_L (that is typically the ambient temperature ~300 K), the efficiency may be increased, ultimately to 1, by raising the temperature T_H of the heat source.42

The unique nature of the Carnot cycle (see Fig. 9b again) makes its efficiency (68) the upper limit for any heat engine.43 Indeed, in this cycle, the transfer of heat between any heat bath and the working gas is performed reversibly, when their temperatures are equal. (If this is not so, some heat may flow from the hotter to the colder bath without performing any work.) In particular, it shows that η_max = 0 at T_H = T_L, i.e., no heat engine can perform mechanical work in the absence of temperature gradients.44

On the other hand, if the cycle is reversed (see the dashed arrows in Figs. 8 and 9), the same thermal machine may serve as a refrigerator, providing heat removal from the low-temperature bath (Q_L < 0) at the cost of consuming external mechanical work: W > 0. This reversal does not affect the basic relation (63), which now may be used to calculate the relevant figure-of-merit, called the cooling coefficient of performance (COP_cooling):

$$ COP_{cooling} \equiv \frac{|Q_L|}{W} = \frac{Q_L}{Q_H - Q_L}. \qquad (1.69) $$

Notice that this coefficient may be above unity; in particular, for the Carnot cycle we may use Eq. (66) (which is also unaffected by the cycle reversal) to get

$$ (COP_{cooling})_{Carnot} = \frac{T_L}{T_H - T_L}, \qquad (1.70) $$

so this value is larger than 1 at T_H < 2T_L, and may be even much larger when the temperature difference (T_H – T_L) sustained by the refrigerator tends to zero. For example, in a typical air-conditioning system, this difference is of the order of 10 K, while T_L ~ 300 K, so (T_H – T_L) ~ T_L/30, i.e. the Carnot value of COP_cooling is as high as ~30. (In the state-of-the-art commercial HVAC systems it is within the range of 3 to 4.) This is why the term “cooling efficiency”, used in some textbooks instead of COP_cooling, may be misleading.

Since in the reversed cycle Q_H = –W + Q_L < 0, i.e. the system provides heat flow into the high-temperature heat bath, it may be used as a heat pump for heating purposes. The figure-of-merit appropriate for this application is different from Eq. (69):

42 Semi-quantitatively, this trend is valid also for other, less efficient but more practicable heat engine cycles – see Problems 15-18. This trend is the leading reason why internal combustion engines, with T_H of the order of 1,500 K, are more efficient than steam engines, with T_H of at most a few hundred K.

43 In some alternative axiomatic systems of thermodynamics, this fact is postulated and serves the role of the 2nd law. This is why it is under persisting (predominantly, theoretical) attacks by suggestions of more efficient heat engines – notably, with quantum systems. To the best of my knowledge, reliable analyses of all the suggestions put forward so far have confirmed that the Carnot efficiency (68) cannot be exceeded even using quantum-mechanical cycles – see, e.g., the recent review by S. Bhattacharjee and A. Dutta, Eur. J. Phys. 94, 239 (2021).

44 Such a hypothetical heat engine that would violate the 2nd law of thermodynamics is called the “perpetual motion machine of the 2nd kind” – in contrast to any (also hypothetical) “perpetual motion machine of the 1st kind” that would violate the 1st law, i.e., the energy conservation.

$$ COP_{heating} \equiv \frac{|Q_H|}{W} = \frac{Q_H}{Q_H - Q_L}, \qquad (1.71) $$

so for the Carnot cycle, using Eq. (66) again, we get

$$ (COP_{heating})_{Carnot} = \frac{T_H}{T_H - T_L}. \qquad (1.72) $$
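For the air-conditioning numbers quoted above (T_L ~ 300 K, a ~10 K temperature difference), the Carnot figures of merit (68), (70), and (72) are easy to evaluate; a plain-Python sketch with these illustrative values:

```python
T_L, T_H = 300.0, 310.0          # kelvins: the text's air-conditioning example

eta = 1 - T_L/T_H                # Eq. (68): Carnot engine efficiency
cop_cooling = T_L/(T_H - T_L)    # Eq. (70)
cop_heating = T_H/(T_H - T_L)    # Eq. (72)

assert abs(cop_cooling - 30.0) < 1e-9    # the ~30 estimated in the text
# Eqs. (70) and (72) always differ by exactly 1, since Q_H - Q_L = -W:
assert abs(cop_heating - cop_cooling - 1) < 1e-9
```

Note how small the corresponding engine efficiency is: eta ≈ 0.032, the flip side of the very large COPs at small temperature differences.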

Note that this COP is always larger than 1, meaning that the Carnot heat pump is always more efficient than the direct conversion of work into heat (when QH = –W, so COPheating = 1), though practical electricity-driven heat pumps are substantially more complex and hence more expensive than simple electric heaters. Such heat pumps, with typical COPheating values of around 4 in summer and 2 in winter, are frequently used for heating large buildings. Finally, note that according to Eq. (70), the COPcooling of the Carnot cycle tends to zero at TL  0, making it impossible to reach the absolute zero of temperature, and hence illustrating the meaningful (Nernst’s) formulation of the 3rd law of thermodynamics, cited in Sec. 3. Indeed, let us prescribe a finite but very large heat capacity C(T) to the low-temperature bath, and use the definition of this variable to write the following expression for the relatively small change of its temperature as a result of dn similar refrigeration cycles: C (TL )dTL  QL dn . (1.73) Together with Eq. (66), this relation yields QH C (TL )dTL dn .  TL TH

(1.74)

If TL  0, so TH >>TL and  QH   –W = const, the right-hand side of this equation does not depend on TL, so if we integrate it over many (n >> 1) cycles, getting the following simple relation between the initial and final values of TL: Tfin

QH C (T )dT n.  T TH Tini



(1.75)

For example, if C(T) is a constant, Eq. (75) yields an exponential law,  QH  Tfin  Tini exp n , (1.76) CT H   with the absolute zero of temperature not reached as any finite n. Even for an arbitrary function C(T) that does not vanish at T  0, Eq. (74) proves the Nernst theorem, because dn diverges at TL  0.
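The cooldown law (76) is easy to probe numerically. The sketch below (not from the text; all parameter values are arbitrary, in units with kB = 1) iterates the refrigeration cycles one by one per Eq. (74) and compares the result with the closed-form exponential:

```python
import math

# Illustrative parameters (arbitrary units, chosen only for this sketch):
T_H = 300.0      # hot-bath temperature
Q_H = 1.0        # |Q_H|, heat dumped into the hot bath per cycle
C = 50.0         # temperature-independent heat capacity of the cold bath

def T_after(n, T_ini=100.0):
    """Closed form of Eq. (1.76): T_fin = T_ini * exp(-|Q_H| n / (C T_H))."""
    return T_ini * math.exp(-Q_H * n / (C * T_H))

def T_iterated(n, T_ini=100.0):
    """Cycle-by-cycle cooldown: each Carnot cycle removes Q_L = |Q_H| T_L/T_H
    from the cold bath, so C dT_L = -|Q_H| (T_L/T_H) dn  -- Eq. (1.74)."""
    T = T_ini
    for _ in range(n):
        T -= Q_H * T / (T_H * C)
    return T

# The discrete iteration tracks the exponential law closely, and T stays
# positive for any finite n - an illustration of the Nernst theorem:
print(T_after(10000), T_iterated(10000))
```

The per-cycle temperature step shrinks proportionally to TL itself, which is exactly why the absolute zero is never reached in a finite number of cycles.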


1.7. Exercise problems

1.1. Two bodies, with temperature-independent heat capacities C1 and C2, and different initial temperatures T1 and T2, are placed into a weak thermal contact. Calculate the change of the total entropy of the system before it reaches thermal equilibrium.

1.2. A gas portion has the following properties:
(i) its heat capacity CV = aT^b, and
(ii) the work W_T needed for its isothermal compression from V2 to V1 equals cT ln(V2/V1),
where a, b, and c are some constants. Find the equation of state of the gas, and calculate the temperature dependences of its entropy S and thermodynamic potentials E, H, F, G, and Ω.

1.3. A volume with an ideal classical gas of similar molecules is separated into two parts with a partition so that the number N of molecules in each part is the same, but their volumes are different. The gas is initially in thermal equilibrium with the environment, and its pressure in one part is P1, and in the other part, P2. Calculate the change of entropy resulting from a fast removal of the partition.

1.4. An ideal classical gas of N particles is initially confined to volume V, and is in thermal equilibrium with a heat bath of temperature T. Then the gas is allowed to expand to volume V' > V in one of the following ways:
(i) The expansion is slow, so due to the sustained thermal contact with the heat bath, the gas temperature remains equal to T.
(ii) The partition separating the volumes V and (V' – V) is removed very fast, allowing the gas to expand rapidly.
For each case, calculate the changes of pressure, temperature, energy, and entropy of the gas during its expansion, and compare the results.

1.5. For an ideal classical gas with temperature-independent specific heat, derive the relation between P and V at its adiabatic expansion/compression.

1.6. Calculate the speed and the wave impedance of acoustic waves propagating in an ideal classical gas with temperature-independent specific heat, in the limits when the propagation may be treated as:
(i) an isothermal process, and
(ii) an adiabatic process.
Which of these limits is achieved at higher wave frequencies?

1.7. As will be discussed in Sec. 3.5, the so-called "hardball" models of classical particle interaction yield the following equation of state of a gas of such particles:

$$ P = T\varphi(n), $$

where n = N/V is the particle density, and the function φ(n) is generally different from that (φ_ideal(n) = n) of the ideal gas, but still independent of temperature. For such a gas, with a temperature-independent cV, calculate:
(i) the energy of the gas, and
(ii) its pressure as a function of n at an adiabatic compression.

1.8. For an arbitrary thermodynamic system with a fixed number of particles, prove the four Maxwell relations (already mentioned in Sec. 4):

$$ (i): \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V, \qquad (ii): \left(\frac{\partial V}{\partial S}\right)_P = \left(\frac{\partial T}{\partial P}\right)_S, $$

$$ (iii): \left(\frac{\partial S}{\partial P}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_P, \qquad (iv): \left(\frac{\partial P}{\partial S}\right)_V = -\left(\frac{\partial T}{\partial V}\right)_S, $$

and also the following formula:

$$ (v): \left(\frac{\partial E}{\partial V}\right)_T = T\left(\frac{\partial P}{\partial T}\right)_V - P. $$
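As an independent sanity check of formula (v), one may test it numerically on a gas with a known caloric equation of state. The sketch below (mine, not part of the problem set) uses the van der Waals model, whose pressure and energy expressions are standard results assumed here (kB = 1; all parameter values arbitrary):

```python
# Numerical check of formula (v) for the van der Waals gas, whose energy
# E = (3/2) N T - a N^2 / V is assumed here (a standard result, k_B = 1).

N, a, b = 2.0, 0.5, 0.01   # illustrative parameter values

def P(V, T):
    return N*T/(V - N*b) - a*N**2/V**2

def E(V, T):
    return 1.5*N*T - a*N**2/V

def d(f, x, h=1e-6):       # simple central finite difference
    return (f(x + h) - f(x - h)) / (2*h)

V0, T0 = 3.0, 1.7
lhs = d(lambda V: E(V, T0), V0)                   # (dE/dV)_T
rhs = T0 * d(lambda T: P(V0, T), T0) - P(V0, T0)  # T (dP/dT)_V - P
print(lhs, rhs)  # both should equal a N^2 / V^2
```

Only the attraction term survives: for the ideal gas (a = 0) both sides vanish, reproducing the familiar volume-independence of the ideal-gas energy.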

1.9. Express the heat capacity difference (CP – CV) via the equation of state P = P(V, T) of the system.

1.10. Prove that the isothermal compressibility45

$$ \kappa_T \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T,N} $$

of a system of N similar particles may be expressed in two different ways:

$$ \kappa_T = \frac{V^2}{N^2}\left(\frac{\partial^2 P}{\partial \mu^2}\right)_T = \frac{V}{N^2}\left(\frac{\partial N}{\partial \mu}\right)_{T,V}. $$

1.11. Throttling46 is gas expansion by driving it through either a small hole (called the throttling valve) or a porous partition, by an externally sustained constant difference of pressure on two sides of such an obstacle.
(i) Prove that in the absence of heat exchange with the environment, the enthalpy of the transferred gas does not change.
(ii) Express the so-called Joule-Thomson coefficient (∂T/∂P)_H, which characterizes the gas temperature change at throttling, via its thermal expansion coefficient α ≡ (1/V)(∂V/∂T)_P.

1.12. A system with a fixed number of particles is in thermal and mechanical contact with its environment of temperature T0 and pressure P0. Assuming that the internal relaxation of the system is sufficiently fast, derive the conditions of stability of its equilibrium with the environment with respect to small perturbations.

45 Note that the compressibility is just the reciprocal bulk modulus, κ = 1/K – see, e.g., CM Sec. 7.3.
46 Sometimes it is called the Joule-Thomson process, though more typically, the latter term refers to the possible gas cooling at the throttling.


1.13. Derive the analog of the relation for the difference (CP – CV), whose derivation was the task of Problem 9, for a fixed-volume sample with a uniform magnetization M parallel to the uniform external field H. Spell out this result for a paramagnet that obeys the Curie law M ∝ H/T – the relation to be derived and discussed later in this course.

1.14. Two bodies have equal temperature-independent heat capacities C, but different temperatures, T1 and T2. Calculate the maximum mechanical work obtainable from this system by using a heat engine.

1.15. Express the efficiency η of a heat engine that uses the so-called Joule (or "Brayton") cycle, consisting of two adiabatic and two isobaric processes (see the figure on the right: a cycle running between the isobars P = Pmax and P = Pmin, connected by two S = const branches), via the minimum and maximum values of pressure, and compare the result with ηCarnot. Assume an ideal classical working gas with temperature-independent CP and CV.

1.16. Calculate the efficiency of a heat engine using the Otto cycle,47 which consists of two adiabatic (S = const) and two isochoric (constant-volume, V = V0 and V = rV0) reversible processes – see the figure on the right. Explore how the efficiency depends on the compression ratio r ≡ Vmax/Vmin, and compare it with the Carnot cycle's efficiency. Assume an ideal classical working gas with temperature-independent heat capacity.

1.17. The Diesel cycle (an approximate model of the Diesel internal combustion engine's operation) consists of two adiabatic processes, one isochoric process, and one isobaric process – see the figure on the right (the loop 1→2→3→4→1, with S = const on the branches 1→2 and 3→4, P = const on 2→3, and V = const on 4→1). Assuming an ideal working gas with temperature-independent CV and CP, express the cycle's efficiency η via its two dimensionless parameters: the so-called cutoff ratio λ ≡ V3/V2 > 1 and the compression ratio r ≡ V1/V2 > λ.

1.18. A heat engine's cycle consists of two isothermal (T = const) and two isochoric (V = const) processes – see the figure on the right (the isotherms at T = TH and T = TL, and the isochores at V = V1 and V = V2).48
(i) Assuming that the working gas is an ideal classical gas of N particles, calculate the mechanical work performed by the engine during one cycle.
(ii) Are the specified conditions sufficient to calculate the engine's efficiency? (Justify your answer.)

47 This name stems from the fact that the cycle is an approximate model of operation of the four-stroke internal combustion engine, which was improved and made practicable (though not invented!) by N. Otto in 1876.
48 The reversed cycle of this type is a reasonable approximation for the operation of the Stirling and Gifford-McMahon (GM) refrigerators, broadly used for cryocooling – for a recent review see, e.g., A. de Waele, J. Low Temp. Phys. 164, 179 (2011).
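As a numerical cross-check of the Otto-cycle analysis requested in Problem 16, the following sketch (mine, not the book's: it assumes an ideal monatomic classical gas with γ = 5/3, kB = 1, and arbitrary illustrative temperatures) computes the efficiency directly from the heats exchanged on the isochores and compares it with the well-known closed form 1 – r^(1–γ):

```python
# Numerical cross-check for the Otto cycle (Problem 16).
gamma = 5.0 / 3.0
r = 8.0                      # compression ratio r = Vmax/Vmin (illustrative)

# Temperatures at the small volume V0, before and after the isochoric heating:
T2, T3 = 300.0, 1500.0       # arbitrary values, T3 > T2

# On the adiabats, T * V^(gamma - 1) = const:
T1 = T2 * r**(1.0 - gamma)   # state at V = r*V0, before the compression stroke
T4 = T3 * r**(1.0 - gamma)   # state at V = r*V0, after the expansion stroke

# Heat is exchanged only on the isochores (no work there), so Q = CV * dT:
Q_in = T3 - T2               # in units of CV
Q_out = T4 - T1
eta = 1.0 - Q_out / Q_in

print(eta, 1.0 - r**(1.0 - gamma))   # matches the well-known closed form
```

The efficiency depends only on the compression ratio r, and stays well below the Carnot value 1 – T1/T3 evaluated between the extreme temperatures of the cycle.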

Chapter 2. Principles of Physical Statistics

This chapter is the keystone of this course. It starts with a brief discussion of such basic notions of statistical physics as statistical ensembles, probability, and ergodicity. Then the so-called microcanonical distribution postulate is formulated, simultaneously with the statistical definition of entropy. This basis enables a ready derivation of the famous Gibbs ("canonical") distribution – the most frequently used tool of statistical physics. Then we will discuss one more, "grand canonical" distribution, which is more convenient for some tasks. In particular, it is immediately used for the derivation of the most important Boltzmann, Fermi-Dirac, and Bose-Einstein statistics of independent particles, which will be repeatedly utilized in the following chapters.

2.1. Statistical ensembles and probability

As was already discussed in Sec. 1.1, statistical physics deals with situations when either unknown initial conditions, or the system's complexity, or the laws of its motion (as in the case of quantum mechanics) do not allow a definite prediction of measurement results. The main formalism for the analysis of such systems is the probability theory, so let me start with a very brief review of its basic concepts, using an informal "physical" language – less rigorous but (hopefully) more transparent than standard mathematical treatments,1 and quite sufficient for our purposes.

Consider N >> 1 independent similar experiments carried out with apparently similar systems (i.e. systems with identical macroscopic parameters such as volume, pressure, etc.), but still giving, for any of the reasons listed above, different results of measurements. Such a collection of experiments, together with a fixed method of result processing, is a good example of a statistical ensemble. Let us start from the case when each experiment may have M different discrete outcomes, and the number of experiments giving these outcomes is N1, N2,…, NM, so

$$ \sum_{m=1}^{M} N_m = N. \tag{2.1} $$

The probability of each outcome, for the given statistical ensemble, is then defined as

$$ W_m \equiv \lim_{N \to \infty} \frac{N_m}{N}. \tag{2.2} $$

Though this definition is so close to our everyday experience that it is almost self-evident, a few remarks may still be relevant.

First, the probabilities Wm depend on the exact statistical ensemble they are defined for, notably including the method of result processing. As the simplest example, consider throwing the standard cubic-shaped dice many times. For the ensemble of all thrown and counted dice, the probability of each outcome (say, "1") is 1/6. However, nothing prevents us from defining another statistical ensemble of dice-throwing experiments in which all outcomes "1" are discounted. Evidently, the probability of finding the outcome "1" in this modified (but legitimate) ensemble is 0, while for all other five outcomes ("2" to "6"), it is 1/5 rather than 1/6.

Second, a statistical ensemble does not necessarily require N similar physical systems, e.g., N distinct dice. It is intuitively clear that tossing the same die N times constitutes an ensemble with similar statistical properties. More generally, a set of N experiments with the same system gives a statistical ensemble equivalent to the set of experiments with N different systems, provided that the experiments are kept independent, i.e. that outcomes of past experiments do not affect the experiments to follow. Moreover, for many physical systems of interest, no special preparation of each new experiment is necessary, and N experiments separated by sufficiently long time intervals form a "good" statistical ensemble – the property called ergodicity.2

Third, the reference to infinite N in Eq. (2) does not strip the notion of probability of its practical relevance. Indeed, it is easy to prove (see Chapter 5) that, at very general conditions, at finite but sufficiently large N, the numbers Nm are approaching their average (or expectation) values3

$$ \langle N_m \rangle = W_m N, \tag{2.3} $$

1 For the reader interested in a more rigorous approach, I can recommend, for example, Chapter 18 of the famous handbook by G. Korn and T. Korn – see MA Sec. 16(ii).

© K. Likharev

with the relative deviations decreasing as ~1/N_m^{1/2}, i.e. as 1/N^{1/2}.

Now let me list those properties of probabilities that we will immediately need. First, dividing both sides of Eq. (1) by N and following the limit N → ∞, we get the well-known normalization condition

$$ \sum_{m=1}^{M} W_m = 1; \tag{2.4} $$

just remember that it is true only if each experiment definitely yields one of the outcomes N1, N2,…, NM.

Second, if we have an additive function of the results,

$$ f = \frac{1}{N}\sum_{m=1}^{M} N_m f_m, \tag{2.5} $$

where fm are some definite (deterministic) coefficients, the statistical average (also called the expectation value) of this function is naturally defined as

$$ \langle f \rangle \equiv \lim_{N \to \infty} \frac{1}{N}\sum_{m=1}^{M} \langle N_m \rangle f_m, \tag{2.6} $$

so using Eq. (3) we get

$$ \langle f \rangle = \sum_{m=1}^{M} W_m f_m. \tag{2.7} $$

2 The most popular counter-examples are provided by some energy-conserving systems. Consider, for example, a system of particles placed in a potential that is a quadratic-polynomial function of its coordinates. The theory of oscillations tells us (see, e.g., CM Sec. 6.2) that this system is equivalent to a set of non-interacting harmonic oscillators. Each of these oscillators conserves its own initial energy Ej forever, so the statistics of N measurements of one such system may differ from that of N different systems with a random distribution of Ej, even if the total energy of the system, E = ΣjEj, is the same. Such non-ergodicity, however, is a rather feeble phenomenon and is readily destroyed by any of many mechanisms, such as weak interaction with the environment (leading, in particular, to oscillation damping), potential anharmonicity (see, e.g., CM Chapter 5), and chaos (CM Chapter 9), all of them growing fast with the number of particles in the system, i.e. the number of its degrees of freedom. This is why an overwhelming part of real-life systems are ergodic; for readers interested in non-ergodic exotics, I can recommend the monograph by V. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, Addison-Wesley, 1989.

3 Here (and everywhere in this series) angle brackets ⟨…⟩ mean averaging over a statistical ensemble, which is generally different from averaging over time – as it will be the case in quite a few examples considered below.

Note that Eq. (3) may be considered as the particular form of this general result, when all fm = 1. Eq. (5) with these fm defines what is sometimes called the counting function.

Next, the spectrum of possible experimental outcomes is frequently continuous for all practical purposes. (Think, for example, about the set of positions of the marks left by bullets fired into a target from afar.) The above formulas may be readily generalized to this case; let us start from the simplest situation when all different outcomes may be described by just one continuous scalar variable q, which replaces the discrete index m in Eqs. (1)-(7). The basic relation for this case is the self-evident fact that the probability dW of having an outcome within a very small interval dq surrounding some point q is proportional to the magnitude of that interval:

$$ dW = w(q)\,dq, \tag{2.8} $$

where w(q) is some function of q that does not depend on dq. This function is called the probability density. Now all the above formulas may be recast by replacing the probabilities Wm with the products (8), and the summation over m, with the integration over q. In particular, instead of Eq. (4), the normalization condition now becomes

$$ \int w(q)\,dq = 1, \tag{2.9} $$

where the integration should be extended over the whole range of possible values of q. Similarly, instead of the discrete values fm participating in Eq. (5), it is natural to consider a function f(q). Then instead of Eq. (7), the expectation value of the function may be calculated as

$$ \langle f \rangle = \int w(q) f(q)\,dq. \tag{2.10} $$

It is also straightforward to generalize these formulas to the case of more variables. For example, the state of a classical particle with three degrees of freedom may be fully described by the probability density w defined in the 6D space of its generalized radius-vector q and momentum p. As a result, the expectation value of a function of these variables may be expressed as a 6D integral

$$ \langle f \rangle = \int w(\mathbf{q}, \mathbf{p})\, f(\mathbf{q}, \mathbf{p})\, d^3q\, d^3p. \tag{2.11} $$
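The continuous versions (9)-(10) may be illustrated by direct numerical integration; the Gaussian probability density below is just an example of mine (trapezoid rule, arbitrary σ):

```python
# A small check of Eqs. (2.9)-(2.10) for a continuous distribution: with a
# Gaussian probability density w(q), <q^2> computed by numerical integration
# equals the known variance sigma^2.
import math

sigma = 0.7
w = lambda q: math.exp(-q*q / (2*sigma**2)) / (sigma * math.sqrt(2*math.pi))

def integrate(f, a=-10.0, b=10.0, n=200_000):
    """Plain trapezoid rule on [a, b]; the range covers all relevant q."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return s * h

norm = integrate(w)                     # Eq. (2.9): should be 1
q2 = integrate(lambda q: w(q) * q*q)    # Eq. (2.10) with f(q) = q^2
print(norm, q2)                         # approximately 1 and sigma**2
```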

Some systems considered in this course consist of components whose quantum properties cannot be ignored, so let us discuss how ⟨f⟩ should be calculated in this case. If by fm we mean measurement results, then Eq. (7) (and its generalizations) remains valid, but since these numbers themselves may be affected by the intrinsic quantum-mechanical uncertainty, it may make sense to have a bit deeper look into this situation. Quantum mechanics tells us4 that the most general expression for the expectation value of an observable f in a certain ensemble of macroscopically similar systems is

$$ \langle f \rangle = \sum_{m,m'} W_{mm'} f_{m'm} \equiv \mathrm{Tr}(\mathsf{W}\mathsf{f}). \tag{2.12} $$

Here fmm' are the matrix elements of the quantum-mechanical operator f̂ corresponding to the observable f, in a full basis of orthonormal states m,

$$ f_{mm'} = \langle m |\hat{f}| m' \rangle, \tag{2.13} $$

while the coefficients Wmm' are the elements of the so-called density matrix W, which represents, in the same basis, the density operator Ŵ describing properties of this ensemble. Eq. (12) is evidently more general than Eq. (7), and is reduced to it only if the density matrix is diagonal:

$$ W_{mm'} = W_m \delta_{mm'} \tag{2.14} $$

(where δmm' is the Kronecker symbol), when the diagonal elements Wm play the role of probabilities of the corresponding states.

Thus formally, the largest difference between the quantum and classical description is the presence, in Eq. (12), of the off-diagonal elements of the density matrix. They have the largest values in a pure (also called "coherent") ensemble, in which the state of the system may be described with state vectors, e.g., the ket-vector

$$ |\alpha\rangle = \sum_m \alpha_m |m\rangle, \tag{2.15} $$

where αm are some (generally, complex) coefficients. In this case, the density matrix elements are merely

$$ W_{mm'} = \alpha_m \alpha_{m'}^*, \tag{2.16} $$

so the off-diagonal elements are of the same order as the diagonal elements. For example, in the very important particular case of a two-level system, the pure-state density matrix is

$$ \mathsf{W} = \begin{pmatrix} \alpha_1\alpha_1^* & \alpha_1\alpha_2^* \\ \alpha_2\alpha_1^* & \alpha_2\alpha_2^* \end{pmatrix}, \tag{2.17} $$

4 See, e.g., QM Sec. 7.1.
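The role of the off-diagonal elements in Eq. (12) may be illustrated by a toy two-level calculation (mine, not from the text): for an observable with only off-diagonal matrix elements, the pure-state density matrix (17) gives a nonzero average, while the corresponding classical mixture gives zero. The amplitudes below are arbitrary real numbers:

```python
# A tiny numerical illustration of Eq. (2.12), <f> = Tr(W f), for a two-level
# system: pure state of Eq. (2.17) vs. the "classical mixture" obtained by
# zeroing the off-diagonal elements. 2x2 matrices as nested lists.
a1, a2 = 0.6, 0.8      # normalized amplitudes: |a1|^2 + |a2|^2 = 1 (real here)

W_pure = [[a1*a1, a1*a2],
          [a2*a1, a2*a2]]                      # Eq. (2.17) with real a_m
W_mix = [[W_pure[0][0], 0], [0, W_pure[1][1]]]  # classical mixture

f = [[0, 1], [1, 0]]   # an observable with purely off-diagonal elements

def trace_product(W, F):
    """Tr(W F) = sum over m, m' of W[m][m'] F[m'][m] -- Eq. (2.12)."""
    return sum(W[m][k]*F[k][m] for m in range(2) for k in range(2))

print(trace_product(W_pure, f), trace_product(W_mix, f))
```

The mixture loses exactly the interference (off-diagonal) contribution, which is the formal content of the "pure vs. mixture" distinction discussed next.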

so the product of its off-diagonal components is as large as that of the diagonal components.

In the most important basis of stationary states, i.e. the eigenstates of the system's time-independent Hamiltonian, the coefficients αm oscillate in time as5

$$ \alpha_m(t) = \alpha_m(0)\exp\left\{-i\frac{E_m}{\hbar}t\right\} \equiv |\alpha_m| \exp\left\{-i\frac{E_m}{\hbar}t + i\varphi_m\right\}, \tag{2.18} $$

where Em are the corresponding eigenenergies, φm are constant phase shifts, and ℏ is the Planck constant. This means that while the diagonal terms of the density matrix (16) remain constant, its off-diagonal components are oscillating functions of time:

5 Here I use the Schrödinger picture of quantum dynamics, in which the matrix elements fnn' representing quantum-mechanical operators do not evolve in time. The final results of this discussion do not depend on the particular picture – see, e.g., QM Sec. 4.6.

$$ W_{mm'} = \alpha_m \alpha_{m'}^* = |\alpha_m||\alpha_{m'}| \exp\left\{-i\frac{E_m - E_{m'}}{\hbar}t\right\} \exp\{i(\varphi_m - \varphi_{m'})\}. \tag{2.19} $$

Due to the extreme smallness of ℏ (on the human scale of things), minuscule random perturbations of eigenenergies are equivalent to substantial random changes of the phase multipliers, so the time average of any off-diagonal matrix element tends to zero. Moreover, even if our statistical ensemble consists of systems with exactly the same Em, but different values φm (which are typically hard to control at the initial preparation of the system), the average values of all Wmm' (with m ≠ m') vanish again. This is why, besides some very special cases, typical statistical ensembles of quantum particles are far from being pure, and in most cases (certainly including the thermodynamic equilibrium), a good approximation for their description is given by the opposite limit of the so-called classical mixture, in which all off-diagonal matrix elements of the density matrix equal zero, and its diagonal elements Wmm are merely the probabilities Wm of the corresponding eigenstates. In this case, for the observables compatible with energy, Eq. (12) is reduced to Eq. (7), with fm being the eigenvalues of the variable f, so we may base our further discussion on this key relation and its continuous extensions (10)-(11).

2.2. Microcanonical ensemble and distribution

Now we may move to the now-standard approach to statistical mechanics, based on the three statistical ensembles introduced in the 1870s by Josiah Willard Gibbs.6 The most basic of them is the so-called microcanonical statistical ensemble,7 defined as a set of macroscopically similar closed (isolated) systems with virtually the same total energy E. Since in quantum mechanics the energy of a closed system is quantized, in order to make the forthcoming discussion suitable for quantum systems as well, it is convenient to include in the ensemble all systems with energies Em within a relatively narrow interval ΔE << E that is still much larger than the average distance δE between the energy levels, so that the number M of such states is large, M >> 1. Such a choice of ΔE is only possible if δE << ΔE << E. The microcanonical distribution postulates that in thermodynamic equilibrium, all these M states are equally probable:

$$ W_m = \frac{1}{M}. \tag{2.20} $$

With this postulate, the entropy of the system may be defined statistically as

$$ S \equiv \ln M, \tag{2.24a} $$

i.e., with Eq. (20),

$$ S = -\ln W_m = \ln\frac{1}{W_m}. \tag{2.24b} $$

Now let us consider a statistical ensemble of N >> 1 similar systems, but with a certain number Nm = WmN >> 1 of systems in each of M states, with the factors Wm not necessarily equal. In this case, the evident generalization of Eq. (24) is that the entropy SN of the whole ensemble is

$$ S_N \equiv \ln M(N_1, N_2, \ldots), \tag{2.25} $$


where M(N1, N2,…) is the number of ways to commit a particular system to a certain state m while keeping all numbers Nm fixed. This number M(N1, N2,…) is clearly equal to the number of ways to distribute N distinct balls between M different boxes, with the fixed number Nm of balls in each box, but

11 This is of course just the change of a constant factor: S(M) = ln M = ln2 × log₂M = ln2 × I(M) ≈ 0.693 I(M). A review of Chapter 1 shows that nothing in thermodynamics prevents us from choosing such a constant coefficient arbitrarily, with the corresponding change of the temperature scale – see Eq. (1.9). In particular, in the SI units, where Eq. (24b) becomes S = –kB ln Wm, one bit of information corresponds to the entropy change ΔS = kB ln2 ≈ 0.693 kB ≈ 0.965×10⁻²³ J/K. (The formula "S = k logW" is engraved on L. Boltzmann's tombstone in Vienna.)


in no particular order within it. Comparing this description with the definition of the so-called multinomial coefficients,12 we get

$$ M(N_1, N_2, \ldots) = {}^N C_{N_1, N_2, \ldots, N_M} \equiv \frac{N!}{N_1!\, N_2! \ldots N_M!}, \quad \text{with } N = \sum_{m=1}^{M} N_m. \tag{2.26} $$

To simplify the resulting expression for SN, we can use the famous Stirling formula, in its crudest, de Moivre's form,13 whose accuracy is suitable for most purposes of statistical physics:

$$ \ln(N!)\big|_{N \to \infty} \approx N(\ln N - 1). \tag{2.27} $$

When applied to our current problem, this formula gives the following average entropy per system,14

$$ S \equiv \frac{S_N}{N} = \frac{1}{N}\left[\ln(N!) - \sum_{m=1}^{M}\ln(N_m!)\right] \approx \frac{1}{N}\left[N(\ln N - 1) - \sum_{m=1}^{M} N_m(\ln N_m - 1)\right] = -\sum_{m=1}^{M}\frac{N_m}{N}\ln\frac{N_m}{N}, \tag{2.28} $$

and since this result is only valid in the limit Nm → ∞ anyway, we may use Eq. (2) to represent it as

$$ S = -\sum_{m=1}^{M} W_m \ln W_m = \sum_{m=1}^{M} W_m \ln\frac{1}{W_m}. \tag{2.29} $$

This extremely important result15 may be interpreted as the average of the entropy values given by Eq. (24), weighed with specific probabilities Wm per the general formula (7).16

Now let us find what distribution of probabilities Wm provides the largest value of the entropy (29). The answer is almost evident from a good glance at Eq. (29). For example, if for a subgroup of M' ≤ M states the coefficients Wm are constant and equal to 1/M', while Wm = 0 for all other states, all M' nonzero terms in the sum (29) are equal to each other, so

$$ S = M' \cdot \frac{1}{M'}\ln M' = \ln M', \tag{2.30} $$

and the closer M' is to its maximum value M, the larger S. Hence, the maximum of S is reached at the uniform distribution given by Eq. (24).
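The passage from the exact combinatorial entropy (25)-(26) to the limit (29) can be probed numerically; the sketch below (with an arbitrarily chosen probability set) evaluates ln M via log-gamma functions:

```python
# A numerical illustration of Eqs. (2.26)-(2.29): the exact entropy per system,
# ln M(N1, N2, ...)/N computed from factorials, approaches -sum_m W_m ln W_m
# as N grows. The probabilities below are arbitrary.
import math

W = [0.5, 0.3, 0.2]

def entropy_exact(N):
    """S = S_N / N from Eqs. (2.25)-(2.26), with N_m = W_m N."""
    Nm = [int(round(w * N)) for w in W]
    lnM = math.lgamma(sum(Nm) + 1) - sum(math.lgamma(n + 1) for n in Nm)
    return lnM / sum(Nm)

S_limit = -sum(w * math.log(w) for w in W)   # Eq. (2.29)
for N in (100, 10_000, 1_000_000):
    print(N, entropy_exact(N), S_limit)
```

The residual difference shrinks as ~(ln N)/N, i.e. exactly at the accuracy level of the de Moivre form (27) of the Stirling formula.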

12 See, e.g., MA Eq. (2.3). Despite the intimidating name, Eq. (26) may be very simply derived. Indeed, N! is just the number of all possible permutations of N balls, i.e. of the ways to place them in certain positions – say, inside M boxes. Now to take into account that the particular order of the balls in each box is not important, that number should be divided by all numbers Nm! of possible permutations of balls within each box – that's it!
13 See, e.g., MA Eq. (2.10).
14 Strictly speaking, I should use the notation ⟨S⟩ here. However, following the style accepted in thermodynamics, I will drop the averaging signs until we really need them to avoid confusion. Again, this shorthand is not too bad because the relative fluctuations of entropy (as those of any macroscopic variable) are very small at N >> 1.
15 With the replacement of ln Wm with log₂ Wm (i.e. division of both sides by ln 2), Eq. (29) becomes the famous Shannon (or "Boltzmann-Shannon") formula for the average information I per symbol in a long communication string using M different symbols, with probability Wm each.
16 In some textbooks, this interpretation is even accepted as the derivation of Eq. (29); however, it is evidently less rigorous than the one outlined above.
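The conclusion that the uniform distribution maximizes the entropy (29) may also be probed by crude random sampling (a sketch of mine, not a proof; the sampling scheme is ad hoc):

```python
# A numerical look at Eqs. (2.29)-(2.30): among many random probability
# distributions over M states, none exceeds the uniform-distribution entropy
# S = ln M of Eq. (2.24).
import math
import random

random.seed(1)
M = 5

def entropy(W):
    return -sum(w * math.log(w) for w in W if w > 0)   # Eq. (2.29)

S_uniform = math.log(M)            # Eq. (2.24): S = ln M at W_m = 1/M
best = 0.0
for _ in range(10_000):
    x = [random.random() for _ in range(M)]
    W = [v / sum(x) for v in x]    # a random normalized distribution
    best = max(best, entropy(W))

print(best, S_uniform)             # best stays strictly below ln M
```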


In order to prove this important fact more strictly, let us find the maximum of the function given by Eq. (29). If its arguments W1, W2, …, WM were completely independent, this could be done by finding the point (in the M-dimensional space of the coefficients Wm) where all partial derivatives ∂S/∂Wm equal zero. However, since the probabilities are constrained by the condition (4), the differentiation has to be carried out more carefully, taking this interdependence into account:

$$ \left[\frac{\partial S(W_1, W_2, \ldots)}{\partial W_m}\right]_{\text{cond}} = \frac{\partial S}{\partial W_m} + \sum_{m' \neq m}\frac{\partial S}{\partial W_{m'}}\frac{\partial W_{m'}}{\partial W_m}. \tag{2.31} $$

At the maximum of the function S, such expressions should be equal to zero for all m. This condition yields ∂S/∂Wm = λ, where the so-called Lagrange multiplier λ is independent of m. Indeed, at such a point Eq. (31) becomes

$$ \left[\frac{\partial S(W_1, W_2, \ldots)}{\partial W_m}\right]_{\text{cond}} = \lambda\frac{\partial W_m}{\partial W_m} + \sum_{m' \neq m}\lambda\frac{\partial W_{m'}}{\partial W_m} \equiv \lambda\frac{\partial}{\partial W_m}\sum_{m'} W_{m'} = \lambda\frac{\partial}{\partial W_m}(1) = 0. \tag{2.32} $$

For our particular expression (29), the condition ∂S/∂Wm = λ yields

$$ \frac{\partial S}{\partial W_m} \equiv -\frac{d}{dW_m}\left(W_m \ln W_m\right) = -(\ln W_m + 1) = \lambda. \tag{2.33} $$

The last equality holds for all m (and hence the entropy reaches its maximum value) only if Wm is independent of m. Thus the entropy (29) indeed reaches its maximum value (24) at equilibrium.

To summarize, we see that the statistical definition (24) of entropy does fit all the requirements imposed on this variable by thermodynamics. In particular, we have been able to prove the 2nd law of thermodynamics using that definition together with the fundamental postulate (20).

Now let me discuss one possible point of discomfort with that definition: the values of M, and hence Wm, depend on the accepted energy interval ΔE of the microcanonical ensemble, for whose choice no exact guidance is offered. However, if the interval ΔE contains many states, M >> 1, as was assumed before, then with a very small relative error (vanishing in the limit M → ∞), M may be represented as

$$ M = g(E)\,\Delta E, \tag{2.34} $$

where g(E) is the density of states of the system:

$$ g(E) \equiv \frac{d\Sigma(E)}{dE}, \tag{2.35} $$

Σ(E) being the total number of states with energies below E. (Note that the average interval δE between energy levels, mentioned at the beginning of this section, is just ΔE/M = 1/g(E).) Plugging Eq. (34) into Eq. (24), we get

$$ S = \ln M = \ln g(E) + \ln \Delta E, \tag{2.36} $$

so the only effect of a particular choice of ΔE is an offset of the entropy by a constant, and in Chapter 1 we have seen that such a constant offset does not affect any measurable quantity. Of course, Eq. (34), and hence Eq. (36), are only precise in the limit when the density of states g(E) is so large that the range available for the appropriate choice of ΔE,


g 1 ( E )  E  E ,

(2.37)

is sufficiently broad: g(E)E = E/E >> 1. In order to get some gut feeling of the functions g(E) and S(E) and the feasibility of the condition (37), and also to see whether the microcanonical distribution may be directly used for calculations of thermodynamic variables in particular systems, let us apply it to a microcanonical ensemble of many sets of N >> 1 independent, similar harmonic oscillators with frequency ω. (Please note that the requirement of a virtually fixed energy is applied, in this case, to the total energy EN of each set of oscillators, rather to energy E of a single oscillator – which may be virtually arbitrary though certainly much less than EN ~ NE >> E.) Basic quantum mechanics tells us17 that the eigenenergies of such an oscillator form a discrete, equidistant spectrum: 1  Em    m  , where m  0, 1, 2,... (2.38) 2  If ω is kept constant, the ground-state energy ω/2 does not contribute to any thermodynamic properties of the system,18 so for the sake of simplicity we may take that point as the energy origin, and replace Eq. (38) with Em = mω. Let us carry out an approximate analysis of the system for the case when its average energy per oscillator, E E N, (2.39) N is much larger than the energy quantum ω. For one oscillator, the number of states with energy 1 below a certain value E1 >> ω is evidently Σ(E1) ≈ E1/ω  (E1/ω)/1! (Fig. 3a). For two oscillators, all possible values of the total energy (ε1 + ε2) below some level E2 correspond to the points of a 2D square grid within the right triangle shown in Fig. 3b, giving Σ(E2) ≈ (1/2)(E2/ω)2  (E2/ω)2/2!. For three oscillators, the possible values of the total energy (ε1 + ε2 + ε3) correspond to those points of the 3D cubic grid, that fit inside the right pyramid shown in Fig. 3c, giving Σ(E3) ≈ (1/3)[(1/2)(E3/ω)3]  (E3/ω)3/3!, etc. ε2

Fig. 2.3. Calculating functions Σ(EN) for systems of (a) one, (b) two, and (c) three harmonic oscillators.
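The counting illustrated in Fig. 2.3 can be checked by brute force: the exact number of states with m1 + … + mN ≤ K equals the binomial coefficient C(K + N, N) (a standard lattice-point result assumed here, not derived in the text), and it indeed approaches K^N/N! at K >> 1:

```python
# A brute-force check of the state counting behind Fig. 2.3: the number of
# oscillator states with m1 + ... + mN <= K approaches K^N / N! for K >> 1.
from math import comb, factorial

def sigma_exact(K, N):
    """Number of N-tuples of non-negative integers with sum <= K,
    i.e., of states with total energy E_N <= K * h_bar * omega."""
    return comb(K + N, N)

N = 3
for K in (10, 100, 1000):
    approx = K**N / factorial(N)   # the counting estimate, with E_N = K h_bar omega
    print(K, sigma_exact(K, N), approx, sigma_exact(K, N) / approx)
```

The ratio tends to 1 from above, its deviation being of the relative order N²/K, i.e. small exactly when E >> ℏω as assumed above.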

17 See, e.g., QM Secs. 2.9 and 5.4.
18 Let me hope that the reader knows that the ground-state energy is experimentally measurable – for example, using the famous Casimir effect – see, e.g., QM Sec. 9.1. (In Sec. 5.5 below I will briefly discuss another method of experimental observation of that energy.)


An evident generalization of these formulas to arbitrary N gives the number of states19

$$ \Sigma(E_N) \approx \frac{1}{N!}\left(\frac{E_N}{\hbar\omega}\right)^N. \tag{2.40} $$

Differentiating this expression over the energy, we get

$$ g(E_N) \equiv \frac{d\Sigma(E_N)}{dE_N} = \frac{1}{(N-1)!}\frac{E_N^{N-1}}{(\hbar\omega)^N}, \tag{2.41} $$

so

$$ S_N(E_N) \equiv \ln g(E_N) + \text{const} = -\ln[(N-1)!] + (N-1)\ln E_N - N\ln(\hbar\omega) + \text{const}. \tag{2.42} $$

For N >> 1 we may ignore the difference between N and (N – 1) in both instances, and use the Stirling formula (27) to simplify this result as

$$ S_N(E) \approx \text{const} + N\left[\ln\frac{E}{\hbar\omega} + 1\right] \approx \text{const} + N\ln\frac{E}{\hbar\omega}. \tag{2.43} $$

(The second step is only valid at very high E/ħω ratios, when the logarithm in Eq. (43) is substantially larger than 1.) Returning for a second to the density of states, we see that in the limit N → ∞, it is exponentially large:

$$g(E_N) = e^{S_N} \propto \left(\frac{E}{\hbar\omega}\right)^N, \qquad (2.44)$$

so the conditions (37) may be indeed satisfied within a very broad range of ΔE. Now we can use Eq. (43) to find all thermodynamic properties of the system, though only in the limit E >> ħω. Indeed, according to thermodynamics, if the system’s volume and the number of particles in it are fixed, the derivative dS/dE is nothing else than the reciprocal temperature in thermal equilibrium – see Eq. (1.9). In our current case, we imply that the harmonic oscillators are distinct, for example by their spatial positions. Hence, even if we can speak of some volume of the system, it is certainly fixed.20 Differentiating Eq. (43) over energy E, we get

Classical oscillator: average energy

$$\frac{1}{T} \equiv \frac{dS_N}{dE_N} = \frac{N}{E_N} \equiv \frac{1}{E}. \qquad (2.45)$$
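Both the Stirling approximation (43) and the resulting Eq. (45) are easy to verify numerically. A minimal Python sketch (in units with ħω = 1; all parameter values below are merely illustrative):

```python
import math

# Numerical sanity check of Eqs. (41), (43), and (45), in units with
# hbar*omega = 1 (the parameter values below are illustrative only).
def ln_g_exact(E_N, N):
    # Eq. (41): ln g = (N - 1) ln E_N - ln (N - 1)!;  lgamma(N) = ln (N-1)!
    return (N - 1) * math.log(E_N) - math.lgamma(N)

def S_approx(E_N, N):
    # The first form of Eq. (43), with E = E_N / N and the constant dropped
    return N * (math.log(E_N / N) + 1.0)

N, E_N = 1000, 5.0e4                  # the limit E_N/N >> hbar*omega, N >> 1
print(ln_g_exact(E_N, N), S_approx(E_N, N))   # close to each other

# Eq. (45): 1/T = dS_N/dE_N = N/E_N, via a numerical derivative
dE = 1.0
inv_T = ln_g_exact(E_N + dE, N) - ln_g_exact(E_N, N)
print(inv_T, N / E_N)                 # both ~ 0.02
```

The agreement improves with growing N and E_N/N, as the derivation of Eq. (43) suggests.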

Reading this result backward, we see that the average energy E of a harmonic oscillator equals T (i.e. k_BT_K in SI units). At this point, the first-time student of thermodynamics should be very much relieved to see that the counter-intuitive thermodynamic definition (1.9) of temperature does indeed correspond to what we all have known about this notion from our kindergarten physics courses. The result (45) may be readily generalized. Indeed, in quantum mechanics, a harmonic oscillator with eigenfrequency ω may be described by the Hamiltonian operator

19 The coefficient 1/N! in this formula has the geometrical meaning of the (hyper)volume of the N-dimensional right pyramid with unit sides.
20 For the same reason, the notion of pressure P in such a system is not clearly defined, and neither are any thermodynamic potentials but E and F.


pˆ 2 qˆ 2  , Hˆ  2m 2

(2.46)

where q is some generalized coordinate, p is the corresponding generalized momentum, m is the oscillator’s mass,21 and κ is its spring constant, so ω = (κ/m)^1/2. Since in the thermodynamic equilibrium the density matrix is always diagonal in the basis of stationary states m (see Sec. 1 above), the quantum-mechanical averages of the kinetic and potential energies may be found from Eq. (7):

$$\left\langle \frac{p^2}{2m}\right\rangle = \sum_{m=0}^{\infty} W_m \left\langle m\left|\frac{\hat p^2}{2m}\right|m\right\rangle, \qquad \left\langle \frac{\kappa q^2}{2}\right\rangle = \sum_{m=0}^{\infty} W_m \left\langle m\left|\frac{\kappa\hat q^2}{2}\right|m\right\rangle, \qquad (2.47)$$

where W_m is the probability to occupy the mth energy level, while the bra- and ket-vectors describe the stationary state corresponding to that level.22 However, both classical and quantum mechanics teach us that for any m, the bra-ket expressions under the sums in Eqs. (47), which represent the average kinetic and potential energies of the oscillator on its mth energy level, are equal to each other, and hence each of them is equal to E_m/2. Hence, even though we do not know the exact probability distribution W_m yet (it will be calculated in Sec. 5 below), we may conclude that in the “classical limit” T >> ħω,

$$\left\langle \frac{p^2}{2m}\right\rangle = \left\langle \frac{\kappa q^2}{2}\right\rangle = \frac{T}{2}. \qquad (2.48)$$

Now let us consider a system with an arbitrary number of degrees of freedom, described by a more general Hamiltonian:23

$$\hat H = \sum_j \hat H_j, \qquad \hat H_j = \frac{\hat p_j^2}{2m_j} + \frac{\kappa_j \hat q_j^2}{2}, \qquad (2.49)$$

with (generally, different) frequencies ω_j = (κ_j/m_j)^1/2. Since the “modes” (effective harmonic oscillators) contributing to this Hamiltonian are independent, the result (48) is valid for each of the modes. This is the famous equipartition theorem: at thermal equilibrium at temperature T >> ħω_j, the average energy of each so-called half-degree of freedom (which is defined as any variable, either p_j or q_j, giving a quadratic contribution to the system’s Hamiltonian) is equal to T/2.24 In particular, for each of the three Cartesian component contributions to the kinetic energy of a free-moving particle, this theorem is valid at any temperature, because such components may be considered as 1D harmonic oscillators with vanishing potential energy, i.e. κ_j = 0, so the condition T >> ħω_j is fulfilled at any temperature.
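The equipartition theorem is easy to check numerically for a single half-degree of freedom. A minimal sketch (classical Boltzmann weight exp{–u/T} is assumed; κ = 1 and all units are arbitrary):

```python
import math

# Sketch: numerical check of the equipartition theorem for one half-degree
# of freedom with u(q) = kappa*q**2/2, averaged with the classical
# Boltzmann weight exp{-u/T}. The result <u> should equal T/2.
def average_energy(T, kappa=1.0, n=200000, qmax=50.0):
    dq = 2.0 * qmax / n
    num = den = 0.0
    for i in range(n):
        q = -qmax + (i + 0.5) * dq
        u = kappa * q * q / 2.0
        w = math.exp(-u / T)
        num += u * w * dq
        den += w * dq
    return num / den

for T in (0.5, 1.0, 3.0):
    print(T / 2.0, average_energy(T))  # the two columns should agree
```

The same average, T/2, is obtained for any value of κ, in accord with the theorem.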

21 I am using this fancy font for the mass to avoid any chance of its confusion with the state number.
22 Note again that while we have committed the energy E_N of N oscillators to be fixed (to apply Eq. (36), valid only for a microcanonical ensemble at thermodynamic equilibrium), the single oscillator’s energy E in our analysis may be arbitrary – within the very broad limits ħω << E << E_N.

The results above are valid in the classical limit T >> ħω; calculations for an arbitrary T ~ ħω, though possible, would be rather unpleasant, because for that, all vertical steps of the function Σ(E_N) have to be carefully counted.25 In Sec. 4 below, we will see that other statistical ensembles are much more convenient for such calculations.

Let me conclude this section with a short notice on deterministic classical systems with just a few degrees of freedom (and even simpler mathematical objects called “maps”) that may exhibit essentially disordered behavior, called deterministic chaos.26 Such chaotic systems may be approximately characterized by an entropy defined similarly to Eq. (29), where W_m are the probabilities to find the system in different small regions of phase space, at well-separated small time intervals. On the other hand, one can use an expression slightly more general than Eq. (29) to define the so-called Kolmogorov (or “Kolmogorov–Sinai”) entropy K that characterizes the speed of loss of the information about the initial state of the system, and hence what is called the “chaos depth”. In the definition of K, the sum over m is replaced with the summation over all possible permutations {m} = m_0, m_1, …, m_N–1 of small space regions, and W_m is replaced with W_{m}, the probability of finding the system in the corresponding regions m at the time moments t_m = mτ, in the limit τ → 0, with Nτ = const.
For chaos in the simplest objects, 1D maps, K is equal to the Lyapunov exponent λ > 0.27 For systems of higher dimensionality, which are characterized by several Lyapunov exponents λ, the Kolmogorov entropy is equal to the phase-space average of the sum of all positive λ. These facts provide a much more practicable way of (typically, numerical) calculation of the Kolmogorov entropy than the direct use of its definition.28

2.3. Maxwell’s Demon, information, and computation

Before proceeding to other statistical distributions, I would like to make a detour to address one more popular concern about Eq. (24) – the direct relation between entropy and information. Some physicists are still uneasy with entropy being nothing else than the (deficit of) information, though, to the best of my knowledge, nobody has yet been able to suggest any experimentally verifiable difference between these two notions. Let me give one example of their direct relationship.29

Consider a cylinder containing just one molecule (considered as a point particle), and separated into two halves by a movable partition with a door that may be opened and closed at will, at no energy cost – see Fig. 4a. If the door is open and the system is in thermodynamic equilibrium, we do not know on which side of the partition the molecule is. Here the disorder, i.e. the entropy, has the largest value, and there is no way to get, from a large ensemble of such systems in equilibrium, any useful mechanical energy.

25 The reader is strongly urged to solve Problem 2, whose task is to do a similar calculation for another key (“two-level”) physical system, and compare the results.
26 See, e.g., CM Chapter 9 and the literature therein.
27 For the definition of λ, see, e.g., CM Eq. (9.9).
28 For more discussion, see, e.g., either Sec. 6.2 of the monograph H. G. Schuster and W. Just, Deterministic Chaos, 4th ed., Wiley-VCH, 2005, or the monograph by Arnold and Avez, cited in Sec. 1.
29 This system is frequently called the Szilard engine, after L. Szilard, who published its detailed theoretical discussion in 1929, but it is essentially a straightforward extension of the thought experiment suggested by J. Maxwell as early as 1867.
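As a quick numerical illustration of the statement that, for 1D maps, K equals the Lyapunov exponent: for the logistic map x → rx(1 – x) with r = 4 (a standard example, not discussed in this section), λ is known to equal ln 2, and may be estimated as the trajectory average of ln|df/dx|. A sketch:

```python
import math

# Sketch: the Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated as the trajectory average of ln|f'(x)| = ln|r*(1-2x)|.
# For r = 4 the exact value is ln 2; for 1D maps, this equals the
# Kolmogorov entropy K discussed above.
def lyapunov(r, x0=0.3, n=200000, skip=1000):
    x, s = x0, 0.0
    for i in range(n + skip):
        if i >= skip:
            s += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return s / n

print(lyapunov(4.0), math.log(2.0))   # the estimate should be close to ln 2
```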


Fig. 2.4. The Szilard engine: a cylinder with a single molecule and a movable partition: (a) before and (b) after closing the door, and (c) after opening the door at the end of the expansion stage.

Now, let us consider that we know (as instructed by, in Lord Kelvin’s formulation, an omniscient Maxwell’s Demon) on which side of the partition the molecule is currently located. Then we may close the door, trapping the molecule, so its repeated impacts on the partition create, on average, a pressure force F directed toward the empty part of the volume (in Fig. 4b, the right one). Now we can get from the molecule some mechanical work, say by allowing the force F to move the partition to the right, and picking up the resulting mechanical energy by some deterministic (zero-entropy) external mechanism. After the partition has been moved to the right end of the volume, we can open the door again (Fig. 4c), equalizing the molecule’s average pressure on both sides of the partition, and then slowly move the partition back to the middle of the volume – without its resistance, i.e. without doing any substantial work. With the continuing help from Maxwell’s Demon, we can repeat the cycle again and again, and hence make the system perform unlimited mechanical work, fed “only” by the molecule’s thermal motion and the information about its position – thus implementing the perpetual motion machine of the 2nd kind – see Sec. 1.6. The fact that such heat engines do not exist means that getting any new information, at a non-zero temperature (i.e. at a substantial thermal agitation of particles), has a non-zero energy cost. In order to evaluate this cost, let us calculate the maximum work per cycle that can be made by the Szilard engine (Fig. 4), assuming that it is constantly in thermal equilibrium with a heat bath of temperature T. Formula (21) tells us that the information supplied by the Demon (on which exactly half of the volume contains the molecule) is exactly one bit, I(2) = 1. According to Eq. (24), this means that by getting this information we are changing the entropy of our system by

$$\Delta S_I = -\ln 2. \qquad (2.50)$$

Now, it would be a mistake to plug this (negative) entropy change into Eq. (1.19). First, that relation is only valid for slow, reversible processes. Moreover (and more importantly), this equation, as well as its irreversible version (1.41), is only valid for a fixed statistical ensemble. The change ΔS_I does not belong to this category and may be formally described by a change of the statistical ensemble – from the one consisting of all similar systems (experiments) with an unknown location of the molecule to a new ensemble consisting of the systems with the molecule in its certain (in Fig. 4, left) half.30

Now let us consider a slow expansion of the “gas” after the door has been closed. At this stage, we do not need the Demon’s help any longer (i.e. the statistical ensemble may be fixed), and we can indeed use the relation (1.19). At the assumed isothermal conditions (T = const), this relation may be integrated

30 This procedure of the statistical ensemble re-definition is the central point of the connection between physics and information theory, and is crucial in particular for any (or rather any meaningful :-) discussion of measurements in quantum mechanics – see, e.g., QM Secs. 2.5 and 10.1.


over the whole expansion process, getting Q = TΔS. At the final position shown in Fig. 4c, the system’s entropy should be the same as initially, i.e. before the door had been opened, because we again do not know where in the volume the molecule is. This means that the entropy was replenished, during the reversible expansion, from the heat bath, by ΔS = –ΔS_I = +ln 2, so Q = TΔS = T ln 2. Since by the end of the whole cycle the internal energy E of the system is the same as before, all this heat could have gone into the mechanical energy obtained during the expansion. Thus the maximum obtained work per cycle (i.e. for each obtained information bit) is T ln 2 (k_BT_K ln 2 in SI units), about 4×10⁻²¹ J at room temperature. This is exactly the energy cost of getting one bit of new information about a system at temperature T. The smallness of that amount on the everyday human scale has left the Szilard engine an academic theoretical exercise for almost a century. However, recently several such devices, of various physical nature, were implemented experimentally (with the Demon’s role played by an instrument measuring the position of the particle without a substantial effect on its motion), and the relation Q = T ln 2 was proved, with a gradually increasing precision.31

Actually, discussion of another issue closely related to Maxwell’s Demon, namely the energy consumption at numerical calculations, was started earlier, in the 1960s. It was motivated by the exponential (Moore’s-law) progress of digital integrated circuits, which has led, in particular, to a fast reduction of the energy E “spent” (turned into heat) per one binary logic operation. In the recent generations of semiconductor digital integrated circuits, the typical E is still above 10⁻¹⁷ J, i.e. still exceeds the room-temperature value of T ln 2 ≈ 4×10⁻²¹ J by several orders of magnitude.32 Still, some engineers believe that thermodynamics imposes this important lower limit on E and hence presents an insurmountable obstacle to the future progress of computation. Unfortunately, in the 2000s this delusion resulted in a substantial and unjustified shift of electron device research resources toward using “non-charge degrees of freedom” such as spin (as if they do not obey the general laws of statistical physics!), so the issue deserves at least a brief discussion.

Let me believe that the reader of these notes understands that, in contrast to naïve popular talk, computers do not create any new information; all they can do is reshape (“process”) the input information, losing most of it on the go. Indeed, any digital computation algorithm may be decomposed into simple, binary logical operations, each of them performed by a circuit called a logic gate. Some of these gates (e.g., the logical NOT performed by inverters, as well as memory READ and WRITE operations) do not change the amount of information in the computer. On the other hand, such information-irreversible logic gates as the two-input NAND (or NOR, or XOR, etc.) erase one bit at each operation, because they turn two input bits into one output bit – see Fig. 5a. In 1961, Rolf Landauer argued that each logic operation should turn into heat at least the energy

Irreversible computation: energy cost

$$E_{\min} = T\ln 2 = k_B T_K \ln 2. \qquad (2.51)$$

This result may be illustrated with the Szilard engine (Fig. 4), operated in a reversed cycle. At the first stage, with the partition’s door closed, it uses external mechanical work E = T ln 2 to reduce the volume in which the molecule is confined, from V to V/2, pumping the heat Q = E into the heat bath. To model a logically irreversible logic gate, let us now open the door in the partition, and thus lose one bit of

31 See, for example, A. Bérut et al., Nature 483, 187 (2012); J. Koski et al., PNAS USA 111, 13786 (2014); Y. Jun et al., Phys. Rev. Lett. 113, 190601 (2014); J. Peterson et al., Proc. Roy. Soc. A 472, 20150813 (2016).
32 In practical computers, the effective E is even much higher (currently, above ~10⁻¹⁵ J) due to the high energy cost of moving data across a multi-component system, in particular between its logic and memory chips.


information about the molecule’s position. Then we will never get the work T ln 2 back, because moving the partition back to the right, with the door open, takes place at zero average pressure. Hence, Eq. (51) gives a fundamental limit for the energy loss (per bit) at logically irreversible computation.

Fig. 2.5. Simple examples of (a) irreversible and (b) potentially reversible logic circuits. Each rectangle denotes a circuit storing one bit of information.
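The Szilard-engine work budget above is easy to reproduce numerically. A minimal sketch (the single-molecule “equation of state” PV = T implied by the discussion is assumed; room temperature T_K = 300 K and the volume units are illustrative choices):

```python
import math

# Sketch: the Szilard-engine work per cycle, W = integral of P dV from V/2
# to V with the single-molecule equation of state PV = T, compared with the
# Landauer bound T*ln(2) of Eq. (51) at room temperature (SI units).
k_B, T_K = 1.380649e-23, 300.0
T = k_B * T_K                      # temperature in energy units, J
V = 1.0                            # arbitrary volume units
n = 100000
dV = (V / 2.0) / n
W = sum(T / (V / 2.0 + (i + 0.5) * dV) * dV for i in range(n))
print(W, T * math.log(2.0))        # both of the order of a few 1e-21 J
```

The two numbers coincide, as they must: the isothermal expansion integral is exactly T ln 2.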

However, in 1973 Charles Bennett came up with convincing arguments that it is possible to avoid such energy loss by using only operations that are reversible not only physically, but also logically.33 For that, one has to avoid any loss of information, i.e. any erasure of intermediate results, for example in the way shown in Fig. 5b.34 At the end of all calculations, after the result has been copied into memory, the intermediate results may be “rolled back” through reversible gates to be eventually merged into a copy of the input data, again without erasing a single bit. The minimal energy dissipation at such reversible calculation tends to zero as the operation speed is decreased, so the average energy loss per bit may be less than the perceived “fundamental thermodynamic limit” (51). The price to pay for this ultralow dissipation is a very high complexity of the hardware necessary for the storage of all intermediate results. However, using irreversible gates sparsely, it may be possible to reduce the complexity dramatically, so in the future such mostly reversible computation may be able to reduce energy consumption in practical digital electronics.35 Before we leave Maxwell’s Demon behind, let me use it to revisit, for one more time, the relation between the reversibility of the classical and quantum mechanics of Hamiltonian systems and the irreversibility possible in thermodynamics and statistical physics. In the thought experiment shown in Fig. 4, the laws of mechanics governing the motion of the molecule are reversible at all times. Still, at the partition’s motion to the right, driven by molecular impacts, the entropy grows, because the molecule picks up the heat Q > 0, and hence the entropy ΔS = Q/T > 0, from the heat bath. The physical mechanism of this irreversible entropy (read: disorder) growth is the interaction of the molecule with uncontrollable components of the heat bath and the resulting loss of information about its motion.
Philosophically, such emergence of irreversibility in large systems is a strong argument against reductionism – a naïve belief that by knowing the exact laws of Nature at the lowest, most fundamental

33 C. Bennett, IBM J. Res. Devel. 17, 525 (1973); see also C. Bennett, Int. J. Theor. Phys. 21, 905 (1982).
34 For that, all gates have to be physically reversible, with no static power consumption. Such logic devices do exist, though they are still not very practicable – see, e.g., K. Likharev, Int. J. Theor. Phys. 21, 311 (1982). (Another reason why I am citing, rather reluctantly, my own paper is that it also gave a constructive proof that reversible computation may also beat the perceived “fundamental quantum limit”, EΔt > ħ, where Δt is the time of the binary logic operation.)
35 Many currently explored schemes of quantum computing are also reversible – see, e.g., QM Sec. 8.5 and references therein.


level of its complexity, we can readily understand all phenomena on the higher levels of its organization. In reality, the macroscopic irreversibility of large systems is a good example36 of a new law (in this case, the 2nd law of thermodynamics) that becomes relevant on a substantially new, higher level of complexity – without defying the lower-level laws. Without such new laws, very little of the higher-level organization of Nature may be understood.

2.4. Canonical ensemble and the Gibbs distribution

As was shown in Sec. 2, the microcanonical distribution may be directly used for solving some important problems. However, its further development, also due to J. Gibbs, turns out to be much more convenient for calculations. Let us consider a statistical ensemble of macroscopically similar systems, each in thermal equilibrium with a heat bath of the same temperature T (Fig. 6a). Such an ensemble is called canonical.

Em, T

(b) E

EΣ dQ, dS

EHB = E – Em heat bath

EHB, T

Em 0

Fig. 2.6. (a) A system in a heat bath (i.e. a canonical ensemble’s member) and (b) the energy spectrum of the composite system (including the heat bath).

It is intuitively evident that if the heat bath is sufficiently large, any thermodynamic variables characterizing the system under study should not depend on the heat bath’s environment. In particular, we may assume that the heat bath is thermally insulated, so the total energy E_Σ of the composite system, consisting of the system of our interest plus the heat bath, does not change in time. For example, if the system under study is in a certain (say, the mth) quantum state, then the sum

$$E_\Sigma = E_m + E_{HB} \qquad (2.52)$$

is time-independent. Now let us partition the considered canonical ensemble of such systems into much smaller sub-ensembles, each being a microcanonical ensemble of composite systems whose total, time-independent energies E_Σ are the same – as was discussed in Sec. 2, within a certain small energy interval ΔE << E_Σ – see Fig. 6b.

ħω >> T for most of their internal degrees of freedom, including the electronic and

14 In some textbooks, Eq. (26) is also called the Boltzmann distribution, though it certainly should be distinguished from Eq. (2.111).

Chapter 3

Page 7 of 34


vibrational ones. (The typical energy of the lowest electronic excitations is of the order of a few eV, and that of the lowest vibrational excitations is only an order of magnitude lower.) As a result, these degrees of freedom are “frozen out”: they are in their ground states, so their contributions exp{–ε_k’/T} to the sum in Eq. (30), and hence to the heat capacity, are negligible. In monoatomic gases, this is true for all degrees of freedom besides those of the translational motion, already taken into account by the first term in Eq. (30), i.e. by Eq. (16b), so their specific heat is typically well described by Eq. (19).

The most important exception is the rotational degrees of freedom of diatomic and polyatomic molecules. As quantum mechanics shows,15 the excitation energy of these degrees of freedom scales as ħ²/2I, where I is the molecule’s relevant moment of inertia. In the most important molecules, this energy is rather low (for example, for N₂, it is close to 0.25 meV, i.e. ~1% of the room temperature), so at usual conditions they are well excited and, moreover, behave virtually as classical degrees of freedom, each giving a quadratic contribution to the molecule’s kinetic energy. As a result, they obey the equipartition theorem, each giving an extra contribution of T/2 to the energy, i.e. of ½ to the specific heat.16 In polyatomic molecules, there are three such classical degrees of freedom (corresponding to their rotations about the three principal axes17), but in diatomic molecules, only two.18 Hence, these contributions may be described by the following generalization of Eq. (19):

$$c_V = \begin{cases} 3/2, & \text{for monoatomic gases,} \\ 5/2, & \text{for gases of diatomic molecules,} \\ 3, & \text{for gases of polyatomic molecules.} \end{cases} \qquad (3.31)$$
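The N₂ figure quoted above is easy to re-derive. A sketch, assuming the textbook bond length d ≈ 1.10 Å for N₂ (a literature value, not given in this section):

```python
# Sketch: the rotational excitation scale hbar^2/(2I) for the N2 molecule,
# to check the ~0.25 meV (~1% of room temperature) figure quoted above.
# The bond length d ~ 1.10 Angstrom is an assumed literature value.
hbar, k_B, e_charge = 1.0546e-34, 1.381e-23, 1.602e-19
m_N = 14.0 * 1.661e-27          # mass of one nitrogen atom, kg
d = 1.10e-10                    # N2 bond length, m (assumption)
I = m_N * d ** 2 / 2.0          # moment of inertia of a homonuclear diatomic
E_rot = hbar ** 2 / (2.0 * I)
print(E_rot / e_charge * 1e3)   # in meV: ~0.25
print(E_rot / (k_B * 300.0))    # as a fraction of room temperature: ~0.01
```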
Please keep in mind, however, that as the above discussion shows, this simple result is invalid at very low and very high temperatures. In the latter case, the most frequent violations of Eq. (31) are due to the thermal activation of the vibrational degrees of freedom; for many important molecules, it starts at temperatures of a few thousand K.

3.2. Calculating μ

Now let us discuss the properties of ideal gases of free, indistinguishable particles in more detail, paying special attention to the chemical potential μ – which, for some readers, may still be a somewhat mysterious aspect of the Fermi and Bose distributions. Note again that particle indistinguishability is conditioned by the absence of thermal excitations of their internal degrees of freedom, so in the balance of this chapter such excitations will be ignored, and the particle’s energy ε_k will be associated with its “external” energy alone: for a free particle in an ideal gas, with its kinetic energy (3). Let us start from the classical gas, and recall the conclusion of thermodynamics that μ is just the Gibbs potential per particle – see Eq. (1.56). Hence we can calculate μ = G/N from Eqs. (1.49) and (16b). The result,

15 See, e.g., either the model solution of Problem 2.14 (and references therein) or QM Secs. 3.6 and 5.6.
16 This result may be readily obtained again from the last term of Eq. (30) by treating it exactly like the first one was, and then applying the general Eq. (1.50).
17 See, e.g., CM Sec. 4.1.
18 This conclusion of the quantum theory may be interpreted as the indistinguishability of the rotations about the molecule’s symmetry axis.


$$\mu = \frac{G}{N} = T\ln\left[\frac{N}{gV}\left(\frac{2\pi\hbar^2}{mT}\right)^{3/2}\right], \qquad (3.32a)$$

which may be rewritten as

$$\exp\left\{\frac{\mu}{T}\right\} = \frac{N}{gV}\left(\frac{2\pi\hbar^2}{mT}\right)^{3/2}, \qquad (3.32b)$$

gives us some idea about μ not only for a classical gas but for quantum (Fermi and Bose) gases as well. Indeed, we already know that for indistinguishable particles, the Boltzmann distribution (2.111) is valid only if ⟨N_k⟩ << 1. (at r_ave >> r₀), it is classical. Thus, the only way to observe quantum effects in the translational motion of molecules is by using very deep refrigeration. According to Eq. (37), for

e.g., QM Sec. 7.2 and in particular Eq. (7.37).


the same nitrogen molecule, taking r_ave ~ 10²r₀ ~ 10⁻⁸ m (to ensure that direct interaction effects are negligible), the temperature should be well below 1 mK.

In order to analyze quantitatively what happens with gases when T is reduced to such low values, we need to calculate μ for an arbitrary ideal gas of indistinguishable particles. Let us use the lucky fact that the Fermi-Dirac and the Bose-Einstein statistics may be represented with one formula:

$$\langle N(\varepsilon)\rangle = \frac{1}{e^{(\varepsilon-\mu)/T} \pm 1}, \qquad (3.38)$$
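The “well below 1 mK” estimate may be checked directly. A sketch, assuming the quantum-degeneracy scale T₀ = ħ²n^(2/3)/(mk_B) (the form of Eq. (35), with g = 1, is assumed here) and the density n ~ r_ave⁻³ quoted above:

```python
# Sketch: the quantum-degeneracy temperature T0 = hbar^2 n^(2/3)/(m*k_B)
# (assumed form of Eq. (35), with g = 1) for N2 molecules at the density
# quoted in the text, n ~ r_ave**-3 with r_ave ~ 1e-8 m.
hbar, k_B, u = 1.0546e-34, 1.381e-23, 1.661e-27
m = 28.0 * u                    # N2 molecule mass, kg
n = (1.0e-8) ** -3              # number density, m^-3
T0 = hbar ** 2 * n ** (2.0 / 3.0) / (m * k_B)
print(T0 * 1e3)                 # in mK: indeed well below 1 mK
```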

where (and everywhere in the balance of this section) the top sign stands for fermions and the lower one for bosons, to discuss fermionic and bosonic ideal gases in one shot. If we deal with a member of the grand canonical ensemble (Fig. 2.13), in which not only T but also μ is externally fixed, we may use Eq. (38) to calculate the average number N of particles in volume V. If the volume is so large that N >> 1, we may use the general state counting rule (13) to get

$$N = \frac{gV}{(2\pi)^3}\int \langle N(\varepsilon)\rangle\, d^3k = \frac{gV}{(2\pi\hbar)^3}\int \frac{d^3p}{e^{[\varepsilon(p)-\mu]/T} \pm 1} = \frac{gV}{(2\pi\hbar)^3}\int_0^\infty \frac{4\pi p^2\, dp}{e^{[\varepsilon(p)-\mu]/T} \pm 1}. \qquad (3.39)$$

In most practical cases, however, the number N of gas particles is fixed by particle confinement (i.e. the gas portion under study is a member of a canonical ensemble – see Fig. 2.6), and hence μ rather than N should be calculated. Let us use the trick already mentioned in Sec. 2.8: if N is very large, the relative fluctuation of the particle number, at fixed μ, is negligibly small (δN/N ~ 1/N^1/2 << 1), so at N >> 1, Eq. (39) may be used to calculate the average μ as a function of two independent parameters: N (i.e. the gas density n = N/V) and temperature T. For carrying out this calculation, it is convenient to convert the right-hand side of Eq. (39) to an integral over the particle’s energy ε(p) = p²/2m, so p = (2mε)^1/2, and dp = (m/2ε)^1/2 dε, getting

$$N = \frac{gVm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\varepsilon^{1/2}\, d\varepsilon}{e^{(\varepsilon-\mu)/T} \pm 1}. \qquad (3.40)$$

This key result may be represented in two other, sometimes more convenient forms. First, Eq. (40), derived for our current (3D, isotropic and parabolic-dispersion) approximation (3), is just a particular case of the following self-evident state-counting relation:

$$N = \int_0^\infty g(\varepsilon)\,\langle N(\varepsilon)\rangle\, d\varepsilon, \qquad (3.41)$$

where

$$g(\varepsilon) \equiv \frac{dN_{\text{states}}}{d\varepsilon} \qquad (3.42)$$

is the temperature-independent density of all quantum states of a particle – regardless of whether they are occupied or not. Indeed, according to the general Eq. (4), for the simple isotropic model (3),


Basic equation for μ


$$g(\varepsilon) = g_3(\varepsilon) \equiv \frac{dN_{\text{states}}}{d\varepsilon} = \frac{gV}{(2\pi\hbar)^3}\,\frac{d}{d\varepsilon}\left[\frac{4\pi}{3}p^3(\varepsilon)\right] = \frac{gVm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,\varepsilon^{1/2}, \qquad (3.43)$$
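The density of states (43) may be cross-checked by brute-force state counting. A minimal sketch in terms of the wave vector k (periodic boundary conditions k = 2πn/L and g = 1 are assumed; the counted number of states below k_max should approach N_states = Vk³/6π², i.e. Eq. (43) integrated over energy):

```python
import math

# Sketch: direct counting of plane-wave states in a cube of side L with
# periodic boundary conditions (k = 2*pi*n/L, g = 1), compared with the
# state-counting result N_states = V * k^3 / (6 * pi^2).
L, kmax = 1.0, 120.0
nmax = int(kmax * L / (2.0 * math.pi)) + 1
count = 0
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        for nz in range(-nmax, nmax + 1):
            k2 = (2.0 * math.pi / L) ** 2 * (nx * nx + ny * ny + nz * nz)
            if k2 < kmax * kmax:
                count += 1
print(count, L ** 3 * kmax ** 3 / (6.0 * math.pi ** 2))
```

The relative difference (a surface effect) vanishes as k_max L grows.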

so plugging it into Eq. (41), we return to Eq. (39). On the other hand, for some calculations, it is convenient to introduce the following dimensionless energy variable: ξ ≡ ε/T, to express Eq. (40) via a dimensionless integral:

$$N = \frac{gV(mT)^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\xi^{1/2}\, d\xi}{e^{\xi-\mu/T} \pm 1}. \qquad (3.44)$$

As a sanity check, in the classical limit (34), the exponential in the denominator of the fraction under the integral is much larger than 1, and Eq. (44) reduces to

$$N = \frac{gV(mT)^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\xi^{1/2}\, d\xi}{e^{\xi-\mu/T}} = \frac{gV(mT)^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\exp\left\{\frac{\mu}{T}\right\}\int_0^\infty \xi^{1/2} e^{-\xi}\, d\xi, \qquad \text{at } -\mu >> T. \qquad (3.45)$$

By the definition of the gamma function Γ(ξ),20 the last integral is just Γ(3/2) = π^1/2/2, and we get

$$\exp\left\{\frac{\mu}{T}\right\} = N\,\frac{2^{3/2}\pi^{3/2}\hbar^3}{gV(mT)^{3/2}} = \frac{N}{gV}\left(\frac{2\pi\hbar^2}{mT}\right)^{3/2}, \qquad (3.46)$$

which is exactly the same result as given by Eq. (32), which was obtained earlier in a rather different way – from the Boltzmann distribution and thermodynamic identities.

Unfortunately, in the general case of arbitrary μ, the integral in Eq. (44) cannot be worked out analytically.21 The best we can do is to use the T₀ defined by Eq. (35) to rewrite Eq. (44) in the following convenient, fully dimensionless form:

$$\frac{T}{T_0} = \left[\frac{1}{\sqrt{2}\,\pi^2}\int_0^\infty \frac{\xi^{1/2}\, d\xi}{e^{\xi-\mu/T} \pm 1}\right]^{-2/3}, \qquad (3.47)$$

and then use this relation to calculate the ratios T/T₀ and μ/T₀ ≡ (μ/T)(T/T₀) as functions of μ/T numerically. After that, we may plot the results versus each other, now considering the former ratio as the argument. Figure 1 below shows the resulting plots for both particle types. They show that at high temperatures, T >> T₀, the chemical potential is negative and approaches the classical behavior given by Eq. (46) for both fermions and bosons – just as we could expect. However, at temperatures T ~ T₀ the type of statistics becomes crucial. For fermions, the reduction of temperature leads to μ changing its sign from negative to positive, and then approaching a constant positive value called the Fermi energy, ε_F ≈ 7.595 T₀, at T → 0. On the contrary, the chemical potential of a bosonic gas stays negative, and then turns into zero at a certain critical temperature T_c ≈ 3.313 T₀. Both these limits, which are very important for applications, may (and will be :-) explored analytically, separately for each statistics.
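The two limiting numbers just quoted can be reproduced from Eq. (47) with a few lines of numerics. A sketch (midpoint quadrature after the substitution ξ = u², which removes the integrable singularity of the Bose integrand at ξ = 0; the cutoff and grid sizes are ad hoc choices):

```python
import math

# Sketch: Eq. (47) evaluated numerically; sign = +1 for fermions, -1 for
# bosons. The substitution xi = u**2 regularizes the Bose integrand at 0.
def integral(mu_over_T, sign, n=20000):
    umax = math.sqrt(50.0 + max(mu_over_T, 0.0))
    du, s = umax / n, 0.0
    for i in range(n):
        u = (i + 0.5) * du
        s += 2.0 * u * u / (math.exp(u * u - mu_over_T) + sign) * du
    return s

def T_over_T0(mu_over_T, sign):
    return (integral(mu_over_T, sign) / (math.sqrt(2.0) * math.pi ** 2)) ** (-2.0 / 3.0)

# Bosons, mu -> 0: the critical temperature, Tc/T0 ~ 3.313
print(T_over_T0(0.0, -1))
# Fermions, mu/T >> 1: mu/T0 = (mu/T)*(T/T0) approaches eps_F/T0 ~ 7.595
print(100.0 * T_over_T0(100.0, +1))
```

(The second printed value carries a small finite-temperature correction of relative order (T/μ)², so it falls slightly below the T → 0 limit.)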

20 See, e.g., MA Eq. (6.7a).
21 For the reader’s reference only: for the upper sign, the integral in Eq. (40) is a particular form (for s = ½) of a special function called the complete Fermi-Dirac integral F_s, while for the lower sign, it is a particular case (for s = 3/2) of another special function called the polylogarithm Li_s. (In what follows, I will not use these notations.)


Fig. 3.1. The chemical potential of an ideal gas of N >> 1 indistinguishable quantum particles, as a function of temperature at a fixed gas density n ≡ N/V (i.e. fixed T₀ ∝ n^2/3), for two different particle types. The dashed line shows the classical approximation (46), valid only asymptotically at T >> T₀.

Before carrying out such analyses (in the next two sections), let me show that, rather surprisingly, for any non-relativistic ideal quantum gas, the relation between the product PV and the energy,

Ideal gas: PV vs. E

$$PV = \frac{2}{3}E, \qquad (3.48)$$

is exactly the same as follows from Eqs. (18) and (19) for the classical gas, and hence does not depend on the particle statistics. To prove this, it is sufficient to use Eqs. (2.114) and (2.117) for the grand thermodynamic potential of each quantum state, which may be conveniently represented by a single formula,

$$\Omega_k = \mp T\ln\left[1 \pm e^{(\mu-\varepsilon_k)/T}\right], \qquad (3.49)$$

and sum them over all states k, using the general summation formula (13). The result for the total grand potential of a 3D gas with the dispersion law (3) is

$$\Omega = \mp T\,\frac{gV}{(2\pi\hbar)^3}\int_0^\infty \ln\left[1 \pm e^{(\mu - p^2/2m)/T}\right] 4\pi p^2\, dp = \mp T\,\frac{gVm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \ln\left[1 \pm e^{(\mu-\varepsilon)/T}\right] \varepsilon^{1/2}\, d\varepsilon. \qquad (3.50)$$

Working out this integral by parts, exactly as we did it with the one in Eq. (2.90), we get

$$\Omega = -\frac{2}{3}\,\frac{gVm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\varepsilon^{3/2}\, d\varepsilon}{e^{(\varepsilon-\mu)/T} \pm 1} = -\frac{2}{3}\int_0^\infty g_3(\varepsilon)\,\varepsilon\,\langle N(\varepsilon)\rangle\, d\varepsilon. \qquad (3.51)$$

But the last integral is just the total energy E of the gas:

$$E = \frac{gV}{(2\pi\hbar)^3}\int_0^\infty \frac{p^2}{2m}\,\frac{4\pi p^2\, dp}{e^{[\varepsilon(p)-\mu]/T} \pm 1} = \frac{gVm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\varepsilon^{3/2}\, d\varepsilon}{e^{(\varepsilon-\mu)/T} \pm 1} \equiv \int_0^\infty g_3(\varepsilon)\,\varepsilon\,\langle N(\varepsilon)\rangle\, d\varepsilon, \qquad (3.52)$$

so for any temperature and any particle type,  = –(2/3)E. But since, from thermodynamics,  = –PV, we have Eq. (48) proved. This universal relation22 will be repeatedly used below.

²² For gases of diatomic and polyatomic molecules, whose rotational degrees of freedom are usually thermally excited, Eq. (48) is valid only for the translational-motion energy.
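Equations (50)–(52) amount to a single integration by parts, which is easy to verify numerically. The sketch below (an added illustration, not part of the original notes) evaluates the two integrals with the common prefactor gVm³ᐟ²/√2π²ℏ³ set to unity, for arbitrary test values of T and μ, for both statistics:

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def omega_and_energy(T, mu, sign):
    # sign = +1 for fermions, -1 for bosons (bosons require mu < 0);
    # the prefactor gVm^{3/2}/(sqrt(2) pi^2 hbar^3) is set to 1.
    log_term = lambda e: math.log1p(sign * math.exp((mu - e) / T)) * math.sqrt(e)
    n_term = lambda e: e ** 1.5 / (math.exp((e - mu) / T) + sign)
    omega = -sign * T * simpson(log_term, 1e-12, 60.0)   # Eq. (50)
    energy = simpson(n_term, 1e-12, 60.0)                # Eq. (52)
    return omega, energy

for T, mu, sign in [(0.7, 1.0, +1), (1.3, -0.5, +1), (0.9, -0.3, -1)]:
    omega, energy = omega_and_energy(T, mu, sign)
    assert abs(omega + (2 / 3) * energy) < 1e-3 * energy   # Omega = -(2/3) E
print("Omega = -(2/3)E holds for both statistics")
```

The agreement for both signs illustrates why the relation (48) is independent of the particle statistics.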


3.3. Degenerate Fermi gas

Analysis of the low-temperature properties of a Fermi gas is very simple in the limit T = 0. Indeed, in this limit, the Fermi-Dirac distribution (2.115) is just the step function:
$$ \langle N(\varepsilon)\rangle = \begin{cases} 1, & \text{for } \varepsilon < \mu, \\ 0, & \text{for } \mu < \varepsilon, \end{cases} \tag{3.53} $$
– see the bold line in Fig. 2a. Since the function ε = p²/2m is isotropic in the momentum space, in that space the particles, at T = 0, fully occupy all possible quantum states inside a sphere (called either the Fermi sphere or the Fermi sea) with some radius p_F (Fig. 2b), while all states above the sea surface are empty. Such a degenerate Fermi gas is a striking manifestation of the Pauli principle: though in thermodynamic equilibrium at T = 0 all particles try to lower their energies as much as possible, only g of them may occupy each translational ("orbital") quantum state. As a result, the sphere's volume is proportional to the particle number N, or rather to their density n = N/V.

Fig. 3.2. Representations of the Fermi sea: (a) on the Fermi distribution plot, and (b) in the momentum space. (Panel (a) shows ⟨N(ε)⟩ vs. ε at T = 0 and at 0 < T << εF; panel (b) shows the Fermi sphere of radius p_F in the {p_x, p_y, p_z} space.)

Indeed, the radius p_F may be readily related to the number of particles N using Eq. (39), with the upper sign, whose integral in this limit is just the Fermi sphere's volume:
$$ N = \frac{gV}{(2\pi\hbar)^3}\int_0^{p_F} 4\pi p^2\,dp = \frac{gV}{(2\pi\hbar)^3}\,\frac{4\pi}{3}\,p_F^3. \tag{3.54} $$

Now we can use Eq. (3) to express via N the chemical potential μ (which, in the limit T = 0, bears the special name of the Fermi energy εF)²³:

Fermi energy
$$ \varepsilon_F \equiv \mu\big|_{T=0} = \frac{p_F^2}{2m} = \frac{\hbar^2}{2m}\left(6\pi^2\frac{N}{gV}\right)^{2/3} = \left(\frac{9\pi^4}{2}\right)^{1/3} T_0 \approx 7.595\,T_0, \tag{3.55a} $$
where T₀ is the quantum temperature scale defined by Eq. (35). This formula quantifies the low-temperature trend of the function μ(T), clearly visible in Fig. 1, and in particular explains the ratio εF/T₀ mentioned in Sec. 2. Note also a simple and very useful relation,
$$ \varepsilon_F = \frac{3}{2}\,\frac{N}{g_3(\varepsilon_F)}, \quad\text{i.e.}\quad g_3(\varepsilon_F) = \frac{3}{2}\,\frac{N}{\varepsilon_F}, \tag{3.55b} $$

that may be obtained immediately from the comparison of Eqs. (43) and (54).

The total energy of the degenerate Fermi gas may be (equally easily) calculated from Eq. (52):

²³ Note that in the electronic engineering literature, μ is usually called the Fermi level, for any temperature.


$$ E = \frac{gV}{(2\pi\hbar)^3}\int_0^{p_F} \frac{p^2}{2m}\,4\pi p^2\,dp = \frac{gV}{2\pi^2\hbar^3}\,\frac{p_F^5}{10m} = \frac{3}{5}N\varepsilon_F, \tag{3.56} $$
showing that the average energy, ⟨ε⟩ ≡ E/N, of a particle inside the Fermi sea is equal to 3/5 of that (εF) of the particles in the most energetic occupied states, on the Fermi surface. Since, according to the basic formulas of Chapter 1, at zero temperature H = G = μN and F = E, the only thermodynamic variable still to be calculated is the gas pressure P. For it, we could use either of the thermodynamic relations P = (H – E)/V or P = –(∂F/∂V)_T, but it is even easier to use our recent result (48). Together with Eq. (56), it yields
$$ P = \frac{2}{3}\frac{E}{V} = \frac{2}{5}\frac{N\varepsilon_F}{V} = \left(\frac{36\pi^4}{125}\right)^{1/3} P_0 \approx 3.035\,P_0, \quad\text{where } P_0 \equiv nT_0 = \frac{\hbar^2 n^{5/3}}{m g^{2/3}}. \tag{3.57} $$
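The dimensionless coefficients quoted in Eqs. (55a) and (57), and the average ⟨ε⟩ = (3/5)εF behind Eq. (56), may be checked with a few lines of Python (an added sanity check; the quadrature uses arbitrary units with m = p_F = 1):

```python
import math

# Dimensionless coefficients quoted in Eqs. (55a) and (57):
c_mu = (9 * math.pi ** 4 / 2) ** (1 / 3)        # epsilon_F / T0
c_P = (36 * math.pi ** 4 / 125) ** (1 / 3)      # P / P0
assert abs(c_mu - 7.595) < 5e-3
assert abs(c_P - 3.035) < 5e-3

# Internal consistency with P = (2/5) n eps_F, Eq. (57):
assert abs(c_P - 0.4 * c_mu) < 1e-12

# Average energy per particle over the Fermi sphere (Eq. (56)):
# <eps> = [∫ (p²/2m) p² dp] / [∫ p² dp] = (3/5) eps_F; here m = p_F = 1,
# so eps_F = 1/2 and the ratio must equal 0.3 (midpoint-rule quadrature).
n = 100000
h = 1.0 / n
num = sum(((i + 0.5) * h) ** 4 for i in range(n)) * h / 2   # ∫ (p²/2) p² dp
den = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h       # ∫ p² dp
assert abs(num / den - 0.3) < 1e-6
print("Eqs. (55a)-(57) coefficients verified")
```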

From here, it is straightforward to calculate the isothermal bulk modulus (reciprocal compressibility),²⁴
$$ K_T \equiv -V\left(\frac{\partial P}{\partial V}\right)_T = \frac{2}{3}\frac{N\varepsilon_F}{V}, \tag{3.58} $$

which is frequently simpler to measure experimentally than P itself.

Perhaps the most important example²⁵ of a degenerate Fermi gas is the conduction electrons in metals – the electrons that belong to the outer shells of isolated atoms but become shared in solid metals, and as a result can move through the crystal lattice almost freely. Though the electrons (which are fermions with spin s = ½ and hence with the spin degeneracy g = 2s + 1 = 2) are negatively charged, the Coulomb interaction of the conduction electrons with each other is substantially compensated by the positively charged ions of the atomic lattice, so they follow the simple model discussed above, in which the interaction is disregarded, reasonably well. This is especially true for alkali metals (forming Group 1 of the periodic table of elements), whose experimentally measured Fermi surfaces are spherical within 1% – even within 0.1% for Na. Table 1 lists, in particular, the experimental values of the bulk modulus for such metals, together with the values given by Eq. (58), using the εF calculated from Eq. (55) with the experimental density of the conduction electrons. The agreement is pretty impressive, taking into account that the simple theory described above completely ignores the Coulomb and exchange interactions of the electrons. This agreement implies that, surprisingly, the experimentally observed firmness of solids (or at least metals) is predominantly due to the kinetic energy (3) of the conduction electrons, rather than to any electrostatic interactions – though, to be fair, these interactions are the crucial factor defining the equilibrium value

²⁴ For a general discussion of this notion, see, e.g., CM Eqs. (7.32) and (7.36).

²⁵ Recently, nearly degenerate gases (with εF ~ 5T) have been formed of weakly interacting Fermi atoms as well – see, e.g., K. Aikawa et al., Phys. Rev. Lett. 112, 010404 (2014), and references therein. Another interesting example of a system that may be approximately treated as a degenerate Fermi gas is the set of Z >> 1 electrons in a heavy atom. However, in this system the account of the electron interaction, via the electrostatic field they create, is important. Since for this Thomas-Fermi model of atoms the thermal effects are unimportant, it was discussed already in the quantum-mechanical part of this series (see QM Chapter 8). However, its analysis may be streamlined using the notion of the chemical potential, introduced only in this course – a problem left for the reader's exercise.


of n. Numerical calculations using more accurate approximations (e.g., the Density Functional Theory²⁶), which agree with experiment to a few-percent accuracy, confirm this conclusion.²⁷

Table 3.1. Experimental and theoretical parameters of the electrons' Fermi sea in some alkali metals²⁸

Metal | εF (eV), Eq. (55) | K (GPa), Eq. (58) | K (GPa), experiment | γ (mcal/(mole·K²)), Eq. (69) | γ (mcal/(mole·K²)), experiment
Na    | 3.24 | 9.23 | 6.42 | 0.26 | 0.35
K     | 2.12 | 3.19 | 2.81 | 0.40 | 0.47
Rb    | 1.85 | 2.30 | 1.92 | 0.46 | 0.58
Cs    | 1.59 | 1.54 | 1.43 | 0.53 | 0.77
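The Na row of this table can be reproduced from Eqs. (55a) and (58). In the sketch below, the conduction-electron density of sodium, n ≈ 2.65×10²⁸ m⁻³, is an assumed input (a commonly quoted experimental value, not given in this excerpt); the constants are rounded CODATA values:

```python
import math

hbar = 1.0546e-34   # J*s
m_e = 9.109e-31     # kg
eV = 1.602e-19      # J
n = 2.65e28         # m^-3; assumed conduction-electron density of Na
g = 2               # spin degeneracy of electrons

eps_F = (hbar ** 2 / (2 * m_e)) * (6 * math.pi ** 2 * n / g) ** (2 / 3)  # Eq. (55a)
K = (2 / 3) * n * eps_F                                                  # Eq. (58)

assert abs(eps_F / eV - 3.24) < 0.05      # eV, cf. the Na row of Table 3.1
assert abs(K / 1e9 - 9.23) < 0.15         # GPa, cf. the Na row of Table 3.1
print(f"Na: eps_F = {eps_F/eV:.2f} eV, K = {K/1e9:.2f} GPa")
```

Note that K here is of the order of 10 GPa, i.e. comparable to the measured stiffness of the metal, supporting the conclusion drawn in the text.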

Looking at the values of εF listed in this table, note that room temperatures (T_K ~ 300 K) correspond to T ~ 25 meV. As a result, most experiments with metals, at least in their solid or liquid form, are performed in the limit T << εF.

…N >> 1 (think of N ~ 10²³) interacting particles, and only then calculating appropriate ensemble averages, the Gibbs approach reduces finding the free energy (and then, from thermodynamic relations, all other thermodynamic variables) to the calculation of just one integral on the right-hand side of Eq. (89). Still, this integral is 3N-dimensional and may be worked out analytically only if the particle interactions are weak in some sense. Indeed, the last form of Eq. (89) makes it especially evident that if U → 0 everywhere, the term in the parentheses under the integral vanishes, and so does the integral itself, and hence the addition to F_ideal.

Now let us see what this integral would yield for the simplest, short-range interactions, in which the potential U is substantial only when the mutual distance r_kk' ≡ r_k – r_k' between the centers of two particles is smaller than a certain value 2r₀, where r₀ may be interpreted as the particle's radius. If the gas is sufficiently dilute, so that the radius r₀ is much smaller than the average distance r_ave between the particles, the integral in the last form of Eq. (89) is of the order of (2r₀)^(3N), i.e. much smaller than (r_ave)^(3N) ≈ V^N. Then we may expand the logarithm in that expression into the Taylor series with respect to the small second term in the square brackets, and keep only its first non-zero term:

F  Fideal 





T e U / T  1 d 3 r1 ...d 3 rN . N  V r V k

(3.90)

Moreover, if the gas density is so low that the chances for three or more particles to come close to each other and interact (collide) simultaneously are typically very small, pair collisions are the most important ones. In this case, we may recast the integral in Eq. (90) as a sum of N(N – 1)/2 ≈ N²/2 similar terms describing such pair interactions, each of the type
$$ V^{N-2}\int_{r_k,\, r_{k'} \in V} \left(e^{-U(\mathbf{r}_{kk'})/T} - 1\right) d^3r_k\, d^3r_{k'}. \tag{3.91} $$
It is convenient to think about r_kk' ≡ r_k – r_k' as the radius vector of the particle number k in the reference frame with the origin placed at the center of the particle number k' – see Fig. 6a. Then in Eq. (91), we may first calculate the integral over r_k', while keeping the distance vector r_kk', and hence U(r_kk'), constant, getting one more factor V. Moreover, since all particle pairs are similar, in the remaining integral over r_kk' we may drop the radius vector's index, so Eq. (90) becomes


F  Fideal 





T N 2 N 1 T V  e U (r ) / T  1 d 3 r  Fideal  N 2 B(T ), N 2 V V

(3.92)

where the function B(T), called the second virial coefficient,⁴² has an especially simple form for spherically-symmetric interactions:

Second virial coefficient
$$ B(T) \equiv \frac{1}{2}\int \left(1 - e^{-U(r)/T}\right) d^3r = \frac{1}{2}\int_0^\infty 4\pi r^2\,dr \left(1 - e^{-U(r)/T}\right). \tag{3.93} $$

From Eq. (92) and the second of the thermodynamic relations (1.35), we already know something particular about the equation of state P(V, T) of such a gas:
$$ P = -\left(\frac{\partial F}{\partial V}\right)_{T,N} = P_{ideal} + \frac{N^2 T}{V^2}\,B(T) = T\left[\frac{N}{V} + B(T)\,\frac{N^2}{V^2}\right]. \tag{3.94} $$

We see that at a fixed gas density n = N/V, the pair interaction creates an additional pressure proportional to (N/V)² = n², with the temperature dependence given by the product B(T)T.

Fig. 3.6. The definition of the interparticle distance vectors at their (a) pair and (b) triple interactions.

Let us calculate B(T) for a few simple models of particle interactions. The solid curve in Fig. 7 shows (schematically) a typical form of the interaction potential between electrically neutral atoms/molecules. At large distances, the interaction of particles without their own permanent electric dipole moment p is dominated by the attraction (the so-called London dispersion force) between the correlated components of the spontaneously induced dipole moments, giving U(r) ∝ r⁻⁶ at r → ∞.⁴³ At closer distances the potential is repulsive, growing very fast at r → 0, but its quantitative form is specific for particular atoms/molecules.⁴⁴ The crudest description of such repulsion is given by the so-called hardball (or "hard-sphere") model:

⁴² The term "virial", from the Latin viris (meaning "force"), was introduced to molecular physics by R. Clausius. The motivation for the adjective "second" for B(T) is evident from the last form of Eq. (94), with the "first virial coefficient", standing before the N/V ratio and sometimes denoted A(T), equal to 1 – see also Eq. (100) below.

⁴³ Indeed, independent fluctuation-induced components p(t) and p'(t) of the dipole moments of two particles have random mutual orientation, so that the time average of their interaction energy, proportional to p(t)p'(t)/r³, vanishes. However, the electric field E of each dipole p, proportional to r⁻³, induces a correlated component of p', also proportional to r⁻³, giving an interaction energy U(r) proportional to p'E ∝ r⁻⁶, with a non-zero statistical average. Quantitative discussions of this effect, within several models, may be found, for example, in QM Chapters 3, 5, and 6.

⁴⁴ Note that the particular form of the first term in the approximation U(r) = a/r¹² – b/r⁶ (called either the Lennard-Jones potential or the "12-6 potential"), suggested in 1924, lacks physical justification, and in professional physics was soon replaced with other approximations, including the so-called exp-6 model, which fits most experimental data much better. However, the Lennard-Jones potential still keeps creeping from one undergraduate textbook to another, apparently for no better reason than enabling a simple analytical calculation of the equilibrium distance between the particles at T → 0.

$$ U(r) = \begin{cases} +\infty, & \text{for } 0 \le r < 2r_0, \\ 0, & \text{for } 2r_0 < r < \infty, \end{cases} \tag{3.95} $$
– see the dashed line and the inset in Fig. 7. (The distance 2r₀ is sometimes called the van der Waals radius of the particle.)

Fig. 3.7. Pair interactions of particles. Solid line: a typical interaction potential U(r), with a minimum U_min; dashed line: its hardball model (95); dash-dotted line: the improved model (97) – all schematically. The inset illustrates the hardball model's physics.

As Eq. (93) shows, in this model the second virial coefficient is temperature-independent:
$$ B(T) = b \equiv \frac{1}{2}\int_0^{2r_0} 4\pi r^2\,dr = \frac{2\pi}{3}(2r_0)^3 = 4V_0, \quad\text{where } V_0 \equiv \frac{4\pi}{3}r_0^3, \tag{3.96} $$

so the equation of state (94) still gives a linear dependence of pressure on temperature. A correction to this result may be obtained by the following approximate account of the long-range attraction (see the dash-dotted line in Fig. 7):⁴⁵
$$ U(r) = \begin{cases} +\infty, & \text{for } 0 \le r < 2r_0, \\ U(r) < 0,\ \text{with } |U| \ll T, & \text{for } 2r_0 < r < \infty. \end{cases} \tag{3.97} $$

For this improved model, Eq. (93) yields:
$$ B(T) = b + \frac{1}{2}\int_{2r_0}^\infty 4\pi r^2\,dr\,\frac{U(r)}{T} = b - \frac{a}{T}, \quad\text{with } a \equiv -2\pi\int_{2r_0}^\infty U(r)\,r^2\,dr > 0. \tag{3.98} $$
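Equation (98) is easy to test numerically. The sketch below uses a specific toy potential (a hardball core of diameter d = 2r₀ plus a shallow square attractive well extending to 2d — a hypothetical choice for illustration, in the spirit of Eq. (97), not a model from the text), and compares the exact integral (93) with the b – a/T approximation:

```python
import math

# Toy potential (assumed for illustration): U = +inf for r < d,
# U = -u for d <= r < 2d, U = 0 beyond, with d = 2*r0 and u << T.
d, u, T = 1.0, 0.01, 1.0          # u/T = 0.01 << 1

b = (2 * math.pi / 3) * d ** 3                      # Eq. (96), equals 4*V0
a = 2 * math.pi * u * (2 ** 3 - 1) * d ** 3 / 3     # a = -2π ∫ U(r) r² dr > 0

def integrand(r):
    U = math.inf if r < d else (-u if r < 2 * d else 0.0)
    return (1.0 - math.exp(-U / T)) * 4 * math.pi * r * r

# Midpoint-rule evaluation of the exact Eq. (93):
n, R = 100000, 3.0 * d
h = R / n
B_exact = 0.5 * sum(integrand((i + 0.5) * h) for i in range(n)) * h

assert abs(B_exact - (b - a / T)) < 1e-3 * b        # Eq. (98) holds for u << T
print(f"B = {B_exact:.5f}, b - a/T = {b - a / T:.5f}")
```

The residual difference is of the order of (u/T)², i.e. exactly the term neglected when the exponent in Eq. (93) was linearized.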

In this model, the equation of state (94) acquires a temperature-independent term:
$$ P = T\left[\frac{N}{V} + \left(b - \frac{a}{T}\right)\frac{N^2}{V^2}\right] = T\left[\frac{N}{V} + b\,\frac{N^2}{V^2}\right] - a\,\frac{N^2}{V^2}. \tag{3.99} $$

Still, the correction to the ideal-gas pressure is proportional to (N/V)² and has to be relatively small for this result to be valid.

⁴⁵ The strong inequality |U| << T …

3.8. …N >> 1 particles is confined in a container of volume V and wall surface area A. The particles may condense on the walls, releasing energy Δ per particle and forming an ideal 2D gas. Calculate the equilibrium number of condensed particles and the gas pressure, and discuss their temperature dependences.

3.9. The inner surfaces of the walls of a closed container of volume V, filled with N >> 1 particles, have N_S >> 1 similar traps (small potential wells). Each trap can hold only one particle, at a potential energy –Δ < 0 relative to that in the volume. Assuming that the gas of the particles in the volume is ideal and classical, derive an equation for the chemical potential μ of the system in equilibrium, and use it to calculate the potential and the gas pressure in the limits of small and large values of the N/N_S ratio.

3.10. Calculate the magnetic response (the Pauli paramagnetism) of a degenerate ideal gas of spin-½ particles to a weak external magnetic field, due to a partial spin alignment with the field.

3.11. Calculate the magnetic response (the Landau diamagnetism) of a degenerate ideal gas of electrically charged fermions to a weak external magnetic field, due to their orbital motion.

3.12.* Explore the Thomas-Fermi model of a heavy atom, with nuclear charge Q = Ze >> e, in which the electrons are treated as a degenerate Fermi gas, interacting with each other only via their


contribution to the common electrostatic potential φ(r). In particular, derive the ordinary differential equation obeyed by the radial distribution of the potential, and use it to estimate the effective radius of the atom.⁵⁰

3.13.* Use the Thomas-Fermi model, explored in the previous problem, to calculate the total binding energy of a heavy atom. Compare the result with that of a simpler model, in which the Coulomb electron-electron interaction is completely ignored.

3.14. Calculate the characteristic Thomas-Fermi length λ_TF of a weak electric field's screening by conduction electrons in a metal, by modeling their ensemble as a degenerate, isotropic Fermi gas, with the electrons' interaction limited (as in the two previous problems) by their contribution to the common electrostatic potential.
Hint: Assume that λ_TF is much larger than the Bohr radius r_B.

3.15. For a degenerate ideal 3D Fermi gas of N particles confined in a rigid-wall box of volume V, calculate the temperature effects on its pressure P and the heat capacity difference (C_P – C_V), in the leading approximation in T << εF.

… N >> 1 non-interacting, indistinguishable, ultra-relativistic particles.⁵¹ Calculate E and also the gas pressure P explicitly in the degenerate gas limit T → 0. In particular, is Eq. (48) valid in this case?

3.18. Use Eq. (49) to calculate the pressure of an ideal gas of ultra-relativistic, indistinguishable quantum particles, for an arbitrary temperature, as a function of the total energy E of the gas and its volume V. Compare the result with the corresponding relations for the electromagnetic blackbody radiation and for an ideal gas of non-relativistic particles.

3.19.* Calculate the speed of sound in an ideal gas of ultra-relativistic fermions of density n at negligible temperature.

3.20. Calculate the basic thermodynamic characteristics, including all relevant thermodynamic potentials, specific heat, and the surface tension of a non-relativistic 2D electron gas with a constant areal density n ≡ N/A:
(i) at T = 0, and

⁵⁰ Since this problem and the next one are important for atomic physics, and at their solution thermal effects may be ignored, they were given in Chapter 8 of the QM part of the series as well, for the benefit of the readers who would not take this SM course. Note, however, that the argumentation in their solutions may be streamlined by using the notion of the chemical potential μ, which was introduced only in this course.

⁵¹ This is, for example, an approximate but reasonable model for electrons in a white dwarf star. (Their Coulomb interaction is mostly compensated by the electric charges of the nuclei of fully ionized helium atoms.)


(ii) at low temperatures (in the lowest nonvanishing order in T/εF << 1).⁵²

3.21. …N >> 1 particles as a whole, while N₀ is the number of particles in the condensate alone.

3.22.* For a spatially-uniform ideal Bose gas, calculate the law of the chemical potential's disappearance at T → T_c, and use the result to prove that at the critical point T = T_c, the heat capacity C_V is a continuous function of temperature.

3.23. In Chapter 1, several thermodynamic relations involving entropy have been discussed, including the first of Eqs. (1.39): S = –(∂G/∂T)_P. If we combine this expression with Eq. (1.56), G = μN, it looks like, for the Bose-Einstein condensate, the entropy should vanish, because its chemical potential μ equals zero at temperatures below the critical point T_c. On the other hand, by dividing both parts of Eq. (1.19) by dT, and assuming that at this temperature change the volume is kept constant, we get C_V = T(∂S/∂T)_V. (This equality was also mentioned in Chapter 1.) If C_V is known as a function of temperature, the last relation may be integrated over T to calculate S:
$$ S = \int_{V=\mathrm{const}} \frac{C_V(T)}{T}\,dT + \mathrm{const}. $$
According to Eq. (80), the specific heat of the Bose-Einstein condensate is proportional to T^(3/2), so the integration gives a non-zero entropy S ∝ T^(3/2). Resolve this apparent contradiction, and calculate the genuine entropy at T = T_c.

3.24. The standard analysis of the Bose-Einstein condensation, outlined in Sec. 4, may seem to ignore the energy quantization of the particles confined in volume V. Use the particular case of a cubic confining volume V = a×a×a with rigid walls to analyze whether the main conclusions of the standard theory, in particular Eq. (71) for the critical temperature of a system of N >> 1 particles, are affected by such quantization.

3.25.* N >> 1 non-interacting bosons are confined in a soft, spherically-symmetric potential well U(r) = mω²r²/2. Develop the theory of the Bose-Einstein condensation in this system; in particular,

⁵² This condition may be approached reasonably well, for example, in 2D electron gases formed in semiconductor heterostructures (see, e.g., the discussion in QM Sec. 1.6, and the solution of Problem 3.2 of that course), due not only to the electron field's compensation by background ionized atoms, but also to its screening by the highly doped semiconductor bulk.


prove Eq. (74b), and calculate the critical temperature T_c*. Looking at the solution, what is the most straightforward way to detect the condensation in experiment?

3.26. Calculate the chemical potential of an ideal, uniform 2D gas of spin-0 Bose particles as a function of its areal density n (the number of particles per unit area), and find out whether such a gas can condense at low temperatures. Review your result for the case of a large (N >> 1) but finite number of particles.

3.27. Can the Bose-Einstein condensation be achieved in a 2D system of N >> 1 non-interacting bosons placed into a soft, axially-symmetric potential well U(x, y) = mω²ρ²/2, where ρ² ≡ x² + y², and {x, y} are the Cartesian coordinates in the particle confinement plane? If yes, calculate the critical temperature of the condensation.

3.28. Use Eqs. (115) and (120) to calculate the third virial coefficient C(T) for the hardball model of particle interactions.

3.29. Assuming the hardball model, with volume V₀ per molecule, for the liquid phase, describe how the results of Problem 7 change if the liquid forms spherical drops of radius R >> V₀^(1/3). Briefly discuss the implications of the result for water cloud formation.
Hint: Surface effects in macroscopic volumes of liquids may be well described by attributing an additional energy γ (equal to the surface tension) to the unit surface area.⁵³

3.30. A 1D Tonks' gas is a set of N classical hard rods of length l confined to a segment of length L > Nl, in thermal equilibrium at temperature T:
(i) Calculate the system's average internal energy, entropy, both heat capacities, and the average force F exerted by the rods on the "walls" confining them to the segment L.
(ii) Expand the calculated equation of state F(L, T) into the Taylor series in the linear density N/L of the rods, find all virial coefficients, and compare the 2nd of them with the result following from the 1D version of Eq. (93).

⁵³ See, e.g., CM Sec. 8.2.


Chapter 4. Phase Transitions

This chapter gives a brief discussion of the coexistence between different states ("phases") of systems consisting of many similar interacting particles, and of transitions between these phases. Due to the complexity of these phenomena, quantitative analytical results in this field have been obtained only for a few very simple models, typically giving rather approximate descriptions of real systems.

4.1. First-order phase transitions

From our everyday experience, say with water ice, liquid water, and water vapor, we know that one chemical substance (i.e. a system of many similar particles) may exist in different stable states – phases. A typical substance may have:
(i) a dense solid phase, in which interparticle forces keep all atoms/molecules in virtually fixed relative positions, with just small thermal fluctuations about them;
(ii) a liquid phase, of comparable density, in which the relative distances between atoms or molecules are almost constant, but these particles are virtually free to move around each other; and
(iii) a gas phase, typically of a much lower density, in which the molecules are virtually free to move all around the containing volume.¹

Experience also tells us that at certain conditions, two different phases may be in thermal and chemical equilibrium – say, ice floating on water at the freezing-point temperature. Actually, in Sec. 3.4 we already discussed a quantitative theory of one such equilibrium: the Bose-Einstein condensate's coexistence with the uncondensed gas of similar particles. However, this is a rather exceptional case, in which the phase coexistence is due to the quantum nature of the particles (bosons), rather than to their direct interaction. Much more frequently, the formation of different phases, and transitions between them, are due to the particles' repulsive and attractive interactions, briefly discussed in Sec. 3.5.
Phase transitions are sometimes classified by their order.² I will start their discussion with the so-called first-order phase transitions, which feature a non-zero latent heat Λ – the thermal energy that is necessary to turn one phase into the other completely, even if temperature and pressure are kept constant.³

Unfortunately, even the simplest "microscopic" models of particle interaction, such as those discussed in Sec. 3.5, give rather complex equations of state. (As a reminder, even the simplest hardball model leads to the series (3.100), whose higher virial coefficients defy analytical calculation.) This is

¹ Plasma, in which atoms are partly or completely ionized, is frequently mentioned as one more phase, on equal footing with the three phases listed above, but one has to remember that in contrast to them, a typical electroneutral plasma consists of particles of two very different sorts – positive ions and electrons.

² Such classification schemes, started by Paul Ehrenfest in the early 1930s, have been repeatedly modified to accommodate new results for particular systems, and by now only the "first-order phase transition" is still a generally accepted term, but with a definition different from the original one.

³ For example, for water the latent heat of vaporization at the ambient pressure is as high as ~2.2×10⁶ J/kg, i.e. ~0.4 eV per molecule, making this ubiquitous liquid indispensable for fire fighting. (The latent heat of water ice's melting is an order of magnitude lower.)

© K. Likharev


why I will follow the tradition to discuss the first-order phase transitions using a simple phenomenological model suggested in 1873 by Johannes Diderik van der Waals.

For its introduction, it is useful to recall that in Sec. 3.5 we have derived Eq. (3.99) – the equation of state for a classical gas of weakly interacting particles, which takes into account (albeit approximately) both interaction components necessary for a realistic description of gas condensation/liquefaction: the long-range attraction of the particles and their short-range repulsion. Let us rewrite that result as follows:
$$ P + a\,\frac{N^2}{V^2} = \frac{NT}{V}\left(1 + \frac{Nb}{V}\right). \tag{4.1} $$
As we saw while deriving this formula, the physical meaning of the constant b is the effective volume of space taken by a particle pair collision – see Eq. (3.96). The relation (1) is quantitatively valid only if the second term in the parentheses is small, Nb << V.

…At t > 1, i.e. T > T_c, the pressure increases monotonically at gas compression (qualitatively, as in an ideal classical gas, with P = NT/V, to which the van der Waals system tends at T >> T_c), i.e. with (∂P/∂V)_T < 0 at all points.⁶ However, below the critical temperature T_c, any isotherm features a segment with (∂P/∂V)_T > 0. It is easy to understand that, at least in a constant-pressure experiment (see, for example, Fig. 1.5),⁷ these segments describe a mechanically unstable equilibrium. Indeed, if due to a random fluctuation the volume deviated upward from its equilibrium value, the pressure would also increase, forcing the environment (say, the heavy piston in Fig. 1.5) to allow further expansion of the system, leading to an even higher pressure, etc. A similar deviation of the volume downward would lead to its similar avalanche-like decrease. Such avalanche instability would develop further and further, until the system has reached one of the stable branches with a negative slope (∂P/∂V)_T.
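This statement about the isotherm slopes is easy to check numerically. The sketch below (an added illustration) uses the reduced form of the van der Waals equation, p = 8t/(3v − 1) − 3/v², where p = P/P_c, v = V/V_c, and t = T/T_c — the standard normalized form to which Eq. (2) is brought; the explicit Eq. (3) falls outside this excerpt:

```python
def p(v, t):
    # Reduced van der Waals isotherm (assumed standard form of Eq. (3)).
    return 8 * t / (3 * v - 1) - 3 / v ** 2

def has_rising_segment(t, v_lo=0.45, v_hi=5.0, n=20000):
    # Scan the isotherm for any interval with (dP/dV)_T > 0.
    h = (v_hi - v_lo) / n
    return any(p(v_lo + (i + 1) * h, t) > p(v_lo + i * h, t) + 1e-12
               for i in range(n))

assert not has_rising_segment(1.0)   # critical isotherm: only an inflection
assert not has_rising_segment(1.2)   # t > 1: monotonic decrease
assert has_rising_segment(0.9)       # t < 1: unstable (dP/dV > 0) segment
print("(dP/dV)_T > 0 segments appear only for t < 1")
```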
In the range where the single-phase equilibrium state is unstable, the system as a whole may be stable only if it consists of two phases (one with a smaller, and another with a higher density n = N/V), described by the two stable branches – see Fig. 2.

Fig. 4.2. Phase equilibrium at T < T_c (schematically). (The P vs. V plot shows the stable liquid phase, the unstable branch between points 1' and 2', the stable gaseous phase, and the horizontal line P = P₀(T) connecting points 1 and 2, along which liquid and gas are in equilibrium; A_u and A_d denote the shaded areas above and below this line.)

⁶ The special choice of the numerical coefficients in Eq. (3) makes the border between these two regions take place exactly at t = 1, i.e. at the temperature equal to T_c, with the critical point's coordinates equal to P_c and V_c.

⁷ Actually, this assumption is not crucial for our analysis of mechanical stability, because if a fluctuation takes place in a small part of the total volume V, its other parts play the role of a pressure-fixing environment.


In order to understand the basic properties of this two-phase system, let us recall the general conditions of the thermodynamic equilibrium of two systems, which have been discussed in Chapter 1:

Phase equilibrium conditions
$$ T_1 = T_2 \quad\text{(thermal equilibrium)}, \tag{4.5} $$
$$ \mu_1 = \mu_2 \quad\text{("chemical" equilibrium)}, \tag{4.6} $$
the latter condition meaning that the Gibbs energy per particle in both systems has to be the same. To those, we should add the evident condition of mechanical equilibrium,
$$ P_1 = P_2 \quad\text{(mechanical equilibrium)}, \tag{4.7} $$
which immediately follows from the balance of the normal forces exerted on any inter-phase boundary.

If we discuss isotherms, Eq. (5) is fulfilled automatically, while Eq. (7) means that the effective isotherm P(V) describing a two-phase system should be a horizontal line – see Fig. 2:⁸
$$ P = P_0(T). \tag{4.8} $$
Along this line, the internal properties of each phase do not change; only the particle distribution does: it evolves gradually from all particles being in the liquid phase at point 1 to all particles being in the gas phase at point 2.⁹ In particular, according to Eq. (6), the chemical potentials μ of the phases should be equal at each point of the horizontal line (8). This fact enables us to find the line's position: it has to connect the points 1 and 2 at which the chemical potentials of the two phases are equal to each other. Let us recast this condition as
$$ \int_1^2 d\mu = 0, \quad\text{i.e.}\quad \int_1^2 dG = 0, \tag{4.9} $$
where the integral may be taken along the single-phase isotherm. (For this mathematical calculation, the mechanical instability of the states on a certain part of this curve is not important.) By definition, along that curve, N = const and T = const, so that, according to Eq. (1.53c), dG = –SdT + VdP + μdN, for a slow (reversible) change, dG = VdP. Hence Eq. (9) yields
$$ \int_1^2 V\,dP = 0. \tag{4.10} $$
This equality means that in Fig. 2, the shaded areas A_d and A_u should be equal.¹⁰

As the same Fig. 2 shows, the Maxwell rule may be rewritten in a different form,

Maxwell equal-area rule
$$ \int_1^2 \left[P - P_0(T)\right] dV = 0, \tag{4.11} $$
which is more convenient for analytical calculations than Eq. (10) if the equation of state may be explicitly solved for P – as it is in the van der Waals model (2). Such a calculation (left for the reader's exercise) shows that for that model, the temperature dependence of the saturated vapor pressure at low T is exponential,¹¹
$$ P_0(T) \approx P_c \exp\left\{-\frac{\Delta}{T}\right\}, \quad\text{with } \Delta \equiv \frac{a}{b} = \frac{27}{8}T_c, \quad\text{for } T \ll T_c, \tag{4.12} $$
corresponding very well to the physical picture of the particle's thermal activation from a potential well of depth Δ.

The signature parameter of a first-order phase transition, the latent heat of evaporation,

Latent heat: definition
$$ \Lambda \equiv \int_1^2 dQ, \tag{4.13} $$
may also be found by a similar integration along the single-phase isotherm. Indeed, using Eq. (1.19), dQ = TdS, we get
$$ \Lambda = \int_1^2 T\,dS = T(S_2 - S_1). \tag{4.14} $$
Let us express the right-hand side of Eq. (14) via the equation of state. For that, let us take the full derivative of both sides of Eq. (6) over temperature, considering the value of G = μN for each phase as a function of P and T, and taking into account that according to Eq. (7), P₁ = P₂ = P₀(T):
$$ \left(\frac{\partial G_1}{\partial T}\right)_P + \left(\frac{\partial G_1}{\partial P}\right)_T \frac{dP_0}{dT} = \left(\frac{\partial G_2}{\partial T}\right)_P + \left(\frac{\partial G_2}{\partial P}\right)_T \frac{dP_0}{dT}. \tag{4.15} $$
According to the first of Eqs. (1.39), the partial derivative (∂G/∂T)_P is just minus the entropy, while according to the second of those equalities, (∂G/∂P)_T is the volume. Thus Eq. (15) becomes
$$ -S_1 + V_1\frac{dP_0}{dT} = -S_2 + V_2\frac{dP_0}{dT}. \tag{4.16} $$
Solving this equation for (S₂ – S₁), and plugging the result into Eq. (14), we get the following Clapeyron-Clausius formula:

Clapeyron-Clausius formula
$$ \Lambda = T(V_2 - V_1)\frac{dP_0}{dT}. \tag{4.17} $$
For the van der Waals model, this formula may be readily used for the analytical calculation of Λ in two limits: T << T_c …

⁸ Frequently, P₀(T) is called the saturated vapor pressure.

⁹ A natural question: is the two-phase state with P = P₀(T) the only state existing between points 1 and 2? Indeed, the branches 1-1' and 2-2' of the single-phase isotherm also have negative derivatives (∂P/∂V)_T, and hence are mechanically stable with respect to small perturbations. However, these branches are actually metastable, i.e. have a larger Gibbs energy per particle (i.e. μ) than the counterpart phase at the same P, and are hence unstable to larger perturbations – such as foreign microparticles (say, dust), protrusions on the confining walls, etc. In very controlled conditions, these single-phase "superheated" and "supercooled" states can survive almost all the way to the zero-derivative points 1' and 2', leading to sudden jumps of the system into the counterpart phase. (At fixed pressure, such jumps go as shown by the dashed lines in Fig. 2.) In particular, at the atmospheric pressure, purified water may be supercooled to almost –50°C, and superheated to nearly +270°C. However, at more realistic conditions, unavoidable perturbations result in the formation of the two-phase coexistence close to points 1 and 2.

¹⁰ This Maxwell equal-area rule (also called "Maxwell's construct") was suggested by J. C. Maxwell in 1875, using more complex reasoning.

…(with ∂²F/∂η² < 0 at η = 0). Thus, below T_c the system is in the ferromagnetic phase, with one of two possible directions of the average spontaneous magnetization, so that the critical (Curie³⁹) temperature given by Eq. (72) marks the transition between the paramagnetic and ferromagnetic phases. (Since the stable minimum value of the free energy F is a continuous function of temperature at T = T_c, this phase transition is continuous.)

Now let us repeat this graphics analysis to examine how each of these phases responds to an external magnetic field h ≠ 0. According to Eq. (70), the effect of h is just a horizontal shift of the straight-line plot of its left-hand side – see Fig. 9. (Note a different, here more convenient, normalization of both axes.)
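Returning to the van der Waals isotherms: the equal-area rule (11) is straightforward to implement numerically in the reduced variables p = P/P_c, v = V/V_c, t = T/T_c, in which the equation of state takes the standard form p = 8t/(3v − 1) − 3/v² (the explicit Eq. (3) falls outside this excerpt). The sketch below (an added illustration; the search bracket for p₀ is a choice valid for t = 0.9, lying between the two spinodal pressures) bisects on p₀ until the two shaded areas of Fig. 2 cancel:

```python
import math

def p(v, t):
    # Reduced van der Waals isotherm (assumed standard form of Eq. (3)).
    return 8 * t / (3 * v - 1) - 3 / v ** 2

def crossings(p0, t, v_lo=0.40, v_hi=30.0, n=30000):
    # Grid search for the volumes where p(v) = p0.
    vs, h = [], (v_hi - v_lo) / n
    prev = p(v_lo, t) - p0
    for i in range(1, n + 1):
        cur = p(v_lo + i * h, t) - p0
        if prev == 0.0 or prev * cur < 0:
            vs.append(v_lo + (i - 0.5) * h)
        prev = cur
    return vs

def area(p0, t):
    # ∫[p(v) - p0] dv between the outer crossings, Eq. (11);
    # the antiderivative of p is (8t/3) ln(3v - 1) + 3/v.
    vs = crossings(p0, t)
    v1, v2 = vs[0], vs[-1]
    F = lambda v: (8 * t / 3) * math.log(3 * v - 1) + 3 / v
    return F(v2) - F(v1) - p0 * (v2 - v1)

def p_saturated(t, lo, hi):
    # area(p0) decreases with p0, so plain bisection works.
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if area(mid, t) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p0 = p_saturated(0.9, 0.45, 0.72)
assert 0.60 < p0 < 0.70            # well below the critical pressure p = 1
assert abs(area(p0, 0.9)) < 1e-4   # the two areas of Fig. 2 indeed cancel
print(f"t = 0.9: reduced saturated vapor pressure p0 = {p0:.3f}")
```

This is exactly the numerical counterpart of the "Maxwell construct" of footnote 10; repeating it for a set of t < 1 would trace the whole coexistence line P₀(T), whose low-temperature behavior is given by Eq. (12).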

(4.5)

1   2 (“chemical” equilibrium),

(4.6)

the latter condition meaning that the average energy of a single particle in both systems has to be the same. To those, we should add the evident condition of mechanical equilibrium, P1  P2 (mechanical equilibrium),

(4.7)

which immediately follows from the balance of the normal forces exerted on any inter-phase boundary. If we discuss isotherms, Eq. (5) is fulfilled automatically, while Eq. (7) means that the effective isotherm P(V) describing a two-phase system should be a horizontal line – see Fig. 2:8 P  P0 (T ) .

(4.8)

Along this line, the internal properties of each phase do not change; only the particle distribution is: it evolves gradually from all particles being in the liquid phase at point 1 to all particles being in the gas phase at point 2.9 In particular, according to Eq. (6), the chemical potentials  of the phases should be equal at each point of the horizontal line (8). This fact enables us to find the line’s position: it has to connect points 1 and 2 in that the chemical potentials of the two phases are equal to each other. Let us recast this condition as 2

2

 d  0, i.e.  dG  0 , 1

(4.9)

1

where the integral may be taken along the single-phase isotherm. (For this mathematical calculation, the mechanical instability of states on a certain part of this curve is not important.) By definition, along that curve, N = const and T = const, so according to Eq. (1.53c), dG = –SdT + VdP +dN, for a slow (reversible) change, dG = VdP. Hence Eq. (9) yields 2

 VdP  0 .

(4.10)

1

This equality means that in Fig. 2, the shaded areas Ad and Au should be equal. 10 8

Frequently, P0(T) is called the saturated vapor pressure. A natural question: is the two-phase state with P = P0(T) the only state existing between points 1 and 2? Indeed, the branches 1-1’ and 2-2’ of the single-phase isotherm also have negative derivatives (P/V)T and hence are mechanically stable with respect to small perturbations. However, these branches are actually metastable, i.e. have larger Gibbs energy per particle (i.e. ) than the counterpart phase at the same P, and are hence unstable to larger perturbations – such as foreign microparticles (say, dust), protrusions on the confining walls, etc. In very controlled conditions, these single-phase “superheated” and “supercooled” states can survive almost all the way to the zero-derivative points 1’ and 2’, leading to sudden jumps of the system into the counterpart phase. (At fixed pressure, such jumps go as shown by dashed lines in Fig. 2.) In particular, at the atmospheric pressure, purified water may be supercooled to almost –50C, and superheated to nearly +270C. However, at more realistic conditions, unavoidable perturbations result in the two-phase coexistence formation close to points 1 and 2. 10 This Maxwell equal-area rule (also called “Maxwell’s construct”) was suggested by J. C. Maxwell in 1875 using more complex reasoning. 9

Essential Graduate Physics - SM: Statistical Mechanics, Chapter 4

As the same Fig. 2 shows, the Maxwell equal-area rule may be rewritten in a different form,

\[ \int_1^2 \left[P - P_0(T)\right] dV = 0, \tag{4.11} \]

which is more convenient for analytical calculations than Eq. (10) if the equation of state may be explicitly solved for P – as it is in the van der Waals model (2). Such a calculation (left for the reader's exercise) shows that for that model, the temperature dependence of the saturated vapor pressure at low T is exponential,11

\[ P_0(T) \approx P_c \exp\left\{-\frac{\Delta}{T}\right\}, \quad \text{with } \Delta = \frac{a}{b} = \frac{27}{8}T_c, \quad \text{for } T \ll T_c, \tag{4.12} \]

corresponding very well to the physical picture of the particle's thermal activation from a potential well of depth Δ.

The signature parameter of a first-order phase transition, the latent heat of evaporation,

\[ \Lambda \equiv \int_1^2 dQ, \tag{4.13} \]

may also be found by a similar integration along the single-phase isotherm. Indeed, using Eq. (1.19), dQ = TdS, we get

\[ \Lambda = \int_1^2 T\,dS = T(S_2 - S_1). \tag{4.14} \]

Let us express the right-hand side of Eq. (14) via the equation of state. For that, let us take the full derivative of both sides of Eq. (6) over temperature, considering the value of G = μN for each phase as a function of P and T, and taking into account that according to Eq. (7), P1 = P2 = P0(T):

\[ \left(\frac{\partial G_1}{\partial T}\right)_P + \left(\frac{\partial G_1}{\partial P}\right)_T \frac{dP_0}{dT} = \left(\frac{\partial G_2}{\partial T}\right)_P + \left(\frac{\partial G_2}{\partial P}\right)_T \frac{dP_0}{dT}. \tag{4.15} \]

According to the first of Eqs. (1.39), the partial derivative (∂G/∂T)P is just minus the entropy, while according to the second of those equalities, (∂G/∂P)T is the volume. Thus Eq. (15) becomes

\[ -S_1 + V_1 \frac{dP_0}{dT} = -S_2 + V_2 \frac{dP_0}{dT}. \tag{4.16} \]

Solving this equation for (S2 – S1), and plugging the result into Eq. (14), we get the following Clapeyron-Clausius formula:

\[ \Lambda = T(V_2 - V_1)\frac{dP_0}{dT}. \tag{4.17} \]

For the van der Waals model, this formula may be readily used for the analytical calculation of Λ in two limits: T << Tc and (Tc – T) << Tc. […]

[…] (∂²F/∂η² > 0 at the η ≠ 0 minima of F, and ∂²F/∂η² < 0 at η = 0). Thus, below Tc the system is in the ferromagnetic phase, with one of two possible directions of the average spontaneous magnetization, so the critical (Curie39) temperature given by Eq. (72) marks the transition between the paramagnetic and ferromagnetic phases. (Since the stable minimum value of the free energy F is a continuous function of temperature at T = Tc, this phase transition is continuous.) Now let us repeat this graphical analysis to examine how each of these phases responds to an external magnetic field h ≠ 0. According to Eq. (70), the effect of h is just a horizontal shift of the straight-line plot of its left-hand side – see Fig. 9. (Note a different, here more convenient, normalization of both axes.)

39 Named after Pierre Curie, rather than his (more famous) wife Marie Skłodowska-Curie.

Fig. 4.9. External field effects on (a) a paramagnet (T > Tc), and (b) a ferromagnet (T < Tc).

In the paramagnetic case (Fig. 9a), the resulting dependence hef(h) is evidently continuous, but the coupling effect (J > 0) makes it steeper than it would be without the spin interaction. This effect may be quantified by calculating the low-field susceptibility defined by Eq. (29). For that, let us notice that for small h, and hence small hef, the function tanh is approximately equal to its argument, so Eq. (70) is reduced to

\[ h_{\rm ef} \approx h + \frac{2Jd}{T}\,h_{\rm ef}, \qquad \text{for } h_{\rm ef} \ll T. \tag{4.73} \]

Solving this equation for hef, and then using Eq. (72), we get

\[ h_{\rm ef} = \frac{h}{1 - 2Jd/T} \equiv \frac{h}{1 - T_c/T}. \tag{4.74} \]

Recalling Eq. (66), we can rewrite this result for the order parameter:

\[ \eta = \frac{h_{\rm ef}}{T} = \frac{h}{T - T_c}, \tag{4.75} \]

so the low-field susceptibility is

\[ \chi \equiv \left.\frac{\partial\eta}{\partial h}\right|_{h \to 0} = \frac{1}{T - T_c}, \qquad \text{for } T > T_c. \tag{4.76} \]

This is the famous Curie-Weiss law, which shows that the susceptibility diverges at the approach to the Curie temperature Tc.

In the ferromagnetic case, the graphical solution (Fig. 9b) of Eq. (70) gives a qualitatively different result. A field increase leads, depending on the spontaneous magnetization, either to the further saturation of h_mol (with the order parameter η gradually approaching 1), or, if the initial η was negative, to a jump to positive η at some critical (coercive) field hc. In contrast with the crude approximation (59), at T > 0 the coercive field is smaller than that given by Eq. (60), and the magnetization saturation is gradual, in good (semi-quantitative) accordance with experiment.

To summarize, the Weiss molecular-field approach gives an approximate but realistic description of the ferromagnetic and paramagnetic phases in the Ising model, and a very simple prediction (72) of the transition temperature between them, for an arbitrary dimensionality d of the cubic lattice. It also enables the calculation of other parameters of Landau's mean-field theory for this model – an exercise left for the reader. Moreover, the molecular-field approximation allows one to obtain similarly reasonable analytical results for some other models of phase transitions – see, e.g., the exercise problems at the end of Sec. 6.
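The Weiss self-consistency condition is also easy to solve numerically. The following sketch (an illustration of mine, with illustrative function names) assumes the standard form η = tanh[(2dJη + h)/T], consistent with Eqs. (72) and (74)-(76), and solves it by fixed-point iteration:

```python
import math

def weiss_eta(T, J=1.0, d=3, h=0.0, iters=2000):
    """Fixed-point solution of the Weiss self-consistency equation
    eta = tanh((2*d*J*eta + h) / T) for the Ising order parameter."""
    eta = 1.0  # start from the fully ordered state, to select the eta >= 0 branch
    for _ in range(iters):
        eta = math.tanh((2.0 * d * J * eta + h) / T)
    return eta
```

With these (assumed) defaults, Tc = 2dJ = 6: below it the iteration converges to a spontaneous η ≠ 0, above it η vanishes at h = 0, and a small field h produces η ≈ h/(T – Tc), reproducing the Curie-Weiss law (76).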


4.5. Ising model: Exact and numerical results

In order to evaluate the main prediction (72) of the Weiss theory, let us now discuss the exact (analytical) and quasi-exact (numerical) results obtained for the Ising model, going from the lowest value of dimensionality, d = 0, to its higher values.

Zero dimensionality means that the spin has no nearest neighbors at all, so the first term of Eq. (23) vanishes. Hence Eq. (64) is exact, with hef = h, and so is its solution (69). Now we can simply use Eq. (76), with J = 0, i.e. Tc = 0, reducing this result to the so-called Curie law:

\[ \chi = \frac{1}{T}. \tag{4.77} \]

It shows that the system is paramagnetic at any temperature. One may say that for d = 0 the Weiss molecular-field theory is exact – or even trivial. (However, in some sense it is more general than the Ising model, because as we know from Chapter 2, it gives the exact result for a fully quantum-mechanical treatment of any two-level system, including spin-½.) Experimentally, the Curie law is approximately valid for many so-called paramagnetic materials, i.e. 3D systems of particles with spontaneous spins and sufficiently weak interaction between them.

The case d = 1 is more complex but has an exact analytical solution. A simple way to obtain it for a uniform chain is to use the so-called transfer matrix approach.40 For this, first of all, we may argue that most properties of a 1D system of N >> 1 spins (say, put at equal distances on a straight line) should not change too much if we bend that line gently into a closed ring (Fig. 10), assuming that spins s1 and sN interact exactly as all other next-neighbor pairs. Then the energy (23) becomes

\[ E_m = -J\left(s_1 s_2 + s_2 s_3 + \ldots + s_N s_1\right) - h\left(s_1 + s_2 + \ldots + s_N\right). \tag{4.78} \]

Fig. 4.10. The closed-ring version of the 1D Ising system.

Let us regroup the terms of this sum in the following way:

\[ E_m = -\left(\frac{h}{2}s_1 + Js_1 s_2 + \frac{h}{2}s_2\right) - \left(\frac{h}{2}s_2 + Js_2 s_3 + \frac{h}{2}s_3\right) - \ldots - \left(\frac{h}{2}s_N + Js_N s_1 + \frac{h}{2}s_1\right), \tag{4.79} \]

so the group inside each pair of parentheses depends only on the state of two adjacent spins. The corresponding statistical sum,

40 This approach was developed in 1941 by H. Kramers and G. Wannier. Note that for the field-free case h = 0, an even simpler solution of the problem, valid for chains with an arbitrary number N of "spins" and arbitrary coefficients Jk, is possible. For that, one needs first to calculate the explicit relation between the statistical sums Z for systems with N and (N + 1) spins, and then apply it sequentially to systems starting from Z1 = 2. I am leaving this calculation for the reader's exercise.


\[ Z = \sum_{s_k = \pm 1,\; k=1,2,\ldots N} \exp\left\{\frac{h}{2T}s_1 + \frac{J}{T}s_1 s_2 + \frac{h}{2T}s_2\right\} \exp\left\{\frac{h}{2T}s_2 + \frac{J}{T}s_2 s_3 + \frac{h}{2T}s_3\right\} \ldots \exp\left\{\frac{h}{2T}s_N + \frac{J}{T}s_N s_1 + \frac{h}{2T}s_1\right\}, \tag{4.80} \]

still has 2^N terms, each corresponding to a certain combination of signs of the N spins. However, each operand of the product under the sum may take only four values, corresponding to the four different combinations of its two arguments:

\[ \exp\left\{\frac{h}{2T}s_k + \frac{J}{T}s_k s_{k+1} + \frac{h}{2T}s_{k+1}\right\} = \begin{cases} \exp\{(J+h)/T\}, & \text{for } s_k = s_{k+1} = +1, \\ \exp\{(J-h)/T\}, & \text{for } s_k = s_{k+1} = -1, \\ \exp\{-J/T\}, & \text{for } s_k = -s_{k+1} = \pm 1. \end{cases} \tag{4.81} \]

(4.82)

so the whole statistical sum (80) may be recast as a product: Z



jk 1, 2

M j j M j j ...M j j M j j . 1 2 2 3 N 1 N N 1

(4.83)

According to the basic rule of matrix multiplication, this sum is just

Z  Tr M N  .

(4.84)

Linear algebra tells us that this trace may be represented just as

Z   N   N ,

(4.85)

where  are the eigenvalues of the transfer matrix M, i.e. the roots of its characteristic equation, exp J  h /T    exp J/T   0. exp J/T  exp J  h /T   

(4.86)

A straightforward solution of this equation yields two roots: 1/ 2 h   4J    J  2 h   exp cosh   sinh  exp    . T  T  T    T 

(4.87)

The last simplification comes from the condition N >> 1 – which we need anyway, to make the ring model sufficiently close to the infinite linear 1D system. In this limit, even a small difference of the exponents, + > -, makes the second term in Eq. (85) negligible, so we finally get N

1/ 2 h  h  NJ   4J   Z    exp cosh   sinh 2  exp    . T  T  T   T    N 

(4.88)

41

This is a result of the “translational” (or rather rotational) symmetry of the system, i.e. its invariance to the index replacement k  k + 1 in all terms of Eq. (78).

Chapter 4

Page 25 of 36

Essential Graduate Physics

SM: Statistical Mechanics

From here, we can find the free energy per particle: 1/ 2  F T 1 h   4J   2 h  ln   J  T ln cosh   sinh  exp    , T  T N N Z  T    

(4.89)

and then use thermodynamics to calculate such variables as entropy – see the first of Eqs. (1.35). However, we are mostly interested in the order parameter defined by Eq. (25):   sj. The conceptually simplest approach to the calculation of this statistical average would be to use the sum (2.7), with the Gibbs probabilities Wm = Z-1exp{-Em/T}. However, the number of terms in this sum is 2N, so for N >> 1 this approach is completely impracticable. Here the analogy between the canonical pair {– P, V} and other generalized force-coordinate pairs {F, q}, in particular {0H(rk), mk} for the magnetic field, discussed in Secs. 1.1 and 1.4, becomes invaluable – see in particular Eq. (1.3b). (In our normalization (22), and for a uniform field, the pair {0H(rk), mk} becomes {h, sk}.) Indeed, in this analogy the last term of Eq. (23), i.e. the sum of N products (–hsk) for all spins, with the statistical average (–Nh), is similar to the product PV, i.e. the difference between the thermodynamic potentials F and G  F + PV in the usual “P-V thermodynamics”. Hence, the free energy F given by Eq. (89) may be understood as the Gibbs energy of the Ising system in the external field, and the equilibrium value of the order parameter may be found from the last of Eqs. (1.39) with the replacements –P  h, V  N:   F / N    F  . N    , i.e.     h  T  h  T

(4.90)

Note that this formula is valid for any model of ferromagnetism, of any dimensionality, if it has the same form of interaction with the external field as the Ising model. For the 1D Ising ring with N >> 1, Eqs. (89) and (90) yield  h  4J   sinh 2  exp   T  T  

h   sinh T

1/ 2

,

giving  

1   2J   exp  . h  0 T h T 

(4.91)

This result means that the 1D Ising model does not exhibit a phase transition, i.e., in this model Tc = 0. However, its susceptibility grows, at T  0, much faster than the Curie law (77). This gives us a hint that at low temperatures the system is “virtually ferromagnetic”, i.e. has the ferromagnetic order with some rare random violations. (Such violations are commonly called low-temperature excitations.) This interpretation may be confirmed by the following approximate calculation. It is almost evident that the lowest-energy excitation of the ferromagnetic state of an open-end 1D Ising chain at h = 0 is the reversal of signs of all spins in one of its parts – see Fig. 11.

+

+ + + + - - - -

Fig. 4.11. A Bloch wall in an open-end 1D Ising chain.

Named after Felix Bloch who was the first one to discuss such excitations. More complex excitations such as skyrmions (see, e.g., A. Fert et al., Nature Review Materials 2, 17031 (2017)) have higher energies.

Chapter 4

Page 26 of 36

Essential Graduate Physics

SM: Statistical Mechanics

the difference between the values of Em with and without the excitation) equals 2J, regardless of the wall position.43 Since in the ferromagnetic Ising model, the parameter J is positive, EW > 0. If the system “tried” to minimize its internal energy, having any wall in the system would be energy-disadvantageous. However, thermodynamics tells us that at T  0, the system’s thermal equilibrium corresponds to the minimum of the free energy F  E – TS, rather than just energy E.44 Hence, we have to calculate the Bloch wall’s contribution FW to the free energy. Since in an open-end linear chain of N >> 1 spins, the wall can take (N – 1)  N positions with the same energy EW, we may claim that the entropy SW associated with this excitation is lnN, so FW  EW  TSW  2 J  T ln N .

(4.92)

This result tells us that in the limit N  , and at T  0, the Bloch walls are always free-energybeneficial, thus explaining the absence of the perfect ferromagnetic order in the 1D Ising system. Note, however, that since the logarithmic function changes extremely slowly at large values of its argument, one may argue that a large but finite 1D system should still feature a quasi-critical temperature "Tc " 

2J , ln N

(4.93)

below which it would be in a virtually complete ferromagnetic order. (The exponentially large susceptibility (91) is another manifestation of this fact.) Now let us apply a similar approach to estimate Tc of a 2D Ising model, with open borders. Here the Bloch wall is a line of a certain total length L – see Fig. 12. (For the example presented in that figure, counting from the left to the right, L = 2 + 1 + 4 + 2 + 3 = 12 lattice periods.) + + + + + + -

+ + + + + + -

+ + + + + -

+ + + + + -

+ + + + + -

+ + + + + -

+ + + -

+ + + -

+ + + -

Fig. 4.12. A Bloch wall in a 2D Ising system.

Evidently, the additional energy associated with such a wall is EW = 2JL, while the wall's entropy SW may be estimated using the following reasoning. A continuous Bloch wall may be thought of as the path of a "Manhattan pedestrian" crossing the system between its nodes. At each junction of straight segments of the path, the pedestrian may select 3 of the 4 possible directions (all except the one leading backward), so for a path without self-crossings, there are 3^(L–1) ≈ 3^L options for a walk starting from a certain point. Now taking into account that the open borders of a square-shaped lattice of N spins have a length of the order of N^(1/2), and the Bloch wall may start from any point on them, there are approximately M ~ N^(1/2) 3^L different walks between two borders. Again estimating SW as ln M, we get

43 For the closed-ring model (Fig. 10), such an analysis gives an almost similar prediction, with the difference that in that system, the Bloch walls may appear only in pairs, so EW = 4J, and SW = ln[N(N – 1)] ≈ 2 ln N.
44 This is a very vivid application of one of the core results of thermodynamics. If the reader is still uncomfortable with it, they are strongly encouraged to revisit Eq. (1.42) and its discussion.


\[ F_W = E_W - TS_W \approx 2JL - T\ln\left(N^{1/2}3^L\right) = L\left(2J - T\ln 3\right) - \frac{T}{2}\ln N. \tag{4.94} \]

(Actually, since L scales as N^(1/2) or higher, at N → ∞ the last term in Eq. (94) is negligible.) We see that the sign of the derivative ∂FW/∂L depends on whether the temperature is higher or lower than the following critical value:

\[ T_c = \frac{2J}{\ln 3} \approx 1.82\,J. \tag{4.95} \]

At T < Tc, the free energy's minimum corresponds to L → 0, i.e. the Bloch walls are free-energy-detrimental, and the system is in the purely ferromagnetic phase. So, for d = 2 this simple estimate predicts a non-zero critical temperature of the same order as the Weiss theory (according to Eq. (72), in this case Tc = 4J).

The major approximation implied in the calculation leading to Eq. (95) is the disregard of possible self-crossings of the "Manhattan walk". The accurate counting of such self-crossings is rather difficult. It was carried out in 1944 by L. Onsager; since then his calculations have been redone in several easier ways, but even they are rather cumbersome, and I will not have time to discuss them.45 The final result, however, is surprisingly simple:

\[ T_c = \frac{2J}{\ln\left(1+\sqrt{2}\right)} \approx 2.269\,J, \tag{4.96} \]

i.e. showing that the simple estimate (95) is off the mark by only ~20%. The Onsager solution, as well as all the alternative solutions of the problem found later, are so "artificial" (2D-specific) that they do not give a clear way towards their generalization to other (higher) dimensions. As a result, the 3D Ising problem is still unsolved analytically. Nevertheless, we do know Tc for it with extremely high precision – at least to the 6th decimal place. This has been achieved by numerical methods; they deserve a discussion because of their importance for the solution of other similar problems as well.

Conceptually, the task is rather simple: just compute, to the desired precision, the statistical sum of the system (23):

\[ Z = \sum_{s_k = \pm 1,\; k=1,2,\ldots,N} \exp\left\{\frac{J}{T}\sum_{\{k,k'\}} s_k s_{k'} + \frac{h}{T}\sum_k s_k\right\}. \tag{4.97} \]

As soon as this has been done for a sufficient number of values of the dimensionless parameters J/T and h/T, everything is easy; in particular, we can compute the dimensionless function

\[ F/T = -\ln Z, \tag{4.98} \]

and then find the ratio J/Tc as the smallest value of the parameter J/T at which the ratio F/T (as a function of h/T) has a minimum at zero field. However, for any system of a reasonable size N, the "exact" computation of the statistical sum (97) is impossible, because it contains too many terms for any supercomputer to handle. For example, let us take a relatively small 3D lattice with N = 10×10×10 = 10³ spins, which still features substantial boundary artifacts even using the periodic boundary conditions, so its phase transition is smeared about Tc by ~3%. Still, even for such a crude model, Z would include

45 For that, the interested reader may be referred to either Sec. 151 in the textbook by Landau and Lifshitz, or Chapter 15 in the text by Huang.


2^1000 = (2^10)^100 ≈ (10^3)^100 = 10^300 terms. Let us suppose we are using a modern exaflops-scale supercomputer performing 10^18 floating-point operations per second, i.e. ~10^26 such operations per year. With those resources, the computation of just one statistical sum would require ~10^(300–26) = 10^274 years. To call such a number "astronomic" would be a strong understatement. (As a reminder, the age of our Universe is close to 1.4×10^10 years – a very humble number in comparison.)

This situation may be improved dramatically by noticing that any statistical sum,

\[ Z = \sum_m \exp\left\{-\frac{E_m}{T}\right\}, \tag{4.99} \]

is dominated by the terms with lower values of Em. To find those lowest-energy states, we may use the following powerful approach (belonging to a broad class of numerical Monte Carlo techniques), which essentially mimics one (randomly selected) path of the system's evolution in time. One could argue that for that we would need to know the exact laws of evolution of statistical systems,46 which may differ from one system to another, even if their energy spectra Em are the same. This is true, but since the genuine value of Z should be independent of these details, it may be evaluated using any reasonable kinetic model that satisfies certain general rules. In order to reveal these rules, let us start from a system with just two states, with energies Em and Em' = Em + Δ – see Fig. 13.

Fig. 4.13. Deriving the detailed balance relation.

In the absence of quantum coherence between the states (see Sec. 2.1), the equations for the time evolution of the corresponding probabilities Wm and Wm' should depend only on the probabilities themselves (plus certain constant coefficients). Moreover, since the equations of quantum mechanics are linear, these master equations should also be linear. Hence, it is natural to expect them to have the following form:

\[ \frac{dW_m}{dt} = W_{m'}\Gamma_\downarrow - W_m\Gamma_\uparrow, \qquad \frac{dW_{m'}}{dt} = W_m\Gamma_\uparrow - W_{m'}\Gamma_\downarrow, \tag{4.100} \]

where the coefficients Γ↑ and Γ↓ have the physical sense of the rates of the corresponding transitions (see Fig. 13); for example, Γ↑dt is the probability of the system's transition into the state m' during an infinitesimal time interval dt, provided that at the beginning of that interval it was in the state m with full certainty: Wm = 1, Wm' = 0.47 Since for a system with just two energy levels the time derivatives of the probabilities have to be equal and opposite, Eqs. (100) describe a redistribution of the probabilities between the energy levels, while keeping their sum W = Wm + Wm' constant. According to Eqs. (100), at t → ∞ the probabilities settle to their stationary values related as

\[ \frac{W_{m'}}{W_m} = \frac{\Gamma_\uparrow}{\Gamma_\downarrow}. \tag{4.101} \]

46 Discussion of such laws is the task of physical kinetics, which will be briefly reviewed in Chapter 6.
47 The calculation of these rates for several particular cases is described in QM Secs. 6.6, 6.7, and 7.6 – see, e.g., QM Eq. (7.196), which is valid for a very general model of a quantum system.


Now let us require these stationary values to obey the Gibbs distribution (2.58); from it,

\[ \frac{W_{m'}}{W_m} = \exp\left\{\frac{E_m - E_{m'}}{T}\right\} = \exp\left\{-\frac{\Delta}{T}\right\} < 1. \tag{4.102} \]

Comparing these two expressions, we see that the rates have to satisfy the following detailed balance relation:

\[ \frac{\Gamma_\uparrow}{\Gamma_\downarrow} = \exp\left\{-\frac{\Delta}{T}\right\}. \tag{4.103} \]

Now comes the final step: since the rates of transition between two particular states should not depend on other states and their occupations, Eq. (103) has to be valid for each pair of states of any multi-state system. (By the way, this relation may serve as an important sanity check: the rates calculated using any reasonable model of a quantum system have to satisfy it.) The detailed balance yields only one equation for the two rates Γ↑ and Γ↓; if our only goal is the calculation of Z, the choice of the second equation is not too critical. A very simple choice is

\[ \gamma(\Delta) \propto \begin{cases} 1, & \text{if } \Delta \le 0, \\ \exp\{-\Delta/T\}, & \text{otherwise}, \end{cases} \tag{4.104} \]

where  is the energy change resulting from the transition. This model, which evidently satisfies the detailed balance relation (103), is very popular (despite the unphysical cusp this function has at  = 0), because it enables the following simple Metropolis algorithm (Fig. 14). set up an initial state - flip a random spin - calculate  - calculate  () generate random  (0   1)

reject spin flip

 

accept spin flip

Fig. 4.14. A crude scheme of the Metropolis algorithm for the Ising model simulation.
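Before spelling out the algorithm in detail, one may verify numerically that rates obeying Eqs. (103)-(104) indeed drive the master equations (100) to the Gibbs ratio (102). A minimal sketch (my own, with illustrative names; simple Euler integration):

```python
import math

def relax_two_level(delta=1.0, T=2.0, gamma0=1.0, dt=1e-3, steps=200_000):
    """Euler integration of the master equations (100) for a two-level system,
    with Metropolis-type rates (104): up-rate gamma0*exp(-delta/T), down-rate gamma0."""
    g_up = gamma0 * math.exp(-delta / T)   # rate m -> m' (energy increase delta > 0)
    g_dn = gamma0                          # rate m' -> m
    w_m, w_mp = 1.0, 0.0                   # start fully in the lower state m
    for _ in range(steps):
        dw = (w_mp * g_dn - w_m * g_up) * dt
        w_m, w_mp = w_m + dw, w_mp - dw    # probability sum stays exactly 1
    return w_mp / w_m                      # stationary ratio W_m'/W_m
```

The returned ratio converges to exp{–Δ/T}, as required by Eqs. (101)-(102), regardless of the common prefactor Γ0 of the rates.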

The calculation starts by setting a certain initial state of the system. At relatively high temperatures, the state may be generated randomly; for example, in the Ising system, the initial state of each spin sk may be selected independently, with a 50% probability. At low temperatures, starting the calculations from the lowest-energy state (in particular, for the Ising model, from the ferromagnetic state sk = sgn(h) = const) may give the fastest convergence. Now one spin is flipped at random, the


corresponding change  of the energy is calculated,48 and plugged into Eq. (104) to calculate (). Next, a pseudo-random number generator is used to generate a random number , with the probability density being constant on the segment [0, 1]. (Such functions are available in virtually any numerical library.) If the resulting  is less than (), the transition is accepted, while if  > (), it is rejected. Physically, this means that any transition down the energy spectrum ( < 0) is always accepted, while those up the energy profile ( > 0) are accepted with the probability proportional to exp{–/T}.49 After sufficiently many such steps, the statistical sum (99) may be calculated approximately as a partial sum over the states passed by the system. (It may be better to discard the contributions from a few first steps, to avoid the effects of the initial state choice.) This algorithm is extremely efficient. Even with the modest computers available in the 1980s, it has allowed simulating a 3D Ising system of (128)3 spins to get the following result: J/Tc  0.221650  0.000005. For all practical purposes, this result is exact – so perhaps the largest benefit of the possible future analytical solution of the infinite 3D Ising problem will be a virtually certain Nobel Prize for its author. Table 2 summarizes the values of Tc for the Ising model. Very visible is the fast improvement of the prediction accuracy of the molecular-field approximation, because it is asymptotically correct at d  . Table 4.2. The critical temperature Tc (in the units of J) of the Ising model of a ferromagnet (J > 0), for several values of dimensionality d

d | Molecular-field approximation, Eq. (72) | Exact value | Exact value's source
0 | 0 | 0      | Gibbs distribution
1 | 2 | 0      | Transfer matrix theory
2 | 4 | 2.269… | Onsager's solution
3 | 6 | 4.513… | Numerical simulation
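The single-spin-flip loop described above is short enough to sketch in full. The code below is an illustration of mine (not the historical program; all names and parameter values are assumed), for a 2D square lattice with periodic boundary conditions:

```python
import math
import random

def metropolis_ising_2d(L=16, T=2.0, J=1.0, h=0.0, steps=200_000, seed=1):
    """Single-spin-flip Metropolis simulation of the 2D Ising model on an
    L x L square lattice with periodic boundary conditions.
    Returns the time-averaged magnetization per spin."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]          # start from the ordered state
    m_sum, m = 0.0, L * L                    # m tracks the total magnetization
    for step in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
              s[i][(j + 1) % L] + s[i][(j - 1) % L])
        delta = 2.0 * s[i][j] * (J * nn + h)  # energy change of flipping s[i][j]
        # Metropolis rule (104): accept if delta <= 0, else with prob exp(-delta/T)
        if delta <= 0.0 or rng.random() < math.exp(-delta / T):
            s[i][j] = -s[i][j]
            m += 2 * s[i][j]
        if step >= steps // 2:               # discard the first half as equilibration
            m_sum += m
    return m_sum / (steps // 2) / (L * L)
```

With these assumed defaults, runs below the critical temperature (96), T < 2.269 J, keep the average magnetization close to ±1, while runs well above it average to nearly zero.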

Finally, I need to mention the renormalization-group ("RG") approach,50 despite its low efficiency for Ising-type problems. The basic idea of this approach stems from the scaling law (30)-(31): at T = Tc the correlation radius rc diverges. Hence, the critical temperature may be found from the requirement of the system's spatial self-similarity. Namely, let us form larger and larger groups ("blocks") of adjacent spins, and require that all properties of the resulting system of blocks approach those of the initial system as T approaches Tc. Let us see how this idea works in the simplest nontrivial (1D) case described by the statistical sum (80). Assuming N to be even (which does not matter at N → ∞), and adding an inconsequential constant C to each exponent (for a purpose that will become clear soon), we may rewrite this expression as

48 Note that a flip of a single spin changes the signs of only (2d + 1) terms in the sum (23), i.e. does not require the re-calculation of all (2d + 1)N terms of the sum, so the computation of Δ takes just a few multiply-and-accumulate operations even at N >> 1.
49 The latter step is necessary to avoid the system's trapping in local minima of its multidimensional energy profile Em(s1, s2,…, sN).
50 Initially developed in the quantum field theory in the 1950s, it was adapted to statistics by L. Kadanoff in 1966, with a spectacular solution of the so-called Kondo problem by K. Wilson in 1972, later awarded a Nobel Prize.


\[ Z = \sum_{s_k = \pm 1} \prod_{k=1,2,\ldots N} \exp\left\{\frac{h}{2T}s_k + \frac{J}{T}s_k s_{k+1} + \frac{h}{2T}s_{k+1} + C\right\}. \tag{4.105} \]

Let us group each pair of adjacent exponents to recast this expression as a product over only even numbers k,

\[ Z = \sum_{s_k = \pm 1} \prod_{k=2,4,\ldots N} \exp\left\{\frac{h}{2T}s_{k-1} + \left[\frac{J}{T}\left(s_{k-1}+s_{k+1}\right) + \frac{h}{T}\right]s_k + \frac{h}{2T}s_{k+1} + 2C\right\}, \tag{4.106} \]

and carry out the summation over the two possible states of the internal spin sk explicitly:

\[ Z = \sum_{s_k = \pm 1} \prod_{k=2,4,\ldots N} \left[\exp\left\{\frac{h}{2T}s_{k-1} + \frac{J}{T}\left(s_{k-1}+s_{k+1}\right) + \frac{h}{T} + \frac{h}{2T}s_{k+1} + 2C\right\} + \exp\left\{\frac{h}{2T}s_{k-1} - \frac{J}{T}\left(s_{k-1}+s_{k+1}\right) - \frac{h}{T} + \frac{h}{2T}s_{k+1} + 2C\right\}\right] \]
\[ = \sum_{s_k = \pm 1} \prod_{k=2,4,\ldots N} 2\cosh\left\{\frac{J}{T}\left(s_{k-1}+s_{k+1}\right) + \frac{h}{T}\right\} \exp\left\{\frac{h}{2T}\left(s_{k-1}+s_{k+1}\right) + 2C\right\}. \tag{4.107} \]

Now let us require this statistical sum (and hence all statistical properties of the system of two-spin blocks) to be identical to that of the Ising system of N/2 spins, numbered by odd k:

\[ Z' = \sum_{s_k = \pm 1} \prod_{k=2,4,\ldots,N} \exp\left\{\frac{J'}{T}s_{k-1}s_{k+1} + \frac{h'}{2T}\left(s_{k-1}+s_{k+1}\right) + C'\right\}, \tag{4.108} \]

with some different parameters h', J', and C', for all four possible combinations of sk-1 = ±1 and sk+1 = ±1. Since the right-hand side of Eq. (107) depends only on the sum (sk-1 + sk+1), this requirement yields only three (rather than four) independent equations for finding h', J', and C'. Of them, the equations for h' and J' depend only on h and J (but not on C),51 and may be represented in an especially simple form,

\[ x' = \frac{x(1+y)^2}{(x+y)(1+xy)}, \qquad y' = \frac{y(x+y)}{1+xy}, \tag{4.109} \]

if the following notation is used:

\[ x \equiv \exp\left\{-\frac{4J}{T}\right\}, \qquad y \equiv \exp\left\{-\frac{2h}{T}\right\}. \tag{4.110} \]

Now the grouping procedure may be repeated, with the same result (109)-(110). Hence these equations may be considered as recurrence relations describing the repeated doubling of the spin-block size. Figure 15 shows (schematically) the trajectories of this dynamic system on the phase plane [x, y]. (Each trajectory is defined by the following property: for each of its points {x, y}, the point {x', y'} defined by the "mapping" Eq. (109) is also on the same trajectory.) For the ferromagnetic coupling (J > 0) and h > 0, we may limit the analysis to the unit square 0 ≤ x, y ≤ 1. If this flow diagram had a stable fixed point with x' = x = x∞ ≠ 0 (i.e. T/J < ∞) and y' = y = 1 (i.e. h = 0), then the first of Eqs. (110) would immediately give us the critical temperature of the phase transition in the field-free system:

51 This might be expected because physically C is just a certain constant addition to the system's energy. However, the introduction of that constant was mathematically necessary, because Eqs. (107) and (108) may be reconciled only if C' ≠ C.


\[ T_c = \frac{4J}{\ln(1/x_\infty)}. \tag{4.111} \]

However, Fig. 15 shows that the only fixed point of the 1D system is x = y = 0, which (at a finite coupling J) should be interpreted as Tc = 0. This is of course in agreement with the exact result of the transfer-matrix analysis but does not provide much additional information.

Fig. 4.15. The RG flow diagram of the 1D Ising system (schematically), on the plane of x ≡ exp{–4J/T} and y ≡ exp{–2h/T}.
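The recurrence relations (109) are equally easy to iterate numerically (a sketch of mine, with illustrative names): on the field-free line y = 1 the flow has no intermediate fixed point with 0 < x∞ < 1 that Eq. (111) could use (x grows monotonically toward the trivial corner x = 1), consistent with Tc = 0 for d = 1:

```python
def rg_step(x, y):
    """One block-doubling step of the RG recurrence relations (109)."""
    x_new = x * (1.0 + y) ** 2 / ((x + y) * (1.0 + x * y))
    y_new = y * (x + y) / (1.0 + x * y)
    return x_new, y_new

def rg_flow(x, y, steps=50):
    """Iterate the map (109); returns the trajectory's end point."""
    for _ in range(steps):
        x, y = rg_step(x, y)
    return x, y
```

Note that the h = 0 line is invariant: at y = 1, the second of Eqs. (109) gives y' = (x + 1)/(1 + x) = 1 exactly, while the first reduces to x' = 4x/(1 + x)² > x for 0 < x < 1.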

Unfortunately, for higher dimensionalities, the renormalization-group approach rapidly becomes rather cumbersome and requires certain approximations whose accuracy cannot be easily controlled. For the 2D Ising system, such approximations lead to the prediction Tc ≈ 2.55 J, i.e. to a substantial difference from the exact result (96).

4.6. Exercise problems

4.1. Calculate the entropy, the internal energy, and the specific heat cV of the van der Waals gas, and discuss the results. For the gas with temperature-independent cV, find the relation between V and T during an adiabatic process.

4.2. Use two different approaches to calculate the coefficient (∂E/∂V)T for the van der Waals gas, and the change of temperature of such a gas, with a temperature-independent CV, at its very fast expansion.

4.3. For real gases, the Joule-Thomson coefficient (∂T/∂P)H (and hence the gas temperature change at its throttling, see Problem 1.11) inverts its sign at crossing the so-called inversion curve Tinv(P). Calculate this curve for the van der Waals gas.

4.4. Calculate the difference CP – CV for the van der Waals gas, and compare the result with that for an ideal classical gas.

4.5. For the van der Waals model, calculate the temperature dependence of the phase-equilibrium pressure P0(T) and the latent heat Λ(T), in the low-temperature limit T << Tc. […]

[…] of N >> 1 confined particles, by assuming the natural trial solution ψ ∝ exp{–r²/2r₀²}.54
(iii) Explore the function E(r0) for positive and negative values of the constant b, and interpret the results.
(iv) For b < 0 with small |b|, estimate the largest number N of particles that may form a metastable Bose-Einstein condensate.

4.14. Superconductivity may be suppressed by a sufficiently strong magnetic field.
In the simplest case of a bulk, long cylindrical sample of a type-I superconductor, placed into an external magnetic field Hext parallel to its surface, this suppression takes a simple form of a simultaneous transition of the whole sample from the superconducting state to the "normal" (non-superconducting) state at a certain value Hc(T) of the field's magnitude. This critical field gradually decreases with temperature from its maximum value Hc(0) at T → 0 to zero at the critical temperature Tc. Assuming that the function Hc(T) is known, calculate the latent heat of this phase transition as a function of temperature, and spell out its values at T → 0 and T = Tc.

Hint: In this particular context, "bulk sample" means a sample much larger than the intrinsic length scales of the superconductor (such as the London penetration depth δL and the coherence length ξ).55 For such bulk superconductors, magnetic properties of the superconducting phase may be well described just as its perfect diamagnetism, with B = 0 inside it.

4.15. In some textbooks, the discussion of thermodynamics of superconductivity is started by displaying, as self-evident, the following formula:

$$ F_n(T) - F_s(T) = \frac{\mu_0 H_c^2(T)}{2} V , $$

where Fs and Fn are the free energy values in the superconducting and non-superconducting ("normal") phases, and Hc(T) is the critical value of the external magnetic field. Is this formula correct, and if not, what qualification is necessary to make it valid? Assume that all conditions of a simultaneous field-induced phase transition in the whole sample, spelled out in the previous problem, are satisfied.

4.16. Consider a ring of N = 3 Ising "spins" (sk = ±1) with similar ferromagnetic coupling J between all sites, in thermal equilibrium.
(i) Calculate the order parameter η and the low-field susceptibility χ ≡ ∂η/∂h|h=0.
(ii) Use the low-temperature limit of the result for χ to predict it for a ring with an arbitrary N, and verify your prediction by a direct calculation (in this limit).
(iii) Discuss the relation between the last result, in the limit N → ∞, and Eq. (91).

54 This task is essentially the first step of the variational method of quantum mechanics – see, e.g., QM Sec. 2.9.
55 A discussion of these parameters, as well as of the difference between the type-I and type-II superconductivity, may be found in EM Secs. 6.4-6.5. However, those details are not needed for the solution of this problem.

Chapter 4

Page 35 of 36

Essential Graduate Physics

SM: Statistical Mechanics

4.17. Calculate the average energy, entropy, and heat capacity of a three-site ring of Ising-type "spins" (sk = ±1), with antiferromagnetic coupling (of magnitude J) between the sites, in thermal equilibrium at temperature T, with no external magnetic field. Find the asymptotic behavior of its heat capacity for low and high temperatures, and give an interpretation of the results.

4.18. Using the results discussed in Sec. 5, calculate the average energy, free energy, entropy, and heat capacity (all per one "spin") as functions of temperature T and external field h, for the infinite 1D Ising model. Sketch the temperature dependence of the heat capacity for various values of the h/J ratio, and give a physical interpretation of the result.

4.19. Calculate the specific heat (per "spin") for the d-dimensional Ising problem in the absence of the external field, in the molecular-field approximation. Sketch the temperature dependence of C and compare it with the corresponding plot in the previous problem's solution.

4.20. Prove that in the limit T → Tc, the molecular-field approximation, applied to the Ising model with a spatially constant order parameter, gives results similar to those of Landau's mean-field theory with certain coefficients a and b. Calculate these coefficients, and list the critical exponents defined by Eqs. (26), (28), (29), and (32), given by this approximation.

4.21. Assuming that the statistical sum ZN of a field-free, open-ended 1D Ising system of N "spins" with arbitrary coefficients Jk is known, calculate ZN+1. Then use this relation to obtain an explicit expression for ZN, and compare it with Eq. (88).

4.22. Use the molecular-field approximation to calculate the critical temperature and the low-field susceptibility of a d-dimensional cubic lattice of spins, described by the so-called classical Heisenberg model:56

$$ E_m = -J \sum_{\{k,k'\}} \mathbf{s}_k \cdot \mathbf{s}_{k'} - \mathbf{h} \cdot \sum_k \mathbf{s}_k . $$

Here, in contrast to the (otherwise, very similar) Ising model (23), the spin of each site is described as a classical 3D vector sk = {sxk, syk, szk} of unit length: sk² = 1.

4.23. Use the molecular-field approximation to calculate the coefficient a in Landau's expansion (46) for a 3D cubic lattice of spins described by the classical Heisenberg model (whose analysis was the subject of the previous problem).

4.24. Use the molecular-field approximation to calculate the critical temperature of the ferromagnetic transition for the d-dimensional cubic Heisenberg lattice of arbitrary (either integer or half-integer) quantum spins s.
Hint: This model is described by Eq. (4.21) of the lecture notes, with σ̂ now meaning the vector operator of spin s, in units of Planck's constant ℏ.
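The molecular-field approximation invoked in the last several problems reduces the Ising model in a field h to the single self-consistency equation η = tanh[(2dJη + h)/T], whose critical temperature is the standard MFA result Tc = 2dJ. The following sketch (not part of the lecture notes; the function name and the fixed-point iteration scheme are arbitrary choices) solves this equation numerically:

```python
import math

def mfa_order_parameter(T, J, d, h=0.0, tol=1e-12):
    # Fixed-point iteration of the MFA self-consistency equation
    # eta = tanh((2*d*J*eta + h)/T), starting from the fully ordered state.
    eta = 1.0
    for _ in range(100000):
        new = math.tanh((2*d*J*eta + h)/T)
        if abs(new - eta) < tol:
            return new
        eta = new
    return eta

J, d = 1.0, 3            # simple cubic lattice (d = 3)
Tc = 2*d*J               # MFA critical temperature
print(mfa_order_parameter(0.5*Tc, J, d))   # nonzero: ordered phase
print(mfa_order_parameter(1.5*Tc, J, d))   # ~0: disordered phase
```

In the same spirit, replacing tanh with the Langevin function L(x) = coth x – 1/x would turn this into a solver for the classical Heisenberg model of Problem 4.22.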

56 This classical model is formally similar to the generalization of the genuine (quantum) Heisenberg model (21) to arbitrary spin s, and serves as its infinite-spin limit.


Chapter 5. Fluctuations

This chapter discusses fluctuations of macroscopic variables, mostly in thermodynamic equilibrium. In particular, it describes the intimate connection between fluctuations and dissipation (damping) in dynamic systems coupled to multi-particle environments. This connection culminates in the Einstein relation between the diffusion coefficient and mobility, the Nyquist formula, and its quantum-mechanical generalization – the fluctuation-dissipation theorem. An alternative approach to the same problem, based on the Smoluchowski and Fokker-Planck equations, is also discussed in brief.

5.1. Characterization of fluctuations

Fluctuation

At the beginning of Chapter 2, we discussed the notion of averaging a variable f over a statistical ensemble – see Eqs. (2.7) and (2.10). Now, the fluctuation of the variable is defined simply as its deviation from the average ⟨f⟩:

$$ \tilde{f} \equiv f - \langle f \rangle ; \qquad (5.1) $$

this deviation is, generally, also a random variable. The most important property of any fluctuation is that its average over the same statistical ensemble equals zero; indeed:

$$ \langle \tilde{f} \rangle = \langle f - \langle f \rangle \rangle = \langle f \rangle - \langle \langle f \rangle \rangle = \langle f \rangle - \langle f \rangle = 0 . \qquad (5.2) $$

Variance: definition

As a result, such an average cannot be used to characterize fluctuations' intensity, and the simplest meaningful characteristic of the intensity is the variance (sometimes called "dispersion"):

$$ \langle \tilde{f}^2 \rangle = \langle (f - \langle f \rangle)^2 \rangle . \qquad (5.3) $$

The following simple property of the variance is frequently convenient for its calculation:

$$ \langle \tilde{f}^2 \rangle = \langle (f - \langle f \rangle)^2 \rangle = \langle f^2 - 2 f \langle f \rangle + \langle f \rangle^2 \rangle = \langle f^2 \rangle - 2 \langle f \rangle^2 + \langle f \rangle^2 , \qquad (5.4a) $$

so, finally:

Variance via averages

$$ \langle \tilde{f}^2 \rangle = \langle f^2 \rangle - \langle f \rangle^2 . \qquad (5.4b) $$

As the simplest example, consider a variable that takes only two values, ±1, with equal probabilities Wj = ½. For such a variable, the basic Eq. (2.7) yields

$$ \langle f \rangle = \sum_j W_j f_j = \frac{1}{2}(+1) + \frac{1}{2}(-1) = 0 ; \qquad (5.5a) $$

however, ⟨f²⟩ = Σj Wj fj² = ½(+1)² + ½(–1)² = 1, so that

$$ \langle \tilde{f}^2 \rangle = \langle f^2 \rangle - \langle f \rangle^2 = 1 - 0 = 1 . \qquad (5.5b) $$
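The bookkeeping of Eqs. (4)-(5) can be restated in a few lines of code (a trivial sketch; exact rational arithmetic is used only to keep the output clean):

```python
from fractions import Fraction

# Two-valued variable f = ±1 with equal probabilities W_j = 1/2 – cf. Eqs. (5.5)
values = (+1, -1)
W = Fraction(1, 2)

f_avg  = sum(W*f for f in values)        # Eq. (5.5a): <f> = 0
f2_avg = sum(W*f*f for f in values)      # <f^2> = 1
variance = f2_avg - f_avg**2             # Eq. (5.4b): variance = 1
print(f_avg, f2_avg, variance)
```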

The square root of the variance,

r.m.s. fluctuation

$$ \delta f \equiv \langle \tilde{f}^2 \rangle^{1/2} , \qquad (5.6) $$

© K. Likharev


is called the root-mean-square (r.m.s.) fluctuation. An advantage of this measure is that it has the same dimensionality as the variable itself, so the ratio δf/⟨f⟩ is dimensionless, and is used to characterize the relative intensity of fluctuations.

As has been mentioned in Chapter 1, all results of thermodynamics are valid only if the fluctuations of thermodynamic variables (the internal energy E, entropy S, etc.) are relatively small.1 Let us make a simple estimate of the relative intensity of fluctuations, taking as the simplest example a system of N independent, similar parts (e.g., particles), and an extensive variable

$$ F = \sum_{k=1}^{N} f_k , \qquad (5.7) $$

where all single-particle functions fk are similar, besides that each of them depends on the state of only "its own" (kth) part. The statistical average of such F is evidently

$$ \langle F \rangle = \sum_{k=1}^{N} \langle f_k \rangle = N \langle f \rangle , \qquad (5.8) $$

while the variance of its fluctuations is

$$ \langle \tilde{F}^2 \rangle = \langle \tilde{F}\tilde{F} \rangle = \Big\langle \sum_{k=1}^{N} \tilde{f}_k \sum_{k'=1}^{N} \tilde{f}_{k'} \Big\rangle = \Big\langle \sum_{k,k'=1}^{N} \tilde{f}_k \tilde{f}_{k'} \Big\rangle = \sum_{k,k'=1}^{N} \langle \tilde{f}_k \tilde{f}_{k'} \rangle . \qquad (5.9) $$

Now we may use the fact that for two independent variables

$$ \langle \tilde{f}_k \tilde{f}_{k'} \rangle = \langle \tilde{f}_k \rangle \langle \tilde{f}_{k'} \rangle = 0 , \quad \text{for } k' \neq k ; \qquad (5.10) $$

indeed, the first of these equalities may be used as the mathematical definition of their independence. Hence, only the terms with k' = k make nonzero contributions to the right-hand side of Eq. (9):

$$ \langle \tilde{F}^2 \rangle = \sum_{k,k'=1}^{N} \langle \tilde{f}_k^2 \rangle \, \delta_{k,k'} = N \langle \tilde{f}^2 \rangle . \qquad (5.11) $$

Comparing Eqs. (8) and (11), we see that the relative intensity of fluctuations of the variable F,

$$ \frac{\delta F}{\langle F \rangle} = \frac{1}{N^{1/2}} \frac{\delta f}{\langle f \rangle} , \qquad (5.12) $$

tends to zero as the system size grows (N → ∞). It is this fact that justifies the thermodynamic approach to typical physical systems, with the number N of particles of the order of the Avogadro number NA ~ 10^24. Nevertheless, in many situations even small fluctuations of variables are important, and in this chapter, we will calculate their basic properties, starting with the variance.

It should be comforting for the reader to notice that for one very important case, such a calculation has already been done in our course. Indeed, for any generalized coordinate q and generalized momentum p that give quadratic contributions of the type (2.46) to the system's

1 Let me remind the reader that up to this point, the averaging signs ⟨…⟩ were dropped in most formulas, for the sake of notation simplicity. In this chapter, I have to restore these signs to avoid confusion. The only exception will be temperature – whose average, following (probably, bad :-) tradition, will be still called just T everywhere, besides the last part of Sec. 3, where temperature fluctuations are discussed explicitly.


Relative fluctuation estimate


Hamiltonian (as in a harmonic oscillator), we have derived the equipartition theorem (2.48), valid in the classical limit. Since the average values of these variables, in the thermodynamic equilibrium, equal zero, Eq. (6) immediately yields their r.m.s. fluctuations:

$$ \delta p = (mT)^{1/2} , \qquad \delta q = \left(\frac{T}{m\omega^2}\right)^{1/2} = \left(\frac{T}{\kappa}\right)^{1/2} , \quad \text{where } \kappa \equiv m\omega^2 . \qquad (5.13) $$

The generalization of these classical relations to the quantum-mechanical case (T ~ ℏω) is provided by Eqs. (2.78) and (2.81):

$$ \delta q = \left(\frac{\hbar}{2 m \omega} \coth\frac{\hbar\omega}{2T}\right)^{1/2} , \qquad \delta p = \left(\frac{\hbar m \omega}{2} \coth\frac{\hbar\omega}{2T}\right)^{1/2} . \qquad (5.14) $$
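The crossover between Eqs. (13) and (14) can be verified directly; the following sketch (in units with ℏ = 1; coth x is computed as 1/tanh x, and all names are illustrative) confirms that Eq. (14) reduces to Eq. (13) at T >> ℏω, and to the ground-state value (ℏ/2mω)^{1/2} at T << ℏω:

```python
import math

m, omega = 1.0, 1.0   # arbitrary oscillator parameters

def dq_quantum(T):
    # Eq. (5.14): δq = [(ħ/2mω) coth(ħω/2T)]^(1/2), with ħ = 1
    return math.sqrt(1/(2*m*omega)/math.tanh(omega/(2*T)))

def dq_classical(T):
    # Eq. (5.13): δq = (T/mω²)^(1/2)
    return math.sqrt(T/(m*omega**2))

print(dq_quantum(50.0), dq_classical(50.0))        # T >> ħω: nearly equal
print(dq_quantum(0.01), math.sqrt(1/(2*m*omega)))  # T << ħω: ground-state value
```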

However, the intensity of fluctuations in other systems requires special calculations. Moreover, only a few cases allow for general, model-independent results. Let us review some of them.

5.2. Energy and the number of particles

First of all, note that fluctuations of macroscopic variables depend on particular conditions.2 For example, in a mechanically- and thermally-insulated system with a fixed number of particles, i.e. a member of a microcanonical ensemble, the internal energy does not fluctuate: δE = 0. However, if such a system is in thermal contact with the environment, i.e. is a member of a canonical ensemble (Fig. 2.6), the situation is different. Indeed, for such a system we may apply the general Eq. (2.7), with Wm given by the Gibbs distribution (2.58)-(2.59), not only to E but also to E². As we already know from Sec. 2.4, the former average,

$$ \langle E \rangle = \sum_m W_m E_m , \qquad W_m = \frac{1}{Z}\exp\left\{-\frac{E_m}{T}\right\} , \qquad Z = \sum_m \exp\left\{-\frac{E_m}{T}\right\} , \qquad (5.15) $$

yields Eq. (2.61b), which may be rewritten in the form

$$ \langle E \rangle = \frac{1}{Z}\frac{\partial Z}{\partial(-\beta)} , \qquad \text{where } \beta \equiv \frac{1}{T} , \qquad (5.16) $$

which is more convenient for our current purposes. Let us carry out a similar calculation for ⟨E²⟩:

$$ \langle E^2 \rangle = \sum_m W_m E_m^2 = \frac{1}{Z}\sum_m E_m^2 \exp\{-\beta E_m\} . \qquad (5.17) $$

It is straightforward to verify, by double differentiation, that the last expression may be rewritten in a form similar to Eq. (16):

$$ \langle E^2 \rangle = \frac{1}{Z}\frac{\partial^2 Z}{\partial(-\beta)^2} = \frac{1}{Z}\frac{\partial^2}{\partial(-\beta)^2}\sum_m \exp\{-\beta E_m\} . \qquad (5.18) $$

Now it is easy to use Eqs. (4) to calculate the variance of energy fluctuations:

$$ \langle \tilde{E}^2 \rangle = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{Z}\frac{\partial^2 Z}{\partial(-\beta)^2} - \left(\frac{1}{Z}\frac{\partial Z}{\partial(-\beta)}\right)^2 = \frac{\partial}{\partial(-\beta)}\left(\frac{1}{Z}\frac{\partial Z}{\partial(-\beta)}\right) = \frac{\partial \langle E \rangle}{\partial(-\beta)} . \qquad (5.19) $$

2 Unfortunately, even in some renowned textbooks, certain formulas pertaining to fluctuations are either incorrect or given without specifying the conditions of their applicability, so the reader's caution is advised.


Since Eqs. (15)-(19) are valid only if the system's volume V is fixed (because its change may affect the energy spectrum Em), it is customary to rewrite this important result as follows:

$$ \langle \tilde{E}^2 \rangle = \frac{\partial \langle E \rangle}{\partial(-1/T)} = T^2 \left(\frac{\partial \langle E \rangle}{\partial T}\right)_V \equiv C_V T^2 . \qquad (5.20) $$

Fluctuations of E
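Equation (20) is easy to spot-check for the simplest quantum system – two levels, at 0 and Δ (a numerical sketch, not from the lecture notes; all names are mine, and the heat capacity is taken by a symmetric finite difference):

```python
import math

Delta = 1.0   # level splitting, in temperature units

def avg_E(T):
    # Gibbs average <E> for the levels {0, Delta}
    return Delta*math.exp(-Delta/T)/(1 + math.exp(-Delta/T))

def var_E(T):
    # direct variance <E^2> - <E>^2 from the same Gibbs distribution
    p1 = math.exp(-Delta/T)/(1 + math.exp(-Delta/T))
    return Delta**2*p1 - (Delta*p1)**2

T, h = 0.7, 1e-6
CV = (avg_E(T + h) - avg_E(T - h))/(2*h)
print(var_E(T), T**2*CV)   # Eq. (5.20): the two numbers coincide
```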

This is a remarkably simple, fundamental result. As a sanity check, for a system of N similar, independent particles, ⟨E⟩ and hence CV are proportional to N, so Eq. (20) yields δE ∝ N^{1/2} and δE/⟨E⟩ ∝ N^{–1/2}, in agreement with Eq. (12). Let me emphasize that the classically-looking Eq. (20) is based on the general Gibbs distribution, and hence is valid for any system (either classical or quantum) in thermal equilibrium.

Some corollaries of this result will be discussed in the next section, and now let us carry out a very similar calculation for a system whose number N of particles is not fixed, because they may go to, and come from its environment at will. If the chemical potential μ of the environment and its temperature T are fixed, i.e. we are dealing with the grand canonical ensemble (Fig. 2.13), we may use the grand canonical distribution (2.106)-(2.107):

$$ W_{m,N} = \frac{1}{Z_G}\exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} , \qquad Z_G = \sum_{N,m}\exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} . \qquad (5.21) $$

Acting exactly as we did above for the internal energy, we get

$$ \langle N \rangle = \frac{1}{Z_G}\sum_{m,N} N \exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} = \frac{T}{Z_G}\frac{\partial Z_G}{\partial \mu} , \qquad (5.22) $$

$$ \langle N^2 \rangle = \frac{1}{Z_G}\sum_{m,N} N^2 \exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} = \frac{T^2}{Z_G}\frac{\partial^2 Z_G}{\partial \mu^2} , \qquad (5.23) $$

so the particle number's variance is

$$ \langle \tilde{N}^2 \rangle = \langle N^2 \rangle - \langle N \rangle^2 = \frac{T^2}{Z_G}\frac{\partial^2 Z_G}{\partial \mu^2} - \frac{T^2}{Z_G^2}\left(\frac{\partial Z_G}{\partial \mu}\right)^2 = T\frac{\partial}{\partial \mu}\left(\frac{T}{Z_G}\frac{\partial Z_G}{\partial \mu}\right) = T\frac{\partial \langle N \rangle}{\partial \mu} , \qquad (5.24) $$

in full analogy with Eq. (19).3

In particular, for an ideal classical gas, we may combine the last result with Eq. (3.32b). (As was already emphasized in Sec. 3.2, though that result has been obtained for the canonical ensemble, in which the number of particles N is fixed, at N >> 1 the fluctuations of N in the grand canonical ensemble should be relatively small, so the same relation should be valid for the average ⟨N⟩ in that ensemble.) Easily solving Eq. (3.32b) for ⟨N⟩, we get

$$ \langle N \rangle = \text{const} \times \exp\left\{\frac{\mu}{T}\right\} , \qquad (5.25) $$

where "const" means a factor constant at the partial differentiation of ⟨N⟩ over μ, required by Eq. (24). Performing the differentiation and then using Eq. (25) again,

3 Note, however, that for the grand canonical ensemble, Eq. (19) is generally invalid.


Fluctuations of N

$$ \frac{\partial \langle N \rangle}{\partial \mu} = \text{const} \times \frac{1}{T}\exp\left\{\frac{\mu}{T}\right\} = \frac{\langle N \rangle}{T} , \qquad (5.26) $$

we get from Eq. (24) a very simple result:

Fluctuations of N: classical gas

$$ \langle \tilde{N}^2 \rangle = \langle N \rangle , \quad \text{i.e. } \delta N = \langle N \rangle^{1/2} . \qquad (5.27) $$

This relation is so important that I will also show how it may be derived differently. As a byproduct of this new derivation, we will prove that this result is valid for systems with an arbitrary (say, small) ⟨N⟩, and also get more detailed information about the statistics of fluctuations of that number. Let us consider an ideal classical gas of N0 particles in a volume V0, and calculate the probability WN to have exactly N ≤ N0 of these particles in its part of volume V ≤ V0 – see Fig. 1.

[Figure: a volume V0 holding N0 particles, with an inner sub-volume V holding N of them.]

Fig. 5.1. Deriving the binomial, Poisson, and Gaussian distributions.

For one particle such probability is obviously W = V/V0 = ⟨N⟩/N0 ≤ 1, while the probability to have that particle in the remaining part of the volume is W' = 1 – W = 1 – ⟨N⟩/N0. If all particles are distinct, the probability of having N ≤ N0 specific particles in volume V and (N0 – N) specific particles in volume (V0 – V) is W^N W'^(N0–N). However, if we do not want to distinguish the particles, we should multiply this probability by the number of possible particle combinations keeping the numbers N and N0 constant, i.e. by the binomial coefficient N0!/[N!(N0 – N)!].4 As the result, the required probability is

Binomial distribution

$$ W_N = W^N W'^{(N_0-N)} \frac{N_0!}{N!(N_0-N)!} = \left(\frac{\langle N \rangle}{N_0}\right)^N \left(1 - \frac{\langle N \rangle}{N_0}\right)^{N_0-N} \frac{N_0!}{N!(N_0-N)!} . \qquad (5.28) $$

This is the so-called binomial probability distribution,5 valid for any ⟨N⟩ and N0.

Still keeping ⟨N⟩ arbitrary, we can simplify the binomial distribution by assuming that the whole volume V0, and hence N0, are very large:

$$ N_0 >> N , \qquad (5.29) $$

where N means any value of interest, including ⟨N⟩. Indeed, in this limit we can neglect N in comparison with N0 in the second exponent of Eq. (28), and also approximate the fraction N0!/(N0 – N)!, i.e. the product of N terms, (N0 – N + 1)(N0 – N + 2)…(N0 – 1)N0, by just N0^N. As a result, we get

$$ W_N \approx \left(\frac{\langle N \rangle}{N_0}\right)^N \left(1 - \frac{\langle N \rangle}{N_0}\right)^{N_0} \frac{N_0^N}{N!} = \frac{\langle N \rangle^N}{N!}\left(1 - \frac{\langle N \rangle}{N_0}\right)^{N_0} = \frac{\langle N \rangle^N}{N!}\left[(1-W)^{1/W}\right]^{\langle N \rangle} , \qquad (5.30) $$

4 See, e.g., MA Eq. (2.2).
5 It was derived by Jacob Bernoulli (1655-1705).


where, as before, W = ⟨N⟩/N0. In the limit (29), W → 0, so the factor inside the square brackets tends to 1/e, the reciprocal of the natural logarithm base.6 Thus, we get an expression independent of N0:

$$ W_N = \frac{\langle N \rangle^N}{N!}\, e^{-\langle N \rangle} . \qquad (5.31) $$

Poisson distribution

This is the much-celebrated Poisson distribution7 which describes a very broad family of random phenomena. Figure 2 shows this distribution for several values of ⟨N⟩ – which, in contrast to N, is not necessarily an integer.

[Figure: plots of WN vs. N = 0–14, for ⟨N⟩ = 0.1, 1, 2, 4, and 8.]

Fig. 5.2. The Poisson distribution for several values of ⟨N⟩. In contrast to that average, the argument N may take only integer values, so that the lines in these plots are only guides for the eye.
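The limit transition (28) → (31), and the variance property stated by Eq. (33) below, can be confirmed numerically; a sketch (not from the lecture notes; function names are arbitrary):

```python
import math

def binomial_W(N, N0, N_avg):
    # Eq. (5.28), with W = <N>/N0
    W = N_avg/N0
    return math.comb(N0, N)*W**N*(1 - W)**(N0 - N)

def poisson_W(N, N_avg):
    # Eq. (5.31)
    return N_avg**N/math.factorial(N)*math.exp(-N_avg)

N_avg = 2.0
for N0 in (10, 100, 10000):          # the limit (29): N0 >> N
    print(N0, binomial_W(3, N0, N_avg), poisson_W(3, N_avg))

# the Poisson variance equals <N> – Eq. (5.33); the sum is truncated where
# the probabilities are already negligible
var = sum((N - N_avg)**2*poisson_W(N, N_avg) for N in range(60))
print(var)
```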

In the limit of very small ⟨N⟩, the function WN(N) is close to an exponent, WN ≈ ⟨N⟩^N/N!, while in the opposite limit, ⟨N⟩ >> 1, it rapidly approaches the Gaussian (or "normal") distribution8

$$ W_N \approx \frac{1}{(2\pi)^{1/2}\,\delta N}\exp\left\{-\frac{(N - \langle N \rangle)^2}{2(\delta N)^2}\right\} . \qquad (5.32) $$

(Note that the Gaussian distribution is also valid if both ⟨N⟩ and N0 are large, regardless of the relation between them – see Fig. 3.)

[Diagram: the binomial distribution, Eq. (28), reduces to the Poisson distribution, Eq. (31), for N << N0, and to the Gaussian distribution, Eq. (32), for 1 << ⟨N⟩, N0; the Poisson distribution, in turn, reduces to the Gaussian one for 1 << ⟨N⟩.]

Fig. 5.3. The hierarchy of three major probability distributions.

6 Indeed, this is just the most popular definition of that major mathematical constant – see, e.g., MA Eq. (1.2a) with n = –1/W.
7 Named after the same Siméon Denis Poisson (1781-1840) who is also responsible for other major mathematical results and tools used in this series, including the Poisson equation – see, e.g., Sec. 6.4 below.
8 Named after Carl Friedrich Gauss (1777-1855), though Pierre-Simon Laplace (1749-1827) is credited for substantial contributions to its development.

Gaussian distribution


A major property of the Poisson (and hence of the Gaussian) distribution is that it has the same variance as given by Eq. (27):

$$ \langle \tilde{N}^2 \rangle \equiv \langle (N - \langle N \rangle)^2 \rangle = \langle N \rangle . \qquad (5.33) $$

(This is not true for the general binomial distribution.) For our current purposes, this means that for the ideal classical gas, Eq. (27) is valid for any number of particles.

5.3. Volume and temperature

What are the r.m.s. fluctuations of other thermodynamic variables – like V, T, etc.? Again, the answer depends on specific conditions. For example, if the volume V occupied by a gas is externally fixed (say, by rigid walls), it obviously does not fluctuate at all: δV = 0. On the other hand, the volume may fluctuate in the situation when the average pressure is fixed – see, e.g., Fig. 1.5. A formal calculation of these fluctuations, using the approach applied in the last section, is complicated by the fact that in most cases of interest, it is physically impracticable to fix its conjugate variable P, i.e. suppress its fluctuations. For example, the force F(t) exerted by an ideal classical gas on a container's wall (whose measure the pressure is) is the result of individual, independent hits of the wall by particles (Fig. 4), with the time scale τc ~ rB/⟨v²⟩^{1/2} ~ rB/(T/m)^{1/2} ~ 10^{–16} s, so its spectrum extends to very high frequencies, virtually impossible to control.

[Figure: the random force F(t), fluctuating around its average value ⟨F⟩.]

Fig. 5.4. The force exerted by gas particles on a container's wall, as a function of time (schematically).

However, we can use the following trick, very typical for the theory of fluctuations. It is almost evident that the r.m.s. fluctuations of the gas volume are independent of the shape of the container. Let us consider a particular situation similar to that shown in Fig. 1.5, with the container of a cylindrical shape, with the base area A.9 Then the coordinate of the piston is just q = V/A, while the average force exerted by the gas on the cylinder is ⟨F⟩ = ⟨P⟩A – see Fig. 5. Now if the piston is sufficiently massive, the frequency ω of its free oscillations near the equilibrium position is low enough to satisfy the following three conditions. First, besides balancing the average force ⟨F⟩ and thus sustaining the average pressure ⟨P⟩ of the gas, the interaction between the heavy piston and the relatively light particles of the gas is weak, because of a relatively short duration of the particle hits (Fig. 4). As a result, the full energy of the system may be represented as a sum of those of the particles and the piston, with a quadratic contribution to the piston's potential energy by small deviations from the equilibrium:

9 As a math reminder, the term "cylinder" does not necessarily mean the "circular cylinder"; the shape of its cross-section may be arbitrary; it just should not change with height.


$$ U_p = \frac{\kappa}{2}\,\tilde{q}^2 , \quad \text{where } \tilde{q} \equiv q - \langle q \rangle = \frac{\tilde{V}}{A} , \qquad (5.34) $$

and κ is the effective spring constant arising from the finite compressibility of the gas.

[Figure: a gas-filled cylinder of base area A with a massive piston (mass M) at position q = V/A; the gas exerts the average force ⟨F⟩ = ⟨P⟩A on the piston, and the volume fluctuates as V = ⟨V⟩ + Ṽ(t).]

Fig. 5.5. Deriving Eq. (37).

Second, at ω = (κ/M)^{1/2} → 0, this spring constant may be calculated just as for constant variations of the volume, with the gas remaining in quasi-equilibrium at all times:

$$ \kappa = -\frac{\partial \langle F \rangle}{\partial q} = -A^2\,\frac{\partial \langle P \rangle}{\partial V} . \qquad (5.35) $$

This partial derivative10 should be calculated at whatever the thermal conditions are, e.g., with S = const for adiabatic conditions (i.e., a thermally insulated gas), or with T = const for isothermal conditions (including a good thermal contact between the gas and a heat bath), etc. With that constant denoted as X, Eqs. (34)-(35) give

$$ U_p = \frac{1}{2}\left[-\left(\frac{\partial \langle P \rangle}{\partial V}\right)_X A^2\right]\left(\frac{\tilde{V}}{A}\right)^2 = \frac{1}{2}\left[-\left(\frac{\partial \langle P \rangle}{\partial V}\right)_X\right]\tilde{V}^2 . \qquad (5.36) $$

Finally, assuming that ω is also small in the sense […]

[…] at ωτ >> 1, the useful signal drops faster than noise. So, the lowest (i.e. the best) values of the NEP,

$$ (\text{NEP})_{\min} = \beta\, T_0\, G^{1/2}\, \tau^{-1/2} , \quad \text{with } \beta \sim 1 , \qquad (5.45) $$

are reached at ωτ ≈ 1. (The exact values of the optimal product ωτ, and of the numerical constant β ~ 1 in Eq. (45), depend on the exact law of the power modulation and the readout signal processing procedure.) With the parameters cited above, this estimate yields (NEP)min τ^{1/2} ~ 3×10^{–17} W/Hz^{1/2} – a very low power indeed. However, perhaps counter-intuitively, the power modulation allows the bolometric (and other broadband) receivers to register radiation with power much lower than this NEP! Indeed, picking up the sensor signal at the modulation frequency ω, we can use the subsequent electronics stages to filter out all the noise besides its components within a very narrow band, of width […]

[…] T/U0 (as a reminder, the latter ratio has to be much smaller than 1 in order for the very notion of lifetime τ to be meaningful), and at lower damping, its pre-exponential factor requires a correction, which may be calculated using the already mentioned energy diffusion equation.72

The Kramers' result for the classical thermal activation of a system over a potential barrier may be compared with that for its quantum-mechanical tunneling through the barrier.73 The WKB approximation for the latter effect gives the expression

$$ \tau_Q = A \exp\left\{2\int_{\kappa^2(q)>0} \kappa(q)\,dq\right\} , \quad \text{with } \frac{\hbar^2\kappa^2(q)}{2m} = U(q) - E , \qquad (5.156) $$

showing that generally, the classical and quantum lifetimes of a metastable state have different dependences on the barrier shape. For example, for a nearly-rectangular potential barrier, the exponent that determines the classical lifetime (155) depends (linearly) only on the barrier height U0, while that defining the quantum lifetime (156) is proportional to the barrier width and to the square root of U0. However, in the important case of "soft" potential profiles, which are typical for the case of emerging (or nearly disappearing) quantum wells (Fig. 11), the classical and quantum results are closely related.

[Figure: the potential U(q), with the well's bottom at q1 < 0 and the barrier's top, of height U0, at q2 > 0.]

Fig. 5.11. Cubic-parabolic potential profile and its parameters.

70 An example of such an equation, for the particular case of a harmonic oscillator, is given by QM Eq. (7.214). The Fokker-Planck equation, of course, can give only its classical limit, with n, ne >> 1.
71 H. Kramers, Physica 7, 284 (1940); see also the model solution of Problem 27.
72 See, e.g., the review paper by V. Mel'nikov, Phys. Repts. 209, 1 (1991), and references therein.
73 See, e.g., QM Secs. 2.4-2.6.


Indeed, such potential profile U(q) may be well approximated by four leading terms of its Taylor expansion, with the highest term proportional to (q – q0)³, near any point q0 in the vicinity of the well. In this approximation, the second derivative d²U/dq² vanishes at the inflection point q0 = (q1 + q2)/2, exactly between the well's bottom and the barrier's top (in Fig. 11, q1 and q2). Selecting the origin at this point, as this is done in Fig. 11, we may reduce the approximation to just two terms:74

$$ U(q) = a q - \frac{b}{3}\,q^3 . \qquad (5.157) $$

(For the particle’s escape into the positive direction of the q-axis, we should have a,b > 0.) An easy calculation gives all essential parameters of this cubic parabola: the positions of its minimum and maximum: 1/ 2 (5.158) q 2   q1  a / b  , the barrier height over the well’s bottom: 4  a3  U 0  U (q 2 )  U (q1 )    3 b  and the effective spring constants at these points:

1   2 

1/ 2

,

(5.159)

d 2U 1/ 2  2ab  . 2 dq q 1, 2

(5.160)
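A quick numerical sanity check of Eqs. (158)-(160) (a sketch, not from the lecture notes; the values of a and b are arbitrary, and the second derivative is taken by central differences):

```python
import math

a, b = 2.0, 0.5
U = lambda q: a*q - b*q**3/3            # Eq. (5.157)

q2 = math.sqrt(a/b); q1 = -q2           # Eq. (5.158)
U0 = U(q2) - U(q1)                      # barrier height
print(U0, 4/3*math.sqrt(a**3/b))        # Eq. (5.159): both values agree

h = 1e-5
d2U = lambda q: (U(q + h) - 2*U(q) + U(q - h))/h**2
print(abs(d2U(q1)), abs(d2U(q2)), 2*math.sqrt(a*b))   # Eq. (5.160): all agree
```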

Hence for this potential profile, Eq. (155) may be rewritten as

Soft well: thermal lifetime

$$ \tau = \frac{2\pi}{\tau_0\,\omega_0^2}\exp\left\{\frac{U_0}{T}\right\} , \quad \text{with } \omega_0^2 \equiv \frac{2(ab)^{1/2}}{m} . \qquad (5.161) $$

On the other hand, for the same profile, the WKB approximation (156) (which is accurate when the height of the metastable state energy over the well's bottom, E – U(q1) ≈ ℏω0/2, is much lower than the barrier height U0) yields75

Soft well: quantum lifetime

$$ \tau_Q = \frac{2\pi}{\omega_0}\left(\frac{\hbar\omega_0}{864\,U_0}\right)^{1/2}\exp\left\{\frac{36}{5}\frac{U_0}{\hbar\omega_0}\right\} . \qquad (5.162) $$

The comparison of the dominating, exponential factors in these two results shows that the thermal activation yields a lower lifetime (i.e., dominates the metastable state decay) if the temperature is above the crossover value

$$ T_c = \frac{5}{36}\,\hbar\omega_0 \approx \frac{\hbar\omega_0}{7.2} . \qquad (5.163) $$

This expression for the cubic-parabolic barrier may be compared with a similar crossover for a quadratic-parabolic barrier,76 for which Tc = ℏω0/2π ≈ ℏω0/6.28. We see that the numerical factors for

74 As a reminder, a similar approximation arises for the P(V) function, at the analysis of the van der Waals model near the critical temperature – see Problem 4.6.
75 The main, exponential factor in this result may be obtained simply by ignoring the difference between E and U(q1), but the correct calculation of the pre-exponential factor requires taking this difference, ℏω0/2, into account – see, e.g., the model solution of QM Problem 2.43.
76 See, e.g., QM Sec. 2.4.


the quantum-to-classical crossover temperature for these two different soft potential profiles are close to each other – and much larger than 1, which could result from a naïve estimate.

5.8. Back to the correlation function

Unfortunately, I will not have time/space to either derive or even review solutions of other problems using the Smoluchowski and Fokker-Planck equations, but have to mention one conceptual issue. Since it is intuitively clear that the solution w(q, p, t) of the Fokker-Planck equation for a system provides full statistical information about it, one may wonder how it may be used to find its temporal characteristics that were discussed in Secs. 4-5 using the Langevin formalism. For any statistical average of a function taken at the same time instant, the answer is clear – cf. Eq. (2.11):

$$ \langle f[q(t), p(t)] \rangle = \int f(q, p)\, w(q, p, t)\, d^3q\, d^3p , \qquad (5.164) $$

but what if the function depends on variables taken at different times, for example as in the correlation function Kf(τ) defined by Eq. (48)? To answer this question, let us start from the discrete-variable case when Eq. (164) takes the form (2.7), which, for our current purposes, may be rewritten as

$$ \langle f(t) \rangle = \sum_m f_m W_m(t) . \qquad (5.165) $$

In plain English, this is a sum of all possible values of the function, each multiplied by its probability as a function of time. But this implies that the average ⟨f(t)f(t')⟩ may be calculated as the sum of all possible products fm fm', multiplied by the joint probability to measure outcome m at moment t, and outcome m' at moment t'. The joint probability may be represented as a product of Wm(t) by the conditional probability W(m', t'| m, t). Since the correlation function is well defined only for stationary systems, in the last expression we may take t = 0, i.e. look for the conditional probability as the solution, Wm'(τ), of the equation describing the system's probability evolution, at time τ = t' – t (rather than t'), with the special initial condition

$$ W_{m'}(0) = \delta_{m',m} . \qquad (5.166) $$

On the other hand, since the average ⟨f(t)f(t+τ)⟩ of a stationary process should not depend on t, instead of Wm(0) we may take the stationary probability distribution Wm(∞), independent of the initial conditions, which may be found as the same special solution, but at time τ → ∞. As a result, we get

$$ \langle f(t) f(t+\tau) \rangle = \sum_{m,m'} f_m W_m(\infty)\, f_{m'} W_{m'}(\tau) . \qquad (5.167) $$

This expression looks simple, but note that this recipe requires solving the time evolution equations for each Wm’() for all possible initial conditions (166). To see how this recipe works in practice, let us revisit the simplest two-level system (see, e.g., Fig. 4.13, which is reproduced in Fig. 12 below in a notation more convenient for our current purposes), and calculate the correlation function of its energy fluctuations.


Correlation function: discrete system


[Figure: two energy levels, E1 = Δ and E0 = 0, with the occupation probabilities W1(t) and W0(t), coupled by the transition rates Γ↑ and Γ↓.]

Fig. 5.12. Dynamics of a two-level system.

The stationary probabilities of the system's states (i.e. their probabilities for τ → ∞) have been calculated in problems of Chapter 2, and then again in Sec. 4.4 – see Eq. (4.68). In our current notation (Fig. 12),

$$ W_0(\infty) = \frac{1}{1 + e^{-\Delta/T}} , \qquad W_1(\infty) = \frac{1}{e^{\Delta/T} + 1} , \qquad (5.168) $$

so that

$$ \langle E(\infty) \rangle = W_0(\infty)\times 0 + W_1(\infty)\times\Delta = \frac{\Delta}{e^{\Delta/T} + 1} . $$

To calculate the conditional probabilities Wm'(τ) with the initial conditions (166) (according to Eq. (168), we need all four of them, for {m, m'} = {0, 1}), we may use the master equations (4.100), in our current notation reading

$$\frac{dW_0}{d\tau} = -\frac{dW_1}{d\tau} = \Gamma_\downarrow W_1 - \Gamma_\uparrow W_0. \tag{5.169}$$

Since Eq. (169) conserves the total probability, W₀ + W₁ = 1, only one probability (say, W₁) is an independent variable, and for it, Eq. (169) gives a simple, linear differential equation

$$\frac{dW_1}{d\tau} = \Gamma_\uparrow(1-W_1) - \Gamma_\downarrow W_1 \equiv \Gamma_\uparrow - \Gamma W_1, \qquad \text{where } \Gamma \equiv \Gamma_\uparrow + \Gamma_\downarrow, \tag{5.170}$$

which may be readily integrated for an arbitrary initial condition:

$$W_1(\tau) = W_1(0)\,e^{-\Gamma\tau} + W_1(\infty)\left(1 - e^{-\Gamma\tau}\right), \tag{5.171}$$

where W₁(∞) is given by the second of Eqs. (168). (It is straightforward to verify that the solution for W₀(τ) may be represented in a form similar to Eq. (171), with the corresponding replacement of the state index.) Now everything is ready to calculate the average E(t)E(t + τ) using Eq. (167), with fm,m' = E0,1. Thanks to our (smart :-) choice of the energy reference, of the four terms in the double sum (167), all three terms that include at least one factor E₀ = 0 vanish, and we have only one term left to calculate:

$$\langle E(t)E(t+\tau)\rangle = E_1 W_1(\infty)\cdot E_1 W_1(\tau)\Big|_{W_1(0)=1} = \Delta^2\, W_1(\infty)\left[e^{-\Gamma\tau} + W_1(\infty)\left(1-e^{-\Gamma\tau}\right)\right] = \frac{\Delta^2}{e^{\Delta/T}+1}\left[e^{-\Gamma\tau} + \frac{1}{e^{\Delta/T}+1}\left(1-e^{-\Gamma\tau}\right)\right]. \tag{5.172}$$

From here and the last of Eqs. (168), the correlation function of energy fluctuations is⁷⁷

⁷⁷ The step from the first line of Eq. (173) to its second line utilizes the fact that our system is stationary, so ⟨E(t + τ)⟩ = ⟨E(t)⟩ = ⟨E⟩(∞) = const.

$$K_E(\tau) \equiv \langle \tilde E(t)\tilde E(t+\tau)\rangle \equiv \big\langle [E(t) - \langle E\rangle]\,[E(t+\tau) - \langle E\rangle]\big\rangle = \langle E(t)E(t+\tau)\rangle - \langle E\rangle^2 = \Delta^2\,\frac{e^{\Delta/T}}{\left(e^{\Delta/T}+1\right)^2}\,e^{-\Gamma\tau}, \tag{5.173}$$

so its variance, equal to KE(0), does not depend on the transition rates Γ↑ and Γ↓. However, since the rates have to obey the detailed balance relation (4.103), Γ↑/Γ↓ = exp{–Δ/T}, for this variance we may formally write

$$K_E(0) = \Delta^2\,\frac{e^{\Delta/T}}{\left(e^{\Delta/T}+1\right)^2} = \Delta^2\,\frac{\Gamma_\downarrow/\Gamma_\uparrow}{\left(\Gamma_\downarrow/\Gamma_\uparrow + 1\right)^2} = \Delta^2\,\frac{\Gamma_\uparrow\Gamma_\downarrow}{\Gamma^2}, \tag{5.174}$$

so Eq. (173) may be represented in a simpler form:

$$K_E(\tau) = \Delta^2\,\frac{\Gamma_\uparrow\Gamma_\downarrow}{\Gamma^2}\,e^{-\Gamma\tau}. \tag{5.175}$$

Energy fluctuations: two-level system

We see that the correlation function of energy fluctuations decays exponentially with time, with the rate Γ. Now using the Wiener-Khinchin theorem (58) to calculate its spectral density, we get

$$S_E(\omega) = \frac{1}{\pi}\,\Delta^2\,\frac{\Gamma_\uparrow\Gamma_\downarrow}{\Gamma^2}\int_0^\infty e^{-\Gamma\tau}\cos\omega\tau\; d\tau = \frac{\Delta^2}{\pi}\,\frac{\Gamma_\uparrow\Gamma_\downarrow}{\Gamma\left(\Gamma^2+\omega^2\right)}. \tag{5.176}$$
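The Lorentzian form (5.176) is easy to verify numerically. A minimal sketch (with illustrative rate values of my choosing, not from the text): the Wiener-Khinchin transform of the exponential correlation function (5.175) is computed by direct quadrature and compared with the closed form.

```python
import numpy as np

# Illustrative two-level parameters (assumed values, not from the text)
G_up, G_down, Delta = 0.4, 1.1, 1.0
G = G_up + G_down
K0 = Delta**2 * G_up * G_down / G**2           # variance, Eq. (5.174)

def S_numeric(omega, tau_max=60.0, n=200001):
    # trapezoidal quadrature of (1/pi) * int_0^inf K_E(tau) cos(omega tau) dtau
    tau = np.linspace(0.0, tau_max, n)
    y = K0 * np.exp(-G * tau) * np.cos(omega * tau)   # Eq. (5.175) integrand
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau)) / np.pi

def S_exact(omega):
    # Lorentzian spectral density, Eq. (5.176)
    return (Delta**2 / np.pi) * G_up * G_down / (G * (G**2 + omega**2))
```

The two agree at zero, intermediate, and high frequencies.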

Such Lorentzian dependence on frequency is very typical for discrete-state systems described by master equations. It is interesting that the most widely accepted explanation of the 1/f noise (also called the "flicker" or "excess" noise), which was mentioned in Sec. 5, is that it is a result of thermally-activated jumps between states of two-level systems with an exponentially-broad statistical distribution of the transition rates Γ. Such a broad distribution follows from the Kramers formula (155), which is approximately valid for the lifetimes of both states of systems with double-well potential profiles (Fig. 13), for a statistical ensemble with a smooth statistical distribution of the energy barrier heights U₀. Such profiles are typical, in particular, for electrons in disordered (amorphous) solid-state materials, which indeed feature high 1/f noise.

[Fig. 5.13. A typical double-well potential profile U(q), with barrier height U₀.]
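This mechanism is readily illustrated numerically (my own toy example, with assumed parameters): summing the Lorentzian spectra (176) of many independent two-level fluctuators whose rates Γ are distributed uniformly in log Γ – as expected for thermally-activated rates with a smooth barrier-height distribution – yields S(ω) ∝ 1/ω for Γmin ≪ ω ≪ Γmax.

```python
import numpy as np

rng = np.random.default_rng(1)
# rates distributed uniformly in log over 12 decades (assumed band)
rates = np.exp(rng.uniform(np.log(1e-6), np.log(1e6), 200000))

def S_total(omega):
    # sum of unit-weight Lorentzians, each ~ G/(G^2 + omega^2)
    return np.sum(rates / (rates**2 + omega**2))

# spectral exponent measured between two frequencies deep inside the band;
# a pure 1/f spectrum corresponds to a slope of -1
slope = np.log(S_total(1e2) / S_total(1e-2)) / np.log(1e2 / 1e-2)
```

Narrowing the rate band below the probed frequency range restores the single-fluctuator Lorentzian behavior.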

Returning to the Fokker-Planck equation, we may use the following evident generalization of Eq. (167) to the continuous-variable case:

$$\langle f(t)f(t+\tau)\rangle = \int d^3q\, d^3p \int d^3q'\, d^3p'\; f(\mathbf q,\mathbf p)\, w(\mathbf q,\mathbf p,\infty)\, f(\mathbf q',\mathbf p')\, w(\mathbf q',\mathbf p',\tau), \tag{5.177}$$

where both probability densities are particular values of the equation's solution with the delta-functional initial condition

$$w(\mathbf q',\mathbf p',0) = \delta(\mathbf q' - \mathbf q)\,\delta(\mathbf p' - \mathbf p). \tag{5.178}$$

Correlation function: continuous system

For the Smoluchowski equation, valid in the high-damping limit, the expressions are similar, albeit with a lower dimensionality:

$$\langle f(t)f(t+\tau)\rangle = \int d^3q \int d^3q'\; f(\mathbf q)\, w(\mathbf q,\infty)\, f(\mathbf q')\, w(\mathbf q',\tau), \tag{5.179}$$

$$w(\mathbf q',0) = \delta(\mathbf q' - \mathbf q). \tag{5.180}$$

To see this formalism in action, let us use it to calculate the correlation function Kq(τ) of a linear relaxator, i.e. an overdamped 1D harmonic oscillator with mω₀ […]

5.8. In a chain of N ≫ 1 "spins", in thermal equilibrium at temperature T, calculate the correlation coefficient Ks ≡ ⟨sl sl+n⟩, where l and (l + n) are the numbers of two specific spins in the chain.
Hint: Consider a mixed partial derivative of the statistical sum calculated in Problem 4.21 for an Ising chain with an arbitrary set of Jk, over a part of these parameters.

⁸² Note that these two cases may be considered as the non-interacting limits of, respectively, the Ising model (4.23) and the classical limit of the Heisenberg model (4.21), whose analysis within the Weiss approximation was the subject of Problems 4.22 and 4.23.


5.9. Within the framework of the Weiss molecular-field approximation, calculate the variance of spin fluctuations in the d-dimensional Ising model. Use the result to derive the conditions of quantitative validity of the approximation.

5.10. Calculate the variance of energy fluctuations in a quantum harmonic oscillator with frequency ω, in thermal equilibrium at temperature T, and express it via the average energy.

5.11. The spontaneous electromagnetic field inside a closed volume V is in thermal equilibrium at temperature T. Assuming that V is sufficiently large, calculate the variance of fluctuations of the total energy of the field, and express the result via its average energy and temperature. How large should the volume V be for your results to be quantitatively valid? Evaluate this limitation for room temperature.

5.12. Express the r.m.s. uncertainty of the occupancy Nk of a certain energy level εk by non-interacting: (i) classical particles, (ii) fermions, and (iii) bosons, in thermodynamic equilibrium, via the level's average occupancy ⟨Nk⟩, and compare the results.

5.13. Write a general expression for the variance of the number of particles in the ideal gases of bosons and fermions, at fixed V, T, and μ. Spell out the result for the degenerate Fermi gas.

5.14. Express the variance ⟨Ñ²⟩ of the number of particles of a single-phase system in equilibrium, at fixed V, T, and μ, via its isothermal compressibility κT ≡ –(∂V/∂P)T,N /V.

5.15.* Calculate the low-frequency spectral density of fluctuations of the pressure P of an ideal classical gas, in thermal equilibrium at temperature T, and estimate their variance. Compare the former result with the solution of Problem 3.2.
Hints: You may consider a cylindrically-shaped container of volume V = LA, holding N particles at temperature T (see the figure on the right), and start by using the Maxwell distribution of velocities to calculate the spectral density of the force F(t) exerted by the confined particles on its plane lid of area A, approximating it as a delta-correlated process.

5.16. Calculate the low-frequency spectral density of fluctuations of the electric current I(t) due to the random passage of charged particles (of charge q each) between two conducting electrodes – see the figure on the right. Assume that the particles are emitted, at random times, by one of the electrodes, and are fully absorbed by the counterpart electrode. Can your result be mapped onto some aspect of the electromagnetic blackbody radiation?
Hint: For the current I(t), use the same delta-correlated-process approximation as for the force F(t) in the previous problem.

5.17. Perhaps the simplest model of diffusion is the 1D discrete random walk: each time interval τ, a particle leaps, with equal probability, to either of the two adjacent sites of a 1D lattice with a spatial period a. Prove that the particle's displacement during a time interval t ≫ τ obeys Eq. (77), and calculate the corresponding diffusion coefficient D.

5.18.⁸³ A long uniform string, of mass μ per unit length, is attached to a firm support and stretched with a constant force ("tension") T – see the figure on the right. Calculate the spectral density of the random force F(t) exerted by the string on the support point, within the plane normal to its length, in thermal equilibrium at temperature T.
Hint: You may assume that the string is so long that the transverse wave propagating along it from the support point never comes back.

5.19.⁸⁴ Each of the two 3D harmonic oscillators, with mass m, resonance frequency ω₀, and damping coefficient δ > 0, has an electric dipole moment d = qs, where s is the vector of the oscillator's displacement from its equilibrium position. Use the Langevin formalism to calculate the average potential of electrostatic interaction (a particular case of the so-called London dispersion force) of these oscillators separated by distance r ≫ (T/m)¹ᐟ²/ω₀, in thermal equilibrium at temperature T ≫ ℏω₀. Also, explain why the approach used to solve a very similar Problem 2.18 is not directly applicable to this case.
Hint: You may like to use the following integral:

$$\frac{1}{2\pi}\int_0^\infty \frac{d\xi}{\left(1-\xi^2\right)^2 + \left(a\xi\right)^2} = \frac{1}{4a}.$$
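As a quick numerical sanity check for the random-walk model of Problem 5.17 (a demonstration of diffusive scaling, not the requested proof – the value of D is left to the reader), the mean square displacement of an ensemble of walkers indeed grows linearly with the number of steps:

```python
import numpy as np

rng = np.random.default_rng(0)
a, n_walkers = 1.0, 20000   # lattice period and ensemble size (my choices)

def mean_sq_displacement(n_steps):
    # each step is +a or -a with equal probability
    steps = rng.choice((-a, a), size=(n_walkers, n_steps))
    return np.mean(steps.sum(axis=1)**2)

# quadrupling the elapsed time should quadruple <x^2>
ratio = mean_sq_displacement(400) / mean_sq_displacement(100)
```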

5.20.* Within the van der Pol approximation,⁸⁵ calculate the major statistical properties of fluctuations of classical self-oscillations (including their linewidth), for:
(i) a free ("autonomous") run of the oscillator, and
(ii) its phase being locked by an external sinusoidal force,
assuming that the fluctuations are caused by a weak external noise with a smooth spectral density Sf(ω).

5.21. Calculate the correlation function of the coordinate of a 1D harmonic oscillator with low damping, in thermal equilibrium. Compare the solution with that of the previous problem.

5.22. A lumped electric circuit consisting of a capacitor C shorted with an Ohmic resistor R is in thermal equilibrium at temperature T. Use two different approaches to calculate the variance of the thermal fluctuations of the capacitor's electric charge Q. Estimate the effect of quantum fluctuations.

⁸³ This problem, conceptually important for the quantum mechanics of open systems, was also given in Chapter 7 of the QM part of this series.
⁸⁴ This problem, for the case of arbitrary temperature, was the subject of QM Problem 7.6, with QM Problem 5.15 serving as the background. However, the method used in the model solutions of those problems requires one to prescribe, to the oscillators, different frequencies ω₁ and ω₂ at first, and only after this more general problem has been solved, pursue the limit ω₁ → ω₂, while neglecting dissipation altogether. The goal of this problem is to show that the result of that solution is valid even at non-zero damping.
⁸⁵ See, e.g., CM Secs. 5.2-5.5. Note that in quantum mechanics, a similar approach is called the rotating-wave approximation (RWA) – see, e.g., QM Secs. 6.5, 7.6, 9.2, and 9.4.

5.23. Consider a very long, uniform, two-wire transmission line (see the figure on the right) with a wave impedance Z, which allows the propagation of TEM electromagnetic waves with negligible attenuation. Calculate the variance ⟨Ṽ²⟩ of spontaneous fluctuations of the voltage V between the wires within a small interval Δν of cyclic frequencies, in thermal equilibrium at temperature T.
Hint: As an E&M reminder,⁸⁶ in the absence of dispersive materials, TEM waves propagate with a frequency-independent velocity, with the voltage V and the current I (see the figure above) related as V(x,t)/I(x,t) = ±Z, where Z is the line's wave impedance.

5.24. Now consider a similar long transmission line but terminated, at one end, with an impedance-matching Ohmic resistor R = Z. Calculate the variance ⟨Ṽ²⟩ of the voltage across the resistor, and discuss the relation between the result and the Nyquist formula (81b), including numerical factors.
Hint: A termination with resistance R = Z absorbs incident TEM waves without reflection.

5.25. An overdamped classical 1D particle escapes from a potential well with a smooth bottom but a sharp top of the barrier – see the figure on the right. Perform the necessary modification of the Kramers formula (139).

[Figure for Problem 5.25: potential profile U(q) with a smooth well bottom near q₁ and a sharp barrier top at q₂.]

5.26.* Similar particles, whose spontaneous electric dipole moments p have a field-independent magnitude p₀, are uniformly distributed in space with a density n so low that their mutual interaction is negligible. Each particle may rotate without substantial inertia but under a kinematic friction torque proportional to its angular velocity. Use the Smoluchowski equation to calculate the complex dielectric constant ε(ω) of such a medium, in thermal equilibrium at temperature T, for a weak, linearly-polarized rf electric field.

5.27.* Prove that for systems with relatively low inertia (i.e. relatively high damping), at not very high temperatures, the Fokker-Planck equation (149) reduces to the Smoluchowski equation (122) – in the sense described by Eq. (153) and the accompanying statement.

5.28.* Use the 1D version of the Fokker-Planck equation (149) to prove the solution (156) of the Kramers problem.

5.29. A constant external torque, applied to a 1D mechanical pendulum with mass m and length l, has displaced it by angle θ₀ < π/2 from the vertical position. Calculate the average rate of the pendulum's rotation induced by relatively small thermal fluctuations of temperature T.

5.30. A classical particle may occupy any of N similar sites. Its weak interaction with the environment induces random uncorrelated incoherent jumps from the occupied site to any other site, with the same time-independent rate Γ. Calculate the correlation function and the spectral density of fluctuations of the instant occupancy n(t) (equal to either 1 or 0) of a site.

⁸⁶ See, e.g., EM Sec. 7.6.

Chapter 6. Elements of Kinetics

This chapter gives a brief introduction to the basic notions of physical kinetics. Its main focus is on the Boltzmann transport equation, especially within the simple relaxation-time approximation (RTA), which allows an approximate but reasonable and simple description of transport phenomena (such as the electric current and thermoelectric effects) in gases, including electron gases in metals and semiconductors.

6.1. The Liouville theorem and the Boltzmann equation

Physical kinetics (not to be confused with "kinematics"!) is the branch of statistical physics that deals with systems out of thermodynamic equilibrium. Major effects addressed by kinetics include:

(i) for autonomous systems (those free of external fields): the transient processes (relaxation) that lead from an arbitrary initial state of a system to its thermodynamic equilibrium;

(ii) for systems in time-dependent (say, sinusoidal) external fields: the field-induced periodic oscillations of the system's variables; and

(iii) for systems in time-independent ("dc") external fields: dc transport.

In the last case, we are dealing with stationary (∂/∂t = 0 everywhere) but non-equilibrium situations, in which the effect of an external field, continuously driving the system out of equilibrium, is partly balanced by its simultaneous relaxation – the trend back to equilibrium. Perhaps the most important effect of this class is the dc current in conductors and semiconductors,¹ which alone justifies the inclusion of the basic notions of kinetics into any set of core physics courses.

The reader who has reached this point of the notes already has some taste of physical kinetics, because the subject of the last part of Chapter 5 was the kinetics of a "Brownian particle", i.e. of a "heavy" system interacting with an environment consisting of many "lighter" components.
Indeed, the equations discussed in that part – whether the Smoluchowski equation (5.122) or the Fokker-Planck equation (5.149) – are valid if the environment is in thermodynamic equilibrium, but the system of our interest is not necessarily so. As a result, we could use those equations to discuss such non-equilibrium phenomena as the Kramers problem of the metastable state’s lifetime. In contrast, this chapter is devoted to the more traditional subject of kinetics: systems of many similar particles – generally, interacting with each other but not too strongly, so the energy of the system still may be partitioned into a sum of single-particle components, with the interparticle interactions considered as a perturbation. Actually, we have already started the job of describing such a system at the beginning of Sec. 5.7. Indeed, in the absence of particle interactions (i.e. when it is unimportant whether the particle of our interest is “light” or “heavy”), the probability current densities in the coordinate and momentum spaces are given, respectively, by Eq. (5.142) and the first form of Eq. (5.143a), so the continuity equation (5.140) takes the form

$$\frac{\partial w}{\partial t} + \nabla_q\cdot(w\dot{\mathbf q}) + \nabla_p\cdot(w\dot{\mathbf p}) = 0. \tag{6.1}$$

¹ This topic was briefly addressed in EM Chapter 4, avoiding its aspects related to thermal effects.

© K. Likharev

If similar particles do not interact, this equation for the single-particle probability density w(q, p, t) is valid for each of them, and the result of its solution may be used to calculate any ensemble-average characteristic of the system as a whole. Let us rewrite Eq. (1) in the Cartesian component form,

$$\frac{\partial w}{\partial t} + \sum_j\left[\frac{\partial}{\partial q_j}\left(w\dot q_j\right) + \frac{\partial}{\partial p_j}\left(w\dot p_j\right)\right] = 0, \tag{6.2}$$

where the index j numbers all degrees of freedom of the particle under consideration, and assume that its motion (perhaps in an external, time-dependent field) may be described by a Hamiltonian function H(qj, pj, t). Plugging into Eq. (2) the Hamiltonian equations of motion:²

$$\dot q_j = \frac{\partial H}{\partial p_j}, \qquad \dot p_j = -\frac{\partial H}{\partial q_j}, \tag{6.3}$$

we get

$$\frac{\partial w}{\partial t} + \sum_j\left[\frac{\partial}{\partial q_j}\left(w\,\frac{\partial H}{\partial p_j}\right) - \frac{\partial}{\partial p_j}\left(w\,\frac{\partial H}{\partial q_j}\right)\right] = 0. \tag{6.4}$$

After differentiation of both parentheses by parts, the equal mixed terms w∂²H/∂qj∂pj and w∂²H/∂pj∂qj cancel, and using Eq. (3) again, we get the so-called Liouville theorem³

$$\frac{\partial w}{\partial t} + \sum_j\left(\frac{\partial w}{\partial q_j}\,\dot q_j + \frac{\partial w}{\partial p_j}\,\dot p_j\right) = 0. \tag{6.5}$$

Since the left-hand side of this equation is just the full derivative of the probability density w considered as a function of the generalized coordinates qj(t) of a particle, its generalized momenta components pj(t), and (possibly) time t,⁴ the Liouville theorem (5) may be represented in a surprisingly simple form:

$$\frac{dw(q,p,t)}{dt} = 0. \tag{6.6}$$

Physically this means that the elementary probability dW = w d³q d³p to find a Hamiltonian particle in a small volume of the coordinate-momentum space [q, p], with its center moving in accordance to the deterministic law (3), does not change with time – see Fig. 1.

[Fig. 6.1. The Liouville theorem's interpretation: probability's conservation at the system's motion flow through the [q, p] space – an elementary volume d³q d³p carried along a trajectory q(t), p(t) from time t to t' > t.]
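The area conservation stated by Eq. (6.6) is easy to observe numerically. A minimal sketch (my toy example, using a pendulum with H = p²/2 − cos q): a small triangle of phase-space points is carried by the flow, integrated with a symplectic (leapfrog) scheme, whose discrete map is itself area-preserving.

```python
import numpy as np

def leapfrog(q, p, dt, n):
    # kick-drift-kick integrator for H = p^2/2 - cos(q)
    for _ in range(n):
        p -= 0.5 * dt * np.sin(q)   # half kick, F = -dH/dq = -sin(q)
        q += dt * p                 # drift
        p -= 0.5 * dt * np.sin(q)   # half kick
    return q, p

def triangle_area(pts):
    # shoelace area of the triangle spanned by three phase-space points
    (q0, p0), (q1, p1), (q2, p2) = pts
    return 0.5 * abs((q1 - q0) * (p2 - p0) - (q2 - q0) * (p1 - p0))

eps = 1e-6
start = [(1.0, 0.0), (1.0 + eps, 0.0), (1.0, eps)]
end = [leapfrog(q, p, 0.01, 2000) for q, p in start]
area_ratio = triangle_area(end) / triangle_area(start)   # should stay ~1
```

A non-Hamiltonian term (e.g. friction, p → p − γp dt) would make this ratio decay, consistent with the remark below Eq. (6.6) about diffusion requiring non-Hamiltonian forces.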

Liouville theorem

² See, e.g., CM Sec. 10.1.
³ Actually, this is just one of several theorems bearing the name of Joseph Liouville (1809-1882).
⁴ See, e.g., MA Eq. (4.2).

At the first glance, this fact may not look surprising, because according to the fundamental Einstein relation (5.78), one needs non-Hamiltonian forces (such as the kinematic friction) to have diffusion. On the other hand, it is striking that the Liouville theorem is valid even for (Hamiltonian) systems with deterministic chaos,⁵ in which the deterministic trajectories corresponding to slightly different initial conditions become increasingly mixed with time.

For an ideal gas of 3D particles, we may use the ordinary Cartesian coordinates rj (with j = 1, 2, 3) as the generalized coordinates qj, so pj become the Cartesian components mvj of the usual (linear) momentum, and the elementary volume is just d³r d³p – see Fig. 1. In this case, Eqs. (3) are just

$$\dot r_j = \frac{p_j}{m} \equiv v_j, \qquad \dot p_j = F_j, \tag{6.7}$$

where F is the force exerted on the particle, so the Liouville theorem may be rewritten as

$$\frac{\partial w}{\partial t} + \sum_{j=1}^3\left(\frac{\partial w}{\partial r_j}\,v_j + \frac{\partial w}{\partial p_j}\,F_j\right) = 0, \tag{6.8}$$

and conveniently represented in the vector form

$$\frac{\partial w}{\partial t} + \mathbf v\cdot\nabla_r w + \mathbf F\cdot\nabla_p w = 0. \tag{6.9}$$

Of course, the situation becomes much more complex if the particles interact. Generally, a system of N similar particles in 3D space has to be described by the probability density being a function of (6N + 1) arguments: 3N Cartesian coordinates, plus 3N momentum components, plus time. An analytical or numerical solution of any equation describing the time evolution of such a function for a typical system of N ~ 10²³ particles is evidently a hopeless task. Hence, any theory of realistic systems' kinetics has to rely on making reasonable approximations that would simplify the situation.

One of the most useful approximations (sometimes called Stosszahlansatz – German for the "collision-number assumption") was suggested by Ludwig Boltzmann for a gas of particles that move freely most of the time but interact during short time intervals, when a particle comes close to either an immobile scattering center (say, an impurity in a conductor's crystal lattice) or to another particle of the gas. Such brief scattering events may change the particle's momentum. Boltzmann argued that they may be still approximately described by Eq. (9), with the addition of a special term (called the scattering integral) to its right-hand side:

$$\frac{\partial w}{\partial t} + \mathbf v\cdot\nabla_r w + \mathbf F\cdot\nabla_p w = \left.\frac{\partial w}{\partial t}\right|_{\text{scattering}}. \tag{6.10}$$

Boltzmann transport equation

This is the Boltzmann transport equation, sometimes called just the “Boltzmann equation” for short. As will be discussed below, it may give a very reasonable description of not only classical but also quantum particles, though it evidently neglects the quantum-mechanical coherence/entanglement effects6 – besides those that may be hidden inside the scattering integral.

⁵ See, e.g., CM Sec. 9.3.
⁶ Indeed, the quantum state coherence is described by off-diagonal elements of the density matrix, while the classical probability w represents only the diagonal elements of that matrix. However, at least for the ensembles close to thermal equilibrium, this is a reasonable approximation – see the discussion in Sec. 2.1.

The concrete form of the scattering integral depends on the type of particle scattering. If the scattering centers do not belong to the ensemble under consideration (an example is given, again, by impurity atoms in a conductor), then the scattering integral may be expressed as an evident generalization of the master equation (4.100):

$$\left.\frac{\partial w}{\partial t}\right|_{\text{scattering}} = \int d^3p' \left[\Gamma_{\mathbf p'\to\mathbf p}\, w(\mathbf r,\mathbf p',t) - \Gamma_{\mathbf p\to\mathbf p'}\, w(\mathbf r,\mathbf p,t)\right], \tag{6.11}$$

where the physical sense of Γp→p' is the rate (i.e. the probability per unit time) for the particle to be scattered from the state with the momentum p into the state with the momentum p' – see Fig. 2.

[Fig. 6.2. A single-particle scattering event: a particle with momentum p is deflected by a scattering center into the state with momentum p'.]

Most elastic interactions are reciprocal, i.e. obey the following relation (closely related to the reversibility of time in Hamiltonian systems): Γp→p' = Γp'→p, so Eq. (11) may be rewritten as⁷

$$\left.\frac{\partial w}{\partial t}\right|_{\text{scattering}} = \int d^3p'\; \Gamma_{\mathbf p\leftrightarrow\mathbf p'}\left[w(\mathbf r,\mathbf p',t) - w(\mathbf r,\mathbf p,t)\right]. \tag{6.12}$$
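The structure of the linear scattering integral (6.12) can be illustrated with a toy discretization (my own example, not from the text): for a finite set of momentum "cells" with reciprocal rates G[i,j] = G[j,i], the integral becomes dwᵢ/dt = Σⱼ G[i,j](wⱼ − wᵢ). This dynamics conserves the total probability and drives w toward the distribution that nulls the integral – here, the uniform one.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
G = rng.uniform(0.5, 1.5, (n, n))
G = 0.5 * (G + G.T)                     # reciprocity: G[i,j] = G[j,i]

def evolve(w, dt, steps):
    # explicit Euler integration of dw_i/dt = sum_j G[i,j]*(w_j - w_i)
    out_rate = G.sum(axis=1)            # total rate out of each cell
    for _ in range(steps):
        w = w + dt * (G @ w - out_rate * w)
    return w

w0 = rng.uniform(0.0, 1.0, n)
w0 /= w0.sum()                          # normalized initial distribution
w_final = evolve(w0, 1e-3, 20000)       # relaxes to the uniform distribution
```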

With such a scattering integral, Eq. (10) stays linear in w but becomes an integro-differential equation, typically harder to solve analytically than differential equations. The equation becomes even more complex if the scattering is due to the mutual interaction of the particle members of the system – see Fig. 3.

[Fig. 6.3. A particle-particle scattering event: two particles enter and leave the interaction region, with momenta p, p', and p'' marked.]

In this case, the probability of a scattering event scales as a product of two single-particle probabilities, and the simplest reasonable form of the scattering integral is⁸

⁷ One may wonder whether this approximation may work for Fermi particles, such as electrons, for whom the Pauli principle forbids scattering into the already occupied state, so for the scattering p → p', the term w(r, p, t) in Eq. (12) has to be multiplied by the probability [1 – w(r, p', t)] that the final state is available. This is a valid argument, but one should notice that if this modification has been done with both terms of Eq. (12), it becomes

$$\left.\frac{\partial w}{\partial t}\right|_{\text{scattering}} = \int d^3p'\; \Gamma_{\mathbf p\leftrightarrow\mathbf p'}\big\{w(\mathbf r,\mathbf p',t)\left[1 - w(\mathbf r,\mathbf p,t)\right] - w(\mathbf r,\mathbf p,t)\left[1 - w(\mathbf r,\mathbf p',t)\right]\big\}.$$

Opening both square brackets, we see that the probability density products cancel, bringing us back to Eq. (12).
⁸ This was the approximation used by L. Boltzmann to prove the famous H-theorem, stating that the entropy of the gas described by Eq. (13) may only grow (or stay constant) in time, dS/dt ≥ 0. Since the model is very approximate, that result does not seem too fundamental nowadays, despite all its historic significance.

$$\left.\frac{\partial w}{\partial t}\right|_{\text{scattering}} = \int d^3p' \int d^3p''\left[\Gamma_{\mathbf p'\mathbf p''\to\mathbf p\,\mathbf p'''}\; w(\mathbf r,\mathbf p',t)\, w(\mathbf r,\mathbf p'',t) - \Gamma_{\mathbf p\,\mathbf p'''\to\mathbf p'\mathbf p''}\; w(\mathbf r,\mathbf p,t)\, w(\mathbf r,\mathbf p''',t)\right]. \tag{6.13}$$

The integration dimensionality in Eq. (13) takes into account the fact that due to the conservation of the total momentum at scattering,

$$\mathbf p + \mathbf p''' = \mathbf p' + \mathbf p'', \tag{6.14}$$

one of the momenta is not an independent argument, so the integration in Eq. (13) may be restricted to a 6D p-space rather than the 9D one. For the reciprocal interaction, Eq. (13) may also be a bit simplified, but it still keeps Eq. (10) a nonlinear integro-differential transport equation, excluding such powerful solution methods as the Fourier expansion – which hinges on the linear superposition principle.

This is why most useful results based on the Boltzmann transport equation depend on its further simplifications, most notably the relaxation-time approximation – RTA for short.⁹ This approximation is based on the fact that in the absence of spatial gradients (∇r = 0) and external forces (F = 0), in the thermal equilibrium, Eq. (10) yields

$$\frac{\partial w}{\partial t} = \left.\frac{\partial w}{\partial t}\right|_{\text{scattering}}, \tag{6.15}$$

so the equilibrium probability distribution w₀(r, p, t) has to turn any scattering integral to zero. Hence at a small deviation from the equilibrium,

$$\tilde w(\mathbf r,\mathbf p,t) \equiv w(\mathbf r,\mathbf p,t) - w_0(\mathbf r,\mathbf p,t) \to 0, \tag{6.16}$$

the scattering integral should be proportional to the deviation w̃, and its simplest reasonable model is

$$\left.\frac{\partial w}{\partial t}\right|_{\text{scattering}} = -\frac{\tilde w}{\tau}, \tag{6.17}$$

Relaxation-time approximation (RTA)

where τ is a phenomenological constant (which, according to Eq. (15), has to be positive for the system's stability) called the relaxation time. Its physical meaning will become clearer in the next section.

The relaxation-time approximation is quite reasonable if the angular distribution of the scattering rate is dominated by small angles between vectors p and p' – as it is, for example, for the Rutherford scattering by a Coulomb center.¹⁰ Indeed, in this case the two values of the function w participating in Eq. (12) are close to each other for most scattering events, so the loss of the second momentum argument (p') is not too essential. However, using the Boltzmann-RTA equation that results from combining Eqs. (10) and (17),

$$\frac{\partial w}{\partial t} + \mathbf v\cdot\nabla_r w + \mathbf F\cdot\nabla_p w = -\frac{\tilde w}{\tau}, \tag{6.18}$$

Boltzmann-RTA equation

we should always remember that this is just a phenomenological model, sometimes giving completely wrong results. For example, it prescribes the same time scale (τ) to the relaxation of the net momentum of the system, and to its energy relaxation, while in many real systems, the latter process (that results

⁹ Sometimes this approximation is called the "BGK model", after P. Bhatnagar, E. Gross, and M. Krook who suggested it in 1954. (The same year, a similar model was considered by P. Welander.)
¹⁰ See, e.g., CM Sec. 3.7.

from inelastic collisions) may be substantially longer. Naturally, in the following sections, I will describe only those applications of the Boltzmann-RTA equation that give a reasonable description of physical reality.

6.2. The Ohm law and the Drude formula

Despite its shortcomings, Eq. (18) is adequate for quite a few applications. Perhaps the most important of them is deriving the Ohm law for dc current in a "nearly-ideal" gas of charged particles, whose only important deviation from ideality is the rare scattering effects described by Eq. (17). As a result, in equilibrium it is described by the stationary probability w₀ of an ideal gas (see Sec. 3.1):

$$w_0(\mathbf r,\mathbf p,t) = \frac{g}{(2\pi\hbar)^3}\,\langle N(\varepsilon)\rangle, \tag{6.19}$$

where g is the internal degeneracy factor (say, g = 2 for electrons due to their spin), and ⟨N(ε)⟩ is the average occupancy of a quantum state with momentum p, that obeys either the Fermi-Dirac or the Bose-Einstein distribution:

$$\langle N(\varepsilon)\rangle = \frac{1}{\exp\{(\varepsilon-\mu)/T\} \pm 1}, \qquad \varepsilon = \varepsilon(\mathbf p), \tag{6.20}$$

with the upper sign for fermions and the lower one for bosons. (The following calculations will be valid, up to a point, for both statistics and hence, in the limit μ/T → –∞, for a classical gas as well.)

Now let a uniform dc electric field E be applied to a uniform gas of similar particles with electric charge q, exerting the force F = qE on each of them. Then the stationary solution of Eq. (18), with ∂/∂t = 0, should also be stationary and spatially uniform (∇r = 0), so this equation is reduced to

$$q\,\mathbf E\cdot\nabla_p w = -\frac{\tilde w}{\tau}. \tag{6.21}$$

Let us require the electric field to be relatively low, so that the perturbation w̃ it produces is relatively small, as required by our basic assumption (16).¹¹ Then on the left-hand side of Eq. (21), we can neglect that perturbation, by replacing w with w₀, because that side already has a small factor (E). As a result, this equation yields

$$\tilde w = -\tau\, q\,\mathbf E\cdot\nabla_p w_0 = -\tau\, q\,\mathbf E\cdot\frac{\partial w_0}{\partial\varepsilon}\,\nabla_p\varepsilon, \tag{6.22}$$

where the second step implies isotropy of the parameters μ and T, i.e. their independence of the direction of the particle's momentum p. But the gradient ∇pε is nothing else than the particle's velocity

¹¹ Since the scale of the fastest change of w₀ in the momentum space is of the order of ∂w₀/∂p = (∂w₀/∂ε)(dε/dp) ~ (1/T)v, where v is the particle speed scale, the necessary condition of the linear approximation (22) is eE… […] …T is well fulfilled for room temperatures (T ~ 0.025 eV) – and actually for all temperatures below the metal's evaporation point.
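The linear-response solution (6.22) can be checked numerically. A minimal 1D sketch (my illustrative units, assuming a classical Maxwell distribution): the correction w̃ = −τqE ∂w₀/∂p shifts the mean velocity by exactly ⟨v⟩ = qEτ/m, the Drude drift velocity.

```python
import numpy as np

# illustrative parameters in arbitrary units (assumed, not from the text)
m, T, q, E, tau = 1.0, 1.0, 1.0, 0.01, 0.3

p = np.linspace(-30.0, 30.0, 200001)
dp = p[1] - p[0]
w0 = np.exp(-p**2 / (2.0 * m * T))      # Maxwell distribution (unnormalized)
w0 /= w0.sum() * dp                     # normalize to unit total probability
w = w0 - tau * q * E * np.gradient(w0, p)   # 1D version of Eq. (6.22)
v_drift = np.sum((p / m) * w) * dp      # mean velocity; expect q*E*tau/m
```

Integrating by parts analytically gives the same result, ⟨v⟩ = qEτ/m, which is the starting point of the Drude conductivity σ = q²nτ/m.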

[Fig. 6.5. Potential profiles of (a) a single conductor and (b, c) a system of two closely located conductors, for two different biasing situations: (b) zero electrostatic field (the "flat-band condition"), and (c) zero voltage Δμ'/q. The panels mark the vacuum level, the workfunction ψ, the chemical potentials μ₁ and μ₂, and the gap width d.]

Now let us consider two conductors with different values of , separated by a small spatial gap d – see Figs. 5b,c. Panel (b) shows the case when the electric field E = – in the free-space gap between the conductors equals zero, i.e. their electrostatic potentials  are equal.26 If there is an opportunity for particles to cross the gap (e.g., by either the thermally-activated hopping over the potential barrier, discussed in Secs. 5.6-5.7, or the quantum-mechanical tunneling through it), there will be an average flow of particles from the conductor with the higher Fermi level to that with the lower Fermi level,27 because the chemical equilibrium requires their equality – see Secs. 1.5 and 2.7. If the particles have an electric charge (as electrons do), the equilibrium will be automatically achieved by them recharging the effective capacitor formed by the conductors, until the electrostatic energy difference q reaches the value reproducing that of the workfunctions (Fig. 5c). So for the equilibrium potential difference28 we may write q     . (6.44) At this equilibrium, the electric field in the gap between the conductors is 25

25 Sometimes it is also called “electron affinity”, though this term is mostly used for atoms and molecules.
26 In semiconductor physics and engineering, the situation shown in Fig. 5b is called the flat-band condition, because any electric field applied normally to a surface of a semiconductor leads to the so-called energy band bending – see the next section.
27 As measured from a common reference value, for example from the vacuum level – rather than from the bottom of an individual potential well as in Fig. 5a.
28 In physics literature, it is usually called the contact potential difference, while in electrochemistry (for which it is one of the key notions), the term Volta potential is more common.

Chapter 6

Page 12 of 38

Essential Graduate Physics

SM: Statistical Mechanics

 E = (Δφ/d) n = (Δψ/qd) n; (6.45)
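A hedged numerical illustration of Eq. (45): for an assumed workfunction difference Δψ = 1 eV across a d = 1 nm gap (both values illustrative, not from the text), the contact field is enormous by laboratory standards:

```python
# Estimate of the contact field (6.45) in the gap between two conductors.
# The workfunction difference and the gap width below are assumed numbers.
d_psi_eV = 1.0      # workfunction difference, eV
d_nm = 1.0          # vacuum gap width, nm

# Since q*d_phi = d_psi (Eq. 6.44), the field is E = d_psi/(q*d):
E_V_per_nm = d_psi_eV / d_nm
E_V_per_cm = E_V_per_nm * 1e7   # 1 V/nm = 1e7 V/cm
print(f"E = {E_V_per_nm} V/nm = {E_V_per_cm:.0e} V/cm")
```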

In Fig. 5c, this field is clearly visible as the tilt of the electric potential profile. Comparing Eq. (45) with the definition (42) of the effective electric field 𝓔, we see that the equilibrium, i.e. the absence of current through the potential barrier, is achieved exactly when 𝓔 = 0, in accordance with Eq. (41).

The electric field dichotomy, E vs. 𝓔, raises a natural question: which of these fields are we speaking about in everyday and laboratory practice? Upon some contemplation, the reader should agree that most of our electric field measurements are done indirectly, by measuring the corresponding voltages – with voltmeters. A vast majority of these instruments belong to the so-called electrodynamic variety, which is based on the measurement of a small current flowing through the voltmeter.29 As Eq. (41) shows, such electrodynamic voltmeters measure the electrochemical potential difference Δμ′/q. However, there exists a rare breed of electrostatic voltmeters (also called “electrometers”) that measure the electrostatic potential difference Δφ between two conductors. One way to implement such an instrument is to use an ordinary, electrodynamic voltmeter, but with the reference point set at the flat-band condition (Fig. 5b) between the conductors. (This condition may be detected by the vanishing electric charge on the adjacent surfaces of the conductors, and hence by the absence of its modulation in time if the distance between the surfaces is periodically modulated.)

Now let me return to Eq. (41) and make two very important remarks. First, it says that in the presence of an electric field, the current vanishes only if ∇μ′ = 0, i.e. that the electrochemical potential μ′, rather than the chemical potential μ, has to be position-independent in a conducting system in full thermodynamic (thermal, chemical, and electric) equilibrium. This result by no means contradicts the fundamental thermodynamic relations for μ discussed in Sec. 1.5, or the statistical relations involving μ, which were discussed in Sec. 2.7 and beyond. Indeed, according to Eq. (40), μ′(r) is “merely” the chemical potential referred to the local value of the electrostatic energy qφ(r), and in all previous parts of the course, this energy was assumed to be constant throughout the system.

Second, note another interpretation of Eq. (41), which may be achieved by modifying Eq. (38) for the particular case of the classical gas. Indeed, the local density n ≡ N/V of the gas obeys Eq. (3.32), which may be rewritten as

 n(r) = const × exp{μ(r)/T}. (6.46)

Taking the spatial gradient of both sides of this relation (still at constant T), we get

 ∇n = const × (1/T) exp{μ/T} ∇μ = (n/T)∇μ, (6.47)

so ∇μ = (T/n)∇n, and Eq. (41), with σ given by Eq. (32), may be recast as

 j = –(q²τ/m) n ∇(μ′/q) = (qτ/m)(qnE – T∇n). (6.48)

29 The devices for such measurement may be based on the interaction between the measured current and a permanent magnet, as pioneered by A.-M. Ampère in the 1820s – see, e.g., EM Chapter 5. Such devices are sometimes called galvanometers, honoring another pioneer of electricity, Luigi Galvani.


The second term in the parentheses is a specific manifestation of the general Fick’s law of diffusion j_w = –D∇n, already mentioned in Sec. 5.6. Hence the current density may be viewed as consisting of two independent parts: one due to the particle drift induced by the “usual” electric field E = –∇φ, and another due to their diffusion – see Eq. (5.118) and its discussion. This is exactly the physics of the “mysterious” term ∇μ in Eq. (42), though its simple form (48) is valid only in the classical limit.

Besides being very useful for applications, Eq. (48) also gives us a pleasant surprise. Namely, plugging it into the continuity equation for electric charge,30

 ∂(qn)/∂t + ∇·j = 0, (6.49)

we get (after the division of all terms by qτ/m) the so-called drift-diffusion equation:31

 (m/τ) ∂n/∂t = ∇·(n∇U) + T∇²n, with U ≡ qφ. (6.50)
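As a side check (not in the original text), a quick finite-difference evaluation confirms that the Boltzmann-distribution profile n(x) ∝ exp{–U(x)/T} makes the right-hand side of Eq. (50) vanish, i.e. is a stationary solution; the potential shape and the units below are arbitrary assumptions:

```python
import math

# Finite-difference check that n(x) ∝ exp(-U(x)/T) is stationary under the
# drift-diffusion equation (6.50): d/dx (n dU/dx) + T d²n/dx² = 0.
T = 1.0
h = 1e-3
xs = [i * h for i in range(2001)]            # uniform grid on [0, 2]
U = [math.sin(3 * x) for x in xs]            # an arbitrary smooth potential
n = [math.exp(-u / T) for u in U]            # Boltzmann profile

def d1(f, i):   # centered first derivative
    return (f[i + 1] - f[i - 1]) / (2 * h)

def d2(f, i):   # centered second derivative
    return (f[i + 1] - 2 * f[i] + f[i - 1]) / h**2

# drift flux n*dU/dx at interior grid points (flux[j] lives at grid j+1)
flux = [n[i] * d1(U, i) for i in range(1, len(xs) - 1)]
# residual d(flux)/dx + T d²n/dx² at deep-interior points
res = [(flux[i + 1] - flux[i - 1]) / (2 * h) + T * d2(n, i + 1)
       for i in range(1, len(flux) - 1)]
max_res = max(abs(r) for r in res)
print(f"max residual of Eq. (6.50): {max_res:.2e}")   # small, O(h²)
```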

Comparing it with Eq. (5.122), we see that the drift-diffusion equation is identical to the Smoluchowski equation,32 provided that we parallel the ratio τ/m with the mobility μ_m = 1/η of the Brownian particle. Now using Einstein’s relation (5.78), we see that the effective diffusion constant D of the classical gas of similar particles is

 D = τT/m. (6.51a)

This important relation is more frequently represented in either of two other forms. First, since the rare scattering events we are considering do not change the statistics of the gas in thermal equilibrium, we may still use the Maxwell-distribution result (3.9) for the average-square velocity, ⟨v²⟩ = 3T/m, to recast Eq. (51a) as

 D = (1/3)⟨v²⟩τ. (6.51b)

One more popular form of the same relation uses the notion of the mean free path l, which may be defined as the average distance to be passed by a particle before its next scattering:

 D = (1/3) l ⟨v²⟩^1/2, with l ≡ ⟨v²⟩^1/2 τ. (6.51c)

In the forms (51b)-(51c), the result for D makes more physical sense, because it may be readily derived (admittedly, with some uncertainty of the numerical coefficient) from simple kinematic arguments – the task left for the reader’s exercise. Note also that using Eq. (51a), Eq. (48) may be rewritten as an expression for the particle flow density j_n ≡ j/q:

 j_n = nμ_m qE – D∇n, (6.52)
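The internal consistency of Eqs. (51a) and (51b) above is easy to check numerically: with the Maxwell value ⟨v²⟩ = 3T/m, the two forms of D coincide identically. The particle mass and relaxation time below are illustrative assumptions, not values from the text:

```python
# Consistency check of Eqs. (6.51a) and (6.51b) for a classical gas.
k_T = 4.14e-21    # T = 300 K in joules
m = 9.11e-31      # electron mass, kg (illustrative choice of particle)
tau = 1e-13       # assumed relaxation time, s

D_a = tau * k_T / m          # Eq. (51a): D = tau*T/m
v2 = 3 * k_T / m             # Maxwell mean square speed, <v²> = 3T/m
D_b = v2 * tau / 3           # Eq. (51b): D = <v²>*tau/3
print(f"D = {D_a:.3e} m²/s (both forms coincide)")
```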

30 If this relation is not obvious, please revisit EM Sec. 4.1.
31 Sometimes this term is associated with Eq. (52). One may also run into the term “convection-diffusion equation” for Eq. (50) with the replacement (51a).
32 And hence, at negligible ∇U, identical to the diffusion equation (5.116).


with the first term on the right-hand side describing particles’ drift, and the second one, their diffusion. I will discuss the application of this equation to the most important case of non-degenerate (“quasiclassical”) gases of electrons and holes in semiconductors, in the next section.

To complete this section, let me emphasize again that the mathematically similar drift-diffusion equation (50) and the Smoluchowski equation (5.122) describe different physical situations. Indeed, our (or rather Einstein and Smoluchowski’s :-) treatment of the Brownian motion in Chapter 5 was based on a strong hierarchy of the system, consisting of a large “Brownian particle” in an environment of many smaller particles – “molecules”. On the other hand, in this chapter we are considering a gas of similar particles. Nevertheless, the equations describing the dynamics of their probability distribution are the same – at least within the framework of the Boltzmann transport equation with the relaxation-time approximation (17) of the scattering integral. The origin of this similarity is the fact that Eq. (12) is clearly applicable to a Brownian particle as well, with each “scattering” event being the particle’s hit by a random molecule of its environment. Since, due to the mass hierarchy, the particle momentum change at each such event is very small, the scattering integral has to be local, i.e. depend only on w at the same momentum p as the left-hand side of the Boltzmann equation, so the relaxation-time approximation (17) is absolutely natural – indeed, more natural than for our current case of similar particles.

6.4. Charge carriers in semiconductors

Now let me demonstrate the application of the concepts discussed in the last section, first of all of the electrochemical potential, to understanding the basic kinetic properties of semiconductors and a few key semiconductor structures – which are the basis of most modern electronic and optoelectronic devices, and hence of all our IT civilization. For that, I will need to take a detour to discuss their equilibrium properties first.

I will use an approximate but reasonable picture in which the energy of the electron subsystem in a solid may be partitioned into the sum of the effective energies ε of independent electrons. Quantum mechanics says33 that in such periodic structures as crystals, the stationary state energy ε of a particle interacting with the atomic lattice follows one of the periodic functions εn(q) of the quasimomentum q, oscillating between two extreme values εn^min and εn^max. These allowed energy bands are separated with bandgaps, of widths Δn ≡ εn^min – εn-1^max, with no allowed states inside them. Semiconductors and insulators (dielectrics) are defined as such crystals that in equilibrium at T = 0, all electron states in several energy bands (with the highest of them called the valence band) are completely filled, ⟨N(εv)⟩ = 1, while those in the upper bands, starting from the lowest, conduction band, are completely empty, ⟨N(εc)⟩ = 0.34,35 Since the electrons follow the Fermi-Dirac statistics (2.115), this means that at T → 0,

33 See, e.g., QM Secs. 2.7 and 3.4, but a thorough knowledge of this material is not necessary for following discussions in this section. If the reader is not familiar with the notion of quasimomentum (alternatively called the “crystal momentum”), the following interpretation may be useful: q is the result of quantum averaging of the genuine electron momentum p over the crystal lattice period. In contrast to p, which is not conserved because of the electron’s interaction with the lattice, q is an integral of motion – in the absence of other forces.
34 This mapping of electrical properties of crystals onto their band structure was pioneered in 1931-32 by Alan H. Wilson.
35 In insulators, the bandgap Δ is so large (e.g., ~9 eV in SiO₂) that the conduction band remains unpopulated in all practical situations, so the following discussion is only relevant for semiconductors, with their moderate bandgaps – such as 1.14 eV in the most important case of silicon at room temperature.


the Fermi energy εF ≡ μ(0) is located somewhere between the valence band’s maximum εv^max (usually called simply εV), and the conduction band’s minimum εc^min (called εC) – see Fig. 6.

Fig. 6.6. Calculating μ in an intrinsic semiconductor.

Let us calculate the population of both branches εn(q), and the chemical potential μ in equilibrium at T > 0. Since the functions εn(q) are typically smooth, near the bandgap edges the dispersion laws εc(q) and εv(q) may be well approximated with quadratic parabolas. For our analysis, let us take the parabolas in the simplest, isotropic form, with origins at the same quasimomentum, taking it for the reference point:36

 ε = εC + q²/2mC, for ε ≥ εC,
 ε = εV – q²/2mV, for ε ≤ εV,  with εC – εV ≡ Δ. (6.53)

The positive constants mC and mV are usually called the effective masses of, respectively, electrons and holes. (In a typical semiconductor, mC is a few times smaller than the free electron mass me, while mV is closer to me.) Due to the similarity between the top line of Eq. (53) and the dispersion law (3.3) of free particles, we may reuse Eq. (3.40), with the appropriate particle mass m, the degeneracy factor g, and the energy origin, to calculate the full spatial density of the populated states (in semiconductor physics, called electrons in the narrow sense of the word):

 n ≡ Ne/V = ∫_{εC}^{∞} ⟨N(ε)⟩ g3(ε) dε = [gC mC^3/2/(√2 π²ℏ³)] ∫_{0}^{∞} ⟨N(ε̃ + εC)⟩ ε̃^1/2 dε̃, (6.54)

where ~   – C  0. Similarly, the density p of “no-electron” excitations (called holes) in the valence band is the number of unfilled states in the band, and hence may be calculated as N p h  V

V

 1  N    g  d  3



g V mV3 / 2 2  2

3



 1  N 

V

 ~  ~ 1 / 2 d~ ,

(6.55)

0

where in this case, ε̃ ≥ 0 is defined as ε̃ ≡ εV – ε. If the electrons and holes37 are in the thermal and chemical equilibrium, the functions ⟨N(ε)⟩ in these two relations should follow the Fermi-Dirac

36 It is easy (and hence is left for the reader’s exercise) to verify that all equilibrium properties of charge carriers remain the same (with some effective values of mC and mV) if εc(q) and εv(q) are arbitrary quadratic forms of the Cartesian components of the quasimomentum. A mutual displacement of the branches εc(q) and εv(q) in the quasimomentum space is also unimportant for statistical and most transport properties of the semiconductors, though it is very important for their optical properties – which I will not have time to discuss in any detail.
37 The collective name for them in semiconductor physics is charge carriers – or just “carriers”.


distribution (2.115) with the same temperature T and the same chemical potential μ. Moreover, in our current case of an undoped (intrinsic) semiconductor, these densities have to be equal,

 n = p ≡ ni, (6.56)

because if this electroneutrality condition were violated, the volume would acquire a non-zero electric charge density ρ = e(p – n), which would result, in a bulk sample, in an extremely high electric field energy. From this condition and Eqs. (54)-(55), we get a system of two equations,

 ni = [gC mC^3/2/(√2 π²ℏ³)] ∫_{0}^{∞} ε̃^1/2 dε̃ / {exp[(ε̃ + εC – μ)/T] + 1}
    = [gV mV^3/2/(√2 π²ℏ³)] ∫_{0}^{∞} ε̃^1/2 dε̃ / {exp[(ε̃ + μ – εV)/T] + 1}, (6.57)

whose solution gives both the requested charge carrier density ni and the Fermi level μ. For an arbitrary ratio Δ/T, this solution may be found only numerically, but in most practical cases, this ratio is very large. (Again, for Si at room temperature, Δ ≈ 1.14 eV, while T ≈ 0.025 eV.) In this case, we may use the same classical approximation as in Eq. (3.45), to reduce Eqs. (54) and (55) to simple expressions

 n = nC exp{–(εC – μ)/T},  p = nV exp{–(μ – εV)/T},  for T << Δ, (6.58)

where the temperature-dependent parameters

 nC ≡ (gC/ℏ³)(mCT/2π)^3/2  and  nV ≡ (gV/ℏ³)(mVT/2π)^3/2 (6.59)

may be interpreted as the effective numbers of states (per unit volume) available for occupation in, respectively, the conduction and valence bands, in thermal equilibrium. For usual semiconductors (with gC ~ gV ~ 1, and mC ~ mV ~ me), at room temperature, these numbers are of the order of 3×10²⁵ m⁻³ ≡ 3×10¹⁹ cm⁻³. (Note that all results based on Eqs. (58) are only valid if both n and p are much lower than, respectively, nC and nV.) With the substitution of Eqs. (58), the system of equations (56) allows a straightforward solution:

 μ = (εV + εC)/2 + (T/2)[ln(gV/gC) + (3/2)ln(mV/mC)],  ni = (nC nV)^1/2 exp{–Δ/2T}. (6.60)
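As a numerical sanity check of Eqs. (59)-(60) (a sketch with deliberately simplified parameters: the free electron mass and g = 2 for both bands, rather than the real silicon band-structure values), the predicted orders of magnitude match those quoted in the text:

```python
import math

# Effective number of states (6.59) and intrinsic density (6.60) for a
# Si-like semiconductor at 300 K, with illustrative band parameters.
hbar = 1.0546e-34         # J*s
m_e = 9.11e-31            # free electron mass, kg (simplification)
kT = 1.381e-23 * 300      # J
eV = 1.602e-19            # J
Delta = 1.14 * eV         # bandgap of Si

n_C = 2 * (m_e * kT / (2 * math.pi * hbar**2))**1.5    # Eq. (59), m^-3
n_i = n_C * math.exp(-Delta / (2 * kT))                # Eq. (60), n_V = n_C here
print(f"n_C ~ {n_C/1e6:.1e} cm^-3, n_i ~ {n_i/1e6:.1e} cm^-3")
# n_C comes out ~1e19 cm^-3 and n_i ~1e10 cm^-3, as quoted in the text.
```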

Since in all practical materials the logarithms in the first of these expressions are never much larger than 1,38 it shows that the Fermi level in intrinsic semiconductors never deviates much from the so-called midgap value (εV + εC)/2 – see the (schematic) Fig. 6. In the result for ni, the last (exponential) factor is very small, so the equilibrium number of charge carriers is much lower than that of the atoms – for the most important case of silicon at room temperature, ni ~ 10¹⁰ cm⁻³. The exponential temperature dependence of ni (and hence of the electric conductivity σ ∝ ni) of intrinsic semiconductors is the basis of several applications, for example, simple germanium resistance thermometers efficient in the whole range from ~0.5 K to ~100 K. Another useful application of the same fact is the extraction of the bandgap

38 Note that in the case of simple electron spin degeneracy (gV = gC = 2), the first logarithm vanishes altogether. However, in many semiconductors, the degeneracy is factored by the number of similar energy bands (e.g., six similar conduction bands in silicon), and the factor ln(gV/gC) may slightly affect quantitative results.


of a semiconductor from the experimental measurement of the temperature dependence of σ ∝ ni – frequently, in just two well-separated temperature points.

However, most applications require a much higher concentration of carriers. It may be increased quite dramatically by planting into a semiconductor a relatively small number of slightly different atoms – either

donors (e.g., phosphorus atoms for Si) or acceptors (e.g., boron atoms for Si). Let us analyze the first opportunity, called n-doping, using the same simple energy band model (53). If the donor atom is only slightly different from those in the crystal lattice, it may be easily ionized – giving an additional electron to the conduction band and hence becoming a positive ion. This means that the effective ground state energy εD of the additional electrons is just slightly below the conduction band edge εC – see Fig. 7a.39

Fig. 6.7. The Fermi levels μ in (a) n-doped and (b) p-doped semiconductors. Hatching shows the ranges of unlocalized state energies.
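The two-point bandgap extraction mentioned above may be sketched as follows; the “measured” conductivities here are synthesized from an assumed gap value, and the slow power-law prefactor of ni is neglected relative to the exponential:

```python
import math

# Two-point extraction of the bandgap from sigma ∝ n_i ∝ exp(-Delta/2kT):
# Delta ≈ 2*kB*ln(sigma(T2)/sigma(T1)) / (1/T1 - 1/T2).
kB = 8.617e-5                      # Boltzmann constant, eV/K
Delta_true = 1.14                  # eV, assumed gap used to synthesize data
T1, T2 = 300.0, 400.0              # two well-separated temperatures, K

def sigma(T):                      # synthetic "measured" conductivity
    return math.exp(-Delta_true / (2 * kB * T))   # arbitrary prefactor

Delta_est = 2 * kB * math.log(sigma(T2) / sigma(T1)) / (1 / T1 - 1 / T2)
print(f"extracted bandgap: {Delta_est:.3f} eV")    # recovers 1.140 eV
```

In real measurements the neglected T^3/2 prefactor shifts the result by a few percent, which is why well-separated temperature points are preferred.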

Reviewing the arguments that have led us to Eqs. (58), we see that at relatively low doping, when the strong inequalities n << … E > Ec (as it is in the situation shown in Fig. 8c), it forms, on the left of such point x0, the so-called depletion layer, of a certain width w. Within this layer, not only the electron density n but the hole density p as well are negligible, so the only substantial contribution to the charge density ρ is given by the fully ionized acceptors: ρ ≈ –en– ≈ –enA, and Eq. (72) becomes very simple:

 d²φ/dx² = enA/κε₀ = const, for x0 – w ≤ x ≤ x0. (6.78)

Let us use this equation to calculate the largest possible width w of the depletion layer, and the critical value, Ec, of the applied field necessary for this. (By definition, at E = Ec, the left boundary of the layer, where the band bending reaches the bandgap value, eφ(x) = εC – εV ≡ Δ, just touches the semiconductor surface: x0 – w = 0, i.e. x0 = w. Figure 8c shows the case when E is slightly larger than Ec.) For this, Eq. (78) has to be solved with the following boundary conditions:

 φ(0) = Δ/e,  dφ(0)/dx = –Ec,  φ(w) = 0,  dφ(w)/dx = 0. (6.79)
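Integrating Eq. (78) twice with the boundary conditions (79) yields the standard depletion-layer estimates w = (2κε₀Δ/e²nA)^1/2 and Ec = enAw/κε₀; a numerical sketch with illustrative (Si-like) parameter values, not taken from the text:

```python
import math

# Depletion-layer width and critical field following from Eqs. (6.78)-(6.79).
# All parameter values below are illustrative assumptions.
eps0 = 8.854e-12          # vacuum permittivity, F/m
kappa = 12.0              # relative permittivity (Si-like)
e = 1.602e-19             # elementary charge, C
Delta = 1.14 * e          # bandgap, J
n_A = 1e24                # acceptor density, m^-3 (i.e. 1e18 cm^-3)

w = math.sqrt(2 * kappa * eps0 * Delta / (e**2 * n_A))   # depletion width, m
E_c = e * n_A * w / (kappa * eps0)                       # critical field, V/m
print(f"w ~ {w*1e9:.0f} nm, E_c ~ {E_c/1e8:.2f} MV/cm")
```

Note that w grows only as nA^(–1/2), so even heavily doped samples have depletion layers of tens of nanometers, while Ec stays well below typical insulator breakdown fields.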

49 Even some amorphous thin-film insulators, such as properly grown silicon and aluminum oxides, can withstand fields up to ~10 MV/cm.
50 As a reminder, the derivation of this formula was the task of Problem 3.14.


Note that the first of these conditions is strictly valid only if T