
Feynman Simplified 1B: Harmonic Oscillators & Thermodynamics
Everyone’s Guide to the Feynman Lectures on Physics
by Robert L. Piccioni, Ph.D.
Second Edition

Copyright © 2016 by Robert L. Piccioni
Published by Real Science Publishing
3949 Freshwind Circle, Westlake Village, CA 91361, USA
Edited by Joan Piccioni

All rights reserved, including the right of reproduction in whole or in part, in any form. Visit our web site www.guidetothecosmos.com

Everyone’s Guide to the Feynman Lectures on Physics Feynman Simplified gives mere mortals access to the fabled Feynman Lectures on Physics. Caltech Professor and Nobel Laureate Richard Feynman was the greatest scientist since Einstein. I had the amazing opportunity to learn physics directly from the world’s best physicist. He had an uncanny ability to unravel the most complex mysteries, reveal underlying principles, and profoundly understand nature. No one ever presented introductory physics with greater insight than did Richard Feynman. He taught us more than physics — he taught us how to think like a physicist. But, the Feynman Lectures are like “sipping from a fire hose.” His mantra seemed to be: No Einstein Left Behind. He sought to inspire “the more advanced and excited student”, and ensure “even the most intelligent student was unable to completely encompass everything.” My goal is to reach as many eager students as possible and bring Feynman’s genius to a wider audience. For those who have struggled with the Big Red Books, and for those who were reluctant to take the plunge, Feynman Simplified is for you. Physics is one of the greatest adventures of the human mind — everyone can enjoy exploring nature. To further Simplify this adventure, I have written an eBook explaining all the math needed to master Feynman physics. Click here for more information about Feynman Simplified 4A: Math for Physicists. Additionally, an index for the Feynman Simplified series is in progress, with the latest edition available for free here.

This Book

Feynman Simplified: 1B covers about a quarter of Volume 1, the freshman course, of The Feynman Lectures on Physics. The topics we explore include:

Harmonic Oscillators, Resonances, and Transients
Kinetic Theory of Gases
Statistical Mechanics
Thermodynamics

Feynman Simplified makes Feynman’s lectures easier to understand without watering down his brilliant insights. I have added reviews of key ideas at the end of each chapter and at the end of each major section.

Feynman Simplified is self-contained; you need not go back and forth between this book and his. But, for those who wish to read both, I provide extensive cross-references: V1p12-9 denotes his Volume 1, chapter 12, page 9. If, for example, you have trouble with Feynman’s description of reversible machines in Volume 1 page 4-2, simply search Feynman Simplified for V1p4-2. Some material is presented in a different sequence — the best way to divide topics for one-hour lectures is not necessarily the best way to present them in a book. Many major discoveries have been made in the last 50 years; Feynman Simplified augments and modifies his lectures as necessary to provide the best explanations of the latest developments. Links to additional information on many topics are included. To find out about other eBooks in the Feynman Simplified series, click HERE. I welcome your comments and suggestions. Please contact me through my WEBSITE. If you enjoy this eBook please do me the great favor of rating it on Amazon.com or BN.com.

Table of Contents Chapter 12: Harmonic Oscillators Chapter 13: Resonances Chapter 14: Transients & Linear Systems Review Chapter 15: Kinetic Theory of Gases Chapter 16: Statistical Mechanics Chapter 17: Brownian Motion Chapter 18: Kinetic Theory At Equilibrium Chapter 19: Kinetic Theory Near Equilibrium Chapter 20: “Newton, We Have A Problem!” Chapter 21: Thermodynamic Laws Chapter 22: Thermodynamic Applications Chapter 23: Irreversibility & Entropy Chapter 24: Thermodynamics Review

Chapter 12 Harmonic Oscillators

In V1p21-1 Feynman says: “In the study of physics…a strange thing occurs again and again: the equations which appear in different fields of physics, and even in other sciences, are often almost exactly the same, so that many phenomena have analogs in these different fields. So the study of a phenomenon in one field may permit an extension of our knowledge in another field.” “The harmonic oscillator, which we are about to study, has close analogs in many other fields; although we start with a mechanical example of a weight on a spring…we are really studying a certain differential equation. This equation appears again and again in physics and other sciences…[including] charge flowing back and forth in an electrical circuit; the vibrations of a tuning fork which is generating sound waves; the analogous vibrations of the electrons in an atom, which generate light waves;…a thermostat trying to adjust a temperature;…the growth of a colony of bacteria…; foxes eating rabbits eating grass, and so on.” The equations governing all these phenomena are called linear differential equations with constant coefficients. The words “linear” and “constant” make these the simplest of all differential equations — a good place to start. The form of equations of this type is:

f(t) = a₀ x + a₁ dx/dt + … + aₙ dⁿx/dtⁿ

where t is the independent variable, x is the dependent variable, all the aⱼ are constant, and n is the order of this linear differential equation. This is called a differential equation because it contains derivatives.

The Harmonic Oscillator

The simplest mechanical example of behavior governed by a linear differential equation is a mass on a spring, illustrated in Figure 12-1. Here m is the mass, x is the vertical height, and x = 0 is the equilibrium height, the height at which that mass can rest motionless on this spring.

Figure 12-1 Mass on a Spring

We will assume the spring is ideal, perfectly elastic, and obeys Hooke’s law, which means the force F exerted by the spring is given by:

F = –k x

Here, k is the spring constant and x is the displacement from the equilibrium position. The minus sign signifies that the force opposes the displacement: if the mass moves to +x, the spring’s force is directed toward –x, and vice versa. The differential equation is then:

F = ma = –kx
d²x/dt² = –(k/m) x

From Chapter 6, we recall two related functions, sine and cosine, whose second derivatives are proportional to minus themselves. Specifically,

d²(sinωt)/dt² = –ω² sinωt
d²(cosωt)/dt² = –ω² cosωt

Indeed, the sine and cosine functions are identical except for a phase shift: sin(ø) = cos(ø–π/2). Either function fits our need; using sine will require less writing if the mass is at x=0 at t=0, while using cosine will require less writing if the velocity of the mass is zero at t=0. Let’s pick the cosine, and start with a mass that is stationary at displacement x=A, and is released at t=0.

x = A cosωt
d²x/dt² = –ω² x
ω² = k/m

Note that A could have any value and satisfy the same equation. This is what we mean by a linear differential equation. The series of terms in our original differential equation had derivatives of various orders, but each was proportional to x⁰ or x¹, not x² or √x or any other power. So if we multiply x by any constant A, Ax will satisfy all the same linear differential equations as does x. This is not true for the independent variable t: if we divide t by 2, dx/dt will double and d²x/dt² will quadruple — t/2 will not satisfy all the same equations as does t.

Due to the cosine term, x oscillates up and down. The mass will start at x=A, drop through x=0, all the way down to x=–A, stop there for an instant, rise again, pass through x=0, and return to a momentary pause at x=A. That cycle will repeat indefinitely. A is called the amplitude of the oscillation. The period of the oscillation is how much time is required for the mass to complete one full cycle; that time is t = 2π/ω = 2π√(m/k), since the cosine function repeats every 2π radians. For a given m and k, the period of oscillation never changes; hence the name harmonic. Note that the mass oscillates with the same period regardless of the amplitude of oscillation. If we compress the spring twice as much, the force doubles, the acceleration doubles, the distance traveled in one second doubles, which exactly balances the fact that the mass has twice as far to travel to complete its cycle. The factors that determine the oscillation period are m and k. A greater mass is harder to move; it slows the motion and lengthens the period. Quadrupling the mass doubles the period, due to the square root. Conversely, a stronger spring has a larger k, exerting a greater force and reducing the period. Quadrupling k halves the period. There is one more “knob” to play with: a phase shift. We saw earlier that subtracting π/2 from the argument of the cosine function transforms it into the sine function. We can add or subtract any constant angle ø. The equation:

x = A cos(ωt+ø)

allows us to describe any starting position and velocity, what we call initial conditions. This equation can be expanded according to the usual rules of trigonometry:

x = (A cosø) cosωt – (A sinø) sinωt

Depending on application and personal preference, one can write this equation in any of the following equivalent ways:

x = C cosωt + B sinωt
x = A cos(ωt+ø)
x = A cos(ω [t–t₀])

In the first equation above, √(C²+B²) = A, the amplitude of oscillation. In all three, ω is called the angular frequency of oscillation, which is measured in radians per second. If j complete oscillations occur per second, ω equals 2πj.
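For readers who like to verify formulas numerically, here is a minimal Python sketch of the result above. The mass, spring constant, amplitude, and phase values are illustrative assumptions, not numbers from the text; the sketch checks that x = A cos(ωt+ø) satisfies d²x/dt² = –(k/m)x and prints the period 2π√(m/k).

```python
import math

# Illustrative values (assumptions, not from the text)
m, k = 0.5, 8.0
omega = math.sqrt(k / m)          # natural angular frequency, ω = √(k/m)
period = 2 * math.pi / omega      # t = 2π/ω = 2π√(m/k)

A, phi = 0.1, 0.3                 # amplitude and phase shift (arbitrary)

def x(t):                         # x(t) = A cos(ωt + ø)
    return A * math.cos(omega * t + phi)

def accel(t, dt=1e-5):            # numerical second derivative of x
    return (x(t + dt) - 2 * x(t) + x(t - dt)) / dt**2

# Check that d²x/dt² ≈ –(k/m) x at a few times
for t in (0.0, 0.7, 1.3):
    print(accel(t), -(k / m) * x(t))
print("period =", period)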

Initial Conditions V1p21-4

Any specific linear differential equation can be solved by many different equations. We noted earlier that their linearity ensures that Ax solves any equation that is solved by x. We also noted that sine functions solve the mass-on-spring equation when the starting height is zero, while cosines solve the equation when the starting velocity is zero. The three forms of the solution listed above are each able to describe any mass moving on any ideal spring, with any starting position and velocity. What we need to do next, therefore, is to learn how to connect these initial conditions to the adjustable constants in our three solutions. Let’s take the first solution as an example and take its time derivative:

x(t) = C cosωt + B sinωt
v(t) = –Cω sinωt + Bω cosωt

These two equations provide the position and velocity at any time t. The spring determines the acceleration at any time t:

a(t) = –k x(t) / m

Let’s assume we know x(0) and v(0), the initial conditions at time t=0. We can then calculate C and B.

C = x(0)
B = v(0) / ω

Similarly, for the second solution: x(t) = A cos(ωt+ø):

v(t) = –Aω sin(ωt+ø)

if x(0) equals 0: ø = π/2 and A = –v(0) / ω
if x(0) is not 0: tanø = –v(0) / [ω x(0)] and A = x(0) / cosø

For the third solution: x(t) = A cos(ω[t–t₀])

v(t) = –Aω sin(ω[t–t₀])

if x(0) equals 0: t₀ = π / 2ω and A = v(0) / ω
if x(0) is not 0: tan(ωt₀) = v(0) / [ω x(0)] and A = x(0) / cos(ωt₀)
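As a concrete check of these recipes, the short Python sketch below starts from assumed initial conditions x(0) and v(0) (the numbers are arbitrary), builds both the C, B form and the A, ø form, and confirms they describe the same motion.

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)

x0, v0 = 0.02, -0.15              # assumed initial displacement and velocity

# First form: x(t) = C cos(ωt) + B sin(ωt)
C = x0
B = v0 / omega

def x(t):
    return C * math.cos(omega * t) + B * math.sin(omega * t)

def v(t):
    return -C * omega * math.sin(omega * t) + B * omega * math.cos(omega * t)

print(x(0.0), v(0.0))             # reproduces x0 and v0

# Second form: x(t) = A cos(ωt + ø), with tanø = –v0/(ω x0), A = x0/cosø
phi = math.atan2(-v0 / omega, x0)  # valid for x0 ≠ 0
A = x0 / math.cos(phi)
print(A * math.cos(omega * 0.4 + phi), x(0.4))   # both forms agree at t = 0.4
```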

Next, let’s examine the kinetic and potential energy of this system, using our second solution as an example. The potential energy U is how much energy is stored in the compressed (or stretched) spring, with displacement +x (or –x). Recall that F = –grad(U), which for one-dimensional motion reduces to:

F = –dU/dx
U = –∫ F dx
U = –∫ (–kx) dx
U = + k x²/2 = (k A²/2) cos²(ωt+ø)

The potential energy oscillates as the spring displacement oscillates, as expected. Now calculate the kinetic energy T, using the fact that ω² = k/m.

T = m v²/2
T = (m A² ω²/2) sin²(ωt+ø)
T = (k A²/2) sin²(ωt+ø)

The kinetic energy also oscillates as the velocity of the mass oscillates. The total energy is:

T+U = (k A²/2) {sin²(ωt+ø) + cos²(ωt+ø)}
T+U = k A²/2 = m A² ω²/2

The two expressions in the last line are equivalent; we show both for sake of completeness. Since neither equation contains time, the total energy T+U is constant, as required by energy conservation.
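The conservation result is easy to confirm numerically. In the sketch below (with assumed values of m, k, A, and ø), T+U evaluates to kA²/2 at every sampled time.

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)
A, phi = 0.1, 0.3                 # arbitrary amplitude and phase

def energy(t):
    x = A * math.cos(omega * t + phi)            # displacement
    v = -A * omega * math.sin(omega * t + phi)   # velocity
    U = 0.5 * k * x**2                           # potential energy, kx²/2
    T = 0.5 * m * v**2                           # kinetic energy, mv²/2
    return T + U

# T+U should equal kA²/2 at every instant
print(0.5 * k * A**2)
for t in (0.0, 0.3, 0.9, 2.2):
    print(energy(t))
```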

Forced Oscillations V1p21-5

The above description of harmonic oscillation assumes no external forces affect the motion. Let’s now examine the impact of an external force F_ext(t). The differential equation becomes:

F = m d²x/dt² = –kx + F_ext(t)

To make any progress with this equation, we need to know more about F_ext(t). Specifically, we need its time dependence. As we shall discover, knowing a system’s response to a sinusoidal force is sufficient to solve any problem. Let’s therefore assume:

F(t) = f cos(ßt)

Here, f is a constant and ß is a constant angular frequency that may differ from ω. Even with a complete knowledge of F(t), there is no guaranteed method to solve an arbitrary differential equation. There is nothing wrong with trying a possible solution to see if it works. If you guess wrong, you will soon discover that your guess does not solve the differential equation, and then you can try something else. Clever guessing is both a science and an art; experience and perseverance also help. As Feynman says in V1p24-3: “Being physicists, we do not have to worry about the method as much as we do about what the solution is.” Feynman suggests a solution in which the mass oscillates at the frequency of the external force according to:

x = D cosßt

Let’s plug Feynman’s “guess” into the differential equation and see if it works.

m (–Dß² cosßt) = –k (D cosßt) + f cosßt
m (–Dß²) = –kD + f
D (k–mß²) = f
D = f / (k–mß²) = f / [m (ω²–ß²)]

Feynman’s “guess” solves this differential equation for this specific value of D, which we show in two equivalent forms. The second seems more revealing. Recall that ω is the angular frequency of oscillation of that mass on that spring (that ratio of k/m) in the absence of any external force; ω is called the natural frequency of the mass-spring system. By contrast, ß is the angular frequency of the external driving force, also called the applied frequency, which can have any value. Let’s consider some limiting cases of the applied frequency ß. We can rewrite D as:

D = f / [m ω² (1–ß²/ω²)]

For ß much less than ω (ß << ω), D is approximately f/(mω²) = f/k, the same displacement the force would produce if it were constant. For ß much greater than ω (ß >> ω), D is approximately –f/(mß²). A negative D means the mass oscillates in the opposite direction of the driving force. Amplitude D drops rapidly as ß increases.

The most interesting situation is when ß and ω are nearly equal. Our solution shows that D grows very rapidly as ß nears ω. Feynman suggests the real-world example of pushing a child on a swing: if you push at the right pace, the child will gleefully reach great heights. Our solution actually shows that D is infinity at ß=ω, which is impossible. No real mass on a real spring can oscillate with infinite amplitude — the spring will break first. This result highlights the limited validity of our assumptions of zero friction, ideal Hooke’s law behavior, and perfect elasticity. Infinity is almost always a wrong answer in physics; nothing in our observable universe is infinite. (Well, Einstein might disagree. He said only two things were infinite, the universe and human stupidity, but added that he was not completely sure about the universe.)
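The blow-up near resonance is easy to see numerically. The sketch below evaluates D = f/[m(ω²–ß²)] for a few driving frequencies; the values of m, k, and f are arbitrary illustrative assumptions. D grows without bound as ß approaches ω and becomes negative above ω.

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)
f = 1.0                                   # strength of the driving force (assumed)

def D(beta):
    # D = f / [m (ω² – ß²)]; blows up as ß approaches ω
    return f / (m * (omega**2 - beta**2))

for beta in (0.1 * omega, 0.5 * omega, 0.9 * omega, 0.99 * omega, 1.5 * omega, 3 * omega):
    print(round(beta, 3), D(beta))
```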

Algebra & Complex Numbers V1p22-7

Many natural phenomena are best understood using complex numbers, numbers that involve i, the square root of –1. Feynman devotes an entire lecture to elementary algebra, including the calculation of powers and logarithms, and the manipulation of complex numbers. As our purpose is not to teach mathematics, but rather to utilize its discoveries to better understand physics, I will very briefly summarize the key results. Feynman lists the basic operations of algebra, each one matched with its inverse operation: addition with subtraction, multiplication with division, and raising to a power with extracting a root or taking a logarithm.

Feynman marvelously describes the progression of mankind’s concept of numbers. First, we counted on our fingers and discovered the natural numbers: 1, 2, 3…. Subtraction (and credit card spending) expanded our minds to encompass zero and the negative integers. Division led to the need for fractions and rational numbers, such as 1/9th. Roots required irrational numbers, such as √2, and ultimately imaginary numbers, such as √(–1). We define real numbers as those not containing an imaginary component, whereas pure imaginary numbers include any real multiple of i=√(–1). Complex numbers have the form: x+iy, where x and y are both real numbers. The complex conjugate of any complex number expression is obtained simply by reversing the sign of i. Thus:

a–ib = (a+ib)* = complex conjugate of a+ib

Remarkably, complex numbers are the end of the long line of successive generalizations. Every possible algebraic equation can be solved with complex numbers; there is no next step, number-wise. Some useful equations involving complex numbers are:

(a+ib) + (x+iy) = (a+x) + i(b+y)

(a+ib)(x+iy) = ax – by + i(ay+bx)
(a+ib)(a+ib) = a² – b² + i2ab
(a+ib)(a–ib) = (a+ib)(a+ib)* = a² + b²

Perhaps the most important complex number equation is Euler’s identity:

e^iy = cosy + i siny

Let’s confirm this by recalling from Chapter 11 the infinite series expression for e^x, and setting x = iy.

e^x = Σ xⁿ/n! = 1 + x + x²/2! + x³/3! + x⁴/4! + …
e^iy = Σ (iy)ⁿ/n! = 1 + iy + i²y²/2! + i³y³/3! + i⁴y⁴/4! + …
e^iy = 1 + iy – y²/2! – iy³/3! + y⁴/4! + …
e^iy = 1 – y²/2! + y⁴/4! – … + i {y – y³/3! + y⁵/5! – …}

Now compare that with the infinite series for siny and cosy:

cosy = 1 – y²/2! + y⁴/4! – …
siny = y – y³/3! + y⁵/5! – …

The infinite series for the sum cosy + i siny equals the infinite series for e^iy. We can now rewrite the sine and cosine series as:

i siny = Σ (iy)ⁿ/n! sum over all odd n
cosy = Σ (iy)ⁿ/n! sum over all even n

Lastly, we show one of the most beautiful equations of mathematics:

e^iπ + 1 = 0

Mathematicians have a joke: How many mathematicians does it take to change a light bulb? Answer: –e^iπ. Clearly, their humor is no better than ours.
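Euler’s identity is easy to check numerically with Python’s cmath module; the sketch below compares e^iy against cosy + i siny, against the truncated power series, and evaluates e^iπ + 1.

```python
import cmath, math

y = 0.8                                    # any real angle

lhs = cmath.exp(1j * y)                    # e^{iy}
rhs = math.cos(y) + 1j * math.sin(y)       # cos y + i sin y
print(lhs, rhs)                            # identical up to rounding error

# Same result from the truncated power series Σ (iy)ⁿ/n!
series = sum((1j * y)**n / math.factorial(n) for n in range(30))
print(series)

# And the famous special case: e^{iπ} + 1 = 0
print(cmath.exp(1j * math.pi) + 1)         # ~0 up to floating-point error
```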

Let’s examine the graphical representation of e^iθ, as illustrated in Figure 12-2. The horizontal axis represents all real numbers, while the vertical axis represents all pure imaginary numbers.

Figure 12-2 Complex Number Plane

The vector in the above figure is re^iθ, which we rewrite:

re^iθ = r cosθ + i r sinθ
re^iθ = x + iy

The last step employs the normal trigonometric relations: x=rcosθ and y=rsinθ. By representing complex numbers as vectors in the complex plane, we separate the vector’s length r and angle θ. This complex exponential representation is remarkably effective in analyzing harmonic oscillators, waves, and other periodic functions, as we will see next.

Harmonic & Circular Motion

Let’s recall the equation we derived at the start of this chapter for the harmonic oscillation of a mass on a spring:

x(t) = A cos(ωt+ø)

Now rewrite that equation using complex exponentials. I will often write exp{i(ωt+ø)} instead of e^i(ωt+ø) to facilitate e-reading.

x = real part of {A exp(iωt+iø)}
x = real part of {A exp(iø) exp(iωt)}

The above quantity A exp(iωt+iø) is a vector in the complex number plane, and x is its horizontal component, which is the real part of (x+iy). Comparing the vector A exp(iωt+iø) to the vector re^iθ in Figure 12-2, we can equate the vectors’ lengths, r=A, and their angles, θ=ωt+ø. Thus, as time progresses, we can think of the vector A exp(iωt+iø) rotating in the complex plane. The vector’s tail end stays at the origin, x=y=0, while its tip traces a circle of radius A. The time for one complete circle, the time for the vector’s angle to go through 2π radians (360 degrees), equals 2π/ω. Lastly, we have exp(iø), which sets the value of angle θ at time t=0. Perhaps this helps explain why ø is called a phase shift. It is important to note that the exponent (x in e^x) must be a dimensionless quantity, a number or an angle; x cannot have dimensions of time, distance, energy, etc. In the above example, ω has units of radians/time; hence ωt is an angle, as are ø and θ.

Forced Oscillations Using Complex Numbers

With our new complex number techniques, let’s reanalyze the forced harmonic oscillator: a mass on a spring subject to an external driving force:

F = f cosßt

Recall that the natural frequency of oscillation of the unforced mass is ω = √(k/m). Our linear differential equation is:

m d²x/dt² = –kx + F

We now allow both F and x to be vectors in the complex plane. At the end, we will take the real part of x as our solution. With F now written f e^ißt, try a solution for x of the form x = D e^ißt.

m (–ß²) (D e^ißt) = –k (D e^ißt) + f e^ißt
m (–Dß²) = –kD + f
–Dß² = –ω²D + f/m
D (ω²–ß²) = f/m
D = f / [m (ω²–ß²)]

Since all these quantities are real numbers, our real solution is:

x(t) = D cosßt

When we deal with more complicated problems, we will see that using the complex plane representation and being able to differentiate exponentials is much easier than dealing with trig functions.

Chapter 12 Review: Key Ideas

1. Many natural phenomena are governed by linear differential equations of the form:

f(t) = a₀ x + a₁ dx/dt + … + aₙ dⁿx/dtⁿ

where all the aⱼ are constants.

2. A mass m on a spring that obeys Hooke’s law: F=–kx is a harmonic oscillator with natural frequency: ω = √(k/m) whose displacement x is a periodic function: x(t) = A cos(ωt + ø) with amplitude A and phase shift ø.

3. If a mass on a spring is also subject to an external driving force: F = f cos(ßt) its displacement x is given by: x(t) = D cos(ßt) with D = f / [m (ω²–ß²)].

4. Euler’s identity is e^iy = cosy + i siny. Here i = √(–1).

5. A complex number (x+iy) can be written as a vector in the complex number plane: r e^iθ with x = rcosθ and y = rsinθ.

Here x is called the real part and y the imaginary part of the complex number.

Chapter 13 Resonances

In V1p23-2 Feynman stresses that solving linear differential equations in the complex number plane works well, because the equations are linear. One can get into trouble trying this with nonlinear equations. For example, consider the differential equation:

d²x/dt² + x² = F

This equation is nonlinear due to the x² term. Now try the complex plane technique: replace the displacement x with (x+iy) and the driving force with (F+iG), and take the real part of the result as our solution.

d²(x+iy)/dt² + (x+iy)² = (F+iG)
d²x/dt² + i d²y/dt² + x² – y² + i2xy = F + iG

The real part of this equation is:

d²x/dt² + x² – y² = F

This is wrong — it is not the equation we started with. The erroneous y² term comes in because the differential equation is nonlinear. Fortunately, there are a great many linear phenomena in nature.

Forced Oscillator with Damping V1p23-3

Let’s go back to the forced oscillator, but now with an added twist: a damping factor, such as friction. There are many types of friction with many different functional forms, as we discussed in Chapter 9. For this example, assume the force of friction is proportional to velocity v, as it would be for an object moving slowly through a viscous fluid such as oil. To make our equations a bit simpler, we will define the frictional force as:

F_fric = –µm v

Here, µ is the coefficient of friction per unit mass. Our linear differential equation then becomes:

m d²x/dt² = –kx + F_ext – µm dx/dt

If we tried to solve this with trig functions instead of complex numbers, we would get the following:

–mDß² cosßt = –kD cosßt + f cosßt + µmDß sinßt
–mDß² = –kD + f + µmDß tanßt

This is quite a burden to solve. The complex plane technique is much easier. As before let:

Complex displacement x = D exp{ißt}
Complex driving force F_ext = f exp{ißt}
Natural frequency ω = √(k/m)

The differential equation is then:

m(–ß²)x + kx + µm(iß)x = F_ext
–ß²D + ω²D + iµßD = f/m
D = (f/m) / [ω²–ß²+iµß]

Now we see something new: D is a complex number. We can rewrite D in the form of a vector in the complex plane:

D = r exp{–iθ}
x(t) = real part of [D exp{ißt}]
x(t) = real part of [r exp{–iθ} exp{ißt}]
x(t) = r cos(ßt–θ)

Recall that:

F_ext = real part of [f exp{ißt}] = f cos(ßt)

The angle of the driving force is ßt, while the angle of the displacement is ßt–θ. For positive values of θ, this means the angle of the displacement is always less than the angle of the driving force. Hence for θ>0, the displacement lags the driving force. Let’s calculate r and θ:

D = r exp{–iθ} = (f/m) / [ω²–ß²+iµß]

r² = D D*
r² = (f/m)² / [(ω²–ß²+iµß) (ω²–ß²–iµß)]
r² = (f/m)² / [(ω²–ß²)² + (µß)²]

exp{+iθ} / r = 1/D = (m/f) [ω²–ß²+iµß]
cosθ + i sinθ = (mr/f) [ω²–ß²+iµß]
sinθ = µß m r / f

All the factors on the right side of the last equation are positive, so θ is always greater than zero. By the definition we have chosen, this means the displacement x always lags behind the drive force F.

Those looking carefully at equation (23.12) in V1p23-4, and similar presentations in other sources, may be confused. Some define θ with the opposite polarity, so that a lagging displacement corresponds to θ<0 rather than to θ>0 throughout the range, as we have done here. The most interesting situation is when ß is nearly equal to ω. Here we can rewrite the equation for r² using:

ω² – ß² = (ω+ß)(ω–ß), which is nearly 2ß(ω–ß)

r² = (f/m)² / [4ß²(ω–ß)² + (µß)²]
r² = (f/2ßm)² / [(ω–ß)² + (µ/2)²]

r² has the form A/(z² + B) near z = 0

at ω = ß, r = (f/2ßm) / (µ/2) = f / (µmß)

Figure 13-1 shows plots by SpringerImages.com of r and θ over a range of ß/ω, with r plotted on a relative scale. The nice peak in r is called a resonance. As the last equation shows, the peak height is limited only by friction (or other damping factors).

Figure 13-1 r and θ vs. ß/ω

The peak to width ratio is often characterized by the quantity Q = ω/µ. A large Q means a tall narrow peak. If you push a child on a swing, the biggest thrills for the least effort come if you push at the right rate (ß=ω) and if the system has a large Q. Conversely, if you design a bridge, you want to be sure Q is small at the frequencies of all potential

driving forces. Unfortunately, the first Tacoma Narrows Bridge had a large Q for 40 mph (64km/h) winds. It shook itself to pieces and collapsed five months after opening. Perhaps the designers should have read Feynman Simplified. We see the lag angle θ increases on an “S” curve as the frequency ß of the driving force increases, with the most rapid increase near θ = 90 degrees when ß=ω. For ß>ω, the displacement is in the opposite direction of the driving force.
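Here is a minimal numerical sketch of the resonance curve. The values of m, k, µ, and f are illustrative assumptions; the code evaluates r and θ from D = (f/m)/(ω²–ß²+iµß) and prints Q = ω/µ. At ß=ω the lag reaches 90 degrees and r peaks at f/(µmω), as described above.

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)
mu = 0.2                       # damping coefficient per unit mass (assumed)
f = 1.0                        # driving-force strength (assumed)

def response(beta):
    # D = (f/m) / (ω² – ß² + iµß); r = |D|, and x lags the force by θ
    D = (f / m) / complex(omega**2 - beta**2, mu * beta)
    r = abs(D)
    theta = -math.atan2(D.imag, D.real)     # since D = r e^{–iθ}
    return r, theta

Q = omega / mu                              # quality factor at the peak
print("Q =", Q)
for beta in (0.5 * omega, 0.9 * omega, omega, 1.1 * omega, 2 * omega):
    r, theta = response(beta)
    print(round(beta / omega, 2), r, math.degrees(theta))
```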

Electrical Resonances

Resonances occur frequently in nature. Electrical systems, for example, often have resonant circuitry. Figure 13-2 shows the symbols for three simple types of passive circuit elements — resistors, capacitors, and inductors — and a circuit containing those three parts.

Figure 13-2 Electrical Element Symbols & A Circuit

In electrical systems, voltage is a difference of electrical potential energy per unit charge, much as acceleration is the gradient of mechanical potential energy per unit mass. Capacitors are charge storage devices, symbolized by two parallel conductors separated by a gap. In Chapter 10, we calculated the electric field between two infinite parallel plates, E = µ/ε₀, which is a good approximation to a capacitor if the plate dimensions are much larger than their separation. In that expression, µ is the charge density, which equals the total charge q divided by the plate area A. For capacitor plate separation d, V = Ed = qd/ε₀A. Capacitance C is defined as q/V, which for an ideal capacitor is ε₀A/d. Ideal or not, once C is measured, we have:

C V = q

Resistors resist the flow of electric current, much as friction resists mechanical motion. Without resistance, an infinite current would flow wherever a voltage existed. When a voltage V is applied across a device with resistance R, the current I flowing through the device is given by:

Ohm’s law: V = I R

The last element in Figure 13-2 is an inductor. When current I flows through an inductor, symbolized by a coiled wire, a magnetic field is created that stores energy. If more current flows, the magnetic field increases, requiring energy to be supplied by another part of the circuit. Inductance is the electrical analog of mass; increasing current flow requires energy, just as accelerating a mass requires energy. For inductance L, the voltage V required to change the current I is:

V = L dI/dt

Since I = dq/dt, V = L d²q/dt²

For the circuit in Figure 13-2, the linear differential equation for the voltage V between the two nodes at the bottom is:

L d²q/dt² + R dq/dt + q/C = V(t)

Just like the mechanical analog of a mass on a spring, with a driving force and frictional damping, this equation is solved in exactly the same manner. Let V(t) = v e^ißt, q = D e^ißt. We solve this differential equation in the complex plane, and then take the real part as follows:

{L(–ß²) + R(iß) + 1/C} D = v
D = v / (–ß²L + ißR + 1/C)

With ω² = 1/LC and µ = R/L, we have:

D = v / [L(ω²–ß²+iµß)]

This is exactly the same equation we derived for a mass on a spring — nature uses the same rules over and over again. Again the system oscillates at the frequency ß of the driving force V. The system resonates when ß is nearly equal to ω, its natural frequency. Since D is complex, there is a phase lag: charge moves after the voltage changes. Each of the circuit elements we described above was idealized, just as we idealized perfectly rigid bodies and perfectly elastic springs. The real world is more complex. At the end of this chapter is a table comparing the properties of electrical and mechanical systems that are so often governed by the same linear differential equations.
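The same computation works for the electrical circuit. In the sketch below, the component values L, R, and C are illustrative assumptions; the code evaluates the complex charge amplitude D = v/(–ß²L + ißR + 1/C) and shows the amplitude and phase lag near the natural frequency ω = 1/√(LC).

```python
import math

# Illustrative component values (assumptions, not from the text)
L, R, C = 0.01, 2.0, 1e-4          # henries, ohms, farads
v = 1.0                            # amplitude of the driving voltage

omega = 1 / math.sqrt(L * C)       # natural frequency, ω² = 1/LC
mu = R / L                         # plays the role of the damping coefficient

def charge_amplitude(beta):
    # D = v / (–ß²L + ißR + 1/C)  =  v / [L(ω² – ß² + iµß)]
    return v / complex(-beta**2 * L + 1 / C, beta * R)

for beta in (0.5 * omega, omega, 1.5 * omega):
    D = charge_amplitude(beta)
    print(round(beta / omega, 2), abs(D), math.degrees(-math.atan2(D.imag, D.real)))
```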

Resonances, Resonances Everywhere

In many different fields of science, we often find resonance phenomena playing crucial roles. Here is an example in geophysics: Earth’s atmosphere is a driven oscillator. Our Moon not only causes water tides on our planet, it also creates tides in our atmosphere. In any selected direction, the Moon’s gravitational force has a component in that direction that varies sinusoidally with a frequency of one full cycle every 12.42 hours. (See Chapter 8 for a description of tides.) Physicists studying masses on springs can easily try different masses, different springs, and different driving frequencies, generating lots of data to analyze. The atmosphere does not provide the same opportunity to “turn the knobs and see what happens.” We have only one definite data point: the response at a driving frequency of once per 12.42 hours. Being a bit more clever, in 1960,

geophysicists Walter Munk and Gordon MacDonald measured the atmosphere’s oscillation amplitude and phase (how much the air tide lags behind the direction of the Moon), and estimated its natural frequency and damping factor. That is enough, as we have seen, to calculate the entire resonance curve. But in V1p23-7, Feynman points out that if this were the end of the story, it would amount to “…very poor science. From two numbers we obtain two numbers, and from those two numbers we draw a beautiful curve, which of course goes through the very points that determined the curve!” Many curves, with many different theoretical interpretations, could match two numbers. We need more confirmation before believing that our curve is meaningful. Lo and behold, nature provided another data point. There was a scientific benefit to the catastrophic explosion of Krakatoa volcano in 1883, which gave the atmosphere one terrific jolt. The explosion made the atmosphere’s “bell ring”, enabling scientists to measure its natural frequency. It turned out to be 10.5 hours. That is quite close to Munk and MacDonald’s estimate of 10 hours and 20 minutes, which is shown in Figure 13-3.

Figure 13-3 Resonance of Earth’s Atmosphere

The next example involves the transition between two energy levels in an atomic nucleus. When a nucleus drops from a higher energy level, an excited state, to a lower one, the ground state, it must conserve energy in some manner. In some cases, the nucleus emits a gamma ray, the highest energy form of light. In whatever direction the gamma ray is emitted, the nucleus must recoil to conserve momentum, and move in the opposite direction. This nuclear recoil consumes some of the transition energy, reducing the energy carried off by the gamma ray. Such gamma rays therefore do not have sufficient energy to drive the reverse process, in which another identical nucleus absorbs the gamma ray and transitions from its ground state to its excited state. In 1958, Rudolf Mossbauer discovered that, in a solid crystal at low temperatures, atoms recoil not individually but as a group, which reduces the recoil energy to almost zero. Let’s see why. A gamma ray of energy E has momentum E/c, its energy divided by the speed of light. The recoiling atom or atoms of total mass M must have equal but opposite momentum: p = –E/c. Since E is always much less than the mass energy of the atoms, we can employ non-relativistic equations.

Atomic momentum p = M v = E / c
v = E / Mc
Recoil energy = Mv²/2 = E² / (2Mc²)

Plugging in the numbers for iron-57 decay: E = 14,400 eV and Mc² for one iron atom is 57 billion eV (eV means electron-volt, the energy an electron gains across a potential of one volt). For one iron atom, the recoil energy equals 0.0018 eV, about 1 part in 8 million of the gamma ray energy. Small as that seems, that energy loss is 100,000 times too much to trigger the reverse process.

At reduced temperatures, Mossbauer was able to achieve the collective recoil of 200,000 atoms. That increased the effective M by a factor of 200,000 and reduced the recoil energy to only 10 nano eV, or 1 part in 1.4 trillion. Using two cold iron crystals and moving them toward one another at a velocity of 0.02 cm/sec (0.008 inches/sec), Mossbauer increased the apparent energy of the gamma rays just enough to compensate for recoil losses and drive the reverse process in the receiving crystal. (If you want to learn more about how motion changes a photon’s apparent energy, read Our Universe 2: Redshift, Expansion & Dark Energy). A plot of gamma ray absorption versus relative crystal speed is shown in Figure 13-4. Each tick mark on the horizontal scale corresponds to 0.1 cm/sec. The Q factor of this resonance is over one trillion. Using the extreme precision of the Mossbauer effect, in 1959, Robert Pound and Glen Rebka were able to confirm Albert Einstein’s prediction of gravitational redshift. Mossbauer received the 1961 Nobel Prize for his remarkable experiments.

Figure 13-4 Mossbauer Effect
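The recoil arithmetic above is short enough to reproduce directly; the sketch below uses the iron-57 numbers quoted in the text (14,400 eV gamma ray, 57 GeV atomic rest energy) and the stated factor of 200,000 atoms.

```python
# Recoil energy of a gamma-emitting nucleus: E_recoil = E² / (2 M c²), all in eV
E_gamma = 14_400.0            # iron-57 gamma-ray energy, eV
Mc2_atom = 57e9               # rest-mass energy of one iron atom, eV (≈ 57 GeV)

single = E_gamma**2 / (2 * Mc2_atom)
print(single)                               # ≈ 0.0018 eV, ~1 part in 8 million

# Collective recoil of ~200,000 atoms bound in a cold crystal
collective = E_gamma**2 / (2 * Mc2_atom * 200_000)
print(collective * 1e9, "nano-eV")          # ≈ 10 nano-eV
```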

Resonances of a different type are prevalent in high-energy particle physics experiments. Here the resonance width is not determined by any type of frictional effect, but rather by the Heisenberg Uncertainty Principle, which we will explore later in this course. For now, suffice it to say that if a particle exists for an exceedingly short time interval Δt, its mass must be uncertain. To be specific, the full width of its mass resonance peak Δm is given by:

Δm = h / (2π Δt)

Here, h is Planck’s constant. The numbers are: Δm × Δt = 6.58×10⁻¹⁶ eV-sec.

Figure 13-5 plots the production rate of hadrons (particles that are composed of quarks) in electron-positron collisions. Note that this is a logarithmic plot: the production rate is about 300 times greater at the peak than at the local minimum at 60 GeV (60 billion eV).

Figure 13-5 Z Boson Resonance

A short-lived Z boson can be produced if the effective mass energy of the colliding electron and antielectron pair matches the Z mass. About 27% of Z’s decay into hadrons, so producing Z’s dramatically increases the number of hadrons produced in the collisions. If nature were not uncertain, Z’s would be produced only if the collision energy exactly matched the Z mass of 91.2 GeV, which would be almost impossible to achieve. Thanks to Heisenberg, a resonance occurs when the collision energy is near the Z mass. The width of the mass peak is 2.5 GeV, corresponding to the Z lifetime of 2.6×10⁻²⁵ sec. Z resonance production is analogous to pushing a swing or ringing a bell: hitting the resonance greatly enhances the response.
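The width-to-lifetime conversion is a one-line calculation using the quoted value Δm × Δt = 6.58×10⁻¹⁶ eV-sec:

```python
# Heisenberg width-lifetime relation: Δm × Δt ≈ 6.58×10⁻¹⁶ eV·s
hbar_eV_s = 6.58e-16

width_eV = 2.5e9              # Z boson width, 2.5 GeV expressed in eV
lifetime = hbar_eV_s / width_eV
print(lifetime)               # ≈ 2.6×10⁻²⁵ seconds
```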

Many students taking Feynman’s course are distressed by the myriad of topics he often covers in a single lecture. What other freshman physics course examines Z boson production, the Mossbauer effect, and the oscillation of Earth’s atmosphere? What other course covers these all at once? No one says Feynman’s course is easy. But, nowhere else will you learn so much. No other professor illuminates so profoundly nature’s intimate interactions. If I can help you master Feynman’s Lectures on Physics, you will have the confidence to attack nature’s deepest mysteries, in a way that no other students can.

Energy of a Forced Oscillator

The kinetic energy of an oscillator is proportional to its velocity squared, in the usual way. But, as Feynman says in V1p24-1, we must do the squaring carefully. In the complex plane:

v = vx + i vy

v • v = v v* = vx² + vy²

This equation provides the amplitude of the complex velocity v. But this is not what we need to calculate kinetic energy. For that we need the square of the real part of v, namely vx². Over one complete cycle, v rotates 360 degrees, so vx² and vy² have the same average value. Thus, the average value of vx², written <vx²>, equals vv*/2.

This relationship is true for any complex vector: its length squared equals twice the square of its real part, when averaged over a full cycle. Now, let’s calculate the energy of our driven, damped, harmonic oscillator. We will do this by calculating the power P expended by the driving force F, which equals F•v. Since the motion is entirely in one dimension, P = F•v = Fv.

F = m d²x/dt² + kx + µm dx/dt
P = F v = F dx/dt
P = m d²x/dt² (dx/dt) + kx(dx/dt) + µm(dx/dt)²
P = (1/2) d{m(dx/dt)² + kx²}/dt + µm(dx/dt)²

That last step is a bit tricky. It employs these two identities:

d{(dx/dt)²}/dt = 2 (dx/dt) (d²x/dt²)
d{x²}/dt = 2 x dx/dt

Now, m (dx/dt)²/2 = kinetic energy T

And, kx²/2 = potential energy U

We can confirm that kx²/2 is indeed the potential energy stored in the spring: Recall:

–dU/dx = F = –kx
U = ∫ kx dx = kx²/2

With all that, our power equation becomes:

P = (1/2) d{T+U}/dt + µm (dx/dt)²

Feynman notes that T+U is the stored energy in the oscillating mass-spring system, and says, since the oscillation repeats exactly cycle after cycle, T+U is constant over full cycles, even though T+U does vary within each cycle. This means the power expended by the driving force, averaged over full cycles, is consumed by the frictional element:

<P> = <µm (dx/dt)²>
<P> = µm <(dx/dt)²>
<P> = µm D² ß² <sin²(ßt–θ)>
<P> = µm D² ß² /2

Above, we used the fact that over one or more full cycles, the average value of sin²ø equals the average value of cos²ø, and since their sum equals 1, each average must equal 1/2.

If we were dealing with an electrical harmonic oscillator, we would simply replace:

x with q
dx/dt with I
µm with R

The result is:

<P> = R <I²> = R Imax² /2

Here, power is expended in heating the resistance R, which is sometimes called Joule heating, in recognition of the contributions of James Prescott Joule (1818-1889) to the study of heat and its relation to mechanical work. Getting back to the stored energy, we can calculate its average value <E>:

x(t) = D cosßt
v(t) = –Dß sinßt
<T> = m (Dß)² <sin²ßt> /2 = m (Dß)² /4
<U> = m ω² D² <cos²ßt> /2 = m ω² D² /4
<E> = <T> + <U> = m D² (ω² + ß²) /4

When a system is initially stationary and a driving force commences at time t, the force must expend energy to build up the oscillation to the point that its average energy equals <E>. Thereafter, the power expended by the force is entirely consumed by the frictional elements. Earlier we characterized the quality of an oscillator using Q=ω/µ, the ratio of the resonance’s peak height to peak width. Q is the quality factor at the peak, at ß=ω, when the driving force frequency equals the system’s natural frequency. One can also define a quality factor Q(ß) for situations in which ß is not equal to ω. Define Q(ß) as (the average energy in the oscillator) divided by (the average work per radian expended by the driving force). The average work per radian equals (the average power expended) multiplied by (1/ß), the time to oscillate through one radian. The equations are:

Q(ß) = <E> / (<P>/ß)
Q(ß) = ß <E> / <P>
Q(ß) = ß {m D² (ω²+ß²) /4} / (µm D² ß² /2)
Q(ß) = (ω²+ß²) / (2µß)
Q(ß=ω) = ω / µ

The last equation matches what we had before for ß=ω.
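The sketch below evaluates <E>, <P>, and Q(ß) for a few driving frequencies, using the same illustrative m, k, µ, and f values as before, and confirms that Q(ß) agrees with (ω²+ß²)/(2µß).

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)
mu, f = 0.2, 1.0                   # damping per unit mass and force strength (assumed)

def averages(beta):
    D2 = (f / m)**2 / ((omega**2 - beta**2)**2 + (mu * beta)**2)   # r² = D D*
    E_avg = m * D2 * (omega**2 + beta**2) / 4        # <T> + <U>
    P_avg = mu * m * D2 * beta**2 / 2                # power lost to damping
    Q = beta * E_avg / P_avg                         # = (ω² + ß²) / (2µß)
    return E_avg, P_avg, Q

for beta in (0.5 * omega, omega, 2 * omega):
    E_avg, P_avg, Q = averages(beta)
    print(round(beta / omega, 1), E_avg, P_avg, Q, (omega**2 + beta**2) / (2 * mu * beta))
```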

Parallel & Serial Circuits

In his discussion of electrical systems, Feynman adds a side note regarding parallel versus serial circuit elements, which is not directly related to harmonic oscillators. Back in Figure 13-2, three circuit elements are connected in series: an inductor, a resistor, and a capacitor. The linear differential equation for this system is:

L d²q/dt² + R dq/dt + q/C = V(t)

Here the same amount of charge q flows through each element. In the complex plane, the applied voltage V(t), charge q(t), and current I(t) are given by:

V(t) = v exp{ißt}
q(t) = D exp{ißt}
I(t) = dq/dt = iß q

Putting these into our equation yields:

L(iß)² q + R(iß) q + q/C = V
L(iß) I + R I + I/(ißC) = V

We can write this in a new form, with Zⱼ being the impedance (a complex number) of the jth circuit element and Z being the total impedance of the entire group of circuit elements.

I Z₁ + I Z₂ + I Z₃ = V = I Z

The three key points of serial circuits are: (1) The same current flows through each element sequentially. (2) The voltage drop across the elements may be very different. (3) The total voltage drop is the sum of the voltage drops across each element.

Two circuit elements connected in series are shown on the left side of Figure 13-6.

Figure 13-6 Serial & Parallel Impedances

By contrast, the circuit elements on the right side of Figure 13-6 are connected in parallel. The three key points of parallel circuits are: (1) Different currents may flow through each element. (2) The voltage drop across each element is the same. (3) The total current flow is the sum of the currents flowing through each element. Let’s now calculate the total impedance Z of two parallel elements whose individual impedances are Z₁ and Z₂:

I₁ Z₁ = I₂ Z₂ = V = I Z
I = I₁ + I₂
V/Z = I = V/Z₁ + V/Z₂
1/Z = 1/Z₁ + 1/Z₂

We see that impedances are summed quite differently in serial and parallel circuits. Here is a comparison of properties of mechanical and electrical systems:

displacement x ↔ charge q
velocity dx/dt ↔ current I
mass m ↔ inductance L
damping coefficient µm ↔ resistance R
spring constant k ↔ inverse capacitance 1/C
driving force F ↔ driving voltage V
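As a small worked example of the series and parallel rules, the sketch below combines the impedances of a resistor, an inductor, and a capacitor at one driving frequency; the component values and the frequency are arbitrary assumptions.

```python
# Impedances at driving frequency ß (values below are assumptions for illustration)
beta = 500.0                       # rad/s
R = 2.0                            # resistor:  Z = R
L = 0.01                           # inductor:  Z = ißL
C = 1e-4                           # capacitor: Z = 1/(ißC)

Z_R = complex(R, 0)
Z_L = 1j * beta * L
Z_C = 1 / (1j * beta * C)

Z_series = Z_R + Z_L + Z_C                       # Z = Z1 + Z2 + Z3 in series
Z_parallel = 1 / (1 / Z_R + 1 / Z_L + 1 / Z_C)   # 1/Z = 1/Z1 + 1/Z2 + 1/Z3 in parallel
print(Z_series, Z_parallel)
```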

Chapter 13 Review: Key Ideas

1. A complete analysis of harmonic oscillators — including driving forces, damping forces, and transients — is presented in the Review section of Chapter 14.

2. Earth’s atmosphere is a driven oscillator, with a natural frequency of one cycle per 10.5 hours, driven by our Moon’s tidal gravitational forces at one cycle per 12.42 hours.

3. The Mossbauer effect can have a Q value over one trillion in a crystal at very low temperatures, due to the collective recoil of 200,000 atoms.

4. The potential energy, kinetic energy, and total energy, averaged over a full cycle, of a driven, damped oscillator remain constant at:

<U> = m D² ω² /4
<T> = m D² ß² /4
<E> = m D² (ω²+ß²) /4

Once the motion becomes periodic, the power expended by the driving force, averaged over a full cycle, is:

<P> = µ m D² ß² /2

This power is absorbed by the damping element.

Below are some common electrical circuit elements:

Resistor: V = I R, with impedance Z(ß) = R
Capacitor: q = C V, with impedance Z(ß) = 1/(ißC)
Inductor: V = L dI/dt, with impedance Z(ß) = ißL

Here q is charge, I is current, V is voltage drop across element, C is capacitance, R is resistance, L is inductance, and Z(ß) is complex impedance at frequency ß. When two impedances Z₁ and Z₂ are combined, they have total impedance Z given by:

Z = Z₁ + Z₂ if connected in series

1/Z = 1/Z₁ + 1/Z₂ if connected in parallel

Chapter 14 Transients & Linear Systems Review

Transients

The systems discussed so far display repetitive periodic motion, responding to driving forces that continue forever. What happens when the music stops? Let’s analyze two scenarios that are mathematically very similar: (1) the sudden cessation of a periodic force that had driven the system for a great many cycles; and (2) a sudden impact, a force that acts for only an instant. In the first scenario, once the driving force stops, the energy dissipated by friction must now come from the oscillator’s stored energy. Frictional losses continuously diminish the oscillation, which fades away like the ringing of a bell no longer being struck. Let’s assume that the driving force stops at time t=0, at which time the system has displacement x(0) and velocity v(0). Whatever came before is now irrelevant. The only elements of history that can impact the future are x(0) and v(0). As Feynman points out in V1p24-3, for a high Q system, the energy lost per cycle is a small fraction of the stored energy, so the ringing diminishes very slowly. If a fixed percentage of the stored energy is lost each cycle, the amplitude of oscillation declines exponentially. That is a marvelous property of exponentials. We can show this for an exponential function f, whose exponent is some constant b multiplied by time.

f(t) = exp{bt} = e^bt
df/dt = b e^bt = b f
b = (df/dt) / f

We see that b, the fractional change in function f, is the same at every instant of time. A second marvelous property of exponentials is that they can represent growth, decline, and/or oscillation. Their third marvelous property is that exponentials are the easiest functions to differentiate and integrate. What more could you ask for?

All the advantages of exponentials lead us to try a solution of the form:

x(t) = D exp{izt}

Here, z is a constant to be determined. Inserting x(t) into the differential equation yields:

0 = m d²x/dt² + kx + µm dx/dt
0 = m D e^izt (iz)² + k D e^izt + µm D e^izt (iz)
0 = –z² + ω² + iµz
z = iµ/2 ± √(–µ²/4 + ω²)

Let’s assume for a moment that ω>µ/2, so the square root is a real number, which we call σ. The two possible solutions for z are then:

σ² = ω² – µ²/4
z = iµ/2 ± σ

Both are valid solutions. In fact, the most general solution is any linear sum of these two. Pick any two constants in the complex plane, Ae^ia and Be^ib, and get:

x(t) = A exp{ia} exp{it(iµ/2 + σ)} + B exp{ib} exp{it(iµ/2 – σ)} = exp{–tµ/2} Θ

where Θ = A exp{i(a+σt)} + B exp{i(b–σt)}

Since the first exponential has a negative real exponent, the overall amplitude of the displacement x diminishes exponentially over time. This is what we expected, and are familiar with from the fading ringing of bells. The two exponentials in Θ have imaginary exponents, which means they are combinations of sine and cosine functions; Θ is the oscillatory part of x(t). Let’s evaluate the oscillatory expression Θ.

Θ = A cos(a+σt) + iA sin(a+σt) + B cos(b–σt) + iB sin(b–σt)

For Θ to be a real quantity, the imaginary terms must cancel one another. This means b=–a and B=A, which means Ae^ia and Be^ib are complex conjugates of one another. (If you enjoy math puzzles, prove that the cancellation of the imaginary terms is satisfied by two solutions that are mathematically equivalent.) This yields:

Θ = 2A cos(a+σt)

x(t) = 2A exp{–tµ/2} cos(a+σt) The amplitude of x is 2A at t=0 and its phase angle is a. With those two “knobs” we can match any initial position and velocity conditions at t=0. This situation is called underdamped, for reasons that will become apparent shortly. A graph of x(t) is shown in Figure 14-1.

Figure 14-1 Decaying Harmonic Oscillator
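The decaying oscillation is easy to reproduce numerically. In the sketch below the values of m, k, µ, A, and a are illustrative assumptions; the code evaluates x(t) = 2A e^{–µt/2} cos(a+σt) and verifies, by finite differences, that it satisfies m d²x/dt² + µm dx/dt + kx = 0.

```python
import math

m, k = 0.5, 8.0
omega = math.sqrt(k / m)
mu = 0.4                                   # small damping, µ < 2ω (underdamped)
sigma = math.sqrt(omega**2 - mu**2 / 4)    # σ² = ω² – µ²/4
A, a = 0.05, 0.0                           # amplitude constant and phase (arbitrary)

def x(t):
    # x(t) = 2A e^{–µt/2} cos(a + σt): oscillation inside a decaying envelope
    return 2 * A * math.exp(-mu * t / 2) * math.cos(a + sigma * t)

def residual(t, dt=1e-5):
    # the damped, unforced equation: m x'' + µm x' + kx should be ~0
    xpp = (x(t + dt) - 2 * x(t) + x(t - dt)) / dt**2
    xp = (x(t + dt) - x(t - dt)) / (2 * dt)
    return m * xpp + mu * m * xp + k * x(t)

for t in (0.0, 1.0, 3.0):
    print(x(t), residual(t))    # residuals ~0 up to finite-difference error
```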

Recall that σ² = ω² – µ²/4. This means that for small frictional losses (µ < 2ω), σ is real: the displacement oscillates at frequency σ inside an exponentially decaying envelope. The same exponential trick works for any linear differential equation with constant coefficients. Substituting x = e^bt into the general nth-order equation turns each derivative dⁿx/dtⁿ into bⁿ e^bt, so the equation becomes:

0 = a₀ + a₁b + a₂b² + … + aₙbⁿ

where the highest coefficient aₙ is nonzero.

The last equation is therefore an nth order polynomial in b, which means it can be factored, restated as the product of n terms in the form:

0 = (b–z₁) (b–z₂) … (b–zₙ)

This polynomial has exactly n roots in the complex plane, z₁ through zₙ: the polynomial is zero when b equals any of these z values.

Some of the roots may be identical, in which case they do not provide independent solutions. Hence, an nth order linear differential equation can have up to n independent solutions.

Concluding Remarks

Alas, not all equations in physics are linear. A simple example of a nonlinear differential equation is the motion of a pendulum. In Figure 14-5, a ball of mass m swings on a line of length r.

Figure 14-5 Pendulum

Let g be the acceleration of gravity, and x be the instantaneous direction of motion (dx=rdø). The force is directed so as to reduce the magnitude of ø. The differential equation is:

m d²x/dt² = –gm sinø
d²ø/dt² = –(g/r) sinø

Since sinø is nonlinear, so is the differential equation. While there are advanced and complicated techniques for solving such equations, it is often possible (and much easier) to solve them approximately. If the arc of the pendulum’s swing is much less than its length (ø << 1 radian), we can approximate sinø by ø. The equation then becomes linear, d²ø/dt² = –(g/r) ø, which is our harmonic oscillator equation with natural frequency ω = √(g/r).

Chapter 14 Review: Key Ideas

For a damped oscillator with no driving force, when the damping is strong enough that Ω² = µ² – 4ω² > 0:

x = A exp{–t(µ+Ω)/2} + B exp{–t(µ–Ω)/2}

This is the overdamped case: the displacement decays without oscillating.

The total stored energy, potential plus kinetic, averaged over a full cycle of a driven, damped oscillator is constant, as given by:

<E> = m D² (ω²+ß²) /4

The power expended by the driving force is absorbed by the damping element. Averaged over a full cycle, its value is:

<P> = µ m D² ß² /2

Earth’s atmosphere is a driven oscillator, with a natural frequency of one cycle per 10.5 hours, driven by our Moon’s tidal gravitational forces at one cycle per 12.42 hours. The Mossbauer effect has a Q value over one trillion, due to the collective recoil of 200,000 atoms. From the solutions to problems with sinusoidal driving forces, we can find solutions to problems with impulsive forces and therefore any forces, because of linear superposition. An nth order linear differential equation has n roots, each of which provides a solution. But if some roots are equal, they will not provide independent solutions. Electrical systems behave analogously to mechanical systems. The table below compares their properties:

displacement x ↔ charge q
velocity dx/dt ↔ current I
mass m ↔ inductance L
damping coefficient µm ↔ resistance R
spring constant k ↔ inverse capacitance 1/C
driving force F ↔ driving voltage V

Below are some common electrical circuit elements:

Resistor: V = I R, with impedance Z(ß) = R
Capacitor: q = C V, with impedance Z(ß) = 1/(ißC)
Inductor: V = L dI/dt, with impedance Z(ß) = ißL

Where q is charge, I is current, V is voltage drop across element, C is capacitance, R is resistance, L is inductance, and Z(ß) is the complex impedance at frequency ß. When two impedances Z₁ and Z₂ are combined, they have total impedance Z given by:

Z = Z₁ + Z₂ if connected in series and

1/Z = 1/Z₁ + 1/Z₂ if connected in parallel

Chapter 15 Kinetic Theory of Gases

We now embark on a journey of many chapters to understand the properties of macroscopic matter, based on Newton’s laws of mechanics and the atomic theory, the model that everything is made of atoms. Kinetic theory is the description of matter based on the behavior of colliding atoms. As Feynman states in V1p40-1: “We assert that the gross properties of matter should be explainable in terms of the motion of its parts.” The branches of physics involved in this study are called thermodynamics and statistical mechanics. Ideally we would begin this study with a complete understanding of atoms, which requires a complete understanding of the laws of nature that govern atoms, namely quantum mechanics. Ideally, yes. But no one can teach everything all at once, and no one can learn everything all at once. Our approach will be primarily to follow the historical development of this subject, while occasionally highlighting the errors of the then-current, but now-discarded models — errors whose resolutions sometimes led to dramatic scientific advances. With this approach, you will come to understand not only the subject but also how science progresses through halting steps and ever better approximations. Recall that our goal is the least inadequate description of truth. Scientists began developing thermodynamics nearly four centuries ago, well before the atomic theory was firmly established. Its approximate understandings became effective, and helped propel the Industrial Revolution by replacing the manual labor of humans and domesticated animals with inexhaustible machines. In V1p39-2, Feynman begins by presenting an interesting example of the success of partial knowledge: the rule of simple integral proportions in the chemistry of gases. When different gases react chemically, it is observed that the amounts of each reacting gas are always small integer multiples of one another. This observation ultimately led Amedeo Avogadro (1776 – 1856) to conclude that, at a fixed temperature and pressure, equal volumes of gas contain equal numbers of molecules. Let’s see how much we can understand about gases from Newton’s laws of mechanics and the atomic model. Since we cannot keep track of the individual positions and velocities of trillions of trillions of atoms, we must find ways to relate macroscopic properties to the average properties of immense numbers of atoms. As you will see, “average” will become ubiquitous in our discussion.

Gas Pressure

Pressure is one macroscopic property of gases. Gases exert pressure because they are comprised of atoms in continuous motion. In V1p39-2, Feynman says that if our ears were a few times more sensitive, a constant cacophony would overwhelm us. That constant racket would come from individual air molecules banging against our eardrums. Let me provide the actual numbers: each second, at standard temperature and pressure, every exposed surface suffers 8×10²³ air molecule impacts per square centimeter, which is about the area of two human eardrums. That is nearly a trillion, trillion drumbeats per second. Fortunately, our ears are not sensitive enough to notice.

To analyze gas pressure quantitatively, consider the gas cylinder in Figure 15-1. Let’s call the gas volume V, the area of the piston’s face A, the horizontal cylinder length L, and the coordinate that measures the piston’s horizontal position x. With these definitions, V=AL.

Figure 15-1 Piston in Cylinder Filled with Gas

We will assume there is nothing outside the cylinder — no gas molecules to impact the piston from the outside. If so, a force F is required to hold the piston stationary, because gas molecules inside the cylinder bang against the piston and exert some force. The same amount of force is exerted uniformly across the piston’s face, so we define pressure P to be the force per unit area. For a piston face of area A, the pressure is: P=F/ A If we push the piston inward an infinitesimal distance dx, the work done compressing the gas is: dW = F dx = P A dx = – P dV Here, x increases to the right, and an increasing x reduces gas volume V. Let’s now calculate the force F exerted by the gas on the piston, recalling that force is the time

derivative of momentum p:

F = dp/dt

We first calculate the momentum transferred to the piston by the impact of a single gas molecule, and then multiply by the number of such hits in a time interval dt. We shall assume each impact is perfectly elastic, meaning that the gas molecule has the same total energy after the collision that it had before the collision. (In V1p39-3, Feynman points out that if the collisions were not elastic, the gas would lose energy, transferring it to the piston, which would heat up, eventually heating the gas, until some equilibrium is achieved. Let’s just assume we are already at equilibrium.) If the piston’s face were a perfect plane, it would exert forces only in the x-direction, and only change the x-component of the velocities of impacting gas molecules. Even a real-world piston can only change the x-components, on average, due to symmetry. Hence, in an elastic collision, with an immovable wall in the yz-plane, vx after the collision equals –vx before the collision.

The momentum change dp for one collision of a molecule of mass m is:

dp = 2 m vx

Next, we need the number of such collisions during a time interval dt. During dt, each molecule moves a distance dx in the x-direction equal to (vx)dt. On average, half will be moving toward +x and half toward –x. Thus, within dt, each molecule moving toward –x that starts within a distance dx = (vx)dt of the piston will hit the piston.

Assuming the gas is uniformly distributed, the fraction of gas molecules within dx is the ratio of dx to the total horizontal distance L. For N total gas molecules, the number j that will hit the piston in time dt is then:

j = (1/2) N dx / L = (1/2) N vx dt / L

The 1/2 is due to half the molecules moving toward –x. Recall that L=V/A. The total momentum dp transferred to the piston during time interval dt is then:

dp = (2 m vx) (1/2) (N A / V) vx dt
F = dp/dt = (N A / V) m vx²
P = F / A = (N / V) m <vx²>

Since each molecule has a different velocity, what we really need for the total pressure is the average value of vx², which is what <vx²> stands for in the last equation.

Let's take this a step further. In bouncing throughout the cylinder, gas molecules, on average, have the same velocities in each of the three coordinate directions: x, y, and z. Thus:

⟨v_x²⟩ = ⟨v_y²⟩ = ⟨v_z²⟩
⟨v_x²⟩ = (1/3) ( ⟨v_x²⟩ + ⟨v_y²⟩ + ⟨v_z²⟩ ) = (1/3) ⟨v²⟩

So,

P = (1/3) (N / V) m ⟨v²⟩
PV = (2/3) N ⟨mv²/2⟩

Note that ⟨mv²/2⟩ is the average kinetic energy of the gas molecules.

Molecules with multiple atoms have internal motions, such as vibration and rotation, each with associated energies. We will discuss those later. For now let's restrict ourselves to monatomic molecules, for which we need only consider their kinetic energy mv²/2.

Since the average kinetic energy multiplied by the total number of molecules equals the gas' total kinetic energy Ŧ, we have:

PV = (2/3) Ŧ, with Ŧ = total kinetic energy

I use the symbol Ŧ for kinetic energy because thermodynamics conventionally uses T for temperature. In V1p39-6, Feynman acknowledges the confusion to innocent students caused by the multiple meanings of many physics symbols. We use p for pressure and momentum, v for volume and velocity, and t can be temperature, kinetic energy, time, or torque. His only advice: "One must keep one's wits about one!" Where possible, I will use slightly different symbols, such as Ŧ and T.

Thermodynamics deals with all sorts of gases, most of which are more complicated than simple monatomic gases. For greater generality, physicists have adopted this notation:

PV = (γ–1) Ŧ

Sorry that we are using γ here, which means something else in special relativity, but that is the standard convention. For a simple monatomic gas, such as helium at room temperature, γ = 5/3. This γ is called the specific heat ratio or the adiabatic index.

If we compress a gas, work is expended pushing against the gas pressure. If that work is entirely converted into increasing the gas' kinetic energy (rather than heating the walls, for example), we call that adiabatic compression. In this case, the work done, which we learned was dW = –PdV, increases kinetic energy Ŧ:

–P dV = dŦ for an adiabatic process
–P dV = d(PV) / (γ–1)
–P dV (γ–1) = V dP + P dV
–γ P dV = V dP
–γ dV / V = dP / P

Recall that: ∫ dx / x = ln(x). This yields:

–γ ln(V) = ln(P) + C
V^(–γ) = P e^C
P V^γ = some constant

Indefinite integrals always produce an arbitrary constant, because the derivative of any constant is zero. Thus for a monatomic gas, compressed adiabatically (without heat losses), the pressure multiplied by the 5/3rds power of the volume is a constant.
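To make the adiabatic relation concrete, here is a minimal numerical sketch of my own (the starting pressure and volume are illustrative, not from the text). It compresses a monatomic gas in many tiny steps, applying dŦ = –P dV and PV = (γ–1)Ŧ at each step, and confirms that P V^γ stays essentially constant.

# Minimal sketch: P V^gamma is constant during adiabatic compression (gamma = 5/3).
gamma = 5.0 / 3.0
P, V = 1.0e5, 1.0e-3            # illustrative starting pressure (Pa) and volume (m^3)
T_kin = P * V / (gamma - 1.0)   # total kinetic energy from PV = (gamma - 1) * T_kin
PV_gamma_start = P * V**gamma
dV = -1.0e-8                    # compress in many tiny steps
for _ in range(50000):
    T_kin -= P * dV             # adiabatic: -P dV = dT_kin
    V += dV
    P = (gamma - 1.0) * T_kin / V
print(P * V**gamma / PV_gamma_start)   # prints ~1.000: P V^gamma is unchanged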

Squeezing Light V1p39-6 Inside a very hot star, photons can be trapped by the charged particle plasma and act like a hot gas confined inside a box. We get the same equations as in the last section, except we need the proper expressions for a photon's momentum and energy: p_x rather than m v_x, and E = pc rather than mv²/2.

dp = (2 p_x) (1/2) (N A / V) v_x dt
P = (dp/dt) / A = (N / V) ⟨p_x v_x⟩

Again:

⟨p_x v_x⟩ = (1/3) ⟨p•v⟩ = (1/3) ⟨pc⟩
P = (N / V) ⟨E⟩ / 3

Thus for photons, γ–1 = 1/3 and γ = 4/3, so:

P V^(4/3) = some constant

We have calculated the pressure of light in a star — look what you have learned to do!
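A quick numerical sketch of P = (N/V)⟨E⟩/3, with illustrative numbers I chose (not values from the text): supply a photon number density and a mean photon energy, and read off the pressure the trapped light exerts.

# Minimal sketch: photon-gas pressure from P = (N/V) * <E> / 3.
eV = 1.602e-19                 # joules per electron volt
n = 1.0e28                     # assumed photon number density, photons per m^3
mean_E = 1000.0 * eV           # assumed mean photon energy, roughly 1 keV
P = n * mean_E / 3.0
print(P, "pascals")            # pressure exerted by the trapped photons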

Temperature & Kinetic Energy V1p39-6 We now wish to establish the connection between the kinetic energy of a gas and its temperature. We said before that when a gas is compressed its temperature rises. The work energy expended in pushing against the gas pressure is converted into heat energy, raising the gas temperature. We also know that if we put ice cubes in hot tea, we will eventually get diluted tepid tea. Heat energy flows from hot objects to cold objects until both bodies have the same temperature.

Despite our intuitions, and despite the feeling of diving into cold water, there is no "negative heat" that flows from cold to hot. To understand thermodynamics, the science of heat, we need a quantitative understanding of temperature. What does "hot" mean exactly? The only mechanical energy that a monatomic gas, a loose collection of individual atoms, can have is kinetic energy. (Atoms do have mass energy, but that cannot change at temperatures below millions of degrees. They also have gravitational potential energy, but we assume here that our gas volumes are small enough that any height changes result in negligible changes in gravitational potential energy.) Hence heat energy can only be the kinetic energy of the motion of individual atoms. Temperature is how we measure the average kinetic energy of motion at the atomic level. At temperature T, the kinetic energy in each degree of freedom equals kT/2, at equilibrium. Let's address each part of that statement. T is measured on the Kelvin temperature scale, which is like the centigrade scale, except its zero is shifted. Absolute zero temperature is defined to be zero Kelvin, or 0 K (we do not say "degrees Kelvin"), and a change of 1 K equals a change of 1ºC. Water normally melts at 273.15 K and boils at 373.15 K. Since kinetic energy is never negative, T can never be negative on the Kelvin scale. The k in kT/2 is Boltzmann's constant, which has a value of 1.38×10⁻²³ joules per Kelvin.

Finally, each independent motion is one degree of freedom. For example, we have learned that x, y, and z are independent directions of space that permit independent motions. At temperature T, the average kinetic energy associated with motion in the x-direction is:

⟨m v_x²⟩ / 2 = kT/2

In three dimensions, there are three degrees of freedom; hence the average kinetic energy of 3-D motion is:

⟨m v²⟩ / 2 = (3/2) kT
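As a quick sanity check of ⟨mv²⟩/2 = (3/2)kT, here is a minimal sketch of my own (the choice of the nitrogen molecule is mine, not the text's) computing the root-mean-square speed of N₂ at 300 K.

# Minimal sketch: rms speed from (1/2) m <v^2> = (3/2) k T.
k = 1.38e-23                    # Boltzmann's constant, J/K
m_N2 = 28.0 * 1.66e-27          # mass of an N2 molecule, kg (28 atomic mass units)
T = 300.0                       # kelvin
v_rms = (3.0 * k * T / m_N2) ** 0.5
print(v_rms, "m/s")             # roughly 500 m/s at room temperature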

Polyatomic molecules have additional degrees of freedom: they can rotate about various axes, and their atoms can vibrate back and forth along a bond axis. The concept that each degree of freedom has the same energy, on average at equilibrium, is called the principle of equipartition. The following lengthy analysis connects temperature and average atomic kinetic energy in a monatomic gas. Feynman does not claim that all the derivations in the remainder of this section are mathematically rigorous. I think some are, some are not, but all are instructive. You decide. You can always skip on to the section titled: Ideal Gas Law. We begin by demonstrating the equality of the kinetic energies of two different gases in equilibrium.

For simplicity, both gases will be monatomic, meaning the “molecules” are really just individual atoms. When two atoms collide at modest temperatures, their collision is perfectly elastic, because individual atoms cannot readily absorb energy internally. As we said earlier, their energies are entirely kinetic. The nearest macroscopic analog is a collision between two absolutely rigid balls; if the balls cannot deform, there cannot be any dissipative losses. In V1p39-8, Feynman reasons that, regardless of the initial conditions, atomic collisions will eventually lead to an equilibrium in which atoms of both types have the same average kinetic energy. We will demonstrate this using the concept of the center of mass (CM) of two or more objects. While we have used the term before, we now require a precise definition. The momentum of the CM of N objects is the vector sum of the momenta of each object. The CM frame is the reference frame in which the CM momentum is zero — the frame in which the center of mass is stationary. When viewed in the center of mass of the collision of two atoms, the direction of the atoms’ recoil is randomly and uniformly distributed over all possible directions, and has no correlation to the atoms’ initial directions of motion. Over time, the collisions will randomize the directions of the atoms’ motion. All that goes by very quickly. The next section explains this more thoroughly for those wishing a better understanding. You can skip onward, but you will miss some good physics.

Collision Kinematics in CM In our analyses, we will treat atoms as if they were Newtonian, tiny rigid billiard balls. Figure 15-2 shows a collision between two different atoms as viewed in their CM frame. The diagonal solid line is the axis of symmetry, which passes through the centers of both atoms and their point of contact. The symmetry axis is rotated from the incoming direction by the angle ø.

Figure 15-2 Collision in Center of Mass

Call the masses of the two atoms m_1 and m_2. The atoms' velocities are defined to be u_1 and u_2 before the collision and v_1 and v_2 afterward.

We will shortly show that the magnitude of each atom's velocity, its speed, is the same before and after the collision. With that, we see the v's are at the same angles to the axis of symmetry as are the u's. Thus the scattering angle of each atom is: θ = π – 2ø. Energy and momentum conservation require v_1² = u_1² and v_2² = u_2², as we now prove.
m_1 v_1 + m_2 v_2 = p_CM = 0 in CM
v_1 = – v_2 m_2 / m_1

m_1 v_1²/2 + m_2 v_2²/2 = E
m_1 (–v_2 m_2 / m_1)² + m_2 v_2² = 2E
v_2² m_2²/m_1 + m_2 v_2² = 2E
m_2 v_2² (1 + m_2/m_1) = 2E

v_2² = 2E / {m_2 (1 + m_2/m_1)}
v_1² = v_2² (–m_2/m_1)² = 2E / {m_1 (1 + m_1/m_2)}
Repeating the above derivation for the u's produces exactly the same equations:

u_2² = 2E / {m_2 (1 + m_2/m_1)}
u_1² = u_2² (–m_2/m_1)² = 2E / {m_1 (1 + m_1/m_2)}

These equations show that u_2² = v_2² and u_1² = v_1², and that the magnitudes of all these velocities are completely determined by the initial values of energy and mass.
Now let's consider the scattering angle. The left side of Figure 15-3 shows the effective cross-section of an oncoming atom, call it atom #1, as viewed by the other colliding atom, atom #2. If the atoms' radii are r_1 and r_2, they will collide if their centers pass within r = r_1 + r_2. We describe this by saying a collision occurs if atom #2's center hits a circle of radius r centered on atom #1; π r² is the effective cross-section.
Figure 15-3 Collision Cross Section

Any point within the circle on the left side of the figure is equally likely to be the collision point. Consider the shaded portion of that circle, an annular ring, all of which lies in a narrow range of angles, ø±dø, relative to the velocity of atom #1, the horizontal axis. As we see, ø runs from 0 to π/2. This is the same angle ø as in Figure 15-2. On the right side of Figure 15-3, the surface area A of the sphere within the shaded region is the ring's circumference times its arc length r dø:

A = 2π (r sinø) (r dø)
A = 2π r² sinø dø = –2π r² dcosø

The last step above demonstrates the important result that the surface area of an annular ring on a sphere is proportional to dcosø. Do not let the minus sign bother you; for dø>0, dcosø<0. Since every point within the circle is equally likely to be the collision point, and equal ranges of dcosø correspond to equal areas on the sphere, the recoil directions are uniformly distributed over all directions, as claimed earlier.

At equilibrium, the direction of the relative velocity v_1 – v_2 of two colliding atoms is uncorrelated with the direction of their center of mass velocity v_CM, so its average projection on v_CM is zero:

0 = ⟨ (v_1 – v_2) • v_CM ⟩
0 = ⟨ (v_1 – v_2) • (m_1 v_1 + m_2 v_2) / (m_1 + m_2) ⟩
0 = ⟨ m_1 v_1² – m_2 v_2² + (v_1 • v_2)(–m_1 + m_2) ⟩
In V1p39-9, Feynman argues that, at equilibrium, the average directions of the colliding atoms are uncorrelated, hence ⟨v_1 • v_2⟩ = 0. The above equation then reduces to:

⟨m_1 v_1²⟩ = ⟨m_2 v_2²⟩

This means the atoms' kinetic energies are equal, on average. I am not convinced. Colliding atoms only collide if they approach one another. In the CM frame, define the x-axis to be along their line of approach. In this case, both atoms' y- and z-velocities are zero, and their x-velocities have opposite sign. Hence, v_1 • v_2 is negative in the CM of every atomic collision. Since the dot product is invariant, ⟨v_1 • v_2⟩ is negative in all reference frames and coordinate systems. In a footnote, Feynman says his statement is true but his proof is not rigorous, adding: "We have found no simple proof of this result."

A better argument is to consider every pair of atoms, one of each type, whether they are currently involved in a collision or not. We can still calculate their v_CM and relative velocity v_1 – v_2, which are still uncorrelated at equilibrium. But now, v_1 and v_2 truly are uncorrelated as well, so ⟨v_1 • v_2⟩ = 0, which proves that the average kinetic energies of both types of atoms truly are equal.

Let me add a comment about why ⟨v_1 • v_2⟩ = 0 if v_1 and v_2 are uncorrelated. If we pick any two molecules, call them 1 and 2, the dot product of their velocities is virtually never zero. They would have to be exactly perpendicular to have zero dot product. That is never going to happen. However, what we focus on in thermodynamics and statistical mechanics are global averages of trillions upon trillions of molecules. When we average over such immense collections, as many dot products will be positive as negative, so we can be sure the sum over all pairs is indeed infinitesimal.

From above we have:

⟨ m_1 v_1²/2 ⟩ = ⟨ m_2 v_2²/2 ⟩
According to our definition of temperature, this shows that “equilibrium” means both gas types have the same temperature, which is what we should expect.
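To see this equilibration happen numerically, here is a toy Monte Carlo sketch of my own (not from the lectures). Pairs of atoms of two different masses collide elastically, with the recoil direction in the CM frame chosen at random, as argued above; the average kinetic energies of the two species converge toward a common value.

# Toy model: random elastic collisions between two species drive their
# average kinetic energies toward equality.
import random, math
random.seed(1)
m1, m2 = 1.0, 4.0                      # arbitrary masses for the two species
N = 2000                               # atoms per species

def rand_dir():
    # random unit vector, uniform over the sphere (uniform in cos(theta))
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

# start far from equilibrium: species 1 moving fast, species 2 at rest
v1 = [tuple(3.0 * c for c in rand_dir()) for _ in range(N)]
v2 = [(0.0, 0.0, 0.0) for _ in range(N)]

def avg_ke(vs, m):
    return sum(0.5 * m * (x*x + y*y + z*z) for x, y, z in vs) / len(vs)

for step in range(100000):
    i, j = random.randrange(N), random.randrange(N)
    a, b = v1[i], v2[j]
    M = m1 + m2
    vcm = tuple((m1 * a[k] + m2 * b[k]) / M for k in range(3))
    # the magnitude of the relative velocity is preserved; its direction is randomized
    vrel = math.sqrt(sum((a[k] - b[k]) ** 2 for k in range(3)))
    d = rand_dir()
    v1[i] = tuple(vcm[k] + (m2 / M) * vrel * d[k] for k in range(3))
    v2[j] = tuple(vcm[k] - (m1 / M) * vrel * d[k] for k in range(3))

print(avg_ke(v1, m1), avg_ke(v2, m2))   # nearly equal after many collisions

Each collision in this sketch conserves momentum and kinetic energy exactly, so only the direction-randomizing step, the ingredient Feynman's argument relies on, is responsible for the equalization.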

Two Gases and a Piston

We now consider a system with the same two monatomic gases, but this time the gas types will be separated. Figure 15-4 shows a cylinder with a central piston that can freely move left and right. One gas type is on the piston’s left side, and the other gas type is on the piston’s right side.

Figure 15-4 Two Types of Gases Separated by a Piston

Let's say the atoms on the left have mass m_1 and those on the right have mass m_2.
If the pressure on the right side of the piston were higher than on the left, the piston would move left, thus raising the pressure on the left and lowering the pressure on the right. Eventually the piston will settle at a position with equal pressure on each side. In that equilibrium position, let the average speed of the atoms on the left be v_1, and on the right be v_2. Let's also define the atomic density (N/V = number of atoms divided by volume) to be n_1 on the left and n_2 on the right. Then from our prior gas pressure equation, we have:

P_LEFT = P_RIGHT
(1/3) n_1 m_1 ⟨v_1²⟩ = (1/3) n_2 m_2 ⟨v_2²⟩
n_1 ⟨m_1 v_1²/2⟩ = n_2 ⟨m_2 v_2²/2⟩
The quantities in ⟨⟩'s are the average kinetic energies of the two types of atoms. In addition to equalizing pressures, there is another condition that the gases must achieve to reach thermal equilibrium: the average kinetic energies of the separated gas types must become equal. Earlier we proved their energies are equal if the gases are in the same volume. In V1p39-9, Feynman provides three arguments for the equality of average kinetic energies of the separated gases. In his first argument, Feynman opens a hole in the piston that only the smaller right-side atoms can penetrate. The smaller atoms will switch sides at some rate and exchange energy with atoms on each side. Eventually, after enough border crossings, all atoms will have the same average kinetic energy. In his second argument, Feynman replaces the hole with a dumbbell with a universal joint in the middle and one large ball on each side, which penetrates the piston. As atoms strike the balls, the dumbbell transfers energy across the barrier, and equilibrium is eventually achieved. Both of these arguments require changing the setup, which seems to defeat the purpose of studying separated gases. His third argument is better. Feynman points out that, microscopically, even at equilibrium, the piston is not absolutely stationary, but rather continually jiggles as it is struck by one atom after another. If we treat the piston as one giant molecule, our earlier proof states that it must reach equilibrium with the gas to its left. If x is its only allowed direction of motion, the piston's ⟨m v_x²⟩ must equal the left-side gas' ⟨m v_x²⟩. Similarly, it must be in equilibrium with the gas to its right. This can only be true if the left and right gases are also in equilibrium — having the same average kinetic energy.
This last argument rests upon the idealization that the piston is an absolutely rigid body that can be treated as a single entity. More realistically, the piston is a collection of atoms that must eventually come into equilibrium with themselves and with the gas molecules bombarding them from both the left and right sides. At equilibrium, all these atoms must have the same average kinetic energies. Finally… After all that, we have demonstrated that at equilibrium:

(1) n_1 ⟨m_1 v_1²/2⟩ = n_2 ⟨m_2 v_2²/2⟩
(2) ⟨m_1 v_1²/2⟩ = ⟨m_2 v_2²/2⟩

Hence, n_1 = n_2 — equal volumes of gas contain equal numbers of molecules (although we have only proved it for monatomic molecules, so far).
We see that equilibrium is the state that a complex system eventually attains in which all of its parts have the same temperature and kinetic energy, on average.

Polyatomic Molecules Let's now extend these derivations to polyatomic molecules. Consider a diatomic molecule comprised of atom A and atom B, and define their masses to be m_A and m_B, with M = m_A + m_B.
In V1p39-11, Feynman says that when a polyatomic molecule hits another molecule, “the only thing that counts is how fast [it is] moving.” During the collision, Feynman says the molecule’s interatomic bonds play no significant role. In calculating collision kinematics, we may therefore disregard the molecule’s internal motions: how the atoms are vibrating and rotating about the molecule’s center of mass. This is a simplifying assumption

that is approximately true on average at modest temperatures. Let's compute the diatomic molecule's kinetic energy. We begin with the equation for its center of mass velocity:

v_CM = (m_A v_A + m_B v_B) / M
M² v_CM² = m_A² v_A² + m_B² v_B² + 2 m_A m_B (v_A • v_B)
M² ⟨v_CM²⟩ = 3kT m_A + 3kT m_B + 2 m_A m_B ⟨v_A • v_B⟩
M² ⟨v_CM²⟩ = 3kT M + 2 m_A m_B ⟨v_A • v_B⟩
M ⟨v_CM²⟩ / 2 = 3kT/2 + ⟨v_A • v_B⟩ (m_A m_B / M)
We need to calculate ⟨v_A • v_B⟩. Using the same logic as before, the relative velocity between the two atoms in the molecule, (v_A – v_B), is uncorrelated with the velocity of the molecule's center of mass at equilibrium.
0 = ⟨ (v_A – v_B) • v_CM ⟩
0 = ⟨ (v_A – v_B) • (m_A v_A + m_B v_B) / M ⟩
0 = ⟨ m_A v_A² + (m_B – m_A)(v_A • v_B) – m_B v_B² ⟩
0 = 3kT – 3kT + (m_B – m_A) ⟨v_A • v_B⟩
0 = ⟨v_A • v_B⟩, if m_A not equal to m_B
This is an interesting result. I am surprised that the velocities of two bound atoms are not correlated — after all, they move together, bounce together off the piston, cylinder walls, and other molecules. But, they also move opposite to one another as they vibrate and rotate. Let's divide their velocities into two components, v_COM and v_OPP, their common and opposite motions respectively (so that v_A = v_COM + v_OPP and v_B = v_COM – v_OPP). Then:

0 = ⟨v_A • v_B⟩ = ⟨(v_COM + v_OPP) • (v_COM – v_OPP)⟩
0 = ⟨v_COM² – v_COM • v_OPP + v_OPP • v_COM – v_OPP²⟩
⟨v_COM²⟩ = ⟨v_OPP²⟩
That says the atoms move as much in opposition as they move together, which I did not expect. Our derivation of ⟨v_A • v_B⟩ = 0 does not directly apply to H₂, N₂, O₂ or other homonuclear molecules with m_A = m_B. However, if we set m_B = m_A + dm, ⟨v_A • v_B⟩ equals zero for every nonzero dm, so the limit as dm goes to zero of ⟨v_A • v_B⟩ is indeed zero. Unless nature is discontinuous at dm = 0, our result should apply to homonuclear molecules as well. Note however that sometimes nature is discontinuous: certain quantum behaviors arise only when two objects are exactly identical.
Plugging ⟨v_A • v_B⟩ = 0 into our equation for the molecule's kinetic energy yields:

M ⟨v_CM²⟩ / 2 = 3kT/2
Hence the diatomic molecule, treated as a single entity, has the same average kinetic energy at

equilibrium as does a monatomic molecule. But, if we treat the two bound atoms as separate entities, their total average kinetic energy would be 3kT, twice as much. That's OK. A diatomic molecule's total energy actually is 3kT, with half being the kinetic energy of the motion of the molecule as a whole, and the other half being the internal motions of its component atoms vibrating and rotating. Clearly we can extend this result to any polyatomic molecule. If a molecule has n atoms with masses m_j and velocities v_j, j = 1...n, the key equation above becomes:

M² v_CM² = Σ_j {m_j² v_j²} + Σ_j≠k {m_j m_k (v_j • v_k)}
M² ⟨v_CM²⟩ = Σ_j {3kT m_j} = 3kT M
M ⟨v_CM²⟩ / 2 = 3kT/2
Note that the right hand sum in the first equation is over all non-equal values of j and k. Therefore, at equilibrium, the average kinetic energy of n-atom molecules is distributed as:

kinetic energy of CM: 3 kT/2
internal motion energy: 3(n–1) kT/2
total kinetic energy: 3n kT/2

Ideal Gas Law V1p39-10 At the start of this chapter we showed:

P V = (2/3) N ⟨mv²/2⟩

We can replace ⟨mv²/2⟩ with (3/2) kT, yielding:

P V = N kT

This means that, at equal temperatures and pressures, equal volumes of gas contain the same number of molecules. We derived that just from Newton's laws and the assumption that everything is made of atoms. For historical reasons, chemists prefer to measure quantities in moles, rather than the number of atoms or molecules. One mole is the number of atoms in 12 grams of carbon-12, which is Avogadro's number N_A = 6.022×10²³.

Chemists also define R = k N_A = 8.314 joules per mole-Kelvin. In those terms, the more common form of the equation is:

P V = n R T

Here n is the number of moles of gas. This is called the Ideal Gas law, the law for gases comprised

of point-particles that interact only mechanically, without any chemical or nuclear reactions.
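As a quick check of P V = N kT, here is a minimal sketch with round numbers of my own choosing: one liter of gas at atmospheric pressure and 300 K contains roughly 2.4×10²² molecules, about 0.04 mole.

# Minimal sketch: how many molecules are in 1 liter at ~1 atm and 300 K?
k = 1.38e-23          # Boltzmann's constant, J/K
N_A = 6.022e23        # Avogadro's number
P = 1.013e5           # pascals (1 atm)
V = 1.0e-3            # cubic meters (1 liter)
T = 300.0             # kelvin
N = P * V / (k * T)   # from P V = N k T
print(N, N / N_A)     # ~2.4e22 molecules, ~0.04 mole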

Chapter 15 Review: Key Ideas 1. Equilibrium is the state that a complex system eventually attains in which all of its parts have the same temperature and kinetic energy, on average. All of the following statements assume equilibrium conditions.

2. A gas containing N molecules in a volume V, exerts an omnidirectional pressure P, a force per unit area, given by: P = (2/3) (N / V) ⟨mv²/2⟩

Here ⟨ ⟩ denotes the average value per molecule.

3. At equal temperature and pressure, equal volumes of different gases contain equal numbers of molecules.

4. The principle of equipartition states that at temperature T: ⟨mv²/2⟩ = kT/2 per degree of freedom

Here, Boltzmann's constant k = 1.38×10⁻²³ joules per Kelvin, temperature is measured in Kelvin (0 K = absolute zero temperature; water normally melts at 273.15 K and boils at 373.15 K), and each independent motion constitutes one degree of freedom. For 3-D motion ⟨mv²/2⟩ = 3kT/2.

5. A gas molecule containing n atoms has, on average, a total kinetic energy of 3nkT/2, of which 3kT/2 is the kinetic energy of the motion of the entire molecule's center of mass, and 3(n–1)kT/2 is the kinetic energy of its internal motions: vibrations and rotations about its CM.

6. In diatomic molecules, atoms move in opposite directions as much as in the same direction.

7. Ideal gases are comprised of point-particles that interact only mechanically, without any chemical or nuclear reactions. The Ideal Gas Law is: P V = n R T = N kT. Here, N is the number of molecules, n is the number of moles of gas, 1 mole = 6.022×10²³ molecules, and R = 8.314 joules per mole-Kelvin.

8. Any change in pressure and/or volume in which no energy is dissipated (total work plus kinetic energy is constant), is called adiabatic. During adiabatic changes:

P V = (γ–1) Ŧ
P V^γ is constant

Here Ŧ is the total kinetic energy of the gas, and γ=5/3 for monatomic gases and γ=4/3 for light inside a star.

Chapter 16 Statistical Mechanics We will continue discussing equilibrium conditions, that subset of phenomena that complex systems eventually attain when the average temperature and kinetic energy of all its parts have equalized. The laws of mechanics that apply specifically to equilibrium conditions are called statistical mechanics, the subject of this chapter. We have learned that, on average, the kinetic energy associated with each degree of freedom is: ⟨m v²/2⟩ = kT/2

Here, we will explore how much variation exists. For example, does every atom have exactly that energy? If not, how many have twice that energy? We want to understand how molecules are distributed in space and in velocity. We will later discover that the velocity distribution is always the same, because collision kinematics is independent of external forces. But, let’s first examine spatial distributions. Consider the distribution of molecules in an artificial atmosphere that has the same force of gravity and temperature throughout. (This is not Earth’s atmosphere, which is colder at high elevations, and extremely hot at extremely high elevations.) Figure 16-1 illustrates a column of air of varying pressure, with a device suggested by Feynman to equalize temperatures.

Figure 16-1 Pressure & Temperature Equalizer

In Feynman's scheme, large rigid balls are confined to a vertical tube. Balls at each end of the tube are exposed to air molecules that bounce against them. The balls also bounce against one another. Thus at equilibrium, all the balls and air molecules must reach the same temperature. (If you want to warm the upper atmosphere or cool your home, all you need is an insulated 20km-long tube.) Also as the figure indicates, air pressure changes with height. The pressure is P at height h and is P–dP at height h+dh, with both dP and dh being positive. We will consider the limit in which dh and dP become infinitesimal. The pressure difference dP supports the weight of the air between h and h+dh. Let's see how that works. The pressure at any height is:

P = (N / V) kT

Define µ(h) to be the molecular number density (number N divided by volume V) at height h. The pressure difference is then:

dP = P(h) – P(h+dh)
dP = [µ(h) – µ(h+dh)] kT
dP = – [dµ/dh] dh kT

Now, what is the weight of the air between h and h+dh? Let m be the mass of one molecule. The number of molecules per unit area between h and h+dh equals µ dh, the density multiplied by dh. The gravitational force per unit area pulling down on those molecules is:

F / A = – g m µ dh

At equilibrium, the differential pressure dP pushing up must exactly balance the gravitational force pushing down:

dP = – F / A
–[dµ/dh] dh kT = mg µ dh
dµ/µ = – (mg / kT) dh

Now take the indefinite integral of both sides.

ln(µ) = – (mg / kT) h + C
µ = µ_0 exp{– mgh / kT}

Above, µ_0 = e^C is an arbitrary constant from the indefinite integral, which we choose to equal the molecular number density at sea level, defined to be h=0. Since g and k are constants, the variable
factors affecting the density distribution are mass, height, and temperature. As elevation increases at a fixed temperature, the density of molecules decreases exponentially, with heavier molecules decreasing faster than lighter ones. The lightest polyatomic gas is H₂, which is quite rare in our atmosphere, only about 1 part per million. Its low mass results in a very slow rate of density decline with increasing elevation. Its rarity is due in part to the extreme altitudes that hydrogen molecules can reach. From there, hydrogen easily drifts off into space.

Argon-40 is the third most common gas in Earth's atmosphere, at almost 1% abundance. At 20 times the mass of H₂ and 1.4 times the mass of N₂, Argon abundance drops rapidly with altitude.
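Here is a minimal sketch (my own illustrative numbers) of the exponential density law µ = µ_0 exp{–mgh/kT}, comparing how quickly N₂ and H₂ thin out with altitude in an isothermal atmosphere at 300 K.

# Minimal sketch: isothermal-atmosphere density ratio mu/mu0 = exp(-m g h / kT).
import math
k, g, T = 1.38e-23, 9.8, 300.0        # J/K, m/s^2, kelvin
amu = 1.66e-27                        # kg per atomic mass unit
for name, mass in (("N2", 28.0 * amu), ("H2", 2.0 * amu)):
    for h in (10e3, 100e3):           # 10 km and 100 km
        ratio = math.exp(-mass * g * h / (k * T))
        print(name, int(h / 1000), "km:", ratio)
# N2 falls off far faster than H2, as the text describes.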
Boltzmann's Law Let's examine again the exponent of the density distribution: –mgh/kT. Since we assumed g was constant, this exponent is equivalent to:

– U / kT

Here, U = mgh is the gravitational potential energy of one molecule of mass m. This is in fact a general rule, more general than the equation we derived above. Let's show this by redoing our derivation with more general assumptions. Let U(h) be the potential energy of one molecule at height h. The potential U could come from any conservative force, including the electrostatic force between charged bodies. In Chapter 10, we noted that Feynman said all fundamental forces are conservative. From U, we can calculate the force per unit area F/A. The force f per molecule is:

f = –grad(U) = – dU/dh

And the number of molecules per unit area between h and h+dh is µ dh as before. If dU/dh is positive, F is negative and points downward.

F / A = (–dU/dh) µ dh = – µ dU

From above, we can use the following general equations that are independent of the type of force:

dP = – F / A
dP = – dµ kT

Combining the equations yields:

dP = dU µ
–dµ kT = dU µ
dµ/µ = – dU / kT
ln(µ) = – U/kT + C
µ = µ_0 exp{–U/kT}, which is Boltzmann's law.
Solid, Liquid, or Gas We can now tackle a more complex potential: the electrostatic potential between two molecules. Figure 16-2 illustrates the potential energy U(r) of two molecules as a function of their separation r.

Figure 16-2 Molecular Potential Energy vs. Distance

As we discussed in Chapter 9, atoms and molecules attract one another at moderate separations and repel one another at very small separations. Each type of molecular pairing has an optimal separation, labeled r_0 in the figure, where the potential energy reaches a minimum. If we know U(r), we can use Boltzmann's law to determine the distribution of separations among a large group of molecules.

For a pair of molecules, call them j and k, U(r_jk) is determined by their separation r_jk. The potential U_N for an ensemble of N molecules is the sum of the pair potentials:

U_N = Σ_jk U(r_jk), sum for j < k
Note that we must sum each pair of j and k once and once only; we should not include both j=2, k=7 and j=7, k=2. This is because this pair of molecules has one potential energy that they share. We accomplish this by restricting the sum to j < k.

For a particle in chaotic (Brownian) motion through a medium that exerts a drag force F = –µ dz/dt, multiplying its z equation of motion by z and averaging (the random molecular impacts average to zero) gives:

⟨m z d²z/dt²⟩ = ⟨–µ z dz/dt⟩
m ⟨z d²z/dt²⟩ = –µ ⟨d(z²)/dt⟩ / 2

Next, we make the substitution:

⟨z d²z/dt²⟩ = ⟨d[(z dz/dt)]/dt – (dz/dt)²⟩
⟨z d²z/dt²⟩ = d[⟨z v_z⟩]/dt – ⟨v_z²⟩

The middle term equals zero because the direction of velocity v_z is random and uncorrelated with position z, which means ⟨z v_z⟩ = 0. Plugging that in yields:

– m ⟨v_z²⟩ = –µ ⟨d(z²)/dt⟩ / 2
⟨d(z²)/dt⟩ / 2 = m ⟨v_z²⟩ / µ = kT / µ
⟨d(z²)/dt⟩ = 2 kT / µ
⟨z²⟩ = 2 kT t / µ

Since all three directions of space are equivalent, the mean squared distance is three times the mean squared z-distance, or:

⟨R²⟩ = 6 kT t / µ
This diffusion equation, developed by Einstein, was of great significance in the advancement of science. Its confirmation led to the universal acceptance of the atomic theory. This equation allowed, for the first time, measurements of the mass of atoms and hence the number of atoms per gram of various materials. This set the atomic scale and defined the values of Boltzmann's constant k, Avogadro's number, and the mole.
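To get a feel for ⟨R²⟩ = 6kTt/µ, here is a minimal sketch of my own; the drag coefficient is an assumed Stokes-law value for a micron-sized sphere in water, not a number from the text.

# Minimal sketch: rms diffusion distance from <R^2> = 6 k T t / mu.
import math
k, T, t = 1.38e-23, 300.0, 1.0        # J/K, kelvin, seconds
eta = 1.0e-3                          # assumed viscosity of water, Pa*s
r = 0.5e-6                            # assumed particle radius, 0.5 micron
mu = 6.0 * math.pi * eta * r          # assumed Stokes drag coefficient, F = -mu * v
R_rms = math.sqrt(6.0 * k * T * t / mu)
print(R_rms * 1e6, "microns")         # roughly 1-2 microns of wandering per second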

Chapter 17 Review: Key Ideas 1. The thermal energies of atoms and subatomic particles drive their chaotic motion, leading to macroscopic consequences, including: Brownian motion; electrical circuit noise; and vibrations in sensitive instruments. Identifying and understanding these effects enabled the first ever measurements of the mass and density of atoms.

2. At 300 K (about room temperature): kT = 4.14×10⁻²¹ joules; 1 joule = 1 kg m²/s².

3. In a random walk of N steps of length r in uncorrelated directions, the average distance R that a molecule reaches is: R = r √N

4. The mean square distance from its origin that a particle in chaotic motion reaches in time t is: ⟨R²⟩ = 6 kT t / µ
Here, µ is the medium’s drag coefficient (F=–µdz/dt).

Chapter 18 Kinetic Theory at Equilibrium We will now explore the ramifications of our previous results concerning the spatial distribution of particles at equilibrium. In Chapter 16, we found that the probability of particles being at location x, where their potential energy is U(x), is proportional to: exp[–U(x) / kT]. Kinetic theory can reveal the general characteristics of many complex phenomena, although a detailed understanding often requires substantial additional knowledge. As such, kinetic theory can be a good starting point, particularly when dealing with mysterious new phenomena. In this chapter we will discuss: the evaporation of a liquid, the emission of electrons from a metal, and chemical reaction rates — quite a broad range of topics.

Liquid Evaporation V1p42-1 Imagine a sealed box at equilibrium partially filled with molecules in the liquid phase and partially filled with a vapor of the same type of molecules in the gas phase. Let n_V be the density of molecules in the vapor, and n_L be the density of molecules in the liquid. We wish to know the ratio of n_V to n_L for various conditions.
From prior discussions, we know that the force between atoms is complicated. Figure 16-2 shows the potential energy U(r) of a pair of atoms in a molecule as a function of their separation r. The potential is very modest for large separations, and grows more negative (the attraction increases) as the atoms get closer. There is an optimal separation r_0 at which the potential energy is most negative.

The binding energy of two objects is defined as the amount of work energy required to completely separate them. In this case, the binding energy of a pair of atoms equals minus their potential energy. In the liquid phase, molecules are close to r_0 and their binding energy, E_L, is close to the maximum possible value: –U(r_0).

By comparison, the binding energy of molecules in the vapor, E_V, is very much smaller, since their separations are much larger.

Let W be the work energy required to move a molecule from the liquid to the vapor phase; W equals the difference in binding energies of the two phases, which is very nearly equal to the binding energy of the liquid. In this case, Boltzmann's law of population ratios, derived in Chapter 16, becomes:

n_V / n_L = exp{–(E_L–E_V) / kT} = exp{–W / kT}
In this chapter, we will explore several phenomena that are characterized by equations of this form, focusing on situations in which W>>kT. In those situations, the exponential dominates all else, allowing us to neglect complex details, thereby making the physics simpler. To demonstrate this, consider the following equation:

Q = A exp{Z}
dQ = dA exp{Z} + A dZ exp{Z}
dQ/Q = dA/A + dZ

Now, let Z=21 and compare the effects of a 1% change in A versus a 1% change in Z:

for dA/A = 0.01, dQ/Q = 0.01
for dZ/Z = 0.01, dZ = 0.21, dQ/Q = 0.21

A change in Z is 21× more impactful than the same percentage change in A. It is much more important to properly understand Z than A. Or said another way, if we can correctly analyze Z, we will understand the dominant part of the process, even if our knowledge of A is sketchy. Let's see how this works in a real situation with:

n_V / n_L = exp{–W/kT}

Imagine we start with W=21kT, which means:

for W = 21 kT, n_V / n_L = 7.6 × 10⁻¹⁰

Now raise the temperature of the liquid by 10%, dropping W/kT to 18.9. The exponential becomes 8.2 times larger, so we might promptly claim the vapor density becomes 8.2 times larger. That would be approximately correct. We have ignored the possibility that the liquid might expand somewhat, lowering n_L, or that the vapor might expand, lowering n_V. Those effects are unlikely to be more than 10% changes, because they are proportional changes, unlike the exponential effect of changing W/kT.

If we were designing a large-scale chemical factory, it would be very important to calculate all this very precisely. But since our interest is in understanding the fundamental laws of nature, we want to focus on key principles. In our case, small changes in n_V and n_L are not particularly interesting.

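A minimal numeric sketch of the example above, using the same numbers as the text, to show how completely the exponential factor dominates the prefactor:

# Minimal sketch: the exponential factor dominates the prefactor.
import math
print(math.exp(-21.0))                      # ~7.6e-10, the n_V/n_L ratio at W = 21 kT
print(math.exp(-18.9) / math.exp(-21.0))    # ~8.2x larger after a 10% temperature rise
print(1.01)                                 # a 1% change in the prefactor A changes Q by only 1%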
The Boltzmann equation we used above is quite general. Let’s re-examine the liquid/vapor density ratio to see if it conforms to the general equation.

Vapor molecules continually bombard the surface of the liquid. As they do, some will bounce back and remain in the gas phase and some will embed themselves into the liquid. Let's say the fraction of vapor molecules that hit the surface and condense into the liquid is C. At the same time, liquid molecules are colliding with one another and exchanging energy. Occasionally, one molecule attains enough kinetic energy to exit the liquid and enter the gas phase. To evaporate, a molecule's kinetic energy must be at least W, and also it must be near the surface and moving upward. The exact conditions to exit the surface are complex; we will not deal with those here. Let's just say that, of all the molecules that have kinetic energy at least W and hit the surface from below, some fraction do evaporate and we will call that fraction V. So the probability of evaporating is proportional to both V and exp[–W/kT], the fraction with adequate energy. To calculate the number of vapor molecules condensing during time t, we need to include another factor. The number hitting the surface in time t is proportional to their vertical velocity v_Z. To hit the surface v_Z must be negative (let v_Z = –u), and the molecules must start within a distance ut of the surface to arrive there within time t.

Similarly, the liquid molecules must have positive v_Z (let v_Z = +u), and be within ut of the surface to arrive within time t. At equilibrium, within a time t, the number evaporating and the number condensing are equal, and we have the following equation:

n_V u t C = n_L u t V exp{–W/kT}
n_V C = n_L V exp{–W/kT}
Here, we assumed the u's are the same on both sides of the equation. Feynman thinks you might object to that. After all we know the evaporating molecules have much higher energy than the average liquid molecules. Well, to be more precise, they did have more kinetic energy before they separated from their bound sites in the liquid; their kinetic energy had to be at least W. But separating from the liquid's attractive potential reduced their energy by precisely W. Their remaining energy has exactly the same Boltzmann distribution as the average molecule's energy. Recall a similar situation in our analysis of molecules in an atmosphere subject to gravity. Only the more energetic molecules can rise up against gravity, but they lose energy in doing so. The end result is that the average kinetic energy is the same at all heights, provided that the temperature is the same at all heights. Here too, the average kinetic energy, and hence the mean square v_Z, is the same for all molecules, provided the temperature is the same everywhere inside the box.
Within the precision of our analysis (we know nothing about C and V), the vapor/liquid ratio is consistent with Boltzmann's equation. Accepting Boltzmann's equation implies C=V, at equilibrium. Since condensation seems much easier to analyze properly than liquid molecules pushing their way to the surface, it is easier to derive C and then use V=C. We can now address a different circumstance. If a pump at the top of the box continuously removes all vapor molecules, what is the evaporation rate? The rate at which liquid molecules evaporate is not affected by whether or not vapor molecules condense. Hence, we can use the expression we computed above:

Evaporation rate = n_L v_Z C exp{–W/kT}
Thermionic Emission V1p42-4 Now consider the emission of electrons from a metal, such as a heated tungsten filament. (There once were a vast number of such filaments in vacuum tubes and incandescent light bulbs — do you remember them?) This problem is really the same as the evaporating liquid. The nuclei of atoms attract electrons. In metals, electrons in the outermost orbits, valence electrons, are weakly held by individual atoms and can flow relatively freely throughout the metal. The amount of energy required to completely remove an electron from a metal is called the metal's work function. While work functions vary with many factors, including surface characteristics, they are typically a few electron volts. For pure elements these range from 2.14 eV for cesium to 5.93 eV for platinum. Imagine a hot metal filament in an enclosed evacuated space. At equilibrium, the number of electrons leaving the filament equals the number of electrons returning to the filament. We use the same equation as above, using F for filament instead of L for liquid, and now the work function W equals qø, where q is the charge of one electron and ø is the voltage required to extract it.

n_V v C = n_F v V exp{–W/kT}

Again, comparing this with the general equation tells us that C=V. If we now apply an external electric field that sweeps up every electron in the "vapor", the return rate will drop to zero, but the emission rate will be unchanged. Hence the emission rate, which equals the current I collected by our external field, is:

I = q v n_F C exp{–W/kT}
A typical work function of 4 eV equals kT at 46,400 K. Since filaments are operated at temperatures that are 40× less, we have W/kT=40, so the quantities that multiply the exponential, the “coefficients”, have far less leverage than do ø and T. Quantum mechanics changes the coefficients, and indeed no one, to Feynman’s knowledge, has derived a completely satisfactory expression for the coefficients, due to many detailed complexities. For example, W might change with temperature, but this effect is nearly impossible to separate from uncertainties in the coefficients.
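A minimal sketch (illustrative temperatures of my own choosing) showing how strongly the exp{–W/kT} factor controls emission for a 4 eV work function; a modest temperature change moves the exponential by many orders of magnitude.

# Minimal sketch: sensitivity of exp(-W/kT) for a 4 eV work function.
import math
k_eV = 8.617e-5                  # Boltzmann's constant in eV per kelvin
W = 4.0                          # work function, eV
for T in (1000.0, 1160.0, 1500.0, 2000.0):
    print(T, math.exp(-W / (k_eV * T)))
# Raising T from 1000 K to 2000 K increases this factor by roughly ten orders of magnitude.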

Thermal Ionization V1p42-5

We next apply the same ideas to another situation: atoms being ionized due to thermal motion. Imagine we fill a sealed box with a neutral gas — each atom has equal numbers of protons and electrons. We then heat the gas to a high temperature, increasing the molecules' kinetic energies, and the frequency and vigor of their collisions. At a high enough temperature, electrons are occasionally knocked out of their atomic orbits and move independently within the box, forming an "electron gas." The atoms that lose electrons are called ions. When the box and its contents have reached equilibrium, let n_0 be the density of neutral atoms (number/volume), n_i be the density of ions, and n_e be the density of free electrons. Also we know n_i = n_e, because the creation of a free electron also creates an ion. (For simplicity, let's assume the number of atoms that have lost multiple electrons is negligible.)
In this situation, what are the relationships between the densities of electrons, ions, and neutral atoms? The energy required to remove an electron from an atom is called the ionization energy. We'll call it W for our gas. The rate at which neutral atoms ionize, R_I, is:

R_I = n_0 C_I exp{–W/kT}
Here the exponential is present because only those atoms with available energy W can ionize. The factor C_I describes details of the atoms and electrons, including their effective sizes, which do not concern us at the moment.
The rate R_N of the reverse process, in which free electrons and ions combine to produce neutral atoms, has no exponential energy factor because no energy is required.

R_N = n_i n_e C_N
Again we have a C_N factor that describes volume factors. At equilibrium the ionization and neutralization rates must be equal, thus:

n_i n_e C_N = n_0 C_I exp{–W/kT}
n_i n_e / n_0 = (C_I / C_N) exp{–W/kT}
This is the Saha equation. The factors in ( )'s that we are ignoring cannot be calculated properly with classical physics, since atoms and electrons are inherently quantum mechanical. One interesting aspect of the Saha equation is its volume dependence. The n’s are all number densities (N/V).

What happens if we increase the gas volume by a factor of 100, while keeping the temperature constant? At first it seems each n becomes 100× smaller. But that would make the left side of the equation 100× smaller, while the right side remains constant. Since that is impossible, the answer lies elsewhere: the ionization fraction must also change. Define:

N = total number of all atoms
V = volume occupied by all atoms
n = number density of all atoms = N/V
f = n_i / n, fraction of ionized atoms

Recalling that n_i = n_e, the Saha equation then becomes:

(fn)² / (1–f)n = (C_I / C_N) exp{–W/kT}
f² / (1–f) = (1/n) (C_I / C_N) exp{–W/kT}
f² / (1–f) = (V/N) (C_I / C_N) exp{–W/kT}
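Here is a minimal sketch of my own that solves f²/(1–f) = K for the ionization fraction f, where K stands for the entire right side of the last equation (an arbitrary value I pick for illustration). K grows as the gas becomes more dilute, and with it the ionization fraction.

# Minimal sketch: solve f^2/(1-f) = K for the ionization fraction f.
import math
def ion_fraction(K):
    # f^2 + K*f - K = 0; take the positive root
    return (-K + math.sqrt(K * K + 4.0 * K)) / 2.0

for K in (0.01, 1.0, 100.0):     # K grows as the gas becomes more dilute
    print(K, ion_fraction(K))
# small K -> f ~ sqrt(K) (weakly ionized); large K -> f approaches 1 (fully ionized)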
The left side of the last equation can become arbitrarily large for f close to 1. As gases become more dilute, V/N increases, and so does the ionization fraction f. Feynman says this helps explain why the tenuous gas of interstellar space is strongly ionized, even though its frigid temperature makes kT much smaller than W.

Chapter 18 Review: Key Ideas 1. Many equilibrium ratios have the form A exp{–W/kT}. When W>>kT, the exponential will dominate the equation. Correctly understanding W reveals much about the dynamics, even if our understanding of A is limited. In what follows, A represents a quantity that is too complex to calculate. Nonetheless, in each case below, we can successfully characterize important phenomena.

2. For a liquid/gas mixture, with liquid binding energy W, the equilibrium ratio of molecular populations is: Gas/Liquid = A exp{–W/kT}

3. For a gas mixture of ionized and neutral molecules, with ionization potential W, the equilibrium densities of ions n_I, free electrons n_e, and neutral atoms n_0, are given by the Saha equation:

n_I n_e / n_0 = A exp{–W/kT}

4. For the chemical reaction X+Y –> XY, with XY binding energy W, the equilibrium densities are governed by:

n_X n_Y / n_XY = A exp{–W/kT}

5. Chemical reactions often have an activation energy A* that must be overcome for both the reaction and the reverse reaction to proceed; these reaction rates are proportional to:

X+Y –> XY: n_X n_Y exp{–A*/kT}
XY –> X+Y: n_XY exp{–(A*+W)/kT}
Chapter 19 Kinetic Theory Near Equilibrium We have developed many important equations describing the conditions of matter after it reaches equilibrium. Now, we will consider situations that are near equilibrium — where conditions are almost but not entirely uniform. These include diffusion of one substance into another, electric currents in batteries, sedimentation, centrifugal separation, molecular diffusion, and heat conduction. Before addressing how different substances diffuse through one another, we must learn more about collisions in a gas at equilibrium.

Collisions At Equilibrium V1p43-1 Every molecule in a gas suffers an endless sequence of collisions with other gas molecules. Let J be the number of collisions that each molecule has on average during a time T. At equilibrium global properties never change; with the same temperature and pressure, and with the same types and numbers of molecules, the number of collisions per second must be the same every second. Hence, J must be proportional to T. Thus, the relationship must be of this form: J = T/τ. Here, the quantity τ is the mean collision time, the average time between collisions. If each molecule is hit one billion times per second, on average, τ is 1 nanosecond. Of course, molecules do not collide exactly once every nanosecond, synchronized to some master clock. But even completely random events do have some average rate of occurrence. Since molecules have no clocks to keep track of time, the probability of a collision must be the same at every instant of time. That probability is such that each molecule has an average time between collisions of τ, and the probability that any particular molecule M has a collision during any infinitesimal time interval dt is: Prob(M is hit during dt) = dt / τ. Now define N(t) to be the number of molecules that have not had a collision between some starting time 0 and time t. Clearly N(t) decreases over time. The number of collisions in time dt suffered by a group of molecules equals their total number N(t)

multiplied by dt/τ, the probability of an individual molecule colliding in dt. For dt so small that no molecule has two or more collisions, the number of collisions in dt equals the decrease in N(t) during time dt. Said another way: during dt, the number of molecules that have never collided decreases by the number that do collide.

Number collisions in dt = N(t) dt / τ
Change in un-collided in dt = N(t) – N(t+dt)
N(t) – N(t+dt) = N(t) dt / τ
–dt dN(t)/dt = N(t) dt / τ
dN(t) / N(t) = – dt / τ

Integrating over time, we get:

ln{ N(t) } = – t / τ + C
N(t) = exp{C} exp{–t / τ} = N_0 exp{–t / τ}

Hence, N(t), the number of un-collided gas molecules at time t, declines exponentially — yet another exponential relationship in nature. Here, C is the arbitrary integration constant that arises from every indefinite integral. We choose:

exp{C} = N_0 = N(0)

N(0) is the number of un-collided gas molecules at time 0. The probability of any particular molecule M not being hit by time t is N(t) / N_0, or:
Prob(M is not hit by time t) = exp{–t / τ} The exponential relationship ensures that the starting time t=0 is irrelevant. Whenever one starts, an exponential decay has exactly the same properties. Indeed, with an exponential, the average time to the next collision is the same regardless of how long it has been since the last collision. Molecules cannot record their histories; they cannot know when they are due for another hit. Exponentials have these very simple and powerful properties, and once we learn that the probability of some event is proportional to time, we know there is an exponential relationship. Another way to describe the collision rate is in terms of the average distance between collisions, which is defined as the mean free path. If the average time between collisions is τ, and the average molecular velocity is v, the mean free path λ is: λ=vτ The probability that molecule M has a collision while traveling an infinitesimal distance dx is: Prob(M is hit within dx) = dx / λ Just as we derived above for the time dependence, the probability that molecule M suffers no hits

while traveling a distance x is:

Prob(M is not hit within x) = exp{–x / λ}

The mean free path λ depends on the size and density of gas molecules. As we discussed in Chapter 15, in classical non-quantum physics, we imagine atoms and molecules as being tiny rigid spheres. Two spheres with radii r_1 and r_2 will collide if their centers pass within r_1 + r_2 of one another. We describe this by saying a collision occurs if the center of a moving molecule hits anywhere within an area called the collision cross section σ, defined by:

σ = π (r_1 + r_2)²
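Here is a minimal Monte Carlo sketch of my own (the mean free path value is an arbitrary illustration) that takes the collision probability per small step, dx/λ, literally and checks that the surviving fraction after a distance x approaches exp{–x/λ}.

# Minimal sketch: with collision probability dx/lambda per small step dx,
# the fraction surviving a distance x without a hit approaches exp(-x/lambda).
import math, random
random.seed(0)
lam = 1.0e-7                 # assumed mean free path, 100 nm
dx = lam / 100.0             # step small compared to lambda
x_target = 3.0 * lam
trials, survived = 10000, 0
for _ in range(trials):
    x, hit = 0.0, False
    while x < x_target:
        if random.random() < dx / lam:
            hit = True
            break
        x += dx
    if not hit:
        survived += 1
print(survived / trials, math.exp(-x_target / lam))   # both roughly 0.05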
From the cross section σ, we can calculate the mean free path λ. Consider a molecule moving in the +x-direction through a gas whose molecular number density (number/volume) equals n. Now imagine a thin slab of gas normal to the x-axis, as shown in Figure 19-1. Let the slab have thickness dx and surface area A parallel to the yz-plane.

Figure 19-1 Collision Cross Section

This slab has volume A×dx and contains nA×dx molecules. Since each molecule has a collision cross section σ, we can think of the molecules covering a total area a given by:

Area molecules cover: a = σ n A dx

The probability that the incident molecule hits a molecule in the slab is the ratio of the total area covered by the molecules divided by the total slab area a/A, assuming a << A.

For a heat engine operating between two thermal reservoirs at temperatures T_1 > T_2, let W be the work produced, and let Q_1 and Q_2 be the amounts of heat energy extracted from reservoir 1 and transferred to reservoir 2, respectively. If the internal energy of the heat engine does not change, energy conservation requires:

W = Q_1 – Q_2
In analyzing lifting machines, we defined a "reversible" machine as a device that: (1) operates between two or more states; (2) operates equally well in forward or reverse, going from state B to state A as easily as going from A to B; and (3) operates without energy losses to friction or any other non-conservative force. Clearly, reversible machines are an idealization. For a heat engine to be reversible, there is yet one more utopian requirement: we can never allow two bodies with different temperatures to come in contact. As the second law states, heat flows from hot to cold, but never from cold to hot. If bodies with different temperatures come in contact, heat will flow irreversibly. In the case of reversible lifting machines, we imagine applying infinitesimal forces to move them either forward or backwards. Here also, we will imagine transferring heat with infinitesimal temperature differences that we can safely neglect in our analyses — another idealization. Let's discover what we can learn with all these simplifying assumptions. Figure 21-1 shows a sequence of four steps in the operation of a reversible heat engine. The two thermal reservoirs are labeled T_1 and T_2, and a gas container with a piston shuttles between them.
Figure 21-1 Step 1: Isothermal Expansion

In Step 1, the gas is expanded isothermally at temperature T_1, with heat Q_1 transferring from the reservoir to the gas.

We know that gases cool when expanded, so the key to ensuring reversibility is expanding the gas slowly enough that its temperature never deviates appreciably from T_1. If it did, a hot body would be in contact with a colder body, making the operation irreversible.
Figure 21-1 Step 2: Adiabatic Expansion

In Step 2, with the container between thermal reservoirs, the gas expands adiabatically and cools to temperature T_2. No heat flows since the container is not in contact with the reservoirs.
Figure 21-1 Step 3: Isothermal Compression

When the gas cools to T_2, its container is placed on thermal reservoir 2, where, in Step 3, it is compressed isothermally, with heat Q_2 transferring from the gas to the reservoir.
Figure 21-1 Step 4: Adiabatic Compression

In Step 4, with the container between reservoirs, the gas is compressed adiabatically until it reaches temperature T_1. It can then be placed back on reservoir 1 without temperature change or heat transfer, returning the reversible system back to its exact starting condition.
Figure 21-2 is a graph of gas pressure versus volume through the sequence of four steps, with the shaded area being the work done. We are not requiring the gas to be an ideal gas, but if it were, the governing rules would be:

P V is constant for isothermal steps 1 and 3
P V^γ is constant for adiabatic steps 2 and 4
Figure 21-2 Carnot Cycle

Since the specific heat ratio γ is always > 1, pressure falls more rapidly with increasing volume during adiabatic expansion (Step 2: B to C) than isothermal expansion (Step 1: A to B). Conversely, pressure rises more rapidly during adiabatic compression (Step 4: D to A) than isothermal compression (Step 3: C to D). To maintain a constant temperature, the gas must absorb heat Q_1 during Step 1, and must release heat Q_2 during Step 3.

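For readers who want numbers, here is a minimal sketch of my own for one mole of a monatomic ideal gas, with temperatures and volumes I chose for illustration. It finds the four corner volumes from the isothermal and adiabatic rules, then numerically integrates ∫P dV around the cycle; the resulting W/Q_1 agrees with the (T_1–T_2)/T_1 efficiency quoted in the review below.

# Minimal sketch: Carnot cycle for 1 mole of monatomic ideal gas (gamma = 5/3).
# Corners: A -> B isothermal at T1, B -> C adiabatic, C -> D isothermal at T2, D -> A adiabatic.
R, gamma = 8.314, 5.0 / 3.0
T1, T2 = 600.0, 300.0                      # illustrative reservoir temperatures, K
V_A, V_B = 1.0e-3, 2.0e-3                  # chosen volumes on the hot isotherm, m^3
# Adiabats obey T * V^(gamma-1) = constant (from P V^gamma = const and P V = R T):
V_C = V_B * (T1 / T2) ** (1.0 / (gamma - 1.0))
V_D = V_A * (T1 / T2) ** (1.0 / (gamma - 1.0))

def P_isotherm(V, T):
    return R * T / V

def integrate_PdV(V_start, V_end, P_of_V, steps=20000):
    dV = (V_end - V_start) / steps
    return sum(P_of_V(V_start + (i + 0.5) * dV) * dV for i in range(steps))

W = 0.0
W += integrate_PdV(V_A, V_B, lambda V: P_isotherm(V, T1))                        # Step 1
W += integrate_PdV(V_B, V_C, lambda V: P_isotherm(V_B, T1) * (V_B / V) ** gamma) # Step 2
W += integrate_PdV(V_C, V_D, lambda V: P_isotherm(V, T2))                        # Step 3
W += integrate_PdV(V_D, V_A, lambda V: P_isotherm(V_D, T2) * (V_D / V) ** gamma) # Step 4
Q1 = integrate_PdV(V_A, V_B, lambda V: P_isotherm(V, T1))   # heat absorbed on Step 1
print(W, Q1, W / Q1, (T1 - T2) / T1)       # W/Q1 matches (T1 - T2)/T1, here 0.5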
Our 4-Step program brings the system back to its starting conditions. The cycle is reversible if the internal energy of the engine is unchanged. (One possible internal energy change is the piston becoming hotter.) If reversible, we could just as easily run the engine backwards, from A to D to C to B to A. It is easy to calculate the work done during one complete cycle. The work done by the gas is:

W = ∫ P dV

This is the area under the curve in each Step. During expansion Steps 1 and 2, dV>0 and W is positive, meaning the gas does work on the piston. During compression Steps 3 and 4, dV<0 and W is negative, meaning the piston does work on the gas.

Now consider two reversible heat engines operating among three thermal reservoirs, with temperatures T_1 > T_2 > T_3.
Figure 21-3 Heat Engines with T1>T2>T3

Engine 1 extracts heat Q_1 from reservoir 1 and delivers heat Q_3 to reservoir 3, while producing work:

W_13 = Q_1 – Q_3
Engine 2, running backwards, extracts heat Q_3 from reservoir 3 and delivers heat Q_2 to reservoir 2, while producing negative work:

W_32 = Q_3 – Q_2 < 0

Reversibility is an idealization that assists our understanding. But, no macroscopic, real-world process is truly reversible; all real systems have some friction and temperature differences. Every macroscopic natural process results in some increase in entropy, some conversion of work energy into heat energy that can never again be reconverted into useful work energy.

Chapter 21 Review: Key Ideas 1. The First law of thermodynamics is the conservation of energy: the heat energy Q added to a system plus the work energy W done on the system equals the increase in the system’s internal energy Ŧ : Q + W = ΔŦ

2. The Second law of thermodynamics states: (a) no process can have the sole net effect of converting heat into work; (b) a reversible heat engine can take heat Q_1 at temperature T_1, deliver heat Q_2 at T_2, and produce work W according to: W = Q_1 – Q_2 = Q_1 (T_1–T_2)/T_1;
(c) no heat engine can outperform a reversible heat engine; (d) total entropy is unchanged in a reversible process; (e) total entropy increases in any irreversible process; (f) all macroscopic real-world processes are irreversible. 3. The Third law of thermodynamics is: at absolute zero temperature, entropy reaches a minimum, a constant value that depends only on the atomic-level properties of the substance. When heat Q is added to a system at temperature T, its entropy increases by Q/T. 4. Absolute thermodynamic temperature T is defined by: T=Q/S 5. The Zeroth law of thermodynamics is: if two systems A and B are each in equilibrium with system C, then A and B are in equilibrium with one another.

Chapter 22 Applications of Thermodynamics In V1p45-1, Feynman says thermodynamics can be confusing because: “there are so many different ways of describing the same thing.” One can say pressure is a function of temperature and volume. But, one can equally well say volume is a function of temperature and pressure. Adding internal energy and entropy multiplies the number of confusing combinations. Feynman chooses temperature and volume as the independent variables, using these as the inputs to equations for the other quantities, the dependent variables. Since this is somewhat unconventional, a restatement of major equations in the format familiar to chemists is at the end of this chapter.

Partial Derivatives We need to introduce another mathematical concept from calculus: the partial derivative. No worries, it is actually simpler than the common derivative that we have been using since Chapter 6. Partial derivatives are important when there are two or more independent variables, as there are in thermodynamics. Consider the common (or normal) derivative of a function f with respect to time. f(x,y) = x² + xy

df/dt = 2x dx/dt + y dx/dt + x dy/dt We need all these terms because, in general, both x and y vary. But sometimes we want to consider changes in f when y is constant and only x changes. Then we need the partial derivative with respect to x: ∂f/∂x = 2x + y When x is constant and y changes, take the partial derivative with respect to y: ∂f/∂y = x When there are more than two independent variables, ∂f/∂x means take the derivative of f with respect to x while holding all other variables constant — ∂f/∂x really is simpler than df/dx. What I just presented is the most common notation for partial derivatives used by mathematicians and physicists. Feynman adds a bit extra to help beginners remember the variable that is being held

constant; he writes: (∂f/∂x)_y for the partial derivative of f with respect to x, with y held constant. Since that little redundancy may be helpful, I will follow his lead for the rest of our study of thermodynamics. By then, you will probably find this redundancy annoying.

Let's write out in full detail the change in f as both x and y change by the infinitesimal amounts dx and dy: df = f(x+dx,y+dy) – f(x,y); df = f(x+dx,y+dy) – f(x,y+dy) + f(x,y+dy) – f(x,y); df = dx (∂f/∂x)_y + dy (∂f/∂y)_x
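If you like to check algebra by machine, the following short sketch uses Python's sympy library (my choice of tool, not something Feynman assumes) to reproduce these partial derivatives:

import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
f = x**2 + x*y

df_dx = sp.diff(f, x)   # (∂f/∂x) with y held constant -> 2*x + y
df_dy = sp.diff(f, y)   # (∂f/∂y) with x held constant -> x
print(df_dx, df_dy)

# The full differential df = dx (∂f/∂x)_y + dy (∂f/∂y)_x
print(sp.expand(df_dx*dx + df_dy*dy))   # dx*(2*x + y) + dy*x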

Basic Equations of Thermodynamics Let's apply this to the thermodynamic internal energy Ŧ that is a function of volume V and temperature T: dŦ = dV (∂Ŧ/∂V)_T + dT (∂Ŧ/∂T)_V. The last partial derivative above is called the specific heat at constant volume, and denoted C_V: C_V = (∂Ŧ/∂T)_V

In the last chapter, we extensively discussed how internal energy Ŧ changes with changes in heat Q and work W. At constant pressure, this is expressed: dŦ = dQ + dW = dQ – P dV. Feynman cautions that while one might compare these three equations and identify –P with (∂Ŧ/∂V)_T, this would be incorrect. The reason is that the last equation assumes constant pressure while the first does not. One must be very careful to note what is allowed to change and what is not.

In the last chapter, we discussed the 4-step Carnot cycle of a reversible heat engine. In a graph of pressure vs. volume, we found that the area enclosed by the four curves equals the work done by the gas during one complete cycle. Recall that curves AB and CD (Steps 1 and 3) are isothermal, governed by: PV = constant. Curves BC and DA (Steps 2 and 4) are adiabatic, governed by: PV^γ = constant.

Now consider the same 4-Step cycle, but with infinitesimal changes in pressure, volume, heat, and temperature. All curves become straight lines in the limit that their lengths become infinitesimally short, as illustrated in Figure 22-1.

Figure 22-1 Infinitesimal Carnot Cycle

In the infinitesimal limit, the four curves become two pairs of nearly parallel lines forming a parallelogram. The area of the parallelogram equals the area of the dotted rectangle, which equals dP×dV. This area is the work done by the gas. Let's compare the slopes of the parallelogram's sides by calculating their values of dP/dV. The slopes of the BC and DA lines are both given by the adiabatic rule: PV^γ = constant; P γ V^(γ–1) dV + V^γ dP = 0; P γ dV + V dP = 0; dP/dV = – γ P / V

The slopes of the AB and CD lines are both given by the isothermal rule, which is the same rule but with γ=1: dP/dV = – P / V. From the last chapter, we know that the work W done by the gas is: W = Q1 (T1–T2) / T1

In the infinitesimal limit, this becomes: W = dQ (dT / T). Equating this with the area of the above parallelogram, we get: dP dV = dQ dT / T. Note that dP is the height of the dotted rectangle; it is the pressure change at constant volume during the adiabatic step in which the temperature changes by dT: dP = dT (∂P/∂T)_V. Therefore, we can replace dP and solve for dQ: dV dT (∂P/∂T)_V = dQ dT / T; dV T (∂P/∂T)_V = dQ

We can now put this expression for dQ into the equation for dŦ: (*) dŦ = dQ – P dV; dŦ = dV T (∂P/∂T)_V – P dV; (*) (∂Ŧ/∂V)_T = T (∂P/∂T)_V – P

Feynman says the two (*)’d equations are: “the basic equations of thermodynamics from which all others can be deduced.” Don’t worry. I will not make you deduce them; all the equations are listed in the review sections.
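Here is a small sympy check of the second starred equation (my addition, not Feynman's): for an ideal gas it gives zero, and for the standard van der Waals model of a non-ideal gas, which is not discussed in the text, it correctly predicts an internal energy that depends on volume.

import sympy as sp

T, V, N, k, a, b = sp.symbols('T V N k a b', positive=True)

# Ideal gas: P = N k T / V.  Then T (∂P/∂T)_V - P should vanish,
# meaning (∂Ŧ/∂V)_T = 0: internal energy depends on T only.
P_ideal = N*k*T/V
print(sp.simplify(T*sp.diff(P_ideal, T) - P_ideal))    # prints 0

# Van der Waals gas: P = N k T/(V - N b) - a N^2/V^2 (a standard model).
# The same formula now gives (∂Ŧ/∂V)_T = a N^2 / V^2, which is not zero.
P_vdw = N*k*T/(V - N*b) - a*N**2/V**2
print(sp.simplify(T*sp.diff(P_vdw, T) - P_vdw))        # prints a*N**2/V**2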

Applications From our study of kinetic theory, we know that increasing the temperature of a gas increases its pressure, due to its molecules' increased rate and intensity of bombardment on the enclosure walls. And for the same underlying reasons, a gas cools when expanded without heat flow. We sought to find macroscopic equations that describe these phenomena without referencing the gas' atomic-level properties. The (*)'d equations provide that description. Let's see how these equations apply to something quite different from a gas: the rubber bands we discussed at the start of the last chapter. When a rubber band is stretched it gets warmer and attempts to contract. A heated rubber band will also forcefully contract. We can analyze it as follows: when heat dQ is added, the rubber band's internal energy increases and work is done. The work W done by a rubber band of length L is: W = – F dL. This corresponds to the gas equation: W = P dV. If we replace P with –F, and replace V with L, we can apply the gas equations to the rubber band problem. This yields: dŦ = dQ + F dL; dQ = –T (∂F/∂T)_L dL

Here, (∂F/∂T)_L is the additional force required to maintain a constant rubber band length if its temperature increases slightly. Knowing that quantity, the last equation relates the energy change dQ associated with a change in length at any specific temperature.

We can also describe a battery with voltage V that delivers charge dq. The work done by a reversible battery, such as an ideal storage battery, is: W = V dq. Substituting V for P and q for V (the gas volume), we get: dŦ/dq = T (∂V/∂T)_q – V

This equation tells us that the internal energy of the battery decreases due to the work it does on the external electrical circuit but increases due to its internal heating. The internal heating is determined by the rate at which the battery’s voltage increases with temperature for a constant charge. When current flows through a battery, chemical reactions occur. Feynman suggests a “nifty” way of measuring the reaction energy: measure the rate of voltage change with temperature when no current flows.
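To put a number on that, here is a tiny illustration with invented round values; they describe no particular real battery and are chosen only to show how the terms combine.

T = 300.0        # kelvin
V = 1.50         # battery voltage, volts
dV_dT = -4.0e-4  # assumed voltage change with temperature at constant charge, V/K

dU_dq = T * dV_dT - V   # dŦ/dq = T (∂V/∂T)_q - V, in joules per coulomb
print(dU_dq)
# About -1.62 J/C: per coulomb delivered, internal energy falls by 1.62 J,
# of which 1.50 J goes to the external circuit and 0.12 J appears as heat.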

Equivalence of Temperature Definitions Feynman next shows that our two definitions of temperature are equivalent. Consider an ideal gas for which the internal energy Ŧ depends on temperature T, not on volume V. This means (∂Ŧ/∂V)_T = 0, since Ŧ does not change as V changes at constant T. Our basic equation simplifies to: T (∂P/∂T)_V – P = (∂Ŧ/∂V)_T; T (∂P/∂T)_V – P = 0

Now, let's hold V constant and write )_V at the end of each equation to remind us of that. At constant V, the partial derivatives can be replaced by common derivatives.

T dP/dT = P )_V; dP/P = dT/T )_V; ln(P) = ln(T) + C )_V; (Eqn 1): P = T C )_V

Above, C is an arbitrary integration constant that defines the meaning of "one degree." Since N, k, and V are all constants in this case, the last equation is consistent with our familiar ideal gas law: (Eqn 2): P = T (Nk / V). Feynman notes that the T in (Eqn 1) comes from our generalized thermodynamic equations based on the absolute thermodynamic temperature scale defined by Q=ST, whereas the T in (Eqn 2) is based on the temperature definition of kinetic theory: <mv²/2> = 3kT/2.

We have just shown that these two definitions are equivalent for an ideal gas. Since we can use the temperature of an ideal gas to measure the temperature of anything with which it is in equilibrium, the equivalence of the two temperature definitions extends to all substances.

Clausius-Clapeyron Equation Consider a volatile liquid/vapor mixture held at constant temperature T inside a container with a piston that adjusts its volume. Let's select a substance other than water, since water's characteristics are wonderfully atypical, as discussed in Chapter 1. If the mixture is sufficiently compressed, all of the vapor will condense into liquid. If the mixture is sufficiently expanded, all the liquid will evaporate into vapor. In between these extremes, varying amounts of vapor and liquid will coexist. Figure 22-2 graphs the substance's pressure P vs. volume V for two isothermal conditions: the solid curve is for temperature T; and the dashed curve is for the infinitesimally lower temperature T–dT.

Figure 22-2 Isothermal Curves for Liquid Vapor Mixture

While you examine the figure, note some key elements. The vertical dotted lines indicate: V_L, the greatest volume at which the system is entirely liquid; and V_G, the least volume at which the system is entirely gaseous. The gray parallelogram represents a Carnot cycle that we will define shortly.

Let's begin on the left, where volume V is very small. The very steep slopes of both curves reflect the limited compressibility of liquids. As we slowly increase V at constant temperature T, the system moves along the upper, solid curve. At V=V_L, molecules in the liquid begin evaporating into the gas phase. For a large range of V, the liquid and vapor phases are in equilibrium at a constant pressure, the vapor pressure of this substance at temperature T. At this P and T, molecules evaporate into the gas and condense back into the liquid at the same rate, regardless of gas volume.

When V has increased to V_G, all the liquid has evaporated. The system is then entirely in the gas phase, governed by the usual isothermal rule: PV=constant. If we repeat that process at the infinitesimally lower temperature T–dT, the system moves along the lower, dashed curve, and goes through the same sequence of phase changes. In each case, the pressure at each phase transition will be slightly lower at the lower temperature. This is because atoms have less kinetic energy at lower temperatures. We now define a reversible Carnot cycle operating between T and T–dT and between the volume limits V_L on the left and V_G on the right, where the system transitions to all liquid and all gas, respectively. This cycle closely tracks the perimeter of the shaded parallelogram in the figure; the cycle deviates slightly from the parallelogram at each end, but that error is small and becomes negligible in the limit that dT goes to zero.

From our earlier studies, we know that the work W done by the liquid/vapor mixture is related to the heat Q transferred from T to T–dT, according to: W = Q (T – [T–dT]) / T = Q dT / T. We also know that the work W equals the area of the parallelogram, which is: W = width × height; W = (V_G – V_L) × (∂P_VAP/∂T) dT

Here, the height is the change in vapor pressure due to the change in temperature dT. Combining these two equations for W yields: Q dT / T = (V_G – V_L) (∂P_VAP/∂T) dT; Q / [T (V_G – V_L)] = ∂P_VAP/∂T

This equation, deduced by Carnot but named after Rudolf Clausius and Benoit Clapeyron, relates Q, the heat required to evaporate a liquid, to the temperature dependence of its vapor pressure. Usually, when a liquid evaporates its volume becomes vastly larger. This means V_G >> V_L and (V_G – V_L) is nearly equal to NkT/P, which reduces our equation to:

Q / [T (NkT/P)] = ∂P/∂T; dP / P = (Q / Nk) dT / T²; ln(P) = (Q / Nk) (–1/T) + C; P = exp{C} exp{–Q / NkT}

Here, C is an integration constant. This also assumes Q is not a function of temperature, which Feynman says is a poor assumption. We can compare this equation to the prediction of kinetic theory, while recalling that kinetic theory gives only approximate results, due to our incomplete atomic-level knowledge of vaporization and condensation. Kinetic theory says the population of the gas phase is proportional to:

exp{ –(U_G – U_L) / NkT}

Since Q is the heat energy required to convert the entire system from liquid to gas phase, it nicely matches (U_G – U_L), the total internal energy of the system in the gas phase U_G minus that in the liquid phase U_L.

In V1p45-7, Feynman emphasizes the advantages and disadvantages of thermodynamics versus kinetic theory, as highlighted by these equations. Firstly, he says the thermodynamic equation is exact while the kinetic equation is approximate. Secondly, the thermodynamic equation does not depend on the precise mechanism of evaporation or condensation. Conversely, in those situations in which we can precisely understand the atomic-level phenomena, kinetic theory yields a precise result with all the correct constants, whereas the thermodynamic equation is a differential equation resulting in arbitrary integration constants. In V1p45-7, Feynman says: “When [detailed] knowledge is weak, and the situation is complicated, thermodynamic relations are really the most powerful. When the situation is very simple and a theoretical analysis can be made, then it is better to try to get more information from [kinetic theory].”
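To see how much practical leverage the equation P = exp{C} exp{–Q/NkT} provides, here is a rough numeric sketch (my numbers, not Feynman's): it estimates water's vapor pressure at 90 °C from its boiling point at 1 atmosphere, using an approximate handbook value for the latent heat and treating Q as constant, the assumption Feynman warns is imperfect.

import math

L = 40.7e3      # latent heat of vaporization of water, J per mole (approximate)
R = 8.314       # gas constant, J per mole-kelvin (this is Nk for one mole)
T0, P0 = 373.15, 101325.0   # normal boiling point: 100 °C at 1 atm
T1 = 363.15                 # 90 °C

P1 = P0 * math.exp(-(L / R) * (1.0/T1 - 1.0/T0))
print(P1)   # about 7e4 Pa, close to the measured value near 70 kPa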

Black Body Radiation In Chapter 15, we derived the equation for the pressure of light: PV = E/3, where E is the total energy of all photons. Now, let's combine that with what we have learned in this chapter, and find what more we can say about black body radiation inside a hot box. Use the equation for (∂Ŧ/∂V)_T with total kinetic energy Ŧ = E = 3PV.

(∂Ŧ/∂V)_T = T (∂P/∂T)_V – P; 3P = T (∂P/∂T)_V – P; 4P = T (∂P/∂T)_V

Since our box has a constant volume, we can replace the partial derivatives with normal derivatives: 4P = T (dP/dT); dP/P = 4 dT/T; ln(P) = 4 ln(T) + constant; P = C T⁴ for some constant C
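The same integration can be handed to a computer; the sympy sketch below (my choice of tool) solves 4P = T dP/dT and returns the T⁴ dependence, with the undetermined constant that thermodynamics cannot supply.

import sympy as sp

T = sp.symbols('T', positive=True)
P = sp.Function('P')

# Solve 4 P = T dP/dT for P(T).
solution = sp.dsolve(sp.Eq(4*P(T), T*sp.Derivative(P(T), T)), P(T))
print(solution)   # Eq(P(T), C1*T**4)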

This is consistent with the result we obtained in Chapter 20, and illustrates again the pros and cons of thermodynamics versus kinetic theory. Thermodynamics obtains the above result with much less effort but is unable to calculate the integration constant C. Kinetic theory obtains the exact equation, including the constant, but at much greater effort.

Equations in Chemical Notation Chemists and others prefer to use pressure P and temperature T as the two independent thermodynamic variables. For solids and liquids, volume is harder to control than pressure. We can easily change independent variables by noting that: d(PV) = P dV + V dP. Now combine this identity with the internal energy equation. d(Ŧ) = dQ – P dV; d(Ŧ+PV) = (dQ – P dV) + P dV + V dP; d(Ŧ+PV) = dQ + V dP; d(H) = dQ + V dP. Here, enthalpy H = Ŧ + PV. To convert from Feynman's equations, in which T and V are the independent variables, to chemists' equations, in which T and P are the independent variables, make these substitutions: Ŧ goes to H; P goes to –V; V goes to P. In chemists' formulation, the two basic equations of thermodynamics are: dH = dQ + V dP; (∂H/∂P)_T = –T (∂V/∂T)_P + V
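As a quick consistency check (my addition), the sketch below verifies the chemists' equation for a monatomic ideal gas, for which Ŧ = (3/2)NkT, so the enthalpy H depends only on temperature and (∂H/∂P)_T must vanish.

import sympy as sp

T, P, N, k = sp.symbols('T P N k', positive=True)

V = N*k*T/P                          # ideal gas volume as a function of T and P
H = sp.Rational(3, 2)*N*k*T + P*V    # enthalpy H = Ŧ + P V for a monatomic ideal gas

lhs = sp.diff(H, P)                  # (∂H/∂P)_T
rhs = -T*sp.diff(V, T) + V           # -T (∂V/∂T)_P + V
print(sp.simplify(lhs), sp.simplify(rhs))   # both print 0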

Chapter 22 Review: Key Ideas 1. Partial derivatives are useful when dealing with multiple independent variables; ∂f/∂x is the change in f when only x changes. For example: f = x² + xy; ∂f/∂x = 2x + y; ∂f/∂y = x

Feynman adds an extra ( )_y to denote that y does not change in (∂f/∂x)_y.

Partial derivatives are related to normal derivatives by: df = dx (∂f/∂x)_y + dy (∂f/∂y)_x

2. The basic equations of thermodynamics, from which Feynman says all else is easily derived, are: dŦ = dQ – P dV; (∂Ŧ/∂V)_T = T (∂P/∂T)_V – P

3. The above equations can be adapted to many applications. For a rubber band of length L: work = – F dL. Substituting –F for P, and L for V, yields the basic equations for rubber bands: dŦ = dQ + F dL; dQ = –T (∂F/∂T)_L dL

For a battery with voltage V delivering charge dq, substitute V for P and q for V: dŦ/dq = T (∂V/∂T)_q – V

4. The Clausius-Clapeyron equation relates the heat Q required to evaporate a liquid to the volume of its gas and liquid phases, V_G and V_L, and to the temperature dependence of its vapor pressure: Q / [T (V_G – V_L)] = ∂P_VAP/∂T

5. Feynman emphasizes the advantages and disadvantages of thermodynamics versus kinetic theory: when detailed knowledge is weak, thermodynamics is more powerful, and when detailed knowledge is complete, kinetic theory is best.

Chapter 23 Irreversibility & Entropy In this chapter we delve deeper into the complexities of irreversibility and entropy. We begin with the famous Ratchet & Pawl thought experiment, conceived by Feynman, which demonstrates the futility of attempting to defeat the second law of thermodynamics. From our prior analyses of Carnot cycles and reversibility, we discovered two firm constraints imposed by the laws of thermodynamics. The first is that even the best possible heat engines are limited in their ability to extract work energy by moving heat from hot objects to cold ones. The second is that work energy cannot be extracted from heat in a complete cycle at a single constant temperature. In V1p46-1, Feynman explains how these constraints arise from fundamental principles.

Ratchet & Pawl Machine Figure 23-1 presents views from the left side, front, and right side of a ratchet and pawl machine. This device is designed to attempt to extract work energy from heat alone, contrary to the second law.

Figure 23-1 Ratchet & Pawl Machine

It contains two gas volumes at temperatures T1 and T2, with an axle running between them. In volume 1, gas molecules bombard vanes attached to the axle. In volume 2, a ratchet and pawl permit the axle to rotate in only the clockwise direction, which winds a rope, thereby lifting a weight. We define ø to be the axle rotation angle that corresponds to one gear tooth: for a ratchet with n teeth, ø=2π/n.

When a vane is hit with sufficient force in the clockwise sense, the axle turns and the ratchet lifts the pawl. After the axle rotates angle ø, a spring pulls the pawl back into the locked position. When a vane is hit in the counterclockwise sense, the pawl prevents the axle from turning.

The kinetic energies of hot gas molecules in volume 1 exert torques on the axle, with equal frequency in both clockwise and counterclockwise directions. But due to the ratchet and pawl, the axle turns only clockwise. This device seems capable of converting heat energy into work energy: hot gas in volume 1 turns the axle and lifts a weight. The gas temperature in volume 2 seems irrelevant, and we can choose T2=T1. Everything is then at a single temperature, and the device appears to violate the second law of thermodynamics.

But closer examination reveals hidden complexities in this seemingly simple device, particularly regarding the pawl. To prevent counterclockwise rotation, the pawl must be forcefully returned to the locking position. Our design uses a spring for that purpose, but gravity or any other return mechanism would result in the same ultimate conclusion. If the return process were completely elastic, the pawl would continually bounce up and down, defeating its purpose. Hence the pawl's return must be inelastic, which creates a critical problem. When the axle rotates clockwise, work energy is required to lift the pawl and stretch its spring. Since the pawl's return is inelastic, that work energy is lost each time the pawl snaps back. This lost work energy is converted into heat energy, warming the pawl, the ratchet, and the gas in volume 2. Eventually the pawl's temperature rises so much that its random thermal motion occasionally lifts it out of the locking position, allowing the axle to rotate backwards and lower the weight. Quantitatively, let ε be the energy required to lift the pawl out of the locked position and stretch its spring. Boltzmann's law says the probability of gas molecules in volume 1 transferring energy ε to a vane and rotating the axle equals exp{–ε/kT1}, while the probability that the pawl has enough thermal energy ε to spontaneously unlock itself equals exp{–ε/kT2}.

Hence, when T2=T1, the ratchet turns clockwise as often as it turns counterclockwise, on average, and no net progress is made lifting the weight. Despite the clever design, our machine does no useful work.

The second law holds.

Ratchet & Pawl Engine In V1p46-2, Feynman takes us up a notch. Let's start the ratchet and pawl machine with T2 > T1. Surprisingly, this makes the machine run backwards. The hot pawl oscillates vigorously. If it lands at the edge of a gear tooth, nothing happens. But if it lands on the long smooth side of a gear tooth, it pushes the ratchet backwards, transferring heat energy from the hotter volume 2 to the colder volume 1. The machine, so cleverly conceived to run forward and lift the weight, will instead vigorously do exactly the opposite.

Next, let's operate our machine at non-equilibrium conditions, where the probabilities of clockwise and counterclockwise rotation are not equal, and where thermodynamics does not apply. Here, Feynman assumes T1=T2=T. While he does not say so, this requires energy from an external source, making the machine no longer a closed system. The clockwise angular velocity ω=dø/dt is proportional to the difference in rotation probabilities.

ω ~ exp{–(ε+τø) / kT} – exp{–ε / kT} ω ~ exp{–ε / kT} [exp{–τø / kT} – 1] Figure 23-3 plots ω as a function of torque τ. Here, a clockwise rotation lifts the weight, which means a positive torque from the weight pulling down seeks to drive the machine backwards, counterclockwise. A negative torque, as would result from winding the weight's rope around the axle in the opposite direction, seeks to drive the machine forward, clockwise.

Figure 23-3 Angular Velocity of Ratchet & Pawl vs. Torque

We see that positive torques result in moderate backwards (counterclockwise) rotation, contrary to the machine's intended objective. This sharply contrasts with the exponential effect of large negative torques, driving the machine forward (clockwise) at very high speed.
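A few lines of Python (an illustrative sketch of mine, with arbitrary parameter values) make the asymmetry of Figure 23-3 easy to reproduce:

import math

kT = 1.0     # temperature in energy units (arbitrary)
eps = 2.0    # energy needed to lift the pawl, in the same units
phi = 1.0    # rotation angle per tooth

for tau in (-3.0, -1.0, 0.0, 1.0, 3.0):
    omega = math.exp(-eps/kT) * (math.exp(-tau*phi/kT) - 1.0)
    print(f"torque {tau:+.1f}  ->  omega proportional to {omega:+.3f}")
# Positive (weight-lifting) torques give only a modest backwards rotation that
# saturates, while negative torques drive the machine forward exponentially fast.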

A rectifier is an electrical analogy of the above machine, with an electric field taking the role of the weight and electric current taking the role of angular velocity. Like the ratchet and pawl machine, an electrical rectifier responds very asymmetrically to its driver, a weight for the former and a voltage for the latter.

Maxwell's Demon V1p46-5 In 1867, Maxwell suggested a hypothetical demon with the same objective as Feynman's ratchet and pawl. An initially homogeneous gas is divided by a wall into a left half and a right half. Maxwell's demon guards a trap door in the wall separating the two gas volumes. The demon opens his door to high-energy molecules moving left and low-energy molecules moving right. After some time, the left side has a higher average energy and temperature than the right side. As this continues, heat flows from cold to hot, in violation of the second law. A less magical version of Maxwell's suggestion is a hinged door, closed by a spring, which opens only when struck by a high-energy particle moving left. The resolution of this apparent paradox is that the energy needed to open the door diminishes the energy of the leftward-moving particle that struck it. While that particle does move into the left gas volume, its post-collision energy is only average. Additionally, such collisions warm the door, making it open erratically, and mixing the gases.

Reversibility of Nature's Laws We found above that, without a driving torque and without temperature differences, our ratchet and pawl machine will not rotate preferentially in either direction. In V1p46-4, Feynman suggests there is a deeper principle of physics at work: time reversal symmetry. Time reversal symmetry basically means that what can happen, can also un-happen. This basic principle of nature applies to almost all phenomena. One exception: I know of no way to un-fail a physics exam. Let's pursue other alternatives. Recall Newton's famous equation: F=ma. Since acceleration is the second time derivative of position, it is unchanged if we reverse the polarity of time: replacing t by –t. a = d²x/dt² = d²x/d(–t)²

If the right side of F=ma is unchanged by reversing time, so too must be the left side. And indeed, in mechanics, the fundamental forces are all functions of objects' positions: the variable t is absent. Even in the most complex situations, an object's position can be written as a power series with time as the independent variable. (Taylor's theorem says any smooth function can be so represented.) Picking the x-axis as an example, the Taylor series for x(t) is an infinite sum over n:

x(t) = Σ_n tⁿ dⁿx(0)/dtⁿ / n!

Here dⁿx(0)/dtⁿ is the nth-order time derivative of x evaluated at t=0, and n! is n factorial = 1×2×… ×n. The n=0 term is simply x(0), since by definition: t⁰ = 1, 0! = 1, and the zeroth derivative of x is x itself. This sum defines how x changes over time, going perhaps from point A to point B.

We now define X(–t) by replacing t with –t in this equation: X(–t) = Σ_n (–t)ⁿ dⁿx(0)/(–dt)ⁿ / n!

The minus signs all cancel, yielding: X(–t) = x(t), for all t and any function x(t) The function X(–t) defines how x changes in reversed time, going from B to A. But since X(–t) and x(t) are identical, any situation that has x(t) as a solution must also have X(–t) as another solution. Clearly, what we just proved for x(t) for one object applies equally to all coordinate values of all objects. The above discussion focused on mechanics; electromagnetism is somewhat more complex, but the same conclusion results. The Lorentz force contains the term v×B; reversing time reverses velocity since v=dx/dt, but it also reverses B because magnetic fields arise from moving charges and their motion reverses when time reverses. The net result is the force itself is unchanged. In V1p46-7, Feynman explains at length why we do not need to worry here about retarded and advanced electromagnetic fields. All this is far removed from thermodynamics, and is best left to its thorough discussion in Feynman Simplified 2B. After Feynman gave his lectures, James Cronin and Val Fitch discovered a small effect in the weak interaction that does violate time reversal symmetry. In fact, that was the subject of my Ph.D. thesis. (To learn more about that, read Higgs & Bosons & Fermions....) While this is profoundly important in cosmology, it is far removed from our current discussions. Thus, if the basic laws of nature allow an object to evolve from A to B, they also allow that object to evolve from B to A. These laws do not distinguish between time running forward and time running backward. This is truly remarkable and completely contrary to everything we see in everyday life.
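The same point can be demonstrated numerically. The sketch below (my construction, not Feynman's) integrates a particle in a purely position-dependent force field, then reverses its velocity and integrates again; it retraces its path back to the starting point, apart from rounding error.

def accel(x):
    return -x**3              # any force that depends only on position

def evolve(x, v, dt, steps):
    # velocity-Verlet integration, itself a time-reversible algorithm
    for _ in range(steps):
        v_half = v + 0.5*dt*accel(x)
        x = x + dt*v_half
        v = v_half + 0.5*dt*accel(x)
    return x, v

x0, v0 = 1.0, 0.3
x1, v1 = evolve(x0, v0, dt=1e-3, steps=20000)    # run forward in time
x2, v2 = evolve(x1, -v1, dt=1e-3, steps=20000)   # flip the velocity and run again
print(x2, -v2)   # recovers the starting values 1.0 and 0.3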

Irreversibility Are all natural phenomena reversible? Certainly not. As Feynman says in V1p46-5: “Just try to unscramble an egg! Run a [movie] backwards, and it takes only a few minutes for everybody to start laughing. The most natural characteristic of all phenomena is their obvious irreversibility.”

This is particularly puzzling to physicists because all natural laws are time-symmetric (with the very limited exception in weak interactions mentioned earlier). If one imagined filming atomic level interactions and playing that film backwards, the time-reversed interactions are entirely consistent with all known laws of physics — in fact, the probabilities of all time-reversed reactions are exactly the same as the corresponding normal-time reactions. At the microscopic level, everything is reversible. How then does irreversibility appear everywhere macroscopically? Irreversibility is one example of emergent phenomena, phenomena that do not exist at the subatomic level, but arise in large systems due to their increased scale. Size does matter. Life is another prime example. Entropy and irreversibility are intimately connected. As we discovered earlier, the second law says that entropy never decreases, that it is unchanged in reversible processes, and that it increases in all irreversible processes. The second law forbids heat from flowing from cold to hot. The ratchet and pawl machine demonstrates how basic mechanics prevents this, even in an idealized system. Let's now explore this limitation in greater depth at the atomic level. Recall from Chapter 21, that when a gas expands at constant temperature, heat dQ=PdV must be added to the gas and its entropy increase dS is: dS = ∫ dQ / T = ∫ P dV / T; dS = ∫ (N kT / V) dV/T = Nk ∫ dV / V; dS = Nk [ln(V_AFTER) – ln(V_BEFORE)]; dS = Nk ln(V_AFTER/V_BEFORE)

For example, if we double a gas' volume at constant temperature, entropy increases by: dS = Nk ln(2). Next, Feynman describes a box with a partition in its middle, with neon atoms on the left and argon atoms on the right, and with equal pressures and temperatures throughout. Both neon and argon are monatomic, ideal, noble gases. If we remove the partition, each gas has the opportunity to double its volume. Neon and argon atoms will intermingle, collide, and thoroughly mix, until eventually they achieve equilibrium with concentrations equalized throughout the box. As above, the entropy changes are: dS_Ne = + N_Ne k ln(2); dS_Ar = + N_Ar k ln(2); dS_Total = + N_Total k ln(2)

As Feynman stresses, the expansion of neon and argon atoms to fill the entire box is an excellent
example of an irreversible process that is entirely composed of a vast number of completely reversible processes — the motions and collisions of individual atoms. If we filmed this mixing on the atomic scale and played the film backwards, every time-reversed movement and every time-reversed collision would be completely consistent with all natural laws, but the sum of all the legitimate time-reversed events would violate the second law. Why? We know intuitively that a box of mixed atoms never spontaneously separates into neon atoms on the left and argon atoms on the right, but exactly why does that never happen? The essential point is that the mixed box is less orderly and more probable than the partitioned box of separated atoms. The second law prescribes increasing disorder. This is a matter of probabilities. At the moment that the partition is removed, the gases are bound to mix for virtually any combination of velocities and positions of all N atoms. That is in stark contrast to the reverse process. Once fully mixed, only an extremely small set of velocities and positions of all N atoms will result in all neon atoms moving to the left side of the box while all argon atoms move to the right side. “Un-mixing” is not absolutely forbidden by nature’s laws, but it is so extraordinarily improbable that it will probably never happen, even once in the entire age and expanse of our observable universe.

Order & Entropy Order and disorder are not defined in terms of human esthetics. Instead, these terms are defined using combinatorial mathematics. In a gas of neon and argon atoms, swapping any two neon atoms makes no discernible difference. Indeed, in any large system, a vast number, call it Ω, of microscopically distinct arrangements are macroscopically indistinguishable: each such arrangement leads to the same global temperature, pressure, and state of equilibrium. A disorderly system has more macroscopically indistinguishable arrangements, and thus a higher value of Ω, than an orderly one. Generally, each individual microscopic arrangement is equally probable. In that case, the entropy S of this large system is: S = k ln(Ω) For a neon/argon gas, imagine starting with an empty box and placing atoms inside one at a time. If both gases are allowed to be anywhere within the box, there are twice as many choices for placing each atom than if neon must be on the left and argon must be on the right. In the mixed case, we have twice as many choices N times, for a total of 2^N times as many choices, as in the segregated case.

Hence: S_MIX – S_SEG = k ln(Ω_MIX) – k ln(Ω_SEG) = k ln(Ω_MIX / Ω_SEG) = k ln(2^N) = Nk ln(2)
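For scale, the sketch below (my arithmetic, using standard constants) evaluates ΔS = Nk ln 2 for one mole of each gas:

import math

k = 1.380649e-23     # Boltzmann's constant, J/K
N_A = 6.02214e23     # molecules per mole

dS_one_gas = N_A * k * math.log(2)
print(dS_one_gas)        # about 5.76 J/K for one mole of neon (or of argon)
print(2 * dS_one_gas)    # about 11.5 J/K total for the mixed neon + argon box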

A scrambled egg has much less order than an intact egg. The intact egg has an elaborate structure, with different materials with different characteristics and capabilities precisely distributed. Scrambling destroys that delicate structure, turning order into disorder, and increasing the egg’s entropy. The fundamental laws of nature do not prohibit eggs from being unscrambled. But the number of microscopic atomic arrangements that unscramble an egg is infinitesimal compared to the number of microscopic atomic arrangements that leave it scrambled. Therefore, the second law is a statement of probability, distinguishing it from other physical laws, such as the conservation of electric charge. Physicists firmly believe natural processes absolutely never change the total electric charge in any closed system. Whereas we believe natural processes are extraordinarily unlikely to reduce entropy in any macroscopic closed system. Since increasing disorder is nature’s way, Feynman concludes this lecture pondering why our world is not completely disordered. He wonders whether the degree of order in our patch at this time is entirely accidental. Indeed, if one examined every small portion of the neon/argon box, some portions would have more than their share of neon. In 10,000 atoms, Gaussian statistics tells us to expect 5000±50 neon atoms. (For n events from a binomial distribution with individual probability p, the expectation value equals np and one standard deviation equals √[np(1–p)]). This means 68.27% of 10,000-atom-boxes will have between 4950 and 5050 neon atoms; and 99.994% will have between 4800 and 5200 neon atoms. But it also means that one sample in a billion will have more than 5300 neon atoms and less than 4700 argon atoms, making this sample significantly more orderly than average. If our patch is a one in a billion, what does this say about other patches — what about the rest of the universe? Feynman says if ours is merely accidental luck, then other patches are far more likely to be closer to the average, less ordered, than ours. Yet when astronomers explore the vastness of the cosmos, they find the same atomic elements, the same chemical compounds, and similar stars and galaxies to our own. It seems our patch may be typical. It seems more likely, Feynman concludes, that: (1) our entire observable universe began in a very highly ordered state; (2) disorder continually increased, sometimes extremely rapidly; but (3) a significant level of order yet remains. He says: “So that is the way toward the future. That is the origin of all irreversibility, that is what makes
the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in the history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future. So, as we commented in an earlier chapter, the entire universe is in a glass of wine, if we look at it closely enough.” “Another delight of our subject of physics is that even simple and idealized things, like the ratchet and pawl, work only because they are part of the universe.” Incidentally, in my experience, Feynman never consumed alcohol. He said he liked the way his brain worked in its natural state.
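Before the chapter review, the fluctuation numbers quoted above for a 10,000-atom sample are easy to verify with the normal approximation to the binomial distribution; the sketch below uses scipy, my choice of tool.

import math
from scipy.stats import norm

n, p = 10_000, 0.5
mean = n * p                          # 5000 expected neon atoms
sigma = math.sqrt(n * p * (1 - p))    # 50, one standard deviation

print(mean, sigma)
print(norm.cdf(1) - norm.cdf(-1))     # ≈ 0.6827: between 4950 and 5050
print(norm.cdf(4) - norm.cdf(-4))     # ≈ 0.99994: between 4800 and 5200
print(norm.sf(6))                     # ≈ 1e-9: more than 5300, about one in a billion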

Chapter 23 Review: Key Ideas 1. Feynman's ratchet and pawl thought experiment, and Maxwell's demon, demonstrate the impossibility of extracting work energy from heat, without temperature differentials. 2. The fundamental laws of nature are time-symmetric, applying equally if time runs forward or backward. Irreversibility is an emergent phenomenon, absent on the atomic scale, but dominating macroscopic scales. The same fundamental laws govern systems large and small, but size affects probabilities. 3. Entropy S measures the disorder of a macroscopic system. Every large system has a number Ω of distinct arrangements of its microscopic constituents, each equally probable but all macroscopically indistinguishable. For such a system: S = k ln(Ω)

Chapter 24 Review of Chapters 15 – 23 We have explored three related fields of physics that deal with the macroscopic properties of matter. Kinetic Theory is the analysis of the macroscopic physical behaviors of large collections of small particles, typically gases composed of molecules. Statistical Mechanics examines the behaviors of vast collections of particles due to the laws of mechanics, typically Newtonian mechanics. Thermodynamics is the study of the properties of macroscopic matter in equilibrium. Equilibrium is the state that complex systems eventually attain in which all of their parts have the same temperature, kinetic energy, and positional distributions, on average. The macroscopic properties of a large system stop changing once it reaches equilibrium. Feynman emphasizes the advantages and disadvantages of thermodynamics versus kinetic theory: when detailed knowledge is weak, thermodynamics is more powerful, and when detailed knowledge is complete, kinetic theory is best.

Ideal Gases at Equilibrium 1. A gas containing N molecules in a volume V, exerts an omnidirectional pressure P, a force per unit area, given by: P = (2/3) (N/V) <mv²/2>

Here < > denotes the average value per molecule. 2. The principle of equipartition states that at equilibrium, at temperature T: <mvₓ²/2> = kT/2 per degree of freedom

Here, Boltzmann's constant k = 1.38×10⁻²³ joules per Kelvin, temperature is measured in Kelvin (0K = absolute zero temperature), and each independent motion constitutes one degree of freedom. For 3D motion <mv²/2> = 3/2 kT.

3. Ideal gases are comprised of point-particles that interact only mechanically, without any chemical or nuclear reactions. The Ideal Gas law is:

P V = n R T = N kT. Here n is the number of moles of gas, 1 mole = 6.022×10²³ molecules, and R = 8.314 joules per mole-Kelvin.

4. At equal temperature and pressure, equal volumes of different gases contain equal numbers of molecules. 5. Any change in pressure and/or volume in which total gas energy (work energy plus kinetic energy) is constant, is called adiabatic. During adiabatic changes: P V = (γ –1) Ŧ; P V^γ is constant

Here Ŧ is the total kinetic energy of the gas, and the specific heat ratio γ equals 5/3rds for monatomic gases and 4/3rds for light inside a star. 6. The position distribution µ(x) and velocity distribution f(v) are independent. For an energy potential U(x), Boltzmann's law says: µ(x) = µ(0) exp{ –U(x) / kT}; f(v) = f(0) exp{ –mv² / 2kT}

Brownian Motion The thermal energies of atoms and molecules drive their chaotic motion with macroscopic consequences, including: Brownian motion; electrical circuit noise; and vibrations in sensitive instruments. In a random walk of N steps of length r in uncorrelated directions, the average distance R that a molecule reaches is: R = r √N. The mean square distance from its origin that a particle in chaotic motion reaches in time t is: <R²> = 6 kT t / µ

Here µ is the medium’s drag coefficient.

Systems with Large Energy Barriers At equilibrium, thermodynamic systems with two states, whose energy separation is W, have a population ratio of A exp{ –W / kT}. Here, A may be too difficult to calculate. But if W>>kT, the exponential dominates the equation, making A less important. For a liquid/gas mixture, with liquid binding energy W, the equilibrium ratio is: Gas / Liquid = A exp{ –W / kT}. For a mixture of ionized and neutral gas molecules, with ionization potential W, the equilibrium densities of ions n_I, free electrons n_e, and neutral atoms n_0, are given by the Saha equation: n_I n_e / n_0 = A exp{ –W / kT}

For the chemical reaction X+Y –> XY, with XY binding energy W, the equilibrium densities are: n_X n_Y / n_XY = A exp{ –W / kT}

Chemical reactions often have an activation energy A* that must be overcome for both the reaction and the reverse reaction to proceed; these reaction rates are proportional to: X+Y –> XY: n_X n_Y exp{ –A* / kT}; XY –> X+Y: n_XY exp{ –(A*+W) / kT}

Systems Near Equilibrium 1. We parameterize "particle" collisions with the mean time between collisions τ, the mean free path λ, and the collision cross section σ. Here, "particles" includes molecules, atoms, and subatomic particles. Two particles of radii r and R collide if their centers come within (r+R); their collision cross section is: σ = π(r+R)². For particle density n, λ=1/nσ.

For any specific particle, the probabilities of not colliding are: Prob(no hit during time t) = exp{–t/τ}; Prob(no hit in distance x) = exp{–x/λ}. 2. Molecular Drift. If a force F acts on S particles but not on B particles that are at equilibrium, the S particles drift with average velocity: v_DRIFT = F τ / m = µ F

Here µ is the S particles' mobility. 3. Ionic Conductivity. A gas containing ions has electrical resistance R and transports an electric current I when exposed to an electric field E, according to: I = q² n_i µ E A; R = b / (q² n_i µ A). Here, A is the cross sectional area, q is the ion charge, n_i is the ion density, and b is the length of the gas container along the direction of E.

4. Molecular Diffusion. If a small number of S particles with density n_s are placed at one location in a gas of B particles that are at equilibrium, the number of S particles crossing a surface of area A in the yz-plane per unit time is: –A D dn_s/dx, where D is the diffusion coefficient. Einstein showed D = µ kT. If a force F in the +x-direction acts only on S particles, at equilibrium: dn_s/dx = n_s µ F / D = n_s F / kT

5. Thermal Conductivity. In the absence of convection, a gas conducts heat along the z-axis according to: dQ/dt = –κA dT/dz κ = k / [σ(γ–1)] Here, A is the area through which heat flows.

Laws of Thermodynamics 1. The First law of thermodynamics is the conservation of energy: the heat energy Q added to a system plus the work energy W done on the system equals the increase in the system’s internal energy Ŧ: Q + W = ΔŦ 2. The Second law states: (a) no process can have the sole net effect of converting heat into work;

(b) a reversible heat engine can take heat Q1 at temperature T1, deliver heat Q2 at T2, and produce work W according to: W = Q1 – Q2 = Q1 (T1–T2)/T1;

(c) no heat engine can outperform a reversible heat engine; (d) total entropy is unchanged in a reversible process; (e) total entropy increases in an irreversible process; (f) all macroscopic real-world processes are irreversible. 3. The Third law is: at absolute zero temperature, entropy reaches a minimum, a constant value that depends only on the atomic-level properties of the substance. When heat Q is added to a system at temperature T, its entropy increases by Q/T. 4. Entropy S measures the disorder of a macroscopic system. A system with Ω distinct arrangements of its microscopic constituents, each equally probable but all macroscopically indistinguishable, has entropy S = k ln(Ω). 5. The Zeroth law is: if systems A and B are each in thermodynamic equilibrium with system C, then A and B are in thermodynamic equilibrium with one another, and are therefore at the same temperature. 6. Absolute thermodynamic temperature T=Q/S. 7. The fundamental laws of nature are time-symmetric, the same for t and –t. Irreversibility is an emergent phenomenon, absent on the atomic scale, but dominating macroscopic scales. The same fundamental laws govern systems large and small, but size affects probabilities. 8. The basic equations of thermodynamics are: dŦ = dQ – P dV; (∂Ŧ/∂V)_T = T (∂P/∂T)_V – P

Here ∂P/∂T is a partial derivative, the derivative of P with respect to T, while all other variables (particularly V) are held constant.

Limits of Classical Physics 1. Newton’s laws are relatively few and their statements are relatively simple. Yet, they reveal a tremendous wealth of understanding of the natural world. But they do not correctly describe the fundamental nature of atoms. The correct description requires quantum mechanics.

2. The specific heat ratio γ = 1+2/D for D degrees of freedom. Classical physics predicts diatomic molecules have D=7, contrary to observation. Quantum theory shows that at low temperatures some degrees of freedom are "frozen out" due to quantization of atomic energy levels. 3. Black body radiation is the light emitted by any object due to its temperature. Classical physics predicts this radiation has the Rayleigh-Jeans spectrum: I(ω) = (ω/πc)² kT, which is utterly wrong.

Planck launched quantum mechanics by assuming atomic radiation is quantized in increments of ħω, which yields the correct black body spectrum: I(ω) = ħω³ / [π²c² (exp{+ħω/kT}–1)]

h = 6.626×10⁻³⁴ m² kg/s

ħ = h/2π = 1.0546×10⁻³⁴ m² kg/s

Here, h is Planck's constant. The allowed quantized oscillator energies are: E_J = (J+1/2) ħω, for J = 0, 1, 2, …

The probability of being in state J is proportional to exp(–E_J/kT).

E_0 = ħω/2 is the zero-point energy, the minimum energy nature allows any oscillator to have.

4. Einstein discovered that the interaction of radiation with atoms entails three processes: absorption, spontaneous emission, and stimulated emission. The rate of spontaneous emission is independent of light intensity. Absorption and stimulated emission occur when light’s energy ħω exactly equals the energy difference between two available electron orbits, in which case the rates of these two processes are equal and proportional to the light intensity at frequency ω. Einstein's laws of radiation are the basis for lasers.

Meet The Author Congratulations and thank you for reading my book. I know your time is valuable, and I sincerely hope you enjoyed this experience. I'd like to tell you something about myself and share some stories. First, the obligatory bio (as if 3 "tweets"-worth can define anyone): I have a B.S. in physics from Caltech, a Ph.D. in high-energy particle physics from Stanford University, and was on the faculty of Harvard University. Now "retired," I teach at the Osher Institutes at UCLA and CSUCI, where students honored me as "Teacher of the Year." In between, I ran eight high-tech companies and hold patents in medical, semiconductor, and energy technologies. My goal is to help more people appreciate and enjoy science. We all know one does not have to be a world-class musician to appreciate great music — all of us can do that. I believe the same is true for science — everyone can enjoy the exciting discoveries and intriguing mysteries of our universe. I've given 400+ presentations to general audiences of all ages and backgrounds, and have written 3 printed books and 39 eBooks. My books have won national and international competitions, and are among the highest rated physics books on Amazon.com. I'm delighted that three of these recently became the 1st, 2nd and 3rd sellers in their fields.

Richard Feynman was a friend and colleague of my father, Oreste Piccioni, so I knew him well before entering Caltech. On several occasions, Feynman drove from Pasadena to San Diego to sail on our small boat and have dinner at our home. Feynman, my father, my brother and I once went to the movies to see “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.” It was particularly poignant watching this movie next to one of the Manhattan Project’s key physicists. At Caltech I was privileged to learn physics directly from this greatest scientist of our age. I absorbed all I could. His style and enthusiasm were as important as the facts and equations. Top professors typically teach only upper-level graduate classes. But Feynman realized traditional introductory physics did not well prepare students for modern physics. He thought even beginners should be exposed to relativity, quantum mechanics, and particles physics. So he created a whole new curriculum and personally taught freshman and sophomore physics in the academic years 1961-62 and 1962-63. The best students thrived on a cornucopia of exciting frontier science, but many others did not. Although Caltech may be the world’s most selective science school, about half its elite and eager students drowned in Feynman’s class. Even a classmate, who decades later received the Nobel Prize in Physics, struggled in this class. Feynman once told me that students sometimes gave him the “stink eye” — he added: “Me thinks he didn’t understand angular momentum.” Some mundane factors made the class very tough: Feynman’s book wasn’t written yet; class notes came out many weeks late; and traditional helpers (teaching assistants and upper classmen) didn’t

understand physics the way Feynman taught it. But the biggest problem was that so much challenging material flew by so quickly. Like most elite scientists, Feynman’s teaching mission was to inspire the one or two students who might become leading physicists of the next generation. He said in his preface that he was surprised and delighted that 10% of the class did very well. My goal is to reach the other 90%. It’s a great shame that so many had so much difficulty with the original course — there is so much great science to enjoy. I hope to help change that and bring Feynman’s genius to a wider audience. Please let me know how I can make Feynman Simplified even better — contact me through my WEBSITE. While you’re there, check out my other books and sign-up for my newsletters. Printed Books, each top-rated by Amazon readers: Everyone's Guide to Atoms, Einstein, and the Universe Can Life Be Merely An Accident? A World Without Einstein Everyone's Guide to the Feynman Lectures eBook Series Feyman Simplified: 1A: Basics of Physics & Newton's Laws 1B: Harmonic Oscillators & Thermodynamics 1C: Special Relativity & the Physics of Light 1D: Sound, Waves, Angular Momentum, Symmetry & Vision 2A: Maxwell's Equations & Electrostatics 2B: Magnetism & Electrodynamics 2C: Relativistic Electrodynamics & Fields in Dense Matter 2D: Magnetism: Matter, Elasticity, Fluids & Curved Spacetime 3A: Quantum Mechanics Part One 3B: Quantum Mechanics Part Two 3C: Quantum Mechanics Part Three 4A: Math for Physicists

Everyone's Guide Series of Short eBooks Einstein: His Struggles, and Ultimate Success, plus Special Relativity: 3 Volumes, A to Z General Relativity: 4 Volumes, from Introduction to Differential Topology Quantum Mechanics: 5 Volumes, from Introduction to Entanglement Higgs, Bosons, & Fermions… Introduction to Particle Physics Cosmology Our Universe: 5 Volumes, everything beyond the Sun Our Place in the Universe: a gentle overview Black Holes, Supernovae & More We are Stardust Searching for Earth 2.0

Smarter Energy Timeless Atoms Science & Faith

Table of Contents Chapter 12 Harmonic Oscillators Chapter 13 Resonances Chapter 14 Transients & Linear Systems Review Chapter 15 Kinetic Theory of Gases Chapter 16 Statistical Mechanics Chapter 17 Brownian Motion Chapter 18 Kinetic Theory at Equilibrium Chapter 19 Kinetic Theory Near Equilibrium Chapter 20 “Newton, We Have a Problem!” Chapter 21 Laws of Thermodynamics Chapter 22 Applications of Thermodynamics Chapter 23 Irreversibility & Entropy Chapter 24 Review of Chapters 15 – 23