Entropy and the Second Law of Thermodynamics: ... or Why Things Tend to Go Wrong and Seem to Get Worse [1 ed.] 3031349490, 9783031349492, 9783031349508

This book is a brief and accessible popular science text intended for a broad audience and of particular interest also to …


English Pages 141 [135] Year 2023


  • Commentary: Publisher PDF | Published: 26 September 2023

Table of contents :
Acknowledgements
Contents
About the Author
1 Introduction
References
2 The Nature of Heat
References
3 The Laws of Thermodynamics
3.1 Energy and the First Law of Thermodynamics: Energy Conservation, or What’s Possible
3.2 Entropy and the Second Law of Thermodynamics: Energy Availability, or What’s Probable
References
4 Statistical Interpretation of the Second Law of Thermodynamics
References
5 Implications of the Second Law of Thermodynamics: Why Things Go Wrong
References
6 So, What’s to Do?
References
Further Reading
References
Index


Robert Fleck

Entropy and the Second Law of Thermodynamics ...or Why Things Tend to Go Wrong and Seem to Get Worse


Robert Fleck Department of Physical Sciences Embry–Riddle Aeronautical University Daytona Beach, FL, USA

ISBN 978-3-031-34949-2    ISBN 978-3-031-34950-8 (eBook)
https://doi.org/10.1007/978-3-031-34950-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Considering the myriad ways in which energy regulates our lives, the laws of thermodynamics assume a dominant role that overshadows the significance of human laws. —Hans Christian von Baeyer, Warmth Disperses and Time Passes: The History of Heat (p. 18)

…the world is getting worse… If the direction of the universe is towards degradation, what room is there in it for the emergence of exquisite structures, of people, and of noble thoughts and deeds?... How could the increasing mastery of matter be compatible with a future of the universe that was drifting inexorably towards a Hogarthian gutter? —Oxford chemist Peter Atkins, Galileo’s Finger: The Ten Great Ideas of Science (p. 124)

Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there. —Harvard psychologist Steven Pinker, “The Second Law of Thermodynamics” (p. 17)

With the law of entropy, discovered by Rudolf Clausius, it became known that the spontaneous processes of nature are always related to a diminution of the free and utilizable energy, which in a closed material system must finally lead to a cessation of the processes on the macroscopic scale. —Encyclical address given by Pope Pius XII to the Pontifical Academy of Sciences (1951)

And so castles made of sand/Fall in the sea eventually. —American musician Jimi Hendrix, Castles Made of Sand (1967)

Acknowledgements

It is a pleasure to acknowledge, first and foremost, Angela Lahee, physics executive editor for Springer Publishing, for her faith in and continuing support of this project which was skillfully guided through the production process by project coordinator Vijay Kumar Selvaraj. I thank both of them for their help in turning an idea into a book. My colleagues Anthony Aveni, Itzhak Goldman, and Anthony Reynolds carefully reviewed multiple iterations of the manuscript and offered a wealth of pertinent and helpful comments and suggestions that have greatly improved my story of the Second Law of Thermodynamics, “arguably the most important and fundamental idea in the whole of science” according to science writer John Gribbin, but nevertheless one of the most subtle and often misunderstood concepts in all of science. A double dose of thanks to Anthony Aveni, who taught me many years ago that there is much more to astronomy than just astronomy, some of which shines through in the final chapter here that addresses some of the sociocultural implications of the Second Law. My good friend Bob Franklin at InfoGraphicDesign prepared the diagrams with his usual attention to detail and creativity. Finally, I thank my wife, Sherry, who read a preliminary version of the introductory chapter and, despite having very little interest in science—and knowing nothing about the Second Law of Thermodynamics—immediately encouraged me to write this book. I thank a lifetime of students—so many of them—for working with me to understand the science of thermodynamics, so important in so many ways, especially now in today’s warming world—and for all their questions that over the years have given me, in turn, a much better understanding of the subject. It has been a pleasure and a privilege to have spent a lifetime in the classroom with all of them, and it is to them that I dedicate this book.

…on the beach in Daytona, April 2023

Robert Fleck

Contents

1 Introduction  1
   References  7
2 The Nature of Heat  9
   References  19
3 The Laws of Thermodynamics  21
   3.1 Energy and the First Law of Thermodynamics: Energy Conservation, or What’s Possible  26
   3.2 Entropy and the Second Law of Thermodynamics: Energy Availability, or What’s Probable  40
   References  68
4 Statistical Interpretation of the Second Law of Thermodynamics  69
   References  85
5 Implications of the Second Law of Thermodynamics: Why Things Go Wrong  87
   References  100
6 So, What’s to Do?  103
   References  117
Further Reading  119
References  121
Index  125

About the Author

Robert Fleck is Emeritus Professor of Physics and Astronomy in the Department of Physical Sciences at Embry-Riddle Aeronautical University in Daytona Beach, Florida, where for four decades he developed and taught a large number and a wide variety of undergraduate and graduate courses in physics, astronomy, general science, and history of science. For inspiring his students with his passion and enthusiasm for teaching and lifelong learning, he received the University Outstanding Teaching Award in 2000 and 2015, as well as over a dozen faculty appreciation awards from graduating senior classes. Professor Fleck is a NASA and National Science Foundation supported star and planet formation theorist; he has published in a wide variety of disciplines, including physics and astronomy and the history of science, and he has been a Visiting Scientist at the National Radio Astronomy Observatory and a Perren Visiting Fellow at the University of London. He also pioneered Embry-Riddle’s study abroad program, teaching classes in England, France, Italy, and Greece, and he has recently completed a book-length manuscript titled The Evolution of Scientific Thought: A Cultural History of Western Science from the Paleolithic to the Present. When not reading or writing, he enjoys swimming, surfing, cycling, and traveling.


1 Introduction

Summary Of all the exemplars the English scientist and novelist C. P. Snow could have used over half a century ago to delineate the two-culture, science-humanities divide, he chose the Second Law of Thermodynamics and Shakespeare, each one a powerful ambassador for their respective “cultures.” This introductory chapter summarizes the book’s purpose to help the reader understand, with a minimum of mathematics, Entropy and the Second Law of Thermodynamics, nature’s Murphy’s Law describing the perceived perversity of a universe where it’s easy for things to go wrong—and to get worse—easier to make a mess than to clean it up. It is hoped that the reader will then feel more at home on the science side of Snow’s cultural divide and hopefully more comfortable in a universe that is nevertheless inexorably running down and out. Although there’s much more to science than the Second Law of Thermodynamics, very few laws of nature have had such wide-ranging implications for the workings of our world: indeed, it has been called “the most important and fundamental idea in the whole of science.” Even Shakespeare understood that the world “wears, sir, as it grows.”

Anything that can go wrong will go wrong. —attributed to aerospace engineer Edward A. Murphy Jr. (1949)

As a child of the universe, and as a teacher of physics, I had to write this book. It’s not a happy book. In fact, it’s quite depressing, almost guaranteed
to bring you down. But that’s nature. Nothing we can do about it. Just have to live with it. I used to think I was just unlucky in life. Until I studied physics and learned about the Second Law of Thermodynamics. It’s a law of nature. One of those invariable patterns of behavior occurring with unvarying uniformity in nature. Not my favorite, but, again, nothing I—or anyone—can do about it. I’m not a pessimist and I didn’t write this book to bum you out. As a scientist (I’m an astrophysicist, a star and planet formation theorist) I’m a realist; as an educator I’m an idealist (a necessary prerequisite, especially in these times); and, despite nature, I’m a mild optimist (even if things don’t always seem to work out for the best). I wrote this book to help you understand why things often have a tendency to go wrong and sometimes seem to be getting worse, why it’s tough just to break even, let alone get ahead. I want you to know that none of this downward spiral in nature is your fault; it’s just nature. And I want you to appreciate that much of the anxiety over the Second Law— over living in a universe that is continually running down and out—stems from our early-modern conception of nature imagined to be quite separate from ourselves, a nature to be controlled rather than understood. I hope that by learning about all this, you’ll be better prepared to deal with it all. And, believe me, we have to deal with it. We have no choice in the matter. It’s a law of nature. After you understand all of this, maybe even you, too, will find a place for optimism in the midst of all this madness. I hope so. The world needs more of that, for sure. And by understanding the Second Law of Thermodynamics you’ll be part of at least one of the “two cultures” the English physicist and novelist C. P. Snow described in his influential 1959 Rede Lecture, published the same year in book form as The Two Cultures and the Scientific Revolution. His thesis was that science and the humanities (this latter represented by “literary intellectuals”), which together embody “the intellectual life of the whole of western society,” had split culture into two dangerously noncommunicative, non-overlapping, mutually incomprehensible camps, and that this division was a major impediment to a proper functioning society. He blamed this great divide on the British education system—specifically, on the overspecialization of the educated elite—for emphasizing the humanities over the sciences, despite the importance of the latter in our modern scientific world. Here is what he said [1, pp. 14–15]: A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company
how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s? (Fig. 1.1)…. I now believe that if I had asked an even simpler question—such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? —not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their Neolithic ancestors would have had.1

More than half a century later, I must say that things haven’t changed much: the polarization of the “two cultures” paradigm still hangs heavily in our cultural skies today. Indeed, the divide has, if anything, widened with time, largely as a result of the increasing specialization necessary to succeed in today’s complex and highly specialized world. That said, I must admit that my scientist friends typically—but not always—know more about the humanities (which include the visual and performing arts, such as painting, sculpture, architecture, music, dance, and drama, as well as literature, history, and philosophy) than my humanities friends know about science. It wasn’t always that way. In Jonathan Swift’s 1726 satirical novel, Gulliver’s Travels, to take one example, Gulliver learns that astronomers on the floating island of Laputa have discovered two satellites revolving about Mars having orbital parameters in accordance with the laws of planetary motion discovered a century earlier by the German astronomer Johannes Kepler.2 How many novelists

1 Half a century on, author and educational theorist Robert Whalen has argued that the choice is no longer between two cultures but between an education system based on academic rigor and no culture at all (see R. Whalen, ed. From Two Cultures to No Culture: C. P. Snow’s “Two Cultures” Lecture Fifty Years On, Civitas Books, London, 2009). Bridging the gap between the two cultures, the notable naturalists Loren Eiseley (in his 1964 essay “The Illusion of the Two Cultures,” The American Scholar 33, pp. 387–399; reprinted in his posthumously published 1978 anthology The Star Thrower, pp. 267–279) and Edward O. Wilson (in his 1998 masterful monograph, Consilience: The Unity of Knowledge), and, more recently, physicist Tom McLeish (in his 2019 The Poetry and Music of Science: Comparing Creativity in Science and Art) have passionately and persuasively stressed the common creative, imaginative, and aesthetic wellsprings of art (broadly interpreted to include the humanities) and science, despite the two being institutionalized on radically different terms and using different methods of exploration. Indeed, both activities share a sense of wonder at the mystery and beauty of the world around us, and both share the same goal to understand that world and our place in it.

2 Gulliver reports that the two lesser stars, or satellites (a neologism introduced by Kepler), that revolve about Mars do so in such a way “that the squares of their periodical times are very near in the same proportion with the cubes of their distance from the centre of Mars, which evidently shows them to be governed by the same law of gravitation that influences the other heavenly bodies” [2, p. 187]. Swift’s positing of two partners for Mars was based on a prediction made a century earlier by Kepler, who, ever the mystical numerologist, had predicted that Jupiter would have the four satellites discovered by Galileo, and Saturn eight, based on his belief that the number of satellites for each planet would increase as one moved outward from the Sun according to the regular geometric progression 1 (Earth), 2 (Mars), 4 (Jupiter), and 8 (Saturn), each successive satellite count being twice the previous number. Interestingly, Mars does have two moons, Phobos and Deimos (Greek for fear and panic, appropriate names for the acolytes of the god of war), discovered by the American astronomer Asaph Hall during Earth’s exceptionally close pass to the planet in 1877, a century and a half after Swift wrote his satirical take on the new sciences that contributed so much to the European Enlightenment of Anglo-Irish Swift’s “century of light.” Of course, Kepler’s numerological lunar scheme was soon disproven by the later discovery of many more moons around the outer planets: the latest count—and we’re still counting—has nearly one hundred moons with confirmed orbits around both Jupiter and Saturn, not counting an uncountable number of associated smaller-sized “moonlets.” The count for Mars, however, remains at two.

Fig. 1.1 Considered one of the most influential books ever published, Mr. William Shakespeare’s Comedies, Histories, & Tragedies is a collection of plays by the English poet and playwright, William Shakespeare (1564–1616; born the same year as Galileo, one of the founders of modern science), published in 1623 and commonly referred to by modern scholars as the First Folio. Shakespeare is widely regarded as the greatest and most influential writer in the English language, and thus an apt avatar of the humanities. How many—if any—of the bard’s “Comedies, Histories, & Tragedies” have you read? On which side—if any—of the culture divide do you live? This book will familiarize you with the science side of C. P. Snow’s two cultures divide—specifically, the Second Law of Thermodynamics and the science of heat—with the hope that you will then understand why things tend to go wrong and seem to be getting worse in a universe that is inexorably running down and out. (Wikimedia Commons, public domain)

in today’s specialized and dichotomous two-cultures world even know of Kepler’s laws, let alone know what they are and what they mean? How many people recognize that science and the humanities afford complementary ways of being human? But then, science is, for most people, just more difficult to understand compared to the humanities. For one thing, science, unlike the humanities, is cumulative and so one must spend a lot of time and effort learning the basics before moving on to more advanced topics. Most of us can, right now, sit down and read a book or paint a picture, but very few can write down the Dirac equation, let alone know anything about it (it’s the relativistic wave equation formulated by the English Nobel-laureate theoretical physicist Paul Dirac in 1928, the first to account fully for special relativity in the context of quantum mechanics; see, for example, Helge Kragh, Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, 1999, p. 167). And, in any case, for some, it just isn’t “cool” today to know much about science, which, in addition, many find “boring beyond belief ” (borrowing the words actor Steve Martin uttered in the 1991 satirical romantic comedy film LA Story). Of all the exemplars he could have used to delineate the two-culture, science-humanities divide, Snow chose the Second Law of Thermodynamics and William Shakespeare, each one a powerful ambassador for their respective “cultures.” My purpose here is to help you understand, with a minimum of mathematics, the Second Law of Thermodynamics, nature’s Murphy’s Law describing the perceived perversity of the universe, in order to appreciate its various manifestations in the workings of a world where it’s easy for things to go wrong—and to get worse—easier to make a mess than to clean it up. (Surprisingly—amazingly, really—Robert March’s Physics for Poets, written for non-specialists—and even beginning with a poem—fails even to mention the laws of thermodynamics. Shakespeare would have been disappointed.) In the process, I hope also to increase your science literacy so that you’ll feel more at home on the science side of Snow’s cultural divide and hopefully more comfortable in a universe that is nevertheless inexorably running down
and out. And besides, in an ideal world that values scientific understanding, knowing more about why things go wrong just might make it a bit easier to deal with it. Although the word itself is formed from the Greek θέρμη—therme, meaning “heat”—and δύναμις—dynamis, meaning “power”—the science of thermodynamics clearly reaches far beyond the study of the “power of heat.” Indeed, as we shall see, the insight it provides helps us to understand the processes of change—and corruption—in the universe that, in turn, contribute to the richness of the world we live in. Modern civilization, all of us, thrive off the corruption of a cosmos collapsing into chaos. As Oxford University chemist and author Peter Atkins proclaims in the opening sentence in his book on the subject, “No other part of science has contributed as much to the liberation of the human spirit as the Second Law of thermodynamics” [3, p. vii]. No wonder Snow picked it as the exemplar of science. Should the reader, resolved to become fully “human,” wish to build competency in the other of the two cultures—folded into our discussion here by presenting the science of thermodynamics in its historical context3—I would suggest, following Snow’s archetype, starting with The Shakespeare Book: Big Ideas Simply Explained (New York: DK Publishing, 2015) and then moving on to the Folger Shakespeare Library’s Shakespeare Set Free series (New York: Washington Square Press and Simon & Schuster, 1993–2006) before sitting down with The Complete Works of William Shakespeare (San Diego: Canterbury Classics/Baker & Taylor Publishing Group, 2014). Of course, there’s much more to the humanities than Shakespeare—just as there’s much more to science than the Second Law of Thermodynamics.4 But you’ve got to start somewhere. And very few humanists in history have grasped the human condition as well as has Shakespeare—just as very few laws of nature have had such wide-ranging implications for the workings of the world we are a part of as has the Second Law of Thermodynamics. Besides, even Shakespeare understood that the world “wears, sir, as it grows.”5

3 Appreciating that we will know where we are today only when we know how we arrived here, one is reminded of Yoda’s admonition to the young Luke Skywalker, a long time ago in a galaxy far, far away: “Always you worry about where you are going; never do you think about where you have been.”

4 Moving from Shakespeare and literature to the visual arts—to pick just one other topic from the humanities—either Martin Kemp’s The Oxford History of Western Art (Oxford University Press, Oxford, 2000) or Fred Kleiner’s Gardner’s Art Through the Ages: The Western Perspective, 16th ed. (Cengage, Boston, 2020) would be good places to start. The two-volume Culture and Values: A Survey of the Western Humanities by Cunningham, Reich, and Fichner-Rathus (Cengage, Boston, 2017) surveys the humanities more broadly. For a broad appreciation of the sciences outside of the Second Law and thermodynamics, I would suggest—in keeping with the “Big Ideas Simply Explained” theme recommended for an introduction to Shakespeare—Colson, Hallinan, and David, eds. The Science Book: Big Ideas Simply Explained (DK Publishing, New York, 2019) or Peter Tallack, ed. The Science Book (Cassell & Co., London, 2001), both of which, in the spirit of the “two cultures,” trace the big discoveries in science chronologically, thereby giving the reader an appreciation for the historical development of science through time. For a brief but broad appreciation of science and its significance, the reader may wish to consult Elof Axel Carlson’s recent What Is Science? A Guide for Those Who Love It, Hate It, or Fear It (World Scientific, Singapore, 2021).

References

1. C. P. Snow, The Two Cultures (Cambridge University Press, Cambridge, 1998; orig. publ. 1959)
2. J. Swift, Gulliver’s Travels (The New American Library, New York, 1960)
3. P. W. Atkins, The Second Law (W. H. Freeman & Co., New York, 1984)

5 Opening exchange between the poet and the painter in William Shakespeare’s play, Timon of Athens (ca. 1606).

2 The Nature of Heat

Summary Over the years, our understanding of heat—Aristotle’s “Element of fire”—changed from a very qualitative primordial essence of “hotness,” to a mysterious and imponderable fluid known variously as phlogiston or caloric (French chemist Antoine-Laurent Lavoisier’s Calorique, still with us in our word “calorie”), until, largely as a result of his careful observations during the boring of Bavarian cannon, the expatriated American Benjamin Thompson, Count Rumford, concluded that heat was “nothing but a vibratory motion taking place among the particles of the body.” Thus was caloric pushed into history just as its much-too-similar predecessor, phlogiston, had previously vanished into the nothingness it was, as heat was finally appreciated not as a specific chemical substance but rather as a general physical process related to the transfer of the thermal energy of molecular motion. A calorimetry example, similar to experiments conducted by Lavoisier, illustrates the principle of energy conservation (later formalized as the First Law of Thermodynamics) and the directionality of heat flow (later shown to be a consequence of the Second Law of Thermodynamics).

Heat is very brisk agitation of the insensible parts of the object, which produces in us that sensation from whence we denominate the object hot; so that what in our sensation is heat , in the object is nothing but motion. – English philosopher John Locke (1632–1704)


As a prelude to our discussion of the laws of thermodynamics, we should begin where the science itself begins: with an understanding of the true nature of heat. If you want to know what’s happening—in life, the universe, and everything else—follow the heat. Heat, and the warmth it provides, is all around us—we feel it in the light from the Sun, we feel it while holding a warm cup of tea as we sit before a warm fireplace or campfire on a cold winter’s night, we feel—and see—it when we touch a red-hot stove, and, more intimately, we feel it within our own bodies. All life requires heat to survive. So, what is this thing called heat? Locke’s prescient understanding of heat introducing this chapter, essentially the same as ours today, was outside the canon of science in his time. Of the ancient four terrestrial elements popularized in antiquity by Aristotle—earth, water, air, and fire (Fig. 2.1)—only earth was known not to be elemental by the end of Locke’s seventeenth century; water and air would be decomposed into more elemental forms (hydrogen and oxygen for water; nitrogen and oxygen for air) in the following century. The English metaphysical poet and Shakespeare contemporary John Donne’s “Element of fire”1 persisted as the imponderable fluid caloric (from the Latin calor meaning heat), and would not be “quite put out” until the nineteenth century when the study of heat became primarily a branch of physics rather than of chemistry and it was finally fit into the theoretical framework of thermodynamics as a form of energy. In 1789, the same year that witnessed the start of a political revolution in France, the famous French chemist Antoine-Laurent Lavoisier (1743–1794), rightly recognized today as the “father of modern chemistry,” published his Traité élémentaire de chimie (Elementary Treatise of Chemistry). It was the first textbook on the new revolutionary chemistry centered on the author’s oxygen theory of combustion, and, indeed, the first ever truly and recognizably modern chemistry text—and it outlined the first revolution in science to be foretold by its author “destined to bring about a revolution in physics and chemistry.” Lavoisier demonstrated that each of the four ancient elements 1 Writing in his 1611 Anatomie of the World; First Anniversary, Donne bemoaned the disruption to long-held sensibilities accompanying the sun-centered universe proposed in 1543 by the Polish astronomer Nicolaus Copernicus, a rearrangement that destroyed the Aristotelian placement of the elements, with fire, the farthest from a central Earth in the terrestrial realm, put in its “natural” place—it was the element endowed with not only the essence of heat, but also with “absolute lightness” (how else explain why flames reach for the sky?)—all the way out to the orb of the Moon, separated from the central terraqueous earth-water globe by a shell of air. For Donne, it was all done: … new Philosophy calls all in doubt, The Element of fire is quite put out; The Sun is lost, and th’Earth, and no man’s wit Can well direct him where to look for it…. Tis all in pieces, all coherence gone….

Fig. 2.1 The “Sacred Tetrad”: the four elements, a far cry from today’s periodic table of over one hundred elements, shown here with their four associated combinations of qualities, together with the four bodily humors, the four seasons of the year, the four winds, the four cardinal directions, the four ages of man, and the four divisions of the twelve zodiacal constellations. (Clearly, the number 4 was very special!) Note the element of fire (Latin, “IGNIS”) appearing on the left side of this early-twelfth-century manuscript illumination designed by Byrhtferth of Ramsey in Huntingdonshire, England, during the reign of King Aethelred (978–1016). Both the French chemist Antoine-Laurent Lavoisier (see Fig. 2.2) and the expatriate American entrepreneur Benjamin Thompson (see Fig. 2.3) referred to heat as an “igneous fluid.” In Aristotle’s highly phenomenological as-I-see-it (and as-I-touch-it) scheme, true to this early philosopher’s commitment to the reality of the concrete and sensory world, properties perceptible to contingent human senses—in the case of fire, “calida” (Latin for “hot”)—become the basic ontological characteristics of physical essence itself, a basic four-element cauldron of being. (Credit © Bodleian Libraries, University of Oxford; detailed information at https://digital.library.mcgill. ca/ms-17/folio.php?p=7v&showitem=7r_2ComputusRelated_20ByrhtferthsDiagram)

involved oxygen (his term, oxygène, meaning “acid former,” from the Greek oxys, “sharp” or “biting,” hence acid; and gen, “to beget”): as a component of air, as a combination with hydrogen in water, as an oxide of earths (“calx”), and as a necessary ingredient for fire. In the course of introducing a new chemical nomenclature still used today (distinguishing, for example, carbon monoxide CO from carbon dioxide CO2 ), Lavoisier began his Traité, which contained a new table of chemical elements (“substances simple”—simple substances—Lavoisier called them, “the actual limit reached by chemical analysis”), many recognized still today as elements, with an extended discussion of Calorique. This was his new name for phlogiston (from the Greek phlogistos meaning “flammable”), the old principle of fire and heat that had roots reaching back to alchemy and Aristotelian essences, which he included along with light (Lumière) as a chemical element—mistakenly, we now know (yet he himself wisely warned his readers that “it is especially necessary to guard against the extravagancy of our imagination, which forever inclines to step beyond the bounds of truth, and is with difficulty restrained within the narrow limits of facts,” good advice for scientists—and everyone else). Considered a “subtle fluid” in this most mechanical of centuries—thanks to the success of the formulation by Isaac Newton (1642–1727) in the previous century of a new, mechanical, clockwork universe—caloric, after all, was conserved, as measured by Lavoisier’s calorimetry experiments involving the mixing of materials of different temperatures (easily reproduced by today’s students of physics; see Example 2.1 below and Fig. 2.2), all with the same rigor as were the ponderable substances as measured by weight. (As author Hans Christian von Baeyer reminds us, chemists, “dealing with the tangible stuff of the world, endowed the mysterious phenomenon of heat with as much concreteness as they could muster, and called it a fluid” [1, p. 8].) A fluid theory of heat was also consistent with the observed expansion or change of state of materials upon heating: increasing the amount of heat fluid in a body caused its expansion or, in the case of the latent (“hidden”) heat effecting phase changes between solid, liquid, and gas states, its increased fluidity. Phrases still used today, like “the flow of heat” and “heat capacity,” are remnants of the once fashionable fluid theory of heat. Example 2.1 Calorimetry: Conservation of “Caloric” As everyday experience so readily shows, mixing materials having different temperatures results in the transfer of heat—Lavoisier’s Calorique —from the warmer substance to the cooler one in such a way that, provided the materials are isolated (thermally and otherwise) from their surroundings, the amount of heat lost by the warm material equals the amount of heat gained

Fig. 2.2 Apparatus from Lavoisier’s laboratory on display at the Paris Musée des Arts et Métiers bear witness to his experiments and discoveries. His wife Marie-Anne Lavoisier was a valued co-worker who assisted her husband in the laboratory, kept records of his experiments, and learned English and German in order to translate important scientific treatises for her celebrated husband. (The two are still with us in one of the best-known scientific portraits, Portrait of Antoine-Laurent Lavoisier and his Wife, now in New York City’s Metropolitan Museum of Art and painted by the famous French painter Jacques-Louis David in 1788, one year before the publication of Lavoisier’s Traité.) With chemical balance in hand, and with the precise quantitativeness it afforded, Lavoisier articulated one of the earliest clear statements of an important principle in science that we today call conservation of mass: “in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment.” Precision instruments developed alongside the emerging new science of the early modern period fostered science’s essential experimental foundation and became increasingly important to the scientific enterprise. Arrested during the French Revolution’s “Reign of Terror,” Lavoisier was sent to the guillotine on the unsupportable charge of abusing his office as a former administrator of the Ferme Générale, a private company collecting taxes for the crown. The tragedy of his execution, literally cutting short his career in the fullness of his powers, is reflected most poignantly in the words of the French mathematician Joseph-Louis Lagrange: “It took them only an instant to cut off that head, and a hundred years may not produce another like it.” His quick demise was matched by that of phlogiston, no longer tenable within Lavoisier’s oxygen theory of combustion. (Photograph by the author)

by the cool material: pouring cold cream into hot coffee, for example, results in a mixture having a temperature lower than that of the hot coffee and higher than that of the cold cream. To take a simple example, the final equilibrium temperature T of a mixture of equal amounts of hot and cold water should be the average of their initial temperatures: T = (T_H + T_C)/2, where T_H and T_C denote, respectively, the temperatures of the hot and cold water. Using simple algebra, multiplying both sides of this equation by 2 and then writing 2T = T + T and rearranging gives T_H – T = T – T_C, or, following the standard convention of using the Greek letter Δ (“delta”) to denote change (here being the difference between the final and initial temperatures), –ΔT_H = ΔT_C; each material undergoes the same (absolute value) temperature change during the heat transfer. For example, mixing equal amounts of hot water at 30 °C and cold water at 10 °C results in a mixed equilibrium temperature of 20 °C, the average of the initial temperatures, all measured in degrees Celsius (°C). If there is twice as much hot water as cold water, we should expect the temperature change of the cold water to be twice that of the hot water: –2ΔT_H = ΔT_C, which is equivalent to stating that there is twice as much heat energy (“caloric”) in the hot water as in the cold water—and that the heat lost by the hot water equals the heat gained by the cold water, a “conservation of caloric/heat energy” that is a more restricted version of the more general statement of conservation of total energy, one of the most important principles in all of science (more on this later). Solving this equation algebraically yields a final equilibrium temperature of the mix T = 23⅓ °C for the same two initial temperatures, 30 °C and 10 °C. If the two mixed materials are not the same substances (for example, pouring cream into coffee), each side of the conservation equation must be multiplied by each material’s “thermal inertia,” a quantity called specific heat that accounts for how easily (or not) the specific material changes its temperature when it loses or gains heat. (As it turns out, water has a very high specific heat, the highest of any common substance, which is why the climate of regions near large bodies of water—like where I am in Daytona Beach—exhibits a smaller range of temperature than the climate far from water: it takes a lot of heat to cause even a very small temperature change for water. This also explains the greater day-to-night temperature variation in dry, arid regions compared to that in damp, humid areas containing more water vapor.) And if there are more than two materials being mixed (for example, pouring hot coffee into a room-temperature cup and then pouring cold cream into the coffee), the same principle applies—the total heat gained (in the coffee case, the heat gained by both the cup and the cream) is set equal to the total heat lost (in this case, by the hot coffee)—and the resulting equation can then be solved for the final equilibrium temperature of the mix. As we shall see in Chap. 3, the Second Law of Thermodynamics imposes a direction on natural processes, from order to disorder.
Mixing hot water with cold water, as discussed here, is a good example of this directionality in nature: before mixing, there is order in the sense that the hot and cold water are separate and distinct (hot water here and cold water there), but after mixing, separation and distinction—and hence order and organization, both spatially and energetically—have been lost. Although it is easy to mix the hot and cold water, the Second Law tells us that it is highly unlikely for the mixture to spontaneously on its own separate into hot and cold water (try it—you’d have to wait longer than the age of the universe—a very, very long time—to see that happen). Note that the information we have about the water decreases when the hot and cold water are mixed: before mixing we knew where the hot water is and where the cold water is, and we knew the distinctly different energies of the two, but after mixing we’ve lost that information—and, importantly, also lost the ability of the flow of heat to do work. We’ll learn more about the Second Law and information theory in Chap. 5, where we discuss more implications of this subtle but sublimely important law of thermodynamics.
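For readers who like to see the bookkeeping spelled out, here is a minimal sketch (not from the book) of the calorimetry calculation in Example 2.1: setting the heat lost by the warmer materials equal to the heat gained by the cooler ones makes the equilibrium temperature a weighted average of the initial temperatures, each material weighted by its mass times its specific heat. The masses and the cup and cream specific heats in the last example are illustrative assumptions only.

```python
# Minimal calorimetry sketch (illustrative, not from the book):
# with no heat exchanged with the surroundings, heat lost by warm materials
# equals heat gained by cool ones, giving a weighted-average equilibrium temperature.

def equilibrium_temperature(parts):
    """parts: list of (mass_kg, specific_heat_J_per_kg_degC, initial_temp_degC)."""
    total_heat_capacity = sum(m * c for m, c, _ in parts)
    return sum(m * c * t for m, c, t in parts) / total_heat_capacity

WATER = 4186.0  # J/(kg·°C), approximate specific heat of liquid water

# Equal amounts of hot (30 °C) and cold (10 °C) water: the simple average, 20 °C
print(equilibrium_temperature([(1.0, WATER, 30.0), (1.0, WATER, 10.0)]))  # -> 20.0

# Twice as much hot water as cold water: 23.33... °C, the book's 23⅓ °C
print(equilibrium_temperature([(2.0, WATER, 30.0), (1.0, WATER, 10.0)]))  # -> 23.333...

# Three materials (hot coffee, room-temperature cup, cold cream); all numbers
# here are assumed for illustration, including the cup and cream specific heats.
print(equilibrium_temperature([
    (0.25, WATER, 85.0),   # coffee, treated as water
    (0.30, 900.0, 20.0),   # ceramic cup (assumed specific heat)
    (0.03, 3800.0, 5.0),   # cream (assumed specific heat)
]))
```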

In Newton’s time, many, including Newton himself, regarded heat as we do today: motion of a body’s “insensible parts.” Anglo-Irish natural philosopher (as scientists of the time were called) Robert Boyle (1627–1691, of Boyle’s Law fame) and followers of his corpuscular philosophy maintained that heat is the “vehement and intestine commotion of the parts [of a heated body] themselves.” A similar interpretation was offered by Newton’s friend, John Locke, whose words introduce this chapter. Newton’s nemesis Robert Hooke (1635–1703) concurred: “Heat is a property of a body arising from the motion or agitation of its parts…. Nothing else,” he concluded, “but a very brisk and vehement agitation of the parts of a body.” For the French philosopher René Descartes (1596–1650; see Fig. 6.1), the world and everything in it was reducible to its machine-like mechanical essence—a universe of matter in motion—with heat, for example, explained not as an inherent “quality” of matter, but as the result of the violent motion of corpuscles (“little bodies”), much as we understand it today in its modern mechanical formulation. Unlike the ancients, who rarely quantified the world around them, seeking instead explanations in terms of qualitative essences, for Descartes, a difference in “hotness” was attributed not qualitatively to a difference in kind , but rather quantitatively to a difference in degree. All of this was not very different from how the ancient atomists understood heat as the motion of the indivisible and invisible particles constituting common matter. But none of these arguments was made quantitative, and so most regarded heat—Lavoisier’s Calorique and Aristotle’s “Element of fire”—as one of the many imponderable fluids, along with light, electricity, magnetism, and even gravity, imagined to flood Newton’s mechanical universe. At the beginning of the nineteenth century, just when geologists were speculating on the role of Earth’s internal heat as an agency of geological change, the nature of heat itself was an unresolved issue frequently the focus of rather heated debate. It was at this time that the concept of caloric was seriously questioned by the expatriated American Benjamin Thompson, Count Rumford (1753– 1814), the man who, in an interesting turn of events, married Lavoisier’s widow after the Reign of Terror decided it didn’t need scientists—and scientists didn’t need their heads. An English Heritage London “blue plaque” (Fig. 2.3) is affixed to a building not far from the Victoria and Albert Museum honoring “Sir Benjamin Thompson, Count Rumford” as an “Inventor and Adventurer.” Adventurer indeed! A Royalist and Tory spy in the American colonies, in 1776 Thompson abandoned his wife and infant daughter for England and was later knighted in recognition for his services to the King when he returned to America to command a regiment of royal forces. He was elected

Fig. 2.3 An English Heritage “blue plaque,” one of nearly a thousand commemorating renowned residents of London including several scientists featured in our story (Newton, Joule, Kelvin, Maxwell, and Darwin), this one honoring “Sir Benjamin Thompson, Count Rumford” as an “Inventor and Adventurer.” The town of Rumford, New Hampshire—now known as Concord—was the hometown of his first wife. Count Rumford was an effective social reformer, laying the foundations for the modern welfare state and public schools. He reformed prisons, established orphanages, and preached the nutritional value of the potato. He even found time to establish a large park and beer garden in Munich, Germany. U.S. President Franklin Roosevelt rated him, with Thomas Jefferson and Benjamin Franklin, one of the “three greatest intellects America ever brought forth.” (Photograph by the author)

Fellow of London’s prestigious Royal Society on the basis of experiments he conducted on the explosive force of gunpowder, received the Society’s Copley Medal in 1792, and instituted the Rumford Medal, one of the highest honors conferred by the Royal Society, chartered the previous century by England’s King Charles II and still the foremost scientific society in Britain. Later, after immigrating to Munich, he was made Count of the Holy Roman Empire in recognition of his service as Minister of War and of the Interior to the Elector of Bavaria, making him the most powerful man in the realm after the Elector himself. He returned to England in 1798 and was instrumental the following year in founding London’s Royal Institution, still today a premier place for science education and research. In 1804 he moved to Paris and soon after lost his heart to Marie Lavoisier after her husband lost his head to the guillotine. In a letter to England’s Lady Palmerston, Rumford wrote, “I think I shall live to drive caloric off the stage as the late Lavoisier drove away phlogiston. What a singular destiny for the wife of two philosophers!” But the marriage
generated more heat than the boring of Bavarian cannon and did not last. A creative and imaginative man, his long list of inventions includes thermal underwear, a drip coffee maker, the pencil eraser, the candlepower unit for measuring the intensity of light, the modern kitchen range, the convection oven (the “Rumford Roaster”), the pressure cooker, and the double boiler. Not a bad count for “the most successful Yank abroad, ever” according to a feature article in the December 1994 Smithsonian Magazine. The Count’s careful observations, published in the London Royal Society’s Philosophical Transactions in 1798 under the title “An Inquiry concerning the Source of the Heat Which is Excited by Friction,” clearly contradicted the caloric theory of heat:

Being engaged, lately, in superintending the boring of cannon, in the workshops of the military arsenal at Munich,2 I was struck with the very considerable degree of Heat which a brass gun acquires, in a short time, in being bored; and with the still more intense Heat (much greater than that of boiling water as I found by experiment) of the metallic chips separated from it by the borer.

The quantity of heat in a bored barrel seemed inexhaustible. The Count continues:

2 “It was by accident,” Thompson admitted, “that I was led to make the experiments” he reported to London’s august Royal Society. The pleasure and excitement he took in doing science, and the hope he expressed for understanding “the mechanism of Heat,” the effects of which, he predicted, “are probably just as extensive, and quite as important, as those” of gravitation, are very apparent: “The result of this beautiful experiment was very striking, and the pleasure it afforded me amply repaid me for all the trouble I had had in contriving and arranging the complicated machinery used in making it…. … although the mechanism of Heat should, in fact, be one of those mysteries of nature which are beyond the reach of human intelligence, this ought by no means to discourage us or even lessen our ardor, in our attempts to investigate the laws of its operations. How far can we advance in any of the paths which science has opened to us before we find ourselves enveloped in those thick mists which on every side bound the horizon of the human intellect? But how ample and how interesting is the field that is given us to explore! … there is no doubt but its operations are, in all cases, determined by laws equally immutable [as those governing gravity].” Many others before him expressed the intense pleasure and excitement—prime motivational movers—in doing science. Long ago, the Alexandrian astronomer Claudius Ptolemy (ca. 90–170) expressed the joy of contemplating the heavens: “Mortal as I am, I know that I am born for a day, but when I follow the serried multitude of the stars in their circular course, my feet no longer touch the Earth; I ascend to Zeus himself to feast me on ambrosia, the food of the gods.” More recently, the American theoretical physicist and Nobel laureate Richard Feynman reportedly reminded the world why scientists do science: “Physics is like sex: sure, it may give some practical results, but that’s not why we do it.” Calling science “the great adventure of our time,” Feynman reminds us that it “is not done for the sake of an application. It is done for the excitement of what is found out” [2, p. 174].

By meditating on the results of all these experiments, we are naturally brought to that great question which has so often been the subject of speculation among philosophers; namely, What is Heat? Is there any such thing as an igneous fluid? Is there anything that can with propriety be called caloric?

“Anything which any insulated body … can continue to furnish without limitation,” the Count concluded cannot possibly be a material substance; and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of anything capable of being excited and communicated in the manner the Heat was excited and communicated in these experiments, except it be Motion.

Indeed, there seemed to be no end to the amount of heat produced by friction. (Rub your hands together and notice the heat produced. You’re either full of caloric or there’s something else going on in the palms of your hands.) Heat, he surmised, was “nothing but a vibratory motion taking place among the particles of the body.” Furthermore, although not appreciated at the time by Thompson, his observation of the ease of producing prodigious amounts of heat from the mechanical grinding of metal portends the inherent directionality of energy transformation in nature—from a more ordered form (mechanical grinding) to a more disordered form (heat)—a directionality later appreciated and finally formulated as the Second Law of Thermodynamics. But if heat is motion, what is moving and how does it move? Thompson didn’t know and, sounding much like the great Newton a century earlier when queried on the ultimate cause of gravity, he did “not presume to trouble the [reader] with mere conjectures.” Of course, he couldn’t know: his work preceded the acceptance of the idea that matter is composed of atoms. Nevertheless, for making the connection between work (produced by boring cannon) and heat, he anticipated by half a century what became known as the First Law of Thermodynamics (discussed in detail in Chap. 3), and he is thus rightly recognized as one of the principal prophets of the science. And so, for lack of an alternative conceptual scheme—not to mention inherent problems such as explaining the transfer of heat from the Sun to Earth on the assumption that heat is a mode of motion of the corpuscles of gross matter—caloric persisted even as British chemist Humphry Davy’s electrochemical experiments challenged the fluid theory of electricity early in the nineteenth century, and nearly half a century passed before a consensus was reached regarding heat as a mode of motion on the molecular microscale. Caloric would eventually pass into history just as its much-too-similar predecessor, phlogiston, had previously vanished into the nothingness
it was, when it was finally appreciated that heat is not a specific chemical substance but rather a general physical process; in this case, a form of energy. Just as a century earlier Boyle had been frustrated in his attempt to explain chemical phenomena in terms of his physical corpuscular theory, Lavoisier, in turn, although immensely successful in analyzing chemical processes by the methods of experimental physics, as was the style in eighteenth-century science in the manner set out in Newton’s Opticks, was immanently unsuccessful when he tried to force a chemical explanation onto a purely physical process. Only after the discovery of the electron and the development of a quantum theory of chemical bonding in the twentieth century would a reconciliation of chemistry and physics become possible. In the meantime, as we shall see, the study of the relation between heat and motive power advanced under the then-popular caloric theory of heat. Our early ancestors stumbled easily on to fire and the heat it produced—the unrestrained and uncontrollable tumbling out of energy due to random, chaotic motion at the molecular level—but we needed many millennia to harness its power in heat engines. “Energy,” the English Romantic poet, painter, and printmaker William Blake famously declared in the late eighteenth century, “is Eternal Delight.”

References

1. H. C. von Baeyer, Warmth Disperses and Time Passes: The History of Heat (Modern Library, New York, 1999; orig. publ. as Maxwell’s Demon, Random House, 1998)
2. M. Feynman (ed.), The Quotable Feynman (Princeton University Press, Princeton and Oxford, 2015)

3 The Laws of Thermodynamics

Summary Using a minimum of mathematics, the science of thermodynamics is developed in its historical context, focusing on the concepts of energy and its availability in thermodynamic processes. Whereas the First Law of Thermodynamics, a statement of energy conservation, tells us what’s possible, the Second Law of Thermodynamics, governing energy transformation (“entropy”) and availability, tells us what’s probable. The contributions of Julius Robert Mayer, Hermann von Helmholtz, and Rudolf Clausius in Germany, James Prescott Joule and William Thomson, later Lord Kelvin, in Britain, Sadi Carnot and Émile Clapeyron in France, and J. Willard Gibbs in America, all working in the nineteenth century, are developed with an eye to understanding the various manifestations and intricacies of the Second Law, including the inherent inefficiency of converting heat into useful mechanical energy, and noting, in particular, a natural direction for change in the universe—an “arrow of time”—marked by an ever-decreasing availability of useful energy as measured by ever-increasing entropy. Although the total amount of energy remains fixed (First Law), energy tends to transform itself—to dissipate—into less useful forms (Second Law). Quantitative examples and a discussion of the “heat death” of the universe illustrate these important thermodynamic principles.


Die Energie der Welt ist constant. Die Entropie strebt einem Maximum zu. —German physicist Rudolf Clausius (1865)

With this catchy couplet, the German physicist Rudolf Clausius (1822–1888; Fig. 3.1) closed his 1865 “Ninth Memoir: On Several Convenient Forms of the Fundamental Equations of the Mechanical Theory of Heat,” summarizing what he had learned about the new science of thermodynamics—and what we today refer to as the Laws of Thermodynamics: the energy of the universe is constant (a generalization of the First Law), and entropy (his term, a measure of the availability and usefulness of energy) tends towards a maximum (Second Law), which together he labeled “fundamental laws of the universe.” Actually, there are two other laws of thermodynamics: the Third Law, which states that a system’s entropy approaches a constant value as the temperature approaches absolute zero; and the Zeroth Law, an afterthought named after the first three were established to indicate its fundamental status, forming the basis for a definition of temperature by stating that if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other, meaning all have the same temperature. The Zeroth Law essentially empowers the principle of a thermometer, an instrument used to measure temperature.1 Some have argued that the Third Law, which, unlike the other three laws, doesn’t introduce a fundamental thermodynamic quantity (Zeroth Law, temperature; First Law, energy; and Second Law, entropy), is not a law of 1

Though Galileo Galilei (1564–1642; born the same year as William Shakespeare, Snow’s representative of the humanities side of the “two cultures”; recall Fig. 1.1) is often credited with the invention of the thermometer, in reality several individuals contributed to the development of this basic but important scientific instrument over time. I have a so-called “Galileo thermometer”—a glass tube filled with a clear liquid (mainly water) and four different colored bulbs having slightly different weights that rise and fall as the temperature (and hence density) of the water changes—a pretty device more common today than it is accurate. Although named after him, Galileo didn’t invent this thermometer, but he very likely did invent the so-called thermoscope, a similar but simpler device consisting of a glass tube in which liquid rises or falls as the temperature (and hence volume) of the air contained in a large glass bulb at the top of the tube decreases or increases. He did, however, discover the principle on which his bulb thermometer is based: that the density of a liquid—and hence the upward buoyant force on the bulbs—changes as its temperature changes. The seventeenth century witnessed the invention and implementation of six new and important scientific instruments—the telescope, the microscope, the pendulum clock, the air pump, the barometer, and the thermometer—each of them traced in some way to Galileo, the telescope, of course, being the most notable. All of them helped quantify and extend our qualitative and limited senses and surroundings, bringing science into a new and robust “instrumental” phase. An excellent recent review of the life and science of Galileo is John Heilbron’s Galileo (Oxford University Press, 2010).


Fig. 3.1 German physicist Rudolf Clausius, one of the founders of the science of thermodynamics. (Wikimedia Commons, public domain)

In any case, it constrains only the low-temperature behavior of thermodynamic systems. Phenomenologically, it implies that it is not possible to reach the absolute zero of temperature in a finite number of steps, as this would require an ever increasing, and ultimately infinite, amount of work to remove energy from an object as its temperature gets ever closer to absolute zero, much like, as Albert Einstein showed in his theory of relativity, an ever increasing, and ultimately infinite, amount of work would need to be done on an object to accelerate it up to light speed, the absolute speed limit in the universe just as zero is the absolute temperature limit. (Note that infinite implies endless and limitless in size and amount: however big you imagine it to be, it's much, much bigger than that.) In any case, these two laws are qualitative statements only and hence of less importance in thermodynamical calculations—"much ado about nothing," Shakespeare would say (to tie that "other" culture into our discussion)—than
are the first and second laws, both of which, as we shall see, are expressed quantitatively.2

Unlike many laws in science, which are often named for their discoverer ("Newton's Laws" comes immediately to mind), none of the laws of thermodynamics are so named, a situation that betrays the multitude of individuals involved in their formulation. The science of thermodynamics, the study of the relations between heat and work—the "power of heat"—was one of two new fields of physics introduced in the nineteenth century, the "century of physics" when, to paraphrase the title of a recent book on the subject [1], physics was king of the sciences.3 The other new guy on the physics block was electromagnetism, the study of electricity and magnetism, now recognized as one of the four fundamental forces of nature along with the gravitational force and the weak and strong nuclear forces. Elements of Romanticism, an intellectual movement originating in Europe toward the end of the eighteenth century as a reaction against the heartless materialism of the Age of Reason and Enlightenment, and the industrialization that followed, were central to the birth of both of these new disciplines, particularly the "feeling" that an underlying unity pervaded all of nature. (The Romantics, with hearts turning against heads, stressed feeling over coldhearted calculation.)4 The words themselves betray a unity between heat and motive power in the case of thermodynamics, and between electricity and magnetism in the case of electromagnetism.5

2 And so, to put them in proper chronological—if confusing—order, the Second Law was first, and the Zeroth Law was last. The First Law was second, and the Third Law might not even be a law. Got it?

3 Motivated by the rapid advances that are occurring in the related fields of genetics and molecular biology, former U.S. President Bill Clinton remarked at the end of the twentieth century that whereas the past 50 years had been the age of physics, the next will be "very likely characterized predominately as the age of biology." Indeed, according to a U.S. Department of Energy Office of Science website on the Human Genome Project, "Rapid progress in genome science and a glimpse into its potential applications have spurred observers to predict that biology will be the foremost science of the twenty-first century." Already in 1963, the American Nobel-laureate physicist Richard Feynman declared "the science of biology" to be "the most active, most exciting, and most rapidly developing science today in the West." Science historian Vassiliki Betty Smocovitis, writing early in the new century in the New Dictionary of the History of Ideas (Charles Scribner's Sons, 2004, vol. 2, pp. 220–226), summarized the situation:

Whereas the physical sciences and their applications dominated science for much of the history of science, the biological sciences now dominate both popular and scientific discussions, especially after the discovery of the structure of DNA in 1953. Viewing the revolution precipitated by the applications of biology to society at the closing of the twentieth century, many commentators anticipate that the new century will be the century of biology.

4 Projected to the distant future in a galaxy far, far away, Luke Skywalker is encouraged by Obi-Wan Kenobi to "stretch out with your feelings"—to "feel the force," not to calculate it. "Feel it; don't think," Qui-Gon Jinn advised pod racer Anakin Skywalker in the prequel Star Wars I: The Phantom Menace, all reactions against the cold, calculated logic of Star Trek's Mr. Spock. "I feel; therefore, I am," proclaimed the eighteenth-century French-Swiss moralist and political philosopher Jean-Jacques Rousseau, in a deliberate play on the seventeenth-century French philosopher René Descartes's famous rationalist maxim, Cogito ergo sum (I think; therefore I am). For more about the influence of Romanticism on the sciences, see, for example, Cunningham and Jardine's Romanticism and the Sciences (Cambridge University Press, 1990) and Tim Fulford's comprehensive five-volume edited series Romanticism and Science (London: Routledge, 2002).


And so, associated by the ancient Greeks with the element of fire, one of four fundamental substances along with earth, water, and air thought to compose the terrestrial realm (leaving Aristotle's quintessence as the "fifth essence" comprising all things celestial), and more recently believed to be the imponderable fluid called caloric, in the nineteenth century the concept of heat was finally fitted into the theoretical framework of thermodynamics as a form of energy transferred due to a difference of temperature or a change of phase (for example, as liquid water freezes to solid ice or evaporates to gaseous steam, or as ice melts or steam condenses).6 Although scientific theories come and go as we get better at understanding the world around us, Einstein believed that thermodynamics, as it was originally conceived a century earlier, "is the only physical theory of a universal content which I am convinced that within the framework of the applicability of its basic concepts, it will never be overthrown" [2, p. 33], a sentiment shared by nearly all scientists today (but see, however, V. Čápek and D. P. Sheehan, Challenges to the Second Law of Thermodynamics: Theory and Experiment, Springer, 2005).

5 More recently, continuing the Romantic quest for unity, the weak nuclear force, an interaction between subatomic particles responsible for the radioactive decay of atoms, and the electromagnetic force have been unified into one force, the electroweak interaction, a label that, again, reveals the underlying unification (see, for example, Helge Kragh, Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, 1999, pp. 339–344). The grand goal of physics is a grand unification of all known forces. Einstein spent the last 30 years of his life trying (unsuccessfully) to unify gravitation and electromagnetism. The quest continues.

6 Thus our word "calorie" as a measure of the energy content of food, which, to be precise, should in that case be written with an uppercase "C" to distinguish it from its lowercase cousin, one thousand of which equal one Calorie (i.e., 1000 cal = 1 Cal). The typical dietary energy intake is about 2,000 Cal, although this is almost always (wrongly!) written as 2,000 cal (no one could survive on a mere 2,000 cal, which are only 2 Cal of food energy). For a concise survey of the history of these developments, see the referenced works in "Further Reading" by Chang and by Darrigol and Renn in The Oxford Handbook of the History of Physics (Oxford University Press, 2013). Concerning the ancient Greek belief that all things are made of some particular combination of four terrestrial and one celestial fundamental substance, it is of interest to point out that each of those has its modern correspondent in the so-called states of matter: solid = earth; liquid = water; gas = air; plasma (a highly energized gas) = fire; and dark matter/energy (not yet understood, but together making up 96% of the universe) = quintessence.


3.1 Energy and the First Law of Thermodynamics: Energy Conservation, or What's Possible

… the phenomena of nature, whether mechanical, chemical, or vital, consist almost entirely in a continual conversion of attraction through space, living force, and heat into one another. Thus it is that order is maintained in the universe—nothing is deranged, nothing is ever lost…. —English physicist and brewer James Prescott Joule (1847)

Joule’s conclusion is one of the earliest statements of one of the most important ideas in physics—indeed in all of science—the conservation of energy: the total energy in an isolated system remains constant.7 (Never mind that he mistakenly thought that “order is maintained in the universe”!) Others, including the German physician Julius Robert Mayer (1814–1878), a man with neither formal training in physics nor academic connections, and that giant of German physiology and physics, Hermann von Helmholtz (1821–1894; Fig. 3.2), were interested in the relationship between physiological processes and heat, the latter being the hot topic of the day. Wondering specifically how food is processed to enable the body to do work, both arrived at the same conclusion that energy (called “force” at the time), as Mayer reported in 1842, “once in existence cannot be annihilated; it can only change its form…. Energies are … indestructible, convertible entities.” Searching for a causal connection between motion and heat, his was the first articulation of the conservation and equivalence of all forms of energy following his analysis of “the quantity of heat which corresponds to a given quantity of motion or falling force.” “If, for example,” he pointed out, “we rub together two metal plates, we see motion disappear, and heat, on the other hand, make its appearance, and we have now only to ask whether motion is 7 Note that “conservation of energy” as used here is a fundamental principle of physics, a law of nature, not to be confused with the laudable practice of conserving energy for the benefit of the Planet—and one’s bank account—in a quest for energy efficiency in the face of energy shortages and expenses. In 1905, as part of his theory of relativity, Albert Einstein (1879–1955) demonstrated the equivalence of energy and matter, expressed in what is arguably the most famous and recognizable equation in all of science: E = mc 2 , the energy of a body is equal to its mass times the square of the speed of light, at 300,000 km/sec (186,000 miles/sec), a very large number, so that even a very small amount of mass contains a substantial amount of energy (see, again, for example, Helge Kragh, Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, 1999, pp. 90–93). Thus, we now speak of “conservation of mass-energy,” a generalization of the principle of conservation of mass (as in chemical reactions) first established in the eighteenth century by the French chemist Lavoisier (recall Fig. 2.2), and thus yet another milestone in the ongoing Romantic pursuit of unity in the universe.


"The vibratory hypothesis of heat," he concluded, "is an approach towards the doctrine of heat being the effect of motion." Nevertheless, Mayer's work, awash with archaic language and metaphysical musings, was ridiculed or simply ignored, and he sank into insanity out of extreme frustration. In 1847 Helmholtz, just 26, demonstrated mathematically in the language of the lissome differential equations of classical mechanics—something Mayer had not quite done, and Joule had never attempted—the extent of the conservation law in a variety of scientific disciplines including mechanics, heat, electricity, magnetism, physical chemistry, and astronomy:

Fig. 3.2 Marble statue of Hermann von Helmholtz outside Berlin’s Humboldt University. (Photograph by the author)


From … investigation of all the … known physical and chemical processes, we arrive at the conclusion that nature as a whole possesses a store of [energy] which cannot in any way be either increased or diminished; and that, therefore, the quantity of [energy] in nature is just as eternal and unalterable as the quantity of matter. Expressed in this form, I have named the general law ‘The principle of the conservation of [energy].’

(I have replaced Helmholtz’s word “force” with our modern word “energy.” The two are related but different: forces do work when they move things, and energy is the ability to do work.) Here was the mathematical foundation to complement Joule’s empirical findings. Where Joule had been concerned with understanding the production of work in the steam engine, Helmholtz and Mayer focused on the work done by the human engine. That all arrived at the same conclusion of energy conservation—the Englishman through characteristically British Baconian experimentation (see Fig. 6.1); the Germans, particularly Mayer, by drawing universal conclusions from sometimes slender evidence—speaks to its universality and hence fundamental importance. At the 1854 Liverpool meeting of the British Association for the Advancement of Science, William Thomson (1824–1907; later Lord Kelvin; Fig. 3.3), Victorian Britain’s most honored scientist,8 declared that Joule’s discovery a decade earlier of the conversion of mechanical energy into heat by fluid friction, the empirical foundation of the new energy physics, had “led to the greatest reform that physical science has experienced since the days of Newton.” In a fitting tribute to Thomson after his death in 1907, Sir Joseph Larmor, the Cambridge Lucasian Professor of Mathematics, a titled shared by the likes of Isaac Newton and Stephen Hawking (1942–2018), pronounced energy “the most far-reaching achievement of nineteenth-century physical science.” And in his sweeping survey of the history of nineteenth-century science appearing at the beginning of the twentieth century, Theodore Merz concluded that one “of the principal performances of the second half of the

8 And engineer. At midcentury he played an instrumental role in the first successful laying of the Atlantic cable that allowed telegraphic communication between Britain and the United States, for which he was knighted in 1866. A child prodigy, in the course of his long and productive career he wrote 661 papers and was granted 69 patents (from which he became abundantly wealthy) for various gadgets including signal-boosting devices that allowed telegrams to be sent across oceans, and for special compasses suitable for iron ships. In 1892 he became the first British scientist to be raised to the peerage, becoming Lord Kelvin of Largs, after a small river that runs beside Glasgow University—this, along with the honor of his Westminster Abbey entombment next to famed physicist Isaac Newton, not for his scientific achievements, although he dominated British physics in the second half of the nineteenth century, but rather for his impact on Victorian Britain through his successful application of science to technology, including the technology behind home appliances: through most of the twentieth century, a leading refrigerator brand was “Kelvinator.”


The famous French mathematician Henri Poincaré suggested that, rather than give up the principle of energy conservation, we should invent new forms of energy to save it! Indeed, Austrian physicist Wolfgang Pauli's "invention" of the neutrino in 1930 to "save" energy conservation during beta decay—a type of radioactive decay that converts a proton into a neutron or vice versa—is a good example of this modern version of "saving the phenomena." In our time, science historian Robert Purrington pronounced energy conservation "[s]upreme among the global principles of physics … a sine qua non of virtually every area of physical science in the twentieth century" [3, p. 102]. Clearly, a fundamental reformulation of science occurred which redefined physics itself as the study of energy and its protean transformations. Conservation laws, science's most fundamental statements—and there are many, including conservation of electric charge—are, in a very real sense, nature's balance sheet, a universalization of accountancy in the Book of Nature.9

A wealthy brewer's son from industrialized Manchester, Joule (1818–1889; Fig. 3.4) turned from his early interest during the late 1830s in designing and building electric engines to more general issues concerning the relationship between heat and work. With his background in the brewing business, Joule had access to just the kind of sophisticated thermometric apparatus required to quantify this relationship (Fig. 3.5)—the "mechanical value of heat," with "value" understood in both its numerical and economic sense, emphasizing the economic motivation behind Joule's experiments during this period of industrialization. Joule's original interest in heat was fired by the hope, realized after his lifetime, that electric motors would replace the steam engines powering England's factories, although his early experiments, showing that electrical energy is dissipated as heat in conductors—"Joule heating," we call it, and you can see it in the glowing heating elements of toasters and ovens—suggested otherwise.

9 The principle of energy conservation is rooted in earlier appreciations of conserved or interconverted physical quantities. In the seventeenth century, the German polymath Gottfried Wilhelm Leibniz, co-discoverer with Newton of the calculus, was discussing the conversion of vis viva (motion energy—literally "living force"—defined as mass times the square of speed, twice what we today call kinetic energy) into what would later be called gravitational potential energy and, in the case of impacting bodies, into heat; the seventeenth-century Dutch mathematician, physicist, engineer, astronomer, and inventor Christiaan Huygens recognized the conservation of vis viva in elastic (perfect "bounce") collisions. Descartes considered conservation of momentum (mass times velocity) as a reflection of God's role as Sustainer. The Italian physicist Alessandro Volta's invention of the battery ("voltaic pile") in 1799 demonstrated the conversion of chemical energy into electricity, and Humphry Davy's follow-up experiments with electrolysis established the reverse transformation. In 1800 the German-born English astronomer William Herschel, discoverer of the planet Uranus in 1781, illustrated the relation between light and radiant heat (invisible infrared radiation), both of which were known to accompany electricity. And the mutual interconvertibility—the "correlation of forces"—of electricity, magnetism, and motion was confirmed early in the nineteenth century by a handful of investigators including the French physicist André-Marie Ampère, the Danish scientist Hans Christian Oersted, and the English chemist and physicist Michael Faraday, the latter two heavily influenced by the Romantic notion of an underlying unity in all of nature.


Fig. 3.3 Photograph of William Thomson, Baron Kelvin, later in life. (Wellcome Collection, attribution 4.0 International CC BY 4.0)

powering England’s factories, although his early experiments, showing that electrical energy is dissipated as heat in conductors—“Joule heating,” we call it, and you can see it in the glowing heating elements of toasters and ovens—suggested otherwise. Today we use, as Joule did in an 1850 paper titled “On the Mechanical Equivalent of Heat,” the word “equivalent” in place of “value” to emphasize the equivalence of the two forms of energy, mechanical and heat. Adopting the mean result of thirteen experiments, Joule announced in 1843, just one year after a similar investigation by Mayer, that the quantity of heat capable of increasing the temperature of a pound of water by one degree of Fahrenheit’s scale [later defined as one British Thermal Unit or Btu of heat energy] is equal to, and may be converted into, a mechanical force


Fig. 3.4 James Prescott Joule at age sixty-four. (Wikimedia Commons, public domain)


As a basic measure of engine performance, work was defined as the product of weight through lifted height; it did not appear as the independent dynamical quantity we know today—the integral of force over displacement, essentially force times the distance over which the force in that direction acts—until later in the century. Joule's experiments convinced him of the interconvertibility and conservation of "force"—what we today call energy; energy, not force, is the conserved quantity, and the gradual appreciation of the difference between the two was itself important to the discovery of conservation of "energy."10 Arguing for the reality of conversion and conservation processes in nature, Joule was wholly "satisfied that the grand agents of nature are, by the Creator's fiat, indestructible; and that whenever mechanical force is expended, an exact equivalent of heat is always obtained."

10 Thomson in 1849 was the first to use the term energy in the new and precise mathematical sense as we know it today: a conserved quantity denoting the ability to do work (which, in turn, is done when a force moves something). Over the next several years, he and his energetic collaborators, in particular the Scottish mathematical physicist and early pioneer in thermodynamics, Peter Guthrie Tait (1831–1901), developed a completely new way of doing physics based on the concept of energy, not force, as a generalization of Thomson's dynamical theory of heat. First-year physics students today learn to appreciate the importance of energy conservation in understanding the operations of nature ranging from the subatomic to the universe at large.


Fig. 3.5 Joule’s famous “paddlewheel experiment” apparatus used to determine the mechanical equivalent of heat, the foundational principle of the First Law of Thermodynamics, on display in London’s Science Museum, in which a paddlewheel placed in a thermally insulated container of water is made to rotate by falling weights, thereby increasing the temperature of the water as motive force is transformed into heat. With thermometer in hand while on honeymoon in the French Alps, Joule reportedly attempted (unsuccessfully—and, one would think, unromantically) to measure the expected slightly elevated water temperature at the base of waterfalls, a skill honed years earlier when working at his father’s brewery. In a “Letter to the Editor: On the Existence of an Equivalent Relation between Heat and the ordinary Forms of Mechanical Power” (Philosophical Magazine, 3rd ser., 27, 1845, p. 205), he suggested that “Any of you readers who are fortunate as to reside amid the romantic scenery of Wales or Scotland could, I doubt not, confirm my experiments by trying the temperature of the water at the top and at the bottom of a cascade. If my views be correct, a fall of 817 feet will … generate one degree of heat, and the temperature of the river Niagara will be raised about one fifth of a degree by its fall of 160 feet.” Joule is honored in having his name attached to the metric measure of energy: 1 joule (abbreviated J), defined as the amount of work done by a force of 1 newton (about a quarter of a pound) acting over a distance of one meter (1 J = 1 N·m), is equivalent to 0.738 ft·lb (i.e., raising 0.738 lb through 1 ft, or, equivalently, raising 1 pound through 0.738 ft) or 0.239 calories, roughly equal to the work done by each beat of the human heart (but just thinking about it requires a lot more energy—brains are prodigious consumers of energy, making it a good idea to discard them when found no longer useful!). The Joule mechanical equivalent of heat, the price of heat in units of energy—set so high by nature that it is utterly amazing that Joule, a master of thermometry, could measure it (one joule being the energy required to heat up one pound of water by barely one thousandth of a degree Fahrenheit)—is usually given as 4.186 J/cal. The use of different units for work and heat concealed their interchangeability. (Photograph by the author)


It was, he and many others believed, "manifestly absurd to suppose that the powers with which God has endowed matter can be destroyed." Thomson associated energy conservation with God's immutability, and many others interpreted it as a sign of a divine creator maintaining the order in the universe; Mayer, who abhorred the doctrine of materialism, was convinced that the immortality of the soul followed from energy conservation. Both men were awarded the Copley medal of the Royal Society of London—as were Joule, Helmholtz, and Clausius, along with other notables from science, including Benjamin Franklin, Charles Darwin, and Albert Einstein—the highest scientific honor bestowed by England and the oldest surviving scientific award in the world, having been given first in 1731. As historians Peter Bowler and Iwan Morus point out, "theological issues to do with God's place in creation were at stake here as well as the more prosaic concern to build more efficient engines" [4, p. 91].11

Having demonstrated that heat can be created and destroyed by mechanical force, a mutual convertibility that supported the theory of heat as a mode of motion energy, Joule argued that the work W produced by a heat engine is obtained from the supplied heat QH and is equal to the difference between this supplied heat and the absolute (positive) value of the exhausted heat |QC| (following the standard sign convention with heat added being a positive quantity and heat extracted being negative—heat being a "signed" quantity—where "| |" is the symbol for absolute value): W = QH – |QC| = QH + QC (Fig. 3.6).

11 As author Hans Christian von Baeyer points out on page 114 of his Warmth Disperses and Time Passes: The History of Heat (and as will make sense once we introduce the Second Law of Thermodynamics), "If the first law was evidence of the permanence of the Creator, Thomson saw the second one as a reminder of the transience of His creations." Continuing with the social dimension of science, at this time—indeed, through most of time—women were judged incapable of the rigorous mental work required for success in science on the basis that the balance believed necessary between energies devoted to mental and to physical exertion ("a healthy mind in a healthy body") was impossible in women's bodies given that their physical energies were meant to be devoted to maintaining their reproductive organs. In particular, the law of conservation of energy was held by many—especially by Victorian English gentlemen—to explain why women should not do science nor indeed should even be educated: only so much energy could be contained in a woman's body, the proper purpose of which was to be directed towards childbirth and nurturing. "The mould in which Providence has cast the female mind, does not present to us those rough phases of masculine strength which can sound depths, and grasp syllogisms, and cross-examine nature," declared the nineteenth-century Scottish physicist David Brewster, whose many achievements included his work on polarized light and his invention of the kaleidoscope. "[T]he ascent up the hill of science is rugged and thorny, and ill-fitted for the drapery of a petticoat," proclaimed the Cambridge University geologist Rev. Adam Sedgwick, he who took Charles Darwin to Wales for a crash course in geology before the young naturalist embarked on the HMS Beagle for his five-year voyage around the world, during which time he formulated his theory of evolution through natural selection to explain the wondrous variety of life on Earth. Although for most of its history, science has been essentially a men's club, having evolved in a "world without women" (to borrow the title of David Noble's book which traces the male dominance of science to our male-dominated Christian clerical heritage that denied women the right to holy orders and hence also to the educational prerequisite for ordination), we've come a long way, baby, from these misogynistic days of old: today a scientist is just as likely to be a woman as a man.


"[I]n the steam engine it will be found that the power gained is at the expense of the heat of the fire." Consequently, the process conserves total energy with the input energy QH equal to the output energy W + |QC|. For example, if 100 units of heat (pick your favorite energy units) is supplied and 80 units of heat is exhausted, 20 units of work will be performed; that is, for QH = W + |QC|, 100 = 20 + 80. Thus did economic principles and concerns inspire physics at about the same time cutthroat Victorian capitalism influenced biology with Charles Darwin's "economy of nature," a dog-eat-dog theory of natural selection explaining the diversity of life in a world where nature is "red in tooth and claw" (to borrow from the English poet Alfred Lord Tennyson's influential In Memoriam), spelled out in his magisterial 1859 On the Origin of Species.

In refrigerators and heat pumps (Fig. 3.6), essentially heat engines run in reverse, the direction of energy flow is reversed as work is done on the engine (to run the compressor) and heat is moved from the cold reservoir—the inside of the refrigerator or the outside air for a heat pump—to the hot reservoir—the heat exchange coils outside the refrigerator or the inside of a building for a heat pump. (As Clausius reminded his readers, "the bodies between which the transfer of heat takes place are to be viewed merely as heat reservoirs, of which we are not concerned to know anything except the temperatures.") A room air conditioner operates on exactly the same principle as a refrigerator: in this case, the refrigerator box becomes a room or an entire building. In each case, you must pay to move ("pump") heat from a lower temperature medium to one of higher temperature—an external agent must do work (you've got to supply energy to the refrigerator, air conditioner, or heat pump to run the compressor)—because, as demonstrated quantitatively later, heat does not flow spontaneously—that is, naturally, without having to be driven by an external agency—from low to high temperature. This directionality in nature is a feature of the Second Law of Thermodynamics.

Clausius expressed the First Law of Thermodynamics (he called it "the first fundamental theorem"), a special case of energy conservation, in differential form (where the "d" denotes the mathematical operation known as differentiation, which takes very small changes—differences—of the quantity following it) by the equation

dQ = dU + dW      (First Law of Thermodynamics).


Fig. 3.6 (left) A schematic diagram for a heat engine showing the mechanical work W performed by the engine as a fraction of the input heat QH from a hot reservoir at temperature TH, with the remaining energy QC being the heat rejected as exhaust heat to a cold reservoir at temperature TC. Note that total energy is conserved: QH = W + |QC|; energy in equals energy out. (right) Refrigerators and heat pumps are essentially heat engines run in reverse, with the direction of energy flow reversed as work is done on the engine (to run the compressor) and heat is moved from the cold reservoir—the inside of the refrigerator or the outside air for a heat pump—to the hot reservoir—the heat exchange coils outside the refrigerator or the inside of the building for a heat pump. Note that heating with a heat pump is more efficient than most other forms of heating (e.g., electric resistive heat strips or fired furnaces), all of which provide at best a one-for-one conversion efficiency (QH = W), whereas a heat pump provides more heat energy (QH = W + |QC|) than is produced by the work W required to run it, the additional heat |QC| coming from the outside environment. Also note that standing in front of an open refrigerator to cool yourself down on a hot day actually heats the room you're in, as the heat delivered to the cooling coils |QH| on the bottom or back of the refrigerator is greater than the heat removed from the inside of the refrigerator |QC| by an amount equal to the work done to run the compressor W. Of course, if the wall behind the refrigerator is cut out, allowing the heat |QH| to flow outdoors, you'll now have an air conditioner—which, if you turn the refrigerator around during winter, becomes a heat pump to heat the house! An air conditioner operates on the same principle as a refrigerator with the refrigerator box replaced by a room or an entire building
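The bookkeeping in Fig. 3.6 is easy to check for oneself. The short Python sketch below simply restates it, using the same illustrative numbers as the text (100 units of heat in, 20 out as work, 80 rejected as exhaust), and shows in passing why a heat pump beats one-for-one resistive heating:

```python
# Energy bookkeeping for the heat engine and the heat pump of Fig. 3.6 (illustrative numbers).
def engine_exhaust_and_efficiency(Q_hot, work_out):
    """Heat engine: Q_hot = W + |Q_cold|, so the exhaust is whatever heat is not turned into work."""
    Q_cold = Q_hot - work_out
    return Q_cold, work_out / Q_hot

Q_cold, eff = engine_exhaust_and_efficiency(Q_hot=100.0, work_out=20.0)   # the 100 = 20 + 80 example
print(f"Engine: exhausts {Q_cold:.0f} units of heat, efficiency {eff:.0%}")

def heat_pump_delivery(work_in, Q_lifted_from_outside):
    """Heat pump: the house receives the work done on the pump PLUS the heat lifted from outdoors."""
    return work_in + Q_lifted_from_outside

delivered = heat_pump_delivery(work_in=20.0, Q_lifted_from_outside=80.0)
print(f"Heat pump: delivers {delivered:.0f} units of heat for only 20 units of work,")
print("compared with at best 20 units from one-for-one resistive heating.")
```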

Here, U denotes the internal thermal energy of the working substance, typically a gas—the total, random, motion energy of all its “internal” particles—which, for an ideal gas (common gases under ordinary conditions), is directly proportional to its absolute (kelvin) temperature, so that an increase
in thermal energy is marked by a corresponding increase in temperature.12 Easily recognized by today’s students of thermodynamics, the First Law states that if you add heat Q to a system, it can change the system’s thermal energy U and/or perform work W , with the sum of the change in thermal energy and work done equaling the heat added, as required by energy conservation (Fig. 3.7): Q = ΔU + W , recalling that the Greek letter Δ (“delta”) denotes the mathematical operation “change” (difference). For example, if 10 units of heat energy are added to a system which then does 3 units of work, the system’s thermal energy increases by 7 units.

12 Heat is energy transferred by virtue of a difference in temperature or change of state of matter (solid, liquid, gas); thermal energy, also called internal energy, is the energy of molecular motion, the motion of the "ultimate particles of matter": indeed, one can define temperature as a measure of the intensity of this internal energy. Thermal energy, like a noun, is energy contained within an object by virtue of its temperature; heat, like a verb, is the flow of thermal energy. An object can transfer heat, but it cannot contain heat; it contains thermal energy. The two quantities are related but different. In our discussion of thermodynamics, unless it is necessary to distinguish the two, we will use the two terms interchangeably as so many loosely but formally incorrectly do in everyday conversation. And just as there is no such thing as heat contained in an object, there is no such thing as work contained in an object: they are both ways of transferring energy, the former via a temperature or phase difference, the latter via the action of forces moving things. Pressure, volume, temperature, and thermal energy all have well-defined values for each state of the system and are therefore called state variables, thermodynamic quantities interrelated by equations of state, such as that for an ideal gas: PV = nRT, where n is the molar abundance (the number of moles) and R = 8.31 J/K·mol is the molar gas constant. Heat and work, on the other hand, are quantities that generally depend on the particular way they are transferred (for example, first adding heat to a gas at constant pressure and then extracting heat at constant volume, versus first extracting heat at constant volume and then adding heat at constant pressure to bring the system to the same final state), and are therefore "path dependent"—dependent on the particular thermodynamic process—unlike a state variable, the change of which is "path independent." For this reason, only the change in internal energy dU in the First Law is an exact differential as the term is used in mathematics; dW and dQ are not exact differentials: the "d" here merely denotes very small changes in the quantity whose symbol follows it. The change of a state variable for a cyclical process—one that returns the system to its initial state—is zero since its value is the same at the initial and final states. Thus, for one complete cycle of operation, ΔU = 0 and the First Law becomes W = Qnet = QH + QC = QH – |QC|, or QH = W + |QC|: energy in = energy out, as shown earlier (recall Fig. 3.6). In his 1854 "Fourth Memoir: On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat," Clausius is very clear about what we today call state variables, pointing out that whereas internal energy (he calls it "interior work") is a state variable, work (Clausius's "exterior work") is not:

… if at every return of [a system] to its initial condition the quantity of interior work is zero, it follows, further, that the interior work corresponding to any given change in the condition of the body is completely determined by the initial and final conditions of the latter, and is independent of the path pursued in passing from one condition to the other…. It is otherwise with the exterior work. With the same initial and final conditions, this can vary just as much as the exterior influences to which the body may be exposed can differ.
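Clausius's distinction can be made concrete with a small calculation: take an ideal gas between the same two states along two different paths and compare the bookkeeping. The sketch below assumes, purely for illustration, a monatomic ideal gas (so that U = (3/2)PV) and round numbers of my own choosing:

```python
# Heat and work depend on the path taken; the change in internal energy does not.
# Monatomic ideal gas assumed, so U = (3/2) P V.  States and numbers are purely illustrative.
P1, V1 = 1.0e5, 1.0e-3    # pascals, cubic meters
P2, V2 = 2.0e5, 2.0e-3

def U(P, V):
    return 1.5 * P * V

dU = U(P2, V2) - U(P1, V1)          # the same for every path between the two states

W_path_A = P1 * (V2 - V1)           # expand at constant pressure P1, then heat at constant volume V2
W_path_B = P2 * (V2 - V1)           # heat at constant volume V1 first, then expand at pressure P2

for name, W in (("A", W_path_A), ("B", W_path_B)):
    Q = dU + W                      # First Law: Q = dU + W
    print(f"Path {name}: W = {W:.0f} J, Q = {Q:.0f} J, change in U = {dU:.0f} J")
# W and Q differ between the two paths; the change in internal energy is identical.
```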


Fig. 3.7 Illustration of the energies involved in the First Law of Thermodynamics. When heat Q is added to a cylinder of gas, the internal thermal energy of the gas increases by an amount ΔU and the gas pushes on the piston doing work W. Note that, in this ideal case (no friction between the piston and the cylinder, no heat radiated from the cylinder, etc.), energy conservation implies Q = ΔU + W. If the piston is prohibited from moving—and hence from doing any work—all of the heat energy is converted into the thermal energy of the gas: Q = ΔU when W = 0. Because the gas particles are moving randomly in a disordered fashion (arrowed dots in the diagram), at any given time only a small fraction will actually push against the piston to do work, with most of the motion energy of the gas “wasted” as gas particles collide with the cylinder wall. As we shall see, the Second Law of Thermodynamics tells us that it is highly unlikely for all of the gas to be moving in an orderly manner directed precisely against the piston, hence the inherent inefficiency of converting (disordered) heat energy into (ordered and hence useful) mechanical energy that can perform work

The First Law, to take just one example many of us have experienced, explains why a bicycle tire pump gets hot when pumping up a tire. In this case, work, a signed quantity like heat, is done on the pump as you compress the air in the cylinder and is therefore negative (work done by a system being positive), and because the compression is rapid there is no time for heat exchange so Q = 0. (A process characterized by no heat exchange is called adiabatic , from the Greek adiábatos meaning “impassable.”) Thus, from the First Law, 0 = ΔU + W , so ΔU = – W is positive (the negative of a negative), which means the internal energy—and hence the temperature–of
the system increases: the pump gets hot. The opposite effect occurs when a gas—like that in a pressurized CO2 cartridge (or inside a refrigerator)— undergoes a rapid expansion, in which case Q is still 0 but W is now positive as work is done by the expanding CO2 gas pushing on the surrounding air, so that ΔU is negative, meaning the temperature drops and the CO2 cartridge becomes noticeably cooler. Large air masses driven aloft by surface features such as mountains or by buoyancy as they heat up and become less dense and therefore rise, expand against the lower atmospheric pressure at altitude and, according to the First Law, therefore cool (like the gas expanding from the CO2 cartridge), often reaching the dew point resulting in cloud formation. Indeed, manifestations of the First Law are all around us.
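For the curious, the size of these temperature swings can be estimated from the standard ideal-gas relation for adiabatic changes, T × V^(γ−1) = constant. The sketch below is a textbook idealization (no friction, no heat leaking through the walls), with round numbers chosen only to illustrate the effect:

```python
# Adiabatic (no heat exchanged) temperature change of an ideal gas: T * V**(gamma - 1) = constant.
def final_temperature(T_initial, volume_ratio, gamma):
    """volume_ratio is V_final / V_initial; gamma is the ratio of specific heats of the gas."""
    return T_initial * volume_ratio ** (1.0 - gamma)

# Bicycle pump: air (gamma of about 1.4) rapidly squeezed to half its volume.
print(f"Pump: 300 K -> {final_temperature(300.0, 0.5, 1.4):.0f} K (noticeably hot)")

# CO2 cartridge: gas (gamma of about 1.3) rapidly expanding to twice its volume.
print(f"Cartridge: 300 K -> {final_temperature(300.0, 2.0, 1.3):.0f} K (noticeably cold)")
```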

Example 3.1 Working with the First Law of Thermodynamics

Calculate the work done and the change in internal energy of an ideal gas when 500 J of heat is added to the gas, doubling the pressure while holding the volume constant. What happens to the temperature of the gas?

Because the gas is held at constant volume, no work is done (W = 0) because the gas doesn't move anything—recall that work is done only when forces move things. From the First Law, Q = ΔU + W = ΔU + 0 = ΔU, so the change in internal energy ΔU = Q = 500 J. Finally, because the internal energy of the gas increases, increasing here by 500 J, its temperature also increases because temperature is proportional to internal energy, which is what "common sense" tells us: just think of heating a gas while keeping the volume constant—it gets hot.
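The same bookkeeping can be written out in a few lines of code. To make the temperature change explicit, the sketch below additionally assumes one mole of a monatomic ideal gas, an assumption the example itself does not need:

```python
# Example 3.1 in code: 500 J of heat added to a gas held at constant volume.
R = 8.31      # molar gas constant, J/(K*mol)
Q = 500.0     # heat added, J
W = 0.0       # constant volume: the gas moves nothing, so it does no work
dU = Q - W    # First Law: Q = dU + W, so dU = 500 J

# For one mole of a monatomic ideal gas, U = (3/2) n R T, so the temperature rise is:
n = 1.0
dT = dU / (1.5 * n * R)
print(f"Change in internal energy: {dU:.0f} J; temperature rise: about {dT:.0f} K")
```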

According to the First Law of Thermodynamics, you can't get something for nothing: if you want a heat engine to perform work, you've got to feed it energy in the form of heat. As they say, there's no such thing as a free lunch. This law of conservation of energy precludes the possibility of ever producing a perpetual motion machine, a device that produces work without consuming energy.13 More generally, the total energy E of an isolated system always remains constant: ΔE = 0, one of the simplest but most powerful equations in all of physics, as even first-year physics students appreciate.

13 In his book on the ten great ideas of science (conservation of energy being number three), author Peter Atkins comments on the timeless scientific folly of the perpetual quest for perpetual motion, the physicist's equivalent of the alchemist's futile search for the "philosopher's stone" to change base metals to gold [5, p. 101]:

The energies of fraudsters, however, do seem perpetual, and all manner of weird machines are still being exhibited and invariably, when analysed or simply dismantled, shown to be fraudulent. We are so confident that energy is conserved that scientists (and patent offices) no longer take claims of its overthrow seriously, and the search for perpetual motion is now regarded as the occupation of cranks.

The French military engineer Sadi Carnot (see Fig. 3.8) was very clear on the impossibility of perpetual motion. Writing in his 1824 pioneering book on the subject of thermodynamics, he addressed the possibility of "an unlimited creation of motive power without consumption either of caloric or of any other agent whatever. Such a creation," Carnot insisted, "is entirely contrary to ideas now accepted, to the laws of mechanics and of sound physics. It is inadmissible." Nevertheless, the quest for unlimited power continues—even if the search for the "philosopher's stone" has not (as a scientist, I've received my share of unsolicited perpetual motion proposals to review). While I don't know how much money gullible investors have wasted on promised perpetual motion machines, it is certain that many practitioners of the aurific art were undoubtedly more adroit at extracting money from the wealthy than gold from earth, perhaps partly explaining why, in the late Middle Ages, Dante put chymists into the eighth circle of Hell alongside counterfeiters and forgers! Fraud was one reason why alchemy—and, more generally, magic—had such a problematic reputation throughout history.


A falling object gains speed and hence kinetic (motion) energy as it loses the potential (stored) energy it possesses by virtue of its height in a gravitational field, always keeping the sum of the two constant (in the absence of air drag, which, if present, also enters into the energy balance sheet). Energy conservation tells us what's possible: any imagined scenario for which energy is not conserved is simply not possible in nature. As author Peter Atkins puts it in his book Galileo's Finger: The Ten Great Ideas of Science [5, p. 105], "Energy is truly the universal currency of cosmic accountancy." Mayer, Helmholtz, and Joule discovered a fundamental truth about heat, but it was not yet the whole truth. As we shall soon see, the Second Law of Thermodynamics warns that you can't even break even: the output work will always be less than the heat energy invested because of the inherent inefficiency of converting (disordered) heat energy into (ordered) mechanical energy; the degree of "disorderliness" of energy lies, as we shall see, at the very heart of the Second Law, nature's "tax" on the First. Like the animals in George Orwell's Animal Farm, where "all animals are equal, but some animals are more equal than others," all forms of energy are equal (First Law), but some forms are more equal than others (Second Law).

And so, motivated mainly by a concern with engines in a period of industrialization, the principle of energy conservation, developed largely independently and nearly simultaneously by several people working from different "ends of the stick" ranging from British engineering interests to German physiology, entered physics formally through the new science of thermodynamics (a term introduced by Thomson in 1849), the science of the motive power of heat. Like the major rivers of the world, thermodynamics is a confluence of many tributaries, one of which, to which we now turn, discharged that imponderable fluid, caloric.



Fig. 3.8 Artist Louis-Léopold Boilly’s portrait of Sadi Carnot in 1813 at age 17 in the traditional uniform of a student of Paris’s l’École Polytechnique. Like many major men of science, including Galileo and Newton, Carnot never married. (Wikimedia Commons, public domain)

3.2 Entropy and the Second Law of Thermodynamics: Energy Availability, or What's Probable

Poet: … how goes the world? Painter: It wears, sir, as it grows. Poet: Ay, that's well known…. —Shakespeare, opening exchange in Timon of Athens

Even Shakespeare in his time had some appreciation for the tendency for the world to run down, an idea at the core of the Second Law of Thermodynamics, "arguably the most important and fundamental idea in the whole of science" according to science writer John Gribbin [6, p. 388]. Writing
in his 1928 The Nature of the Physical World, the British astronomer Sir Arthur Eddington proclaimed that the Second Law held "the supreme position among the laws of Nature" [7, p. 74]. The great Isaac Newton realized this too, writing in the final Query of his 1704 Opticks in reference to the general inelasticity (stickiness) of collisions, which converts motion energy into heat, that "Motion is much more apt to be lost than got, and is always upon the Decay." Nevertheless, the Second Law and the related concept of entropy are still among the most misunderstood concepts in physics—and, perhaps not surprisingly, the Second Law is included as one of the "seven ideas that shook the universe" in a book bearing that title [8].

The first appreciation of what became known as the Second Law of Thermodynamics came out of France, England's perpetual adversary, fully two decades before the First Law was established. In 1824 the French physicist and military engineer Nicolas Léonard Sadi Carnot (1796–1832; pronounced CAR-NO, as in doesn't have a car—final consonants are usually silent in French; Fig. 3.8), a graduate of the prestigious Paris École Polytechnique, published at his own expense a nearly unnoticed slim volume titled Réflexions sur la puissance motrice du feu (full translated title: Reflections on the Motive Power of Fire, and on Machines Fitted to Develop that Power), his only scientific work. Carnot's investigation of the efficiency or "duty"—the amount of work performed for a given amount of fuel—of steam engines, the power plants of industrialization that were spreading from England across Europe, was, of course, motivated by the desire to get as much work as possible out of these engines as cheaply as possible. France, in particular, was hampered by its lack of accessible coal.

"Everyone knows that heat can produce motion." With this opening sentence of his book, Carnot defines the purpose of a heat engine: making heat do work (recall Fig. 3.6). Carnot continues: "That [heat] possesses vast motive-power no one can doubt, in these days when the steam engine is everywhere so well known." But, lamenting the fact that thus far all improvements of the steam engine occurred in a tinkering, haphazard way, Carnot continues:

Notwithstanding the work of all kinds done by steam engines, notwithstanding the satisfactory condition to which they have been brought today, their theory is very little understood, and the attempts to improve them are still directed almost by chance.

Carnot proposes to investigate the science behind the device—analytical French theory following pragmatic British empiricism—inspiring the renowned remark that science owes more to the steam engine than does
the steam engine to science.14 Increased efficiency was both an economic and, inasmuch as the Creator certainly would have designed the natural economy as efficiently as possible, a moral imperative: working towards a better understanding of nature's economy might therefore provide, many believed, a profitable means to improve society's economy. This resonated particularly strongly with the industrial culture of Victorian Britain, especially that of Scottish Presbyterianism which valued hard work and thrift and abhorred dissipation and waste, as well as with the French Republic's technocratic vision of science at the service of the State, all of which sought to maximize efficiency and minimize waste. Convinced that England's superior steam technology contributed to Napoleon's defeat (Carnot mentions England several times throughout his Reflections)—and the consequent loss of the Carnot family's fame and fortune—Carnot threw himself into developing a robust theory for steam engines "[i]n order to consider in the most general way the principle of the production of motion by heat." His basic findings apply to a variety of types of heat engines, "all imaginable heat engines, whatever the working substance and whatever the method by which it is operated," including internal combustion and jet engines—and even to living organisms including ourselves.15

14 For most of human history, the scientist followed in the footsteps of the technologist: until relatively recently, the device typically preceded any underlying understanding—we typically knew "what" before we understood the "how" and the "why." The steam engine predated the science of thermodynamics by more than a century, and the only significant contribution of science to technology before the nineteenth century was, with the possible exception of a nascent chemical industry, Benjamin Franklin's lightning rod. Through the course of the nineteenth century, however, this longstanding temporal ordering of technology typically preceding the associated sciences was inverted, as underlying principles more often came to be understood before they were applied. To take just one example, principles of electromagnetism and optics, discovered in the early decades of the nineteenth century, were applied in the electrical and lighting industries where theory was always far ahead of practice. Today, theory and practice are commensally locked in a symbiotic relationship of mutual interdependence as the boundary between science and technology wears thin: science feeds technology, and technology changes how we do science.

15 Carnot emphasizes the importance of the steam engine and the desire to understand its operation in the opening page of his book, some of which is worth repeating here:

… Nature, in providing us with combustibles on all sides, has given us the power to produce, at all times and in all places, heat and the impelling power which is the result of it. To develop this power, to appropriate it to our uses, is the object of heat engines. The study of these engines is of the greatest interest, their importance is enormous, their use is continually increasing, and they seem destined to produce a great revolution in the civilized world. Already the steam-engine works our mines, impels our ships, excavates our ports and our rivers, forges iron, fashions wood, grinds grains, spins and weaves our cloths, transports the heaviest burdens, etc. It appears that it must some day serve as a universal motor, and be substituted for animal power, waterfalls, and air currents. Over the first of these motors it has the advantage of economy, over the two others the inestimable advantage that it can be used at all times and places without interruption. If, some day, the steam engine shall be so perfected that it can be set up and supplied with fuel at small cost, it will combine all desirable qualities, and will afford to the industrial arts a range the extent of which can scarcely be predicted….

Little did Carnot realize that steam engines would, in turn, be joined—and more often replaced—later in the century by electric motors and internal combustion engines (the steam engine being an external combustion engine), and, more recently, by solar power—and by a return to "air currents" with wind energy now again becoming an important source of renewable, environmentally friendly energy.


Carnot was guided by a water wheel analogy. Sadi’s father, the French engineer and aristocrat, and Napoleon’s minister of war, General Lazare Carnot (1753–1823; Sadi’s nephew, Marie François Sadi Carnot, was President of France later in the century), had demonstrated that the amount of work produced by a water wheel is a function of the amount of water and the distance the water fell in making the wheel turn. Just as water is conserved while performing work in a water mill, Sadi argued (incorrectly) that caloric—what we today recognize as heat energy—was conserved, not lost, in a steam engine as it moved from high temperature (as steam) to low temperature (as condensed water), doing work in the process: “The production of motive power [work] in steam engines,” Carnot concluded, “is due not to an actual consumption of caloric, but to its transportation from a warm body to a cold body.”16 And just as water does work by falling from a higher to a lower level in a water wheel, caloric, Carnot proposed, does work in a heat engine by “falling” from a high temperature to a lower temperature: The motive power of a waterfall [une chute d’eau] depends on its height and on the quantity of the liquid; the motive power of heat depends also on the quantity of caloric used, and on what may be termed, on what in fact we will call, the height of its fall (the matter here dealt with being entirely new, we are obliged to employ expressions not in use as yet, and which perhaps are less clear than is desirable), that is to say, the difference of temperature of the bodies between which the exchange of caloric is made.

In the spirit of Platonic idealism (see Fig. 6.1), Carnot argued that the efficiency of a hypothetical idealized heat engine—one that is characterized by reversible changes, that is, infinitesimally slow and small quasistatic changes that can be reversed to go the other way back to the same initial state along the same path of intermediate, well-defined equilibrium states (like a motion picture that can be run forward or backward with equal plausibility)—depends only on the difference in the temperatures of the (Hot) heat source TH and the (Cold) heat sink TC, independent of both the pressure and the particular working substance, and, in what is called "Carnot's Theorem," that the efficiency of any real engine operating between these same two temperatures could never attain this ideal ("Carnot") efficiency eC = C(T)(TH – TC), where C(T) is the "Carnot function" (which later was found to equal 1/TH):


In the waterfall the motive power is exactly proportional to the difference of level between the higher and lower reservoirs. In the fall of caloric the motive power undoubtedly increases with the difference of temperature between the warm and the cold bodies; but we do not know whether it is proportional to this difference.

16 Carnot can't be blamed for believing in the conservation of heat: as first-year physics students know—and as Example 2.1 illustrates—calorimetry experiments involving the mixing of materials of different temperatures confirm that, as a special case of total energy conservation, heat energy is indeed conserved in a thermally insulated environment.

Just as more work is done by water falling through a greater height differential, more work is done by heat—the “fall of caloric”—when it is moved across a greater temperature differential. Note that the Carnot efficiency is proportional to the difference in temperature between the hot and cold reservoirs, T H – T C : as Carnot realized, “wherever there exists a difference of temperature, motive power can be produced,” and the greater the temperature difference, the greater the power produced, all else being the same. Importantly—and this point lies at the very foundation of the Second Law of Thermodynamics, which addresses the limited availability of energy to do work—even an ideal (Carnot) engine cannot be a perfect (100% efficient) engine because the source of heat can never be infinitely hot: comparing “the motive power of heat to that of a waterfall,” Carnot noted that “[e]ach has a maximum that we cannot exceed.” Furthermore, Carnot showed that (italics in original): “The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of the caloric .” Believing heat (caloric) to be transported undiminished in magnitude from high to low temperature as work is done—believing caloric to be an imponderable, conserved substance (and he could not have believed otherwise in his time)—Carnot did not yet appreciate the energy transformation actually taking place in heat engines as heat is only partially converted into useful mechanical energy having the ability to do work. Nevertheless, and most remarkably, this error leaves unchanged Carnot’s conclusions which remain today the foundation

for the science of thermodynamics. Carnot did eventually commit himself to energy conservation by the time of his untimely death, but these "Posthumous Remarks" did not come to light until 1878, much too late to influence ensuing events.

Elsewhere in French matters of heat early in the century, the French physicist Jean-Baptiste Joseph Fourier (1768–1830), who had served as scientific adviser to Napoleon in Egypt (and died of a disease contracted there), continued the French predilection for theory, publishing in 1822 his Théorie analytique de la chaleur (Analytical Theory of Heat) which, among other things, introduced important mathematical tools into physics: partial differential equations and their solutions in terms of so-called orthogonal functions such as the eponymous Fourier series. Fourier used his theory of heat to describe—not explain; Fourier's theory was non-hypothetical, postulating neither atoms nor imponderable fluids—the physics of heat conduction, an early example of continuously mediated contact action versus Newtonian action-at-a-distance as displayed so mysteriously by gravity. With this he was able to explain the well-documented increase in temperature with depth in mines as a geothermal gradient produced by Earth's reservoir of thermal energy (another source of renewable energy today). Significantly for our story here, his heat conduction equation introduced time-irreversibility into the laws of physics: heat conduction, unlike Newton's laws of motion, is irreversible with respect to time, as heat always flows in one direction from high to low temperatures, never spontaneously (on its own) flowing from low to high temperatures, a feature that would later be recognized, as we shall see, as a manifestation of the Second Law of Thermodynamics.

In 1833, the year after Carnot succumbed to a cholera epidemic ravaging Paris, another graduate of l'École Polytechnique, Émile Clapeyron (1799–1864), came across Carnot's book and reproduced its essential parts the following year in the Journal de l'École Polytechnique after it had been rejected by other journals; it was Clapeyron who introduced the letter "Q" for quantity of heat. Clapeyron illustrated Carnot's ideal cycle in a now-famous and still-used "pV-diagram" of pressure versus Volume (Fig. 3.9). It was from Clapeyron's paper that Thomson became aware in 1849 of Carnot's work, recognizing its importance to the new science of thermodynamics which he himself had just recently begun to investigate, publishing between 1851 and 1855 a series of important papers titled "On the Dynamical Theory of Heat" based on Joule's energy conservation principle (First Law) and Carnot's investigations of the inherent inefficiency of heat engines (Second Law), using the rigor of mathematics to render these earlier ideas more quantitative and persuasive.
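
Fourier's heat-conduction equation, mentioned above, can be brought to life with a few lines of code. The short Python sketch below is an illustration of my own, not anything from Fourier or from this book: it steps the one-dimensional heat (diffusion) equation forward in time for an arbitrary initial hot spot, and however long it runs, the temperature differences only ever smooth out, never spontaneously sharpen, which is exactly the time-irreversibility just described.

```python
# A minimal numerical sketch of Fourier-style heat conduction in one dimension.
# Everything here (grid, time step, initial hot spot) is an arbitrary illustration:
# the point is only that the explicit scheme below always spreads and smooths
# temperature differences and never re-concentrates them.
import numpy as np

alpha = 1.0e-4           # thermal diffusivity in m^2/s (illustrative value)
dx, dt = 0.01, 0.1       # grid spacing (m) and time step (s); dt < dx**2 / (2 * alpha) for stability
T = np.full(101, 300.0)  # a one-metre rod held at a uniform 300 K ...
T[45:56] = 400.0         # ... with a 400 K hot patch in the middle

for _ in range(20000):   # march the discretized heat equation dT/dt = alpha * d2T/dx2 forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"hottest point now {T.max():.1f} K, coldest {T.min():.1f} K")  # the extremes relax toward each other
```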


Fig. 3.9 A representation of Carnot's cycle in a pV-diagram of pressure (ordinate) versus Volume (abscissa), a diagram familiar to students of thermodynamics still today, and essentially that which the Scottish inventor James Watt (1736–1819) had called an "indicator diagram." (For more on Watt, who, considered by many to be the "greatest inventor of his age," did more than anyone else to improve the efficiency of heat engines, see, for example, Ben Marsden's Watt's Perfect Engine: Steam and the Age of Invention [Columbia University Press, 2002] or Ben Russell's James Watt: Making the World Anew [Reaktion, 2014].) In this ideal cycle, a quantity of heat QH is absorbed at constant temperature TH as the system evolves along the AB isotherm, followed by an adiabatic (no heat exchange) expansion along the adiabat BC. Heat is then exhausted at constant temperature TC as the system cycles back along the lower CD isotherm, before returning to its original state (at point A) by a final adiabatic compression along the adiabat DA. The mechanical work done in the cycle is represented by the area of the curvilinear rectangle ABCD. Watt's new and improved steam engines were so efficient that he gave them away and requested from the user only the money saved on fuel costs for the first three years of operation. He became very wealthy, and the Industrial Revolution in England profited from a new and improved source of cheap energy. (Wikimedia Commons, Cristian Quinzacara, CC BY-SA 4.0)

After studying Clapeyron’s paper, Thomson demonstrated for the Carnot cycle that the ratio Q /T of heat exchanged to the temperature at which the exchange occurs during the reversible, isothermal (constant temperature) portions is constant, a discovery that became the key to the quantification of thermodynamics; no heat is exchanged during the adiabatic segments (see Fig. 3.9). Here, the temperature is measured in kelvins, abbreviated K, an absolute temperature scale proposed in 1848 by Thomson (later Lord Kelvin) with a zero point at –273.15 °C (“justly ... termed an absolute scale, since its characteristic is quite independent of the physical properties of any specific substance," being instead based on Carnot’s analysis). Using Joule’s energy relationship W = Q H – |Q C |, the thermal efficiency—the ratio of output

work to the input heat energy—the return on your investment—can be written as e ≡ W/QH = (QH − |QC|)/QH = 1 − |QC|/QH, so that the Carnot (maximum) efficiency, for which Q/T = constant = QH/TH = |QC|/TC, becomes eC = 1 − TC/TH. Here, one can immediately see that the maximum efficiency in converting heat into useful mechanical energy to do work for even an ideal (Carnot) engine can never be 100%: such a device would require either taking in heat at an infinitely high temperature or exhausting it to a reservoir at absolute zero, neither of which is possible (with absolute zero prohibited by the Third Law of Thermodynamics, and theories of quantum gravity predicting a maximum absolute temperature of 10^32 K just after the Big Bang some 13.8 billion years ago—hot enough, but not infinitely hot). Sadly, even the ideal is not perfect. Clearly, an engine will work only if some energy is wasted: there is no such thing as a perfect engine; it is a mythical monster, like the Cyclops and the free lunch (this is the so-called Kelvin statement of the Second Law). Furthermore, due to various, inherent, irreversible, energy-dissipating processes, the efficiency of any real engine will always be less than its Carnot efficiency: significantly, Carnot's ideal engine considers, as the ideal case, only the required heat release and ignores the merely annoying sundry losses inherent in real engines. For example, the Carnot efficiency for a gasoline-powered automobile engine can approach 60%, but taking into account the friction of moving parts, incomplete fuel combustion, heat loss from the combustion chamber and radiator, etc., its actual operating efficiency is, at best, about half that; diesel engines are somewhat more efficient because of their higher compression ratios and hence higher operating temperatures (notice that the Carnot efficiency increases with higher TH). Rather disturbingly, most of the fuel we buy and burn goes into heating—and polluting—the atmosphere: only a fraction of the (chemical) energy content of the fuel is converted into useful mechanical energy with the ability to do work, a losing proposition—for the consumer and for the Planet. And there's nothing we can do about it. It's nature. And that's just part of why things seem to go wrong. Nevertheless,
for a given temperature differential, the Carnot efficiency is useful for setting a maximum efficiency target for designers of real heat engines.

Example 3.2 Calculating Thermal and Carnot Efficiencies For example, the thermal efficiency of a device that performs 3 units of work from an investment of 10 units of heat energy is 3/10 or 30%. And if the operating temperatures for this device are, for example, T C = 300 K (approximately 27 °C, room temperature) and T H = 500 K, the Carnot efficiency e C = 1 – 300/500 = 2/5 or 40%. Note that the actual (thermal) efficiency is less than the ideal (Carnot) efficiency as it always must be.
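
For readers who like to check such numbers with a few lines of code, here is a minimal Python sketch of the arithmetic in Example 3.2 (the snippet and its variable names are mine and are not part of the original example):

```python
# Reproducing the numbers of Example 3.2.
W, Q_H = 3.0, 10.0           # work delivered and heat invested (any consistent energy units)
T_C, T_H = 300.0, 500.0      # cold- and hot-reservoir temperatures, in kelvins

e_thermal = W / Q_H          # actual (thermal) efficiency, e = W / Q_H
e_carnot = 1.0 - T_C / T_H   # ideal (Carnot) efficiency, e_C = 1 - T_C / T_H

print(f"thermal efficiency: {e_thermal:.0%}")  # 30%
print(f"Carnot efficiency:  {e_carnot:.0%}")   # 40%; the real efficiency is always the smaller of the two
```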

Example 3.3 Heat Pumps Move More Heat Than Can Be Delivered by Electrical Resistance Heating Suppose the outside temperature in winter is 23 °F (= −5 °C = 268 K, using the temperature conversion equations TCelsius = 5/9 [T Fahrenheit – 32] = T – 273, where, again, T is the absolute temperature in kelvins) and the desired inside temperature is 68 °F (= 20 °C = 293 K). Let’s calculate how many joules of heat an ideal (Carnot) heat pump will deliver to the inside for each joule of electrical energy required to run the unit. Referring to Fig. 3.6, we can easily see that the energy flow in a heat pump (or a refrigerator) gives Q C + W = Q H (energy in = energy out; where to simplify the notation, we have dropped the absolute value signs and therefore take all quantities to be positive). We can already see that however much work W is done by the unit, the heat delivered to the inside Q H is going to be greater than that by an amount equal to the heat brought in from the outside Q C . From an economic standpoint, the best heat pump cycle is one that pumps the greatest amount of heat from outside for the least amount of work. The ratio of these two quantities, Q C /W , is called the coefficient of performance (COP ≡ Q C /W ) and is a measure of efficiency for heat pumps (and refrigerators). Rewriting the energy equation as W = Q H – Q C and substituting this into the formula for the COP gives COP = Q C /(Q H – Q C ) = T C /(T H – T C ) = 1/[(T H /T C ) – 1] = 10.7, where we have used the (ideal) Carnot ratios Q H /Q C = T H /T C = 293/268 presented earlier to write the heat energies in terms of their corresponding absolute temperatures. Solving the COP equation for Q C = (COP)·(W ) and substituting this into the energy equation and factoring out the W gives Q H = W + Q C = W (1 + COP) = 1 J (1 + 10.7) = 11.7 J of heat transferred to the inside at a cost of only 1 J. This is 10.7 J of heat more than the 1 J that would be generated by electrical heat strips (as found in electric toasters and ovens), which is at best one-for-one for a total heat delivered to the inside Q H equal to the input electrical energy W . With the heat pump, 10.7 J are brought inside as heat energy from the outside. As long as the outside temperature is above absolute zero, the temperature at which the thermal energy is zero (a very likely possibility anywhere in the universe), there will be heat available to pump inside (COP > 0 so Q C > 0 and therefore Q H > W ). And so, even real heat pumps will be more efficient than straight, electrical resistance heating (which many heat pumps activate when rapid heating is desired, thereby lowering their operating efficiency). Note that the COP decreases when the temperature difference T H


– T C between the inside and outside increases: unlike heat engines, which are more efficient when the temperature difference between the hot and cold reservoirs is large (which is easy to see when writing the Carnot efficiency as e C = (T H – T C )/T H ; note that the temperature difference here is in the numerator, whereas it appears in the denominator for heat pumps), heat pumps are not as efficient when there is a large inside-outside temperature difference, that is, when it is very cold outside in winter or very hot outside in summer when the cycle is reversed. (Note that writing the Carnot efficiency in this form, it is easy to see that the Carnot function C(T ) = 1/T H ). We can also appreciate the higher efficiency of geothermal heat pumps (and air conditioners) because heat is exchanged with ground water that is almost always warmer than the outside air in winter and cooler than the outside air in summer. I know: I used to have one. Importantly, you have to pay (literally!—but not as much as with strip heating) to do work against nature when moving heat “uphill” from low to high temperatures, much like water can be forced uphill using a water pump. In this universe, there’s no free lunch—and no free heat.
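
Again for the computationally inclined, the heat-pump arithmetic of Example 3.3 takes only a few lines of Python (the snippet is mine, offered simply as a check on the numbers above):

```python
# Reproducing the heat-pump numbers of Example 3.3.
T_C, T_H = 268.0, 293.0   # outside (23 F) and inside (68 F) temperatures, in kelvins
W = 1.0                   # one joule of electrical work supplied to the (ideal) heat pump

COP = T_C / (T_H - T_C)   # Carnot coefficient of performance, COP = Q_C / W
Q_C = COP * W             # heat pumped in from the cold outdoors, J
Q_H = W + Q_C             # heat delivered inside (energy in = energy out), J

print(f"COP = {COP:.1f}")                                       # about 10.7
print(f"heat delivered inside per joule of work: {Q_H:.1f} J")  # about 11.7 J
```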

In an 1854 lecture "On the Interaction of Natural Forces" delivered two years after citing "a universal tendency in nature to the dissipation of mechanical energy," Helmholtz addressed the problem of the dissipation and availability of energy, pointing out that the universal tendency of energy to dissipate as heat, which flows naturally (that is, spontaneously on its own) from high to low temperature, means that all the energy in the universe would eventually be transformed into heat at a uniform, low temperature with the resulting cessation of all natural processes, a most depressing scenario. (Recall that only when there exists a temperature difference can heat move to do work.) Given the fixed total amount of energy in the universe as supported by the First Law, if portions of that total increasingly become less available to do work, then the day will surely come, however far into the future that might be, when all the energy in the universe will be unavailable and no more work can be done. "[T]he universe from that time forward would be condemned to a state of eternal rest," Helmholtz wrote, warning of the impending "heat death" of the universe—ironically, a death by freezing (Fig. 3.10). Two years earlier, in his 1852 article titled "On the Universal Tendency in Nature to the Dissipation of Mechanical Energy," Thomson similarly concluded that

Within a finite period of time past, the earth must have been, and within a finite period of time to come the earth must again be, unfit for the habitation of man [and, one would assume, woman] as at present constituted, unless operations have been, or are to be performed, which are impossible under the laws to which the known operations going on at present in the material world are subject.


Fig. 3.10 “La miserable race humaine périra par le froid” (“The miserable human race will perish by the cold”), an apt depiction of the (ironically) deep-freeze heat death of the universe depicted in French astronomer and popular science author Camille Flammarion’s 1893 La fin du monde (The End of the World), a science fiction novel about a comet colliding with Earth followed by several million years leading up to the gradual end of the world, an icy end when all the fires go out—all to the disappointment of American poet Robert Frost who sided with the majority in his 1920 bleak poem “Fire and Ice” favoring an end in fire from what he’s “tasted of desire” over destruction from ice from what he “know[s] … of hate.” In his Warmth Disperses and Time Passes: The History of Heat (p. 119), author Hans Christian von Baeyer describes the dreadful scene on a desolate sheet of ice surrounded by what looks like a wall of towering ocean waves frozen solid. In the foreground a bearded old man in tattered rags, flat on his belly on the ice, is caught in the act of raising his head for a last, hopeless look around. Next to him a younger man with gaunt eyes and Christlike demeanor stands tall and barefoot, desperately clutching the remnants of a threadbare tunic to his body in a futile attempt to ward off the inevitable end, while his scarf flaps in the wind like a broken black wing. The most pathetic member of the little group is the mother with long dark hair who is sitting on a rock behind the men, huddling to protect her son and the baby she clutches in her bare arms. The old man pictured here brings to mind our constant and unavoidable irreversible process of aging, yet another consequence of the Second Law of Thermodynamics. (The Gutenberg Project, public domain)


"All the sounds of man, the bleating of sheep, the cries of birds, the hum of insects, the stir that makes the background of our lives—all that was over," bemoaned H. G. Wells in a chilling description of a desolate future Earth in the closing pages of his 1895 novel, The Time Machine. It would be an ending in line with T. S. Eliot's "way the world ends/Not with a bang but a whimper." Science fiction novels about the end of the world, such as Wells's and Camille Flammarion's 1893 international best-seller La fin du monde (The End of the World; recall Fig. 3.10), were common at this time. The title of the 1881 poem by the Franco-Uruguayan poet Jules Laforgue, "Funeral March for the Death of the Earth," says it all: "Oh, what a drama you lived, fast-cooling ashes!/But sleep; it's over now. Eternally, sleep." Fifteen years earlier, the English poet and novelist, Algernon Swinburne, expressed a similar sentiment:

Then star nor sun shall waken,
Nor any change of light:
Nor sound of waters shaken,
Nor any sound or sigh:
Nor wintry leaves nor vernal,
Nor days nor things diurnal
Only the sleep eternal

Noted British philosopher Bertrand Russell summed up the pessimism in his 1903 book Why I Am Not a Christian [9, p. 107]: … all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins….17

17

Russell, however, was not himself worried. Rather, this ultimate extinction meant we should take a shorter-term view of life [9, p. 11]: I am told that that sort of view [i.e., heat death] is depressing, and people will sometimes tell you that if they believed that, they would not be able to go on living. Do not believe it; it is all nonsense. Nobody really worries much about what is going to happen millions of years hence…. Therefore, although it is of course a gloomy view to suppose that life will die out—at least I suppose we may say so, although sometimes when I contemplate the things that people do with their lives I think it is almost a consolation—it is not such as to render life miserable. It merely makes you turn your attention to other things.

Until the advent of relativistic cosmology early in the twentieth century, most scientific discussions of the evolution of the universe were evolving models based on the notion of a heat death, which, many pointed out, had a superficial similarity with the apocalyptic passages in the Bible. Even as late as 1951, in an encyclical address delivered to the Pontifical Academy of Sciences, Pope Pius XII made clear that he considered the heat death to be an additional argument for a universe subordinated to the will of God, concluding that "This fatal destiny … postulates eloquently the existence of a Necessary Being" [10]. Interestingly, the prediction of a final cosmic heat death addresses not only the future of the universe but also implies that it had a beginning a finite time ago, because if the universe, irreversibly running down, were infinitely old, it would have died already. Remarkably, this profound conclusion was not grasped by scientists in the nineteenth century, and the idea of a universe beginning abruptly with a "Big Bang" had to await astronomical observations in the 1920s (see, for example, Helge Kragh, "Cosmologies and Cosmogonies of Space and Time," in The Modern Physical and Mathematical Sciences, ed. Mary Jo Nye, Cambridge University Press, 2003, pp. 522–537). Interestingly, the Belgian astronomer-priest Georges Lemaître (of the famous Hubble–Lemaître law relating the recessional velocity of a galaxy to its distance) tells us he "was led to formulate [a Big Bang] hypothesis … from thermodynamic considerations while trying to interpret the law of degradation of energy [i.e., the Second Law of Thermodynamics] in the frame of quantum theory." Of course, a universe having a beginning in time—"entropic creation"—resonates with the biblical Genesis account of Creation: "Let there be light!"

(Not to worry: recent calculations [11] indicate that the universe will not be heat dead for some 10^100 years—that's 1 followed by 100 zeros!—long after all black holes, remnants from the collapse of massive stars at the end of their life cycles where gravity is so strong that nothing, not even light, can escape, have evaporated [see Note 3 of Chap. 5] and nothing that coheres is left except for the evanescent, non-interacting waste products from previous eras, all in complete thermal equilibrium at the same temperature—and far longer than the little time we have left on Earth before the changing climate makes life here uncomfortable, if not impossible).18 Darwin despaired at the cosmic pessimism embodied in the Second Law, a doctrine of inorganic evolution that changed our conceptions of the material universe, dooming humanity, Darwin lamented, "to complete annihilation after such long-continued slow progress." Decay—not progress—was the thermodynamic message that flew in the face of nineteenth-century optimism and belief in unrestrained progress and the perfectibility of the human race (thus did the revolutionary socialist Friedrich Engels, believing human thought to be the necessary end-product of cosmic progress, deny the Second Law). It undermined the very concept of the Newtonian World Machine running in perpetuitatem. And so, things are getting worse—everywhere, all across the universe—even if it will take time to notice.

18

See, also, their August 1998 article in Scientific American, and the earlier account in the May-June 1997 issue of American Scientist. Adams and Laughlin comment ([11], pp. 162–63): Without a temperature difference, no heat engine can operate and no work can be done. Without the ability to do physical work, the universe 'runs down' and becomes a rather lifeless and inactive place…. Interesting processes in the universe will thus shut down if thermodynamic equilibrium [everything at the same temperature] is attained…. Interesting processes, like biological evolution, would no longer take place.


Thomson’s work in thermodynamics motivated his thinking about Earth’s thermal history, and he later estimated a thermodynamic age for Earth. Using Fourier’s theory of heat conduction, one of the first in Britain to do so (he read and mastered Fourier’s Théorie analytique de la chaleur in his seventeenth year of life), together with data on the rate of increase of temperature with depth in Earth’s crust, Thomson announced in his 1862 paper “On the Secular Cooling of the Earth” that it took 100–200 million years for Earth to cool down to its present temperature, a timescale he lowered to a mere 20 million years in his paper published three years later, “The ‘Doctrine of Uniformity’ in Geology Briefly Refuted.” In his 1868 address “On Geological Time” to the Glasgow Geological Society, Thomson warned that A great reform in geological speculation seems now to have become necessary …. [I]t is quite certain that a great mistake has been made—that British popular geology at the present time is in direct opposition to the principles of Natural Philosophy.

Thomson’s relatively young Earth did not allow enough time for the “denudation of the Weald” and biological evolution, and, as Darwin reluctantly admitted, cast “an odious spectre” over the theory of natural selection. Such was the prestige of Thomson—and of physics, the “king of the sciences” [recall Ref. 1] after two centuries of triumphantly explaining the workings of the world. (Thomson later revealed the religious roots of his opposition to Darwin’s theory.)19 The discovery of radioactivity at the end of the century, and the appreciation early in the twentieth century of its dual importance both as a source of heat within Earth and, significantly, as a clock to actually determine Earth’s age, rendered Thomson’s analysis invalid. Earth’s currently accepted age, based on radiometric dating of terrestrial, lunar, and meteoritic rocks, is about 4.56 billion years, plenty of time to explain the planet’s diverse and ancient geological features as well as its marvelous diversity of life. 19

In his Warmth Disperses and Time Passes: The History of Heat [12, p. 118], Hans Christian von Baeyer notes the contemporary social implications of the Second Law: The principle of dissipation of energy conformed not only with religious beliefs, but with the Victorian social order as well. The population was sharply divided between the hereditary upper class called “quality” and a vast throng of lower creatures. While the fall from up high through depravity and dissipation was a common theme in literature, the opposite journey, from low to high, was exceedingly rare and [like moving heat uphill] required special, artificial effort. (Social climbing was easier in America than in England.) Thomson viewed energy as stratified in the same way as the society in which he lived.


Carnot initiated the science of thermodynamics, discovering what would eventually become known as the Second Law of Thermodynamics, addressing the availability of energy to do work and the inherent inefficiency of heat engines (which in Carnot’s time had not reached beyond a few percent: no wonder he thought the torrent of caloric cascading through these machines seemed undiminished); Joule, Mayer, and Helmholtz formulated the principle of energy conservation, a special case of which would become the First Law of Thermodynamics. The synthesis and systematization of both into the mature science of thermodynamics was due to both Thomson and Clausius. Clausius’s first paper on the subject, “Über die bewegende Kraft der Wärme” (“On the Motive Force of Heat”), published in the prestigious Annalen der Physik und Chemie (Journal of Physics and Chemistry) in 1850, included the first formal statement of the Second Law and dropped Carnot’s assertion that heat was conserved during the production of work: “the production of work is not only due to an alteration in the distribution of heat, but to an actual consumption thereof ” (italics in the original).20 Thomson, too, argued the same in his 1851 paper, but the German method, which would become the model of mathematical physics, was, not surprisingly, much more mathematical than the British style. Fifteen years later in 1865 Clausius gave the name entropy to an important thermodynamic quantity he had introduced eleven years earlier, the infinitesimal (exceedingly small) change—the closest you can get to no change at all and still have change—of which was motivated by Carnot’s work, and is

20

“I am not … sure that the assertion, that in the production of work a loss of heat never occurs,” Clausius admits. “It may be remarked … that many facts have lately transpired which tend to overthrow the hypothesis that heat is itself a body, and to prove that it consists in a motion of the ultimate particles of bodies.” “In the production of work,” he continues, using this phrase over and over again, “a certain portion of heat [Q H ] may be consumed, and a further portion [|Q C |] transmitted from a warm body to a cold one; and both portions may stand in a certain definite relation [Q H = W + |Q C |, as noted earlier] to the quantity of work [W ] produced.” In his Seventh Memoir of the series, this one published in 1863, Clausius reiterates his position: … heat is not invariable in quantity; but that when mechanical work is produced by heat, heat must be consumed, and that, on the contrary, by the expenditure of work a corresponding quantity of heat can be produced…. I did not think that Carnot’s theory, which had found in Clapeyron a very expert analytical expositor, required total rejection; on the contrary, it appeared to me that the theorem established by Carnot, after separating one part and properly formulizing the rest, might be brought into accordance with the more modern law of the equivalence of heat and work [as established by Joule and others], and thus be employed together with it for the deduction of important conclusions.


defined as

dQ/T    (entropy change)

for any reversible (ideal) heat exchange: the incremental change in entropy is equal to the energy supplied as heat divided by the temperature at which the energy transfer occurs. Operationally, if the temperature changes during heat transfer, one simply integrates—sums up—all the small, incremental heat exchanges occurring over each incremental temperature, and if the process is irreversible, one calculates the entropy change for any reversible process between the same initial and final states of the system because entropy, like internal thermal energy U, is a state variable and thus is independent of the particular way the change takes place (recall Note 12). Note that energy exchanged at a higher temperature is associated with a lower entropy change. A mathematical construct with no simple physical analogy, the word itself is taken from the Greek words ενεργεια (energy) and τροπη (tropē, transformation); hence "energy transformation"—"designedly coined," Clausius admitted, "to be similar to 'energy,' for these two quantities are so analogous in their physical significance." Most importantly, Clausius showed that entropy, which he denoted with the letter S—because, as he noted, all the other nearby letters of the alphabet relevant to thermodynamics, such as P (Pressure), Q (Quantity of heat), R (molar gas constant), T (Temperature), U (internal energy), V (Volume), and W (Work) were already taken—always increases for any irreversible (real) process and remains unchanged for any reversible (ideal) process. Thus, Clausius could state the Second Law of Thermodynamics—C. P. Snow's exemplar of science, sharing with Shakespeare center stage in the "two cultures"—in an exact and concise mathematical form:

ΔS ≥ 0    (Second Law of Thermodynamics),

where the inequality holds for irreversible processes (those actually occurring in nature), and the equality for reversible (ideal) ones such as those taking place in a Carnot engine. All real physical processes are irreversible and therefore produce entropy. As Clausius put it: the energy of the universe is constant, but its entropy always increases. A rise in entropy implies that some irreversible process prevents the maximum amount of work predicted by the

(ideal) Carnot condition from being produced. All natural processes proceed with ever-increasing entropy towards a state of thermodynamic equilibrium (i.e., temperature equality), which, because no heat can then be exchanged, is characterized by unchanging—and hence maximum—entropy. Taken together, as Clausius's couplet introducing this chapter so succinctly announces, the energy of the universe is constant (First Law) and the entropy of the universe is rising (Second Law). The First Law states an ideal equality, while the Second Law relates a real inequality. The First Law expresses the constancy of the quantity of energy in an isolated system; the Second Law addresses the quality and hence the availability of energy: the total amount of energy remains fixed, but energy tends to transform itself—to dissipate—into less useful forms, namely, heat (or, more precisely, thermal energy). Energy stored at a higher temperature is more useful, higher-quality, lower-entropy energy, more available to do work, but the availability of energy in the real world is constantly decreasing as energy is continually degraded, making it hard to even break even in the energy transformation game. The energy that escapes as heat from a hot cup of coffee into its surroundings loses its quality, as that energy cannot easily be reused. Like two stores stocked with the same quantity of items, one store orderly and the other disorderly, and thus differing in the quality of the shopping experience they can provide, so, too, energy has a qualitative face that affects its usefulness, with higher entropy forms being of lower quality and hence less useful. And as if things weren't bad enough, the Second Law reminds us that things are getting worse all the time, as Helmholtz understood in predicting the heat death of a universe in thermal equilibrium with no longer any flow of energy, all of which is then unavailable to do work. A "law of dissipation," as Thomson called it, the Second Law states that, in a closed system, the amount of work-available (useful) energy moves down a gradient from useful to useless. This is why restoring a closed system to a higher state of order requires an outside input of energy. Taking the universe as a closed system, this is why the universe naturally drifts towards a "heat" death of cold, workless equilibrium. All in all, a pretty pessimistic picture: ya can't get somethin' for nothin', ya can't even break even, and things are gettin' worse all the time.

In Joule's paddlewheel experiment, a real—and hence irreversible—system, the (ordered) mechanical energy of the falling weights is converted into the (disordered) thermal energy of the water, raising its temperature. The total energy of the system consisting of the weights and the water is conserved—remains the same—as the (thermal) energy gained by the water equals the (stored gravitational potential) energy lost by the falling weights, but the
entropy of the system increases as heat is added to the water while the entropy of the weights is unchanged because no heat is added to or taken from them. The energy exchange here is indeed an irreversible, one-way process, with energy being “downgraded” from ordered, mechanical energy to disordered, thermal energy: it is not at all likely that the energy exchange will run in reverse with the weights gaining energy by rising higher at the expense of the water cooling and losing thermal energy. The Second Law thus enforces a directional constraint on energy transformation, imposing a natural direction on change in the universe, a direction on cosmic history that increases its entropy and therefore decreases the quality of its energy, preventing the universe from reversing course. Unlike the laws of mechanics and electrodynamics which are time-reversible—invariant (unchanging) with respect to the algebraic sign of the variable representing time in the equations, and thus not distinguishing between the historical past and the imagined future—the Second Law of Thermodynamics, like Fourier’s equations of heat flow, imposes a direction to time and does distinguish between time past and time future. As summarized by the physicists Wolfgang Panofsky and Melba Phillips [13, p. 560], this asymmetry in time, the mystery that makes the past different from the future, … makes it reasonable to assume that the second law of thermodynamics can be used to ascertain the sense of time independently in any frame of reference; that is, we shall take the positive direction of time to be that of statistically increasing disorder, or increasing entropy….21
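
The entropy bookkeeping for Joule's paddlewheel experiment described above can be made concrete with a few lines of code. The Python sketch below is purely illustrative: the mass, drop height, amount of water, and starting temperature are round numbers of my own choosing, not Joule's actual experimental values.

```python
# A rough sketch of the entropy bookkeeping in Joule's paddlewheel experiment
# (the mass, drop height, and starting temperature are illustrative, not Joule's).
import math

M, g, h = 10.0, 9.8, 2.0   # falling weights: 10 kg dropping 2 m
m_w, c_w = 1.0, 4186.0     # 1 kg of water; specific heat of water in J/(kg*K)
T1 = 293.0                 # initial water temperature, K

Q = M * g * h                                # ordered mechanical energy converted to heat, J
T2 = T1 + Q / (m_w * c_w)                    # resulting (slightly) warmer water temperature, K
dS_water = m_w * c_w * math.log(T2 / T1)     # integral of dQ/T for the warming water, J/K
dS_weights = 0.0                             # no heat flows into or out of the weights themselves

print(f"heat added: {Q:.0f} J; the water warms by {T2 - T1:.3f} K")
print(f"entropy change of the system: {dS_water + dS_weights:+.4f} J/K (positive, as the Second Law requires)")
```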

If left to itself, nature will, in the course of time, transform all motion (useful, higher-quality, ordered energy) into heat (less useful, lower-quality, disordered energy), never all heat into motion, just as Newton had long ago surmised. It was soon realized that “time’s arrow” (a concept introduced nearly a century ago by the English astronomer Arthur Eddington in his popular 1928 book The Nature of the Physical World [7, p. 68]), being directional rather than reversible and potentially cyclical, applies to all natural processes, and the Second Law, which therefore brings the history as well as the structure of nature into the realm of physical description, became the new metaphor for historical change.

21

If an isolated glass of water warmer than its surroundings were to spontaneously warm up, not cool down, time would be going backward. For more on the arrow of time, see, for example, David Layzer’s essay “The Arrow of Time” (Scientific American, December 1975).


Indeed, it explains why anything happens at all, and why when things do happen, they generally don't happen for the better. Entropy is, as author Peter Atkins points out, the "spring of change." "All change," Atkins emphasizes, "is the consequence of the purposeless collapse of energy and matter into disorder…. When we strip away the iron to leave the abstraction of a steam engine, we obtain a representation of the spring of all change" [5, pp. 109–110]. Whereas the steam engine originally epitomized economic power and wealth, today we appreciate that, in essence, it embodies one of the most powerful ideas in science: the natural direction of change in the universe is towards increasing disorder, uselessness, and corruption of quality. Thomson appreciated the inane unlikeliness of reversing the course of nature in a world run in reverse:

The bursting bubble of foam at the foot of a waterfall would reunite and descend into the water; the thermal motions would reconcentrate their energy, and throw the mass up the fall in drops reforming into a close column of ascending water. Heat which had been generated by the friction of solids and dissipated by conduction, and radiation with absorption, would come again to the place of contact, and throw the moving body back against the force to which it had previously yielded. Boulders would recover from the mud the materials required to rebuild them into their previous jagged forms, and would become reunited to the mountain peak from which they had formerly broken away. And [as in 'the curious case of Benjamin Button'] … living creatures would grow backwards, with conscious knowledge of the future, but no memory of the past, and would become again unborn.

Had he lived to see motion pictures, Thomson surely would have been amused to watch a movie run in reverse of a bursting bubble or of the shattering of a dropped glass when it hits the floor, each case seeming to unnaturally—indeed, oddly and surprisingly, and, one might add, laughably—increase order. (The Nobel Prize-winning American theoretical physicist Richard Feynman’s test for irreversibility was a laugh from the audience when a motion picture runs backwards.) And no one unages like Benjamin Button. Indeed, the unavoidable and irreversible process of aging is yet another outcome of the Second Law of Thermodynamics—and an unpleasant consequence of evolution which acts to preserve those traits beneficial for survival until successful reproduction, after which there is no longer any survival advantage, and after which there could, in fact, be a distinct disadvantage given limited food and other resources. Because throughout most of history, few people lived beyond the age of forty, aging wasn’t considered a


problem until relatively recently. Whether the result of genetic planned obsolescence or just due to general wear and tear, aging is a fact of life—and of the Second Law. This directionality in nature, evident only on the macroscale as countless perfectly reversible atomic processes entangle us in the irreversible unfolding of events, explains why it’s much easier to bake a cake than to unbake it, much easier to scramble an egg than to unscramble it (which explains why “all the king’s horses and all the king’s men couldn’t put Humpty Dumpty together again”), and why you can never unexplode an exploded bomb or put the heat and ashes from burnt wood back together again to make unburnt wood. Ice melts upon absorbing heat, whereas warm water will never spontaneously give up heat and freeze. In every case, it’s easier to go from a more ordered state (separated egg white and yolk, or solid ice with its molecules all neatly arranged in a very definite and fixed regular geometric pattern) to a more disordered state (mixed egg white and yolk, or melted ice—liquid water—with molecules now moving about at random): entropy increases as the thermal—and hence positional—disorder of a substance becomes more vigorous, a disorder that increases even more for the freely flying, highly chaotic molecules of a gas, a word that, not surprisingly, comes from the same root as “chaos.” A cup of hot tea or coffee cools, but will never spontaneously heat up on its own, just as a glass of ice-cold water gradually warms, and an egg will not spontaneously cook when placed on a cool plate. Nor will a container of warm water spontaneously separate into the hot and cold water that was mixed together to make warm water (recall Example 2.1). Heat tends to spread out uniformly. The directionality in nature associated with entropy and the Second Law accounts for the characteristic features of turbulent fluid flow—the apparently random and chaotic (but statistically ordered) variation in pressure, energy, and flow velocity within a fluid disturbed from a state of rest or from a more regular and ordered laminar (layered) flow pattern. Turbulence commonly occurs in a variety of everyday phenomena such as in the wake of an airplane flying through the air or a boat moving through water, in ocean waves breaking at sea or on a shore, in the rapids often observed in fastflowing rivers, in clouds billowing up into the sky, or, on a smaller scale, in a freshly stirred cup of coffee. It is also ubiquitous on cosmic scales ranging from planetary atmospheres—the beautifully intricate and chaotic flow in the clouds of Jupiter’s atmosphere is especially notable here—to the largescale structure of galaxies.22 Leonardo da Vinci’s sketches of turbulent fluid 22

See, for example, R. Fleck, Astrophys. J ., 270, 507–510 (1983); Astron. J ., 89, 506–508 (1984); Astron. J ., 97, 783–785 (1989); Nature, 583, E24 (2020).


flow as recorded in the Codex Leicester now owned by Bill(ionaire) Gates were among the first accurate renditions of the characteristic swirling motions often observed in fast-moving water. More recently, artists such as Vincent van Gogh, whose swirling skies dominate his 1889 painting The Starry Night, one of the most recognizable paintings in Western art, and Katsushika Hokusai in his earlier (ca. 1830) and arguably equally well-known woodblock print of The Great Wave off Kanagawa, have been equally enthralled, as have so many scientists and engineers, in the chaotic beauty of turbulence. One of my former students and former NASA astronaut Nicole Stott recorded her impression of the turbulent flow of water off the coast of Venezuela while orbiting high above Earth in the International Space Station (Fig. 3.11). Turbulence is caused by some type of disturbing force, such as planes or boats moving through air or water, wind blowing over water, a paddle stirring the water behind a canoe, or a spoon stirring a cup of coffee, that injects excessive motion energy, typically at large spatial scales in a fluid, that overwhelms the damping effect of the fluid’s viscosity (a type of internal friction or “stickiness” in the fluid). In turbulent flow, unsteady swirling vortices of many sizes appear, the largest normally comparable to the scale at which energy is input to the flow (as with planes, boats, or spoons) extending all the way down to the smallest of scales below the level of human vision, as energy cascades to ever-smaller scales under the dissipative action of viscosity.23

Fig. 3.11 The Wave (2009) by former NASA astronaut Nicole Stott, the first watercolor painted in space, this one aboard the International Space Station, inspired by the turbulent flow of water at Isla Los Roques, Venezuela, as seen from space. Scientists can now use physics-inspired metrics to determine the entropy and complexity of paintings; not surprisingly, Jackson Pollock’s “drip” paintings have a high degree of entropy. (Courtesy of the artist. Used with permission)

23

A century ago, in a play on Jonathan Swift's

Great fleas have little fleas
Upon their backs to bite 'em,
And little fleas have lesser fleas
And so ad infinitum,

the scientist and pioneering fluid dynamicist Lewis Fry Richardson put the process of turbulent energy cascade to verse [14, p. 184]:

Big whorls have little whorls
Which feed on their velocity,
And little whorls have lesser whorls
And so on to viscosity.

Although we can model turbulence in an approximate, statistical sense, it is a complex, stochastic phenomenon characterized by highly nonlinear interactions and remains an outstanding problem yet to be solved in detail. Physics Nobel laureates Werner Heisenberg, who pioneered the study of quantum mechanics early in the twentieth century (giving his name to the popularly recognized Heisenberg uncertainty principle) and Richard Feynman, arguably the greatest physicist of twentieth-century America, both men recognized as two of the greatest physicists of all time, described turbulence as the most important unsolved problem in classical physics.

Significantly, the flow pattern of turbulence evolves from a more ordered swirling pattern that reaches all the way down to the lowest level of random molecular motion. Surely disorder is increased in this natural and irreversible process: random molecular motion will generally not self-organize to generate the order inherent on the largest scales of the flow. Interestingly, the energy spectrum—the energy at various length scales—and the spatial structure of turbulent flows are now being actively investigated from the perspective of the maximum entropy principle [15].

We can illustrate the directionality of nature indicated by the Second Law with a simple, qualitative example of heat transfer. Direct heat transfer between a hot reservoir at temperature TH (the hot steam delivered to a steam engine, or the ignited fuel mixture in an internal combustion engine, or the core of a nuclear reactor) and a cold reservoir at temperature TC (of the surrounding environment) is an irreversible process because of the temperature differences and hence unspecified thermodynamic states. To make the heat transfer reversible, place a very large hot reservoir—an "inexhaustible" source so large that its temperature TH remains constant during the heat transfer—into thermal contact with a cylinder of gas (such as shown in Fig. 3.7) so that the gas slowly expands and does reversible work on the
friction-free piston as it absorbs heat Q H . Then remove the heat source and let the gas continue to expand (adiabatically) until its temperature drops to T C . Finally, place the cylinder in thermal contact with a very large cold reservoir at temperature T C and allow the piston to reversibly compress the gas while transferring heat Q C , equal in magnitude to Q H , to the cold reservoir. Remembering that heat lost is intrinsically negative (that is, Q H < 0 in this case, since it is heat extracted from the hot reservoir), the entropy change of the system is ΔSsystem = ΔSH + ΔSC = −|Q H |/TH + Q C /TC , where |Q H | is the absolute (positive) value of Q H . Since T H > T C , and |Q H | = Q C , ΔS system > 0, as required by the Second Law: overall, the entropy of the system increases because the decrease in entropy of the heat source is less than the increase in entropy of the heat sink.24 Note that if the heat source and sink are at the same temperature—that is, in thermal equilibrium (as will be the case when the universe suffers its heat death)—no heat will flow and there will be no change in entropy, which is therefore at a maximum: the state of thermal equilibrium is a state of maximum entropy. However, if the direction of heat flow were to reverse with heat flowing spontaneously from low to high temperatures (as would have to happen to unmix the mixed water in Example 2.1), so that now Q H > 0 and Q C < 0, the total entropy of the system would then decrease: ΔSsystem = Q H /TH − |Q C |/TC < 0, in violation of the Second Law. Thus, heat does not spontaneously flow from low to high temperatures, meaning there is no such thing as a perfect refrigerator , another mythical monster like the perfect engine (this is the so-called Clausius statement of the Second Law: “Heat cannot of itself pass from a 24

Another way to transfer heat reversibly from a hot object to a cooler one is to first place the hot object in contact with a large heat reservoir initially at the same temperature, and then slowly (reversibly) lower the reservoir temperature to T H , the average temperature of the hot body during the transfer of heat Q H to the reservoir. Then adjust the reservoir temperature to match that of the cold object, which is similar in every respect (mass, composition, etc.) except temperature to the hot one, and slowly (again, reversibly) raise the reservoir temperature to T C , the average temperature of the cold body as it absorbs heat Q C which, because the two objects are otherwise identical, is equal in magnitude to Q H (i.e., Q C = – Q H ), all of which results in a net increase in entropy as found previously. Of course, because the temperature of each object changes during the heat transfer processes, using the average temperature change of each is not strictly correct—as noted previously, one must integrate dQ /T to find the entropy change for each object—but, importantly, the entropy change will still be positive (total entropy increases) as required by the Second Law.


colder to a warmer body,” in his words, translated from his German). The asymmetry of time’s arrow measured by changing entropy ensures the irreversible character of natural processes. (The Kelvin statement of the Second Law introduced earlier also follows from the condition ΔS ≥ 0: a perfect engine would convert all of the heat removed from the source, which therefore suffers a decrease in entropy, into work, which doesn’t change in entropy, so the net effect would be an unallowed decrease in entropy.)25 Indeed, to move heat from the cold contents of a refrigerator to the warmer outside, or from the cold outside to inside a house with a heat pump, work must be done by the compressor (recall Example 3.3 and Fig. 3.6)—a price must be paid—to move the heat against its natural tendency to go from high to low temperatures. A motion picture showing heat flowing spontaneously from cold to hot, as in warm water spontaneously freezing to solid ice, would look just as silly as Humpty Dumpty spontaneously coming together again; one must pay to forcefully extract heat from water to make ice, and eggs break, they do not unbreak. Watching ice melt, eggs break, or bombs explode is the natural direction of time’s arrow in an inherently irreversible universe. The condition ΔS ≥ 0 applied to a heat engine drawing heat Q H from a hot reservoir at temperature T H , converting some of that heat to useful mechanical energy in the form of work W , and discarding (waste) heat to a cold reservoir at temperature T C (recall Fig. 3.6) gives the same result as found earlier for heat transfer because the entropy of the engine itself remains unchanged after it cycles back to its initial state and no entropy change is associated with any work done: ΔS = ΔSH + ΔSC = Q H /TH + Q C /TC = −|Q H |/TH + Q C /TC ≥ 0,

25

Professor Atkins comments [5, p. 119]: We see that the degree of abstraction represented by Clausius’s introduction of entropy neatly captures the two empirical laws that seemingly portrayed two different aspects of the world: the statement of the Second Law in terms of entropy is like a single cube that rotates to appear as a square, representing Kelvin’s statement, or a hexagon, representing Clausius’s statement. Clausius’s statement that entropy never decreases is a succinct summary of experience and is the more sophisticated, more abstract statement of the Second Law.

The various versions of the Second Law, including Carnot's original formulation, attest to its generality, a desirable feature in any scientific principle, and moved the American Nobel physicist and philosopher of science Percy Bridgman to remark in his 1941 book The Nature of Thermodynamics that "There have been nearly as many formulations of the second law as there are discussions of it" [16, p. 46].


remembering that here Q H is intrinsically negative (Q H < 0) since it represents heat extracted from the hot reservoir, so that Q C /TC ≥ |Q H |/TH . Here, the equality gives the minimum amount of heat that can be wasted to guarantee a maximum efficiency for converting source heat into work, which reproduces the constant Kelvin ratio Q /T for the Carnot cycle mentioned previously. Note that to minimize wasted heat, the cold sink should be as cold as possible and the hot source as hot as possible, the latter case typically being the best choice inasmuch as cold sinks are generally rare (which is why, for example, modern power plants, even though typically located near a natural source of cooling water, use superheated steam). From the First Law and energy conservation, the work done by the heat engine is the difference between the heat absorbed and the heat exhausted: W = |Q H | – Q C . Solving this equation for Q C and substituting the result into the above inequality gives, after a little algebra, an equation for the thermal efficiency of the engine in terms of the operating temperatures: e ≡ W/|Q H | ≤ 1 − TC /TH , which is a maximum for an ideal (Carnot) engine, as we’ve noted previously, and the inequality ensures that the efficiency of a real engine will always be less than the Carnot efficiency (“Carnot’s Theorem”), dropping to zero when there is no temperature difference between the two reservoirs (T C = T H ) to move heat to do any work. In a perfect engine converting all of the input heat into work (|Q H | = W and Q C = 0), ΔS = −|Q H |/TH = −W/TH ≥ 0, or W/TH ≤ 0, a condition that cannot be satisfied for a positive amount of work done by the engine, confirming, again, the Kelvin statement of the Second Law. Of course, the inverse process where W < 0 is possible: there is no objection to converting any amount of work into heat, order into disorder, as occurs, for example, when the (negative) work done by the force of friction generates


heat when bringing a moving object to rest, or when a bicycle tire pump warms up after work is done on it to compress air.
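
The bookkeeping just described is easy to check numerically. The short Python sketch below applies the condition ΔS ≥ 0 to a heat engine running between two reservoirs; the temperatures and the heat input are arbitrary illustrative numbers of my own choosing, not values from the text:

```python
# Second-Law bookkeeping for a heat engine (illustrative numbers only).
T_H, T_C = 500.0, 300.0   # hot and cold reservoirs, K
Q_H = 1000.0              # heat drawn from the hot reservoir, J

e_carnot = 1.0 - T_C / T_H       # the best efficiency any engine could have between these temperatures
W_max = e_carnot * Q_H           # the most work that heat could possibly yield, J
Q_C_min = Q_H - W_max            # the least heat that must still be dumped into the cold sink, J

dS = -Q_H / T_H + Q_C_min / T_C  # total entropy change of the two reservoirs in the ideal (reversible) case
print(f"Carnot efficiency {e_carnot:.0%}; minimum waste heat {Q_C_min:.0f} J; total dS = {dS:+.2f} J/K")

# A "perfect" engine with no waste heat would require a net entropy decrease, which is forbidden:
print(f"a perfect engine would need dS = {-Q_H / T_H:+.2f} J/K, which the Second Law does not allow")
```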

Example 3.4 Directionality of Heat Transfer: a Quantitative Illustration for Body Heat As another example of the directionality of heat transfer, this one quantitative with representative numerical values instead of qualitative as in the previous discussion, consider body heat. Our bodies produce the energy equivalent of a 100-W light bulb, 100 J of energy every second (1 W = 1 J/s), from the food we metabolize (equivalent to about 2,000 Cal of food energy every day). Most of this energy is released as heat to our surroundings—a good thing! (see Example 3.5 below)—which are typically at a lower temperature than our body, increasing its entropy at the rate of about 100 watts/293 K = 0.34 watts/K, assuming a temperature of 20 °C (= 293 K) for our environs. For a body temperature of 37 °C (= 310 K), our body’s entropy decreases (because body heat is lost to our surroundings, so the heat transferred from our body is negative) at a rate of about 100 watts/310 K = 0.32 watts/K, an entropy loss rate a little less than the entropy gain rate for our surroundings. Thus, in this Example, the total rate of entropy change for the system, which in this case is our body and its surroundings, is ΔS system /time = 0.34 watts/K – 0.32 watts/K = 0.02 watts/K > 0, as required by the Second Law.
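
A few lines of Python reproduce the entropy rates just estimated (again, simply a check on the arithmetic of Example 3.4, not part of the original example):

```python
# Reproducing the entropy-rate estimate of Example 3.4.
P = 100.0        # metabolic heat output, watts (joules per second)
T_body = 310.0   # body temperature, K (37 C)
T_room = 293.0   # temperature of the surroundings, K (20 C)

dS_surroundings = P / T_room   # rate of entropy gain of the room, W/K
dS_body = -P / T_body          # rate of entropy loss of the body (heat leaves it), W/K

print(f"surroundings: {dS_surroundings:+.2f} W/K, body: {dS_body:+.2f} W/K")
print(f"net rate of entropy production: {dS_surroundings + dS_body:+.2f} W/K")  # about +0.02 W/K, as required
```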

Example 3.5 Counting Calories: How Much Energy is in the Food We Eat? Most people don’t appreciate the amount of energy contained in the food we eat. As mentioned in Example 3.4 above, a typical dietary energy intake is about 2,000 Cal, although very active people may require nearly twice that much, and more sedentary people can get by with somewhat less than that. Since the human body is mostly water, we’ll use the specific heat (discussed in Example 2.1) of water, which is 1 Cal/kg·K = 4186 J/kg·K (recall from Note 6 that 1 Cal = 1000 cal, which in turn equals 4186 J when converted to the metric unit of energy). Looking at the units used for specific heat, one can readily appreciate that it is a thermodynamic quantity that tells us how much heat energy is required to change the temperature of a given mass of material: for water that would be 4186 J of energy to change the temperature of each kg of mass by one degree K (or one degree C since a degree of temperature is the same on each scale, which differ only in their zero point). Putting 2,000 Cal (=8,372,000 J) of energy into a 80-kg person (=176 pounds of weight here on Earth) would increase the person’s body temperature by 8,372,000 J/[(80 kg)·(4186 J/kg·K)] = 25 K, which converts to a whopping 45 °F! (Note how the units for energy and mass, J and kg, cancel out, leaving only units of temperature, 1/[1/K] = K.) Readers familiar with physics will recognize our temperature-increase equation as Q = m c ΔT , the quantity of heat Q required for a temperature change ΔT of mass m, where c denotes the specific heat of the material. Were it not for the body’s temperature regulating systems, which include radiating heat away from the body to the surroundings (as in Example 3.4), eating a day’s supply of food would produce terribly uncomfortable changes in body temperature. Had we used the actual specific heat of the body (= 3480 J/kg·K, somewhat lower than that of water due to the
presence of protein, fat, and minerals, all of which have specific heats lower than that of water), the increase in temperature would have been greater: 30 K or 54 °F. Of course, not all the energy content in the food we eat is converted into body heat: some of it becomes stored as chemical energy through metabolic processes, and, of course, some of what we eat passes through the body without being digested. Still, this example illustrates the relatively large amount of energy in the food we eat. Think of that the next time you sit down with a bowl of ice cream! If you're still not convinced that we take in a lot of energy in the food we eat, we can calculate how high an 80-kg person could climb if that person had 2,000 Cal of food energy to expend doing work climbing against the force of gravity. In this case, the 2,000 Cal of available energy are set equal to the gravitational potential energy we've gained after using that energy to do an equal amount of work against gravity, an amount of work equal to the product of our weight mg (the force of gravity acting on our body having a mass m) and the vertical distance, or height h, we climb (recall that work equals the product of force times distance): that is, Q = mgh, another basic formula familiar to physics students, where g = 9.8 m/s^2 is the acceleration due to gravity here on Earth. The reader should recognize that in both cases considered here—using food energy to increase either body temperature or body height—we are invoking that all-important principle of conservation of energy. Solving this energy equation for the height h gives h = Q/mg, the food energy consumed divided by the person's weight mg. Note that mass and weight are not the same: an object's weight is equal to its mass times the local acceleration due to gravity—astronauts in Earth orbit have mass but are weightless because they are "falling" with gravity as they orbit Earth. Here on Earth, a mass of 1 kg equals a weight of about 2.2 pounds, or mg = (1 kg) × (9.8 m/s^2) = 9.8 newtons of weight in the metric system where one newton (1 N) of force is defined as the force required to accelerate a mass of 1 kg by 1 m/s^2 (thus 1 N = 1 kg·m/s^2). Recalling the story—and it is most likely only a story—about the apple falling from a tree inspiring Newton to think about gravity, it is rather appropriate that an apple weighs about 1 N—about a quarter of a pound. Solving the energy equation for height, and recalling the definition of a joule as equal to 1 N·m, gives h = Q/mg = 8,372,000 J/[(80 kg)·(9.8 m/s^2)] = 10,700 m (10.7 km), which is more than 6 miles high! Of course, we've assumed 100% efficiency in the conversion of food energy into mechanical work, a highly improbable situation according to the Second Law of Thermodynamics, so the actual distance you could climb will be quite a bit less. Nevertheless, think about that the next time you sit down with a bag of potato chips! And so, the energy content of the food we eat is quite high. But, of course, the energy available to raise the temperature or the height of an object can be much higher than what we find in food. To take just one example, consider that a satellite in low-Earth orbit, moving at an orbital speed of about 8 km/s (5 miles/s or 17,000 mph—fast enough!), has nearly 50 times the motion energy that would be required to raise its temperature to the point of melting (assuming the satellite is mostly aluminum, which melts at 660 °C).
Thus the problem of the reentry of crewed spacecraft through Earth’s atmosphere: this energy must be dissipated—converted to heat and quickly radiated away—or the astronauts will most certainly not survive (as unfortunately occurred in 2003 when the Space Shuttle Columbia reentered Earth’s atmosphere and the heat of reentry penetrated a portion of the heat shield that was damaged during launch, destroying the internal wing structure and causing the orbiter to become unstable and break apart with all lives lost). Most meteoroids from space, typically tiny stones the size of a pea, burn up in the atmosphere (we see them as meteors, sometimes called “shooting stars”) due to the heat produced by friction from the air molecules they encounter, but bigger ones survive passage through the atmosphere and strike Earth’s surface
creating impact craters such as Meteor Crater near Winslow, Arizona, a nearly one-mile-wide hole in the desert. But the most efficient source of energy in the universe is mass itself, as Einstein pointed out in 1905 as a consequence of his theory of relativity (recall Note 7): E = mc^2, where here c represents the speed of light, 300,000,000 m/s or 186,000 miles/s, a big number however you measure it, so that even a very small mass contains a very large amount of energy. The energy contained in an apple (to take an example in honor of Newton) having a mass of 100 g (a weight of about 1 N) is (0.1 kg)·(300,000,000 m/s)^2 = 9 × 10^15 J, the explosive energy equivalent of 2,250 kilotons of TNT, about the same as what would be released by 150 Hiroshima atomic bombs. And just as the Second Law tells us that trying to put Humpty Dumpty back together again is unlikely, putting all those bombs back together again is even more unlikely. As amazing as all this may seem, as an astrophysicist familiar with the very large energies here and there throughout the universe, I can safely say that the mass-energy contained in an apple is just a drop—a very tiny drop—in a very big bucket. A lot of calculations here concerning a lot of energy, William Blake's "Eternal Delight." But, along with the direct applications to thermodynamics, I'm guessing that because you're reading this book, you're looking to increase your science literacy in the hope of feeling more comfortable on the science side of the "two cultures." And I hope you will be.
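
For readers who want to reproduce the numbers in Example 3.5, here is a short Python sketch of my own; it uses only the values quoted above (2,000 Cal of food energy, an 80-kg person, the two specific heats, g = 9.8 m/s^2, and a 100-g apple) together with the standard formulas Q = mcΔT, Q = mgh, and E = mc^2:

    # A minimal sketch collecting the back-of-the-envelope numbers of Example 3.5.
    Q = 2000 * 4186.0        # J, a day's 2,000 Cal of food energy
    m = 80.0                 # kg, body mass
    c_water = 4186.0         # J/(kg K), specific heat of water
    c_body = 3480.0          # J/(kg K), actual specific heat of the body
    g = 9.8                  # m/s^2, acceleration due to gravity

    dT_water = Q / (m * c_water)     # temperature rise if the body were pure water
    dT_body = Q / (m * c_body)       # ... using the body's actual specific heat
    height = Q / (m * g)             # height climbed if all the energy did work against gravity

    print(f"temperature rise (water): {dT_water:.0f} K (~{1.8 * dT_water:.0f} F)")    # ~25 K, ~45 F
    print(f"temperature rise (body):  {dT_body:.0f} K (~{1.8 * dT_body:.0f} F)")      # ~30 K, ~54 F
    print(f"climbing height:          {height / 1000:.1f} km (~{height / 1609:.1f} miles)")  # ~10.7 km

    # Einstein's E = mc^2 for a 100-g apple, for comparison:
    E_apple = 0.1 * (3.0e8) ** 2
    print(f"mass-energy of an apple:  {E_apple:.1e} J (a couple of thousand kilotons of TNT)")  # 9e15 J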

Finally, we mention the thermodynamic quantity known as the Gibbs free energy G = U + pV – TS, introduced by the pioneering American physical chemist and thermodynamicist J. Willard Gibbs (1834–1903), the "intellectual link between the steam engine and chemical reactions" [17, p. 169]. This is a measure of the "free" or useful energy available for doing work; it also determines the direction of chemical reactions, particularly those involving energy utilization in living systems (a field of study known as bioenergetics), all of which always proceed toward a minimum of G, which, of course, corresponds to a maximum of the total entropy (of the system together with its surroundings) at equilibrium (note the negative sign subtracting the entropic energy term TS). Directionality on the macroscale was soon shown to be rooted in the microphysics of thermodynamic principles: entropy, a rather protean concept even from the beginning, was identified with a measure of disorder and the likelihood of a system being in a given state, with increasingly disordered states being the most probable states simply because there are more ways to be disordered than ordered. Probability was thereby introduced into physics, creating a new type of statistically based mechanics tied to thermodynamics, a "mathematics of molecules" which for a gas interprets the pressure, temperature, and other macroscopic properties of the gas in terms of the average values of the speed, momentum, and energy of its countless constituent particles. Unlike the First Law, which is absolute, the Second Law, as we shall see, is purely statistical in nature.
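
A hedged illustration of the Gibbs criterion introduced above may help: at constant temperature and pressure a reaction proceeds spontaneously when ΔG = ΔH − TΔS is negative. The numbers in the Python sketch below are round, order-of-magnitude values of my own choosing, not data from the text:

    # A minimal sketch of the Gibbs criterion: at constant temperature and pressure a
    # process is spontaneous when dG = dH - T*dS < 0. Illustrative numbers only.

    def gibbs_change(dH, dS, T):
        """Gibbs free-energy change (J/mol) for enthalpy change dH (J/mol) and
        entropy change dS (J/(mol K)) at absolute temperature T (K)."""
        return dH - T * dS

    T = 310.0    # K, body temperature
    for label, dH, dS in [("releases heat, increases disorder", -20_000.0, +30.0),
                          ("absorbs heat, decreases disorder", +20_000.0, -30.0)]:
        dG = gibbs_change(dH, dS, T)
        verdict = "spontaneous" if dG < 0 else "non-spontaneous (must be driven)"
        print(f"{label}: dG = {dG / 1000:+.1f} kJ/mol -> {verdict}")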


References
1. I. R. Morus, When Physics Became King (University of Chicago Press, Chicago & London, 2005)
2. P. A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist (Open Court, La Salle, IL, 1949)
3. R. D. Purrington, Physics in the Nineteenth Century (Rutgers University Press, New Brunswick, 1997)
4. P. J. Bowler, I. R. Morus, Making Modern Science: A Historical Survey, 2nd edn. (University of Chicago Press, Chicago and London, 2020; orig. publ. 2005)
5. P. W. Atkins, Galileo's Finger: The Ten Great Ideas of Science (Oxford University Press, Oxford & New York, 2003)
6. J. Gribbin, The Scientists: A History of Science Told through the Lives of Its Greatest Inventors (Random House, New York, 2004)
7. A. Eddington, The Nature of the Physical World (Cambridge University Press, Cambridge, 1928)
8. N. Spielberg, B. D. Anderson, Seven Ideas that Shook the Universe (John Wiley & Sons, New York, 1985)
9. B. Russell, Why I Am Not a Christian (George Allen & Unwin, New York, 1957)
10. http://www.academyofsciences.va/content/accademia/en/magisterium/piusxii/22november1951.html
11. F. Adams, G. Laughlin, The Five Ages of the Universe: Inside the Physics of Eternity (The Free Press, New York, 1999)
12. H. C. von Baeyer, Warmth Disperses and Time Passes: The History of Heat (Modern Library, New York, 1999; orig. publ. as Maxwell's Demon, Random House, 1998)
13. In D. Halliday, R. Resnick, Physics, 3rd edn. (John Wiley & Sons Inc., New York, 1978)
14. In I. Stewart, Does God Play Dice? The New Mathematics of Chaos (Blackwell Publishing, Oxford, 1989)
15. T.-W. Lee, J. E. Park, "Entropy and Turbulence Structure," Entropy 24, 11 (2022)
16. P. Bridgman, The Nature of Thermodynamics (Harvard University Press, Cambridge, MA, 1941)
17. P. W. Atkins, The Second Law (W. H. Freeman & Co., New York, 1994; orig. publ. 1984)

4 Statistical Interpretation of the Second Law of Thermodynamics

Summary Directionality on the macroscale was soon shown to be rooted in the microphysics of thermodynamic principles when entropy was identified with a measure of disorder and the likelihood of a system to be in a particular state or arrangement, with increasingly disordered states being the most probable states simply because there are more ways to be disordered than ordered—more ways to go wrong than to go right, more ways to fail than to succeed. It’s much easier to scramble an egg than to unscramble it simply because there are more ways—more possible states—to scramble an egg than there are states for the more ordered unbroken egg. As shown by the new statistical mechanics—a “mathematics of molecules”—developed in 1877 by the Austrian physicist Ludwig Boltzmann, it’s easier—because it’s more probable—to transform ordered mechanical energy into heat than to transform heat, a form of disordered molecular motion, into ordered mechanical energy: the natural direction is from order to disorder. Thus the natural tendency for things to become disordered. Various examples including coin tosses, the distribution of air molecules in a room, and metabolic processes within living organisms illustrate this tendency for disorder in nature.

Order to Disorder
It's the way we all fly… —singer-songwriter Jimmy Buffett, Einstein Was a Surfer



The development of the new science of thermodynamics spurred a revival of interest in understanding the relationship between the observable (macroscopic) thermodynamic properties of a gas, such as pressure, volume, and temperature, and the motion of individual (microscopic) gas particles. A new statistical mechanics, using probability and statistics to understand the mechanical properties of a collection of a large number of particles, was born of the new atomic theory of matter and the old kinetic theory of gases which modeled a gas as a collection of many tiny particles—atoms or molecules—colliding with each other and with the walls of a containing vessel but otherwise moving freely through space, although not everyone at the time believed that atoms, even those of a gas, could move freely, and far fewer believed in atoms themselves. In turn, its success provided compelling evidence for atomic theory, and, in treating atoms quantitatively, provided, for the first time, reliable estimates of the sizes of atoms and molecules (of order 10^-10 m, so small that you could line up 10 billion of them, roughly the number of people in the world today, end to end, along a meter stick). Not unexpectedly, a deeper understanding of the Second Law emerged, as so often happens in science, when the molecular basis of the law was established, providing a "ground floor" foundational understanding at the most basic level. "The explanation of the complete science of thermodynamics in terms of the more abstract science of statistical mechanics is one of the greatest achievements of physics," the twentieth-century American mathematical physicist and physical chemist Richard Tolman, himself a contributor to the science of statistical mechanics, pronounced [1, p. 9]. "In addition, the more fundamental character of statistical mechanical considerations makes it possible to supplement the ordinary principles of thermodynamics to an important extent." Not surprisingly, in view of his important contributions to thermodynamics, one of the principal players was Clausius, whose 1857 "Über die Art der Bewegung, welche wir Wärme nennen" ("On the Type of Motion We Call Heat") established modern kinetic theory on a firm mathematical basis. Clausius correlated the (microscopic) average speed and mean free path—the average distance traveled between collisions—of an individual gas particle with the directly observable (macroscopic) properties of a gas such as temperature and pressure, an exercise first-year physics students easily reproduce. The Scottish physicist James Clerk Maxwell (1831–1879; Fig. 4.1), certainly one of the greatest theoretical physicists in the time between Newton and Einstein, read Clausius's paper shortly after his study of the stability of Saturn's rings which he showed, in a prize-winning mathematical tour de force analysis, must be composed of loose, orbiting material rather than
being a solid sheet. This work, together with his reading of books on probability and statistics, prepared his mind for statistical problems, and he very soon derived the now-famous Maxwell distribution law for molecular speeds, which showed that some molecules move very fast and some move very slowly but most move at intermediate speeds clustered around a most probable speed depending on the temperature of the gas and the mass of the molecule. (An air molecule, for example, moves at an average speed of between a quarter and a third of a mile per second and suffers a few billion collisions with other air molecules every second.)1 This new probabilistic characterization of the system is forced by the impossibility—in practice but not in principle—of following the chaotic motions of countless molecules (a teaspoon of air contains about 10^19 molecules—another huge number: 1 followed by 19 zeros!). Emulating the origins of statistics in the study of populations of people, it reflects a shift in focus from the individual to the population, implying, as a result, as Maxwell noted, a contentment with "a new kind of regularity, the regularity of averages." Here, in drawing a comparison between the molecules in a container and the demographic makeup of a country, Maxwell was inspired by the introduction of statistical methods into the social sciences—a "social physics" of sorts—by the nineteenth-century Belgian astronomer-mathematician Adolphe Quetelet. For both people and molecules, overall regularity—broad characterizations of ensemble averages—is all that can be known for sure. Statistics had proved its effectiveness in the demographic case, and Maxwell demonstrated that it could do the same in the field of thermodynamics. Maxwell also derived equations for various transport processes in gases, as well as a formula for viscosity which, with the help of his wife, he was able to test experimentally. And he "was a modestly accomplished author of satirical verse" [2, p. 9].
1 In an attempt to understand the microscopic meaning of entropy, Maxwell, playing the devil's advocate, imagined a minuscule and "very observant and neat-fingered being" able to follow and even manipulate individual molecules in a gas. "Maxwell's Demon," Kelvin called it, bested only by Schrödinger's fictional feline in more modern times as a creature of notable scientific pedigree. In this thought experiment, a demon controls a small, massless door between two chambers of gas. As individual gas particles approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the temperature of a gas depends on the average speed of its constituent particles, the demon's actions cause one chamber to warm up and the other to cool down, having the effect of heat flowing uphill, as it were, from a cold body to a warmer one, in clear violation of the Second Law. Alas, for several reasons (see, for example, chapter 18 of Hans Christian von Baeyer's Warmth Disperses and Time Passes: The History of Heat), the demon's actions do not violate the Second Law. Enzymes, life's chemical catalysts, function in the manner of Maxwell's Demon, creating order at the expense of chemical energy at the cellular level. Interestingly, molecular-sized mechanisms, specially built for a variety of purposes, have been developed and employed in the emerging field of nanotechnology—none of which, I'm happy to report, violates the Second Law.
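
As a rough illustration of Maxwell's result (not taken from the text; the standard Maxwell speed distribution is used, and the choice of nitrogen at room temperature is mine), the distribution and its characteristic speeds can be evaluated in a few lines of Python:

    # A minimal sketch of the Maxwell speed distribution (standard formula) for
    # nitrogen at room temperature; the choice of gas and temperature is mine.
    import math

    k = 1.380649e-23    # J/K, Boltzmann constant
    m = 4.65e-26        # kg, mass of an N2 molecule
    T = 293.0           # K, room temperature

    def maxwell_speed_pdf(v):
        """Probability density for finding a molecule with speed v (m/s)."""
        a = m / (2.0 * k * T)
        return 4.0 * math.pi * (a / math.pi) ** 1.5 * v ** 2 * math.exp(-a * v ** 2)

    v_most_probable = math.sqrt(2.0 * k * T / m)       # peak of the distribution
    v_mean = math.sqrt(8.0 * k * T / (math.pi * m))    # average speed
    print(f"most probable speed: {v_most_probable:.0f} m/s")   # ~420 m/s
    print(f"mean speed:          {v_mean:.0f} m/s")            # ~470 m/s, a few hundred m/s
    print(f"pdf at the mean:     {maxwell_speed_pdf(v_mean):.2e} s/m")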


Fig. 4.1 The James Clerk Maxwell Monument in Edinburgh, Scotland, commissioned by The Royal Society of Edinburgh and unveiled in 2008. Maxwell was, with Kelvin, one of the two giants of nineteenth-century physics, and, like Kelvin, he descended from a prominent Scottish family—and, again like Kelvin, he also finished second “wrangler” in the Mathematical Tripos at Cambridge University. The two, both shining stars in the firmament of physics, contributed to a distinctive style of mathematical physics in the second half of the century, making foundational contributions to the new sciences of thermodynamics and electromagnetism, the first significant new fields of physics since the development of mechanics by Isaac Newton nearly two centuries earlier. These subjects together constitute what is now known as “classical” physics, thought by everyone at the time to be all there is to know in the field—until the arrival of quantum physics and relativity theory early in the twentieth century. Maxwell, who published his first paper (on a geometrical way of tracing spirals) at the remarkably young age of fifteen, was born in Edinburgh in 1831, the year the English scientist Michael Faraday discovered electromagnetic induction, and he died of cancer at age 48, the same age his mother died of the same disease, in 1879, the year Albert Einstein was born. Thus were these three giants of physics closely linked chronologically—and intellectually, as Maxwell’s work in electromagnetism was deeply rooted in Faraday’s and, in turn, was an important ingredient in Einstein’s theory of relativity. Maxwell, rightly recognized as the “Newton of electromagnetism” for his well-known eponymous equations unifying, as the label suggests, the physics of electricity with that of magnetism (and, as a bonus, both with that of light as well), was, in the opinion of physicist and science historian Robert Purrington, the only “physicist whose name is likely to be mentioned in the same breath as Newton’s and Einstein’s.” (Wikimedia Commons, public domain)


In 1877 the Austrian physicist Ludwig Boltzmann (1844–1906; Fig. 4.2), who, like Maxwell, believed that atoms were more than convenient fictions, developed a precise statistical interpretation of the Second Law of Thermodynamics, "one of the most profound, most beautiful theorems of physics, indeed of all science," proclaimed the editor of Boltzmann's collected works, with entropy as a statistical entity measuring the disorder of a system. Boltzmann connected the macroscopic thermodynamic quantity, entropy, with a microscopic statistical quantity, probability, in a paper titled (in translation) "On the Relation between the Second Law of Thermodynamics and the Theory of Probability," defining the entropy of a state to be S = k ln w, where k is now called the Boltzmann constant (= 1.38 × 10^-23 J/K, relating energy in joules to absolute temperature), "ln" is the natural logarithm,2 and w is the number of possible arrangements ("microstates") of a particular (macro)state, and is thus a measure of the probability that a system will exist in the state it is in relative to all the possible states it could be in. Note that more probable states are higher entropy states. For example, there are two ways (microstates) to get the macrostate "one heads, one tails" when tossing two coins: first coin heads and second coin tails, or first coin tails and second coin heads. But there is only one way to get the more ordered (and hence lower entropy) states "both heads" and "both tails." Outside the world of coin tosses, as the number of possible arrangements of constituent atoms increases, as in going from solid to liquid
2 In mathematics, the logarithm is the inverse operation to exponentiation, which means the logarithm of a given number x is the exponent to which another fixed number, the base, must be raised to produce that number x. For example, the common (decimal, or base ten) logarithm of a number x, written log x, is the power to which the number 10, the (decimal) base, must be raised to equal x; e.g., log 1000 = 3 because 1000 = 10^3 (= 10 × 10 × 10, the base multiplied by itself 3 times). The decimal log of a number is, very roughly, the number of digits in the number. The natural logarithm of a number x, ln x, is the power to which the mathematical constant e, which is an irrational and transcendental number approximately equal to 2.718…, would have to be raised to equal x. For example, ln 1000 is 6.907…, because e^6.907… = 1000. The natural logarithm of e itself, ln e, is 1, because e^1 = e, while the natural logarithm (and the common decimal log) of 1 is 0, since e^0 = 1 (indeed, any number raised to the first power is equal to the number itself, and any number raised to the zero power equals 1). Logarithms, tools for turning multiplication into addition, and division into subtraction, were introduced in the early seventeenth century as a means of simplifying calculations, and are common in mathematics, science, and engineering. Examples include the Richter scale for earthquakes (a magnitude 4 shake, for example, being not 2 but 10^2 times more powerful than a barely detectable magnitude 2 quake; 10^4 = 10^2 times 10^2); the decibel scale for sound intensity (because the range of energy from barely audible to the threshold of pain varies by a huge factor of a trillion, so that a normal conversation that seems three times louder than a whisper is actually a thousand [10^3] times greater in intensity); and the pH acid–base scale for an aqueous solution.
The binary logarithm, using base 2, is common in computer science.


Fig. 4.2 Boltzmann’s grave in Zentralfriedhof, Vienna, with his famous formula S = k log w which provides a molecular version of the Second Law of Thermodynamics, establishing a precise mathematical connection between entropy and disorder, between the macroscopic world of appearances and the microscopic world of the atom. An accomplished pianist and renowned lecturer, he was at heart a theoretical physicist but also an able experimentalist. An archetypal atomist born the same year the pioneering English atomist John Dalton, Joule’s private tutor, died, this neurasthenic and anxiety ridden genius was prone to depression throughout his life and, after earlier unsuccessful attempts, committed suicide while on holiday in 1906 near Trieste, Italy, despondent over the “dominantly hostile mood” towards atomism, sadly, just one year after Einstein’s convincing proof of the existence of atoms. Speaking of Einstein, one wonders whether both his hair and attire were victims of nature’s tendency toward randomness. (Photograph by the author)


(melting) or from liquid to gas (evaporation), the entropy and hence disorder increases: in the solid state, atoms are relatively fixed in position and thus highly ordered, but "slide" past each other when a liquid, and move freely and randomly when a gas. We shall learn to appreciate w as a "disorder parameter," a measure of the degree of disorder in a system, which increases as the number of possible arrangements w increases. Note that Boltzmann's statistical (microscopic) definition of entropy is an absolute entropy, whereas the thermodynamic (macroscopic) definition (dQ/T) is for changes in entropy. Example 4.1 Entropy and Coin Tosses Looking more closely at coin tosses, tossing N coins (or, equivalently, tossing one coin N times) results in 2^N microstates for a total of two microstates (one for heads and one for tails) for one coin (2^1 = 2), and four microstates (HH, HT, TH, and TT, writing H for heads and T for tails) for two coins (2^2 = 4). These small-N situations are characterized by relatively low entropy and hence a high degree of order, with heads or tails being equally likely for a single coin toss (even if the loser of the coin toss to begin overtime in the National Football League is most certainly not equally likely to win the game!), and equal probabilities (1 in 4) for each of the mixed—and hence more disordered—microstates HT and TH and the ordered microstates HH and TT for two coin tosses. Increasing the number of coin tosses by just one to N = 3, we now have 2^3 = 8 microstates, only two of which are ordered: the 3H and 3T macrostates, HHH and TTT, each one being the sole microstate, for which w = 1 and hence zero entropy, as expected for a completely ordered arrangement; the other six microstates, HHT, HTH, and THH corresponding to the 2H/1T macrostate (with w = 3 for the three different outcomes), and HTT, THT, and TTH for the 2T/1H macrostate (again with w = 3), are mixed (disordered) and are therefore higher entropy—and hence more likely—states. It is easy to show that for N = 4 coins, there are 14 mixed microstates, macrostates 3H/1T (with w = 4: HHHT, HHTH, HTHH, THHH), 2H/2T (with w = 6: HHTT, HTHT, HTTH, THHT, THTH, TTHH), and 1H/3T (with w = 4: TTTH, TTHT, THTT, HTTT), and only two ordered all-head (HHHH) or all-tail (TTTT) macrostate/microstate outcomes (with w = 1 and hence again S = 0 for each completely ordered state; see Fig. 4.3). Clearly, variety and diversity are more likely than uniformity and sameness, and disorder—and therefore high entropy—is more likely for more complex, larger-N systems.
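
The bookkeeping in Example 4.1 is easy to automate. The short Python sketch below (my own illustration) enumerates all 2^N microstates for N tossed coins, groups them into macrostates by the number of heads, and evaluates Boltzmann's S = k ln w for each, with the entropy quoted in units of k:

    # A minimal sketch of Example 4.1: enumerate the microstates of N tossed coins,
    # group them into macrostates, and evaluate Boltzmann's S = k ln w (in units of k).
    from itertools import product
    from collections import Counter
    from math import log

    N = 4                                                 # number of coins
    microstates = list(product("HT", repeat=N))           # all 2^N equally likely outcomes
    macrostates = Counter(state.count("H") for state in microstates)

    for heads in sorted(macrostates):
        w = macrostates[heads]                            # number of microstates for this macrostate
        print(f"{heads} heads / {N - heads} tails: w = {w}, S/k = ln w = {log(w):.3f}")
    # 0 or 4 heads: w = 1 and S = 0 (completely ordered);
    # 2 heads / 2 tails: w = 6, the most probable and most disordered macrostate.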

Note in Example 4.1 that the least probable outcomes for a coin toss are those that are the most ordered: either all heads or all tails. Order is, in general, not likely: disorder is the rule. Although it is certainly possible to throw 100 heads in a row, don't bet on it: the probability of doing this is only 1 in 1.27 × 10^30, all heads being just one of the 2^100 possible outcomes and therefore not very probable at all (Rosencrantz and Guildenstern's run of 76 heads notwithstanding!). The most probable outcome, half heads and half tails, is the macrostate with the greatest number of corresponding microstates and hence greatest entropy and greatest disorder. The connection between entropy, order, and probability—with high entropy, less ordered states being


Fig. 4.3 The five macrostate and corresponding sixteen microstate outcomes of four coin tosses, an illustration of the microscopic interpretation of entropy (see text for details). Lopsided distributions (in the extreme, all heads or all tails) are called more orderly than even (50–50; two heads and two tails) distributions because there are fewer ways to realize them. A highly ordered condition is one in which there are relatively few ways (“states”) to achieve it; a more disordered condition is one in which there are many ways to realize it, one with more “mixedupness.” An ordered system is one arranged in a regular pattern, as is the case here for all heads or all tails (or, to take another example, all the atoms in a crystalline solid)

more likely, and low entropy, more ordered states being less likely—is fundamental to understanding why things go wrong in the world and are likely to get worse. In making the connection between entropy and disorder, it is important to note that coin tosses that are either all heads or all tails constitute a completely ordered state in the sense that the orientation of each one of the coins is uniquely specified. On the other hand, the macrostate “half heads, half tails,” for example, by itself gives us very little information about the state (heads or
tails) of each individual coin, and in that sense is said to be "disordered," or, to use the technical term for entropy introduced by the American thermodynamicist J. Willard Gibbs, in a state of increased "mixedupness." Compared to the least likely states "all heads" or "all tails," the most probable state "half heads, half tails" has the maximum number of possible microstates—which is why it is most likely—and therefore the maximum amount of disorder and hence the maximum of entropy, the quantitative measure of disorder. Similarly, shuffling an ordered deck of cards, for which the suits and numerical values follow each other in an "orderly" methodical sequence, will most certainly take it to a disordered, random arrangement of cards, but randomly shuffling a disordered deck to achieve order is exceedingly improbable. (Note, however, that the probability of a chance drawing of any specific, predetermined sequence from a shuffled deck is the same—approximately 1 in 10^68.) To summarize: for any system, the most probable macrostate, which is also the macrostate with the greatest disorder and hence greatest entropy, is the one with the greatest number of corresponding microstates—and these maximally disordered states provide the least information about the system. The inverse correlation between entropy and information—entropy being a thermodynamic framing of ignorance—is significant, and will be developed further throughout our story. Example 4.2 Likely (or not) Distributions of Air Molecules in a Room As another simple example relating disorder to entropy, consider an otherwise empty room containing just one air molecule (not a good room to be in) moving around the room, as air molecules do, bouncing off the walls, floor, and ceiling, equally likely to be in any part of the room at any time, sometimes located on one side of the room, sometimes on the other side. Imagining the room divided exactly in half, at any given time this molecule has a 1 in 2 (50% or "50–50") chance, an equal likelihood, of being on any side of the room, just like the equal probabilities of heads or tails coming up for a single coin toss. Now introduce a second, identical air molecule. Just like tossing two coins, there are now four equally probable microstates characterizing the location of the two molecules: RR, RL, LR, and LL, where R denotes the right side of the room, and L denotes the left side. And just like tossing N coins and having them come up all heads or all tails, the probability of finding all N molecules on one particular side of the room at once—the most ordered state—is (1/2)^N, very low indeed for the very large number of air molecules in a typical room: already only a 1 in 1.27 × 10^30 chance for just 100 molecules, and recall that just a teaspoon of air contains about 10^19 molecules—and there are a lot of teaspoons of air in a room full of air. (Chemistry students will recall that a mole of air, containing—as a mole of anything always does—Avogadro's number of approximately 6 × 10^23 molecules, occupies 22.4 L, about three-quarters of a cubic foot, of volume at standard temperature and
pressure.)3 The probability is not zero, but it is so small that you’d certainly never have to worry about gasping for air in a suddenly and spontaneously evacuated part of your room: indeed, the likelihood of such an event taking place is so vanishingly small that it has almost certainly never occurred anywhere in the universe since the beginning of time. That’s about as unlikely as things can get. Note the natural direction—time’s arrow—here from ordered past to disordered future. Again, ordered states are less likely than disordered states, with the likelihood of order decreasing rapidly with increasing N. Like the coin toss, compared to the state with all air molecules on one side of the room or the other (all heads or all tails for the coin toss), the state with half the molecules in one side of the room and half in the other side (half heads and half tails for the coin toss) has a much greater number of possible microstates and hence is much more probable—and the laws of statistical mechanics can be used to predict precisely the probabilities of the various, possible distributions of molecules in the room.
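
A few lines of Python (my own sketch, following the same reasoning as Example 4.2) make the point numerically: the probability that all N molecules are found on one chosen side of the room falls off as (1/2)^N, while the even split remains by far the most probable macrostate.

    # A minimal sketch of the odds discussed in Example 4.2: the chance that all N
    # molecules happen to sit on one chosen side of the room is (1/2)^N, while the
    # even split is the overwhelmingly most probable macrostate.
    from math import comb

    for N in (2, 10, 100):
        p_all_one_side = 0.5 ** N
        p_even_split = comb(N, N // 2) * 0.5 ** N        # exact 50-50 macrostate
        print(f"N = {N:3d}: P(all on one side) = {p_all_one_side:.3g}, "
              f"P(exact 50-50 split) = {p_even_split:.3g}")
    # For N = 100 the ordered outcome already has probability ~8e-31 (1 in ~1.27 x 10^30);
    # for the ~10^19 molecules in a teaspoon of air it is, for all practical purposes, zero.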

For very small “rooms” of particles on the scale of atoms and atomic nuclei, quantum mechanics, the physics of the very small, has shown that the positions and energies of the particles are quantized—limited only to specific, distinct values (as is the case with integer-only values for numbers) as opposed to being spread over a continuous range of values for macroscopic systems like air in a room—and that these depend on the temperature and size of the confining region. In this case, as the region expands, quantum physics shows that the allowed energies are lowered and are spaced closer together, so, for the same temperature, and hence same total energy of the system, the allowed distribution spans more energy levels, with the result that blindly choosing one particular particle from one particular energy level becomes less probable. This increased uncertainty in the precise energy level a particle occupies is what is meant by disorder at the quantum scale, and corresponds to an increased entropy and, importantly, a decrease in information about the system. At absolute zero (T = 0), all the particles are in the lowest energy (“ground”) state with 100% probability, so the entropy of the system at absolute zero is zero, in agreement with Boltzmann’s equation for the case of a totally ordered system with a single arrangement (w = 1; recall that ln 1 = 0).

3 It's not easy to appreciate just how big Avogadro's number is. Avogadro's number of popped popcorn kernels, distributed uniformly over Earth's surface, would reach over a mile high! And yet, less than half an ounce of charcoal (mainly carbon, a mole of which has a mass of exactly 12 g, the mass in grams of a mole of anything being equal to its atomic mass) contains Avogadro's number of carbon atoms—which, when you think of it, means atoms are very, very small indeed. And just to insert a measure of the "other culture" here, this one a bit of science history, Amedeo Avogadro (1776–1856) was an Italian scientist who discovered in 1811 that equal volumes of gases under the same conditions of temperature and pressure will contain equal numbers of molecules, a result now known in his honor as "Avogadro's Law".


As Example 4.2 illustrates, all of this is consistent with the Second Law of Thermodynamics which states that the entropy—and hence disorder—of an isolated system can never decrease: an isolated system (like a closed room full of air) can never spontaneously undergo a process that decreases its occupied volume and hence the number of possible microstates, thereby increasing the order of the system, as would be the case if all the air in a room suddenly rushed to one side of the room—or even worse, to one small corner of the room—another (Feynman) laughable situation if it ever occurred simply because we've never seen it happen—and most likely never will. However, the reverse process, a free expansion of air originally confined to half a room by a partition suddenly removed, is very likely, illustrating, again, the directionality of change in nature.4 In that case, and in the same sense that disorder increases if the litter on one vacant lot is spread over two lots, the room's order (the confinement of all the air to one side of the room, with no air in the other side)—and useful (expansion) energy—decreases as the air is less confined: we have less information and knowledge of the exact location of individual air molecules when they occupy the entire room, a macrostate now characterized by a greater number of possible microstates and hence greater entropy. It is important here to emphasize again that states characterized by less information—those we know less about—are states of greater entropy, a most unfortunate state of affairs given that entropy is always on the rise. The very large entropy of a black hole is a result of lost information once the black hole forms.

4 It's easy to prove this quantitatively. Removing the partition doubles the volume of space occupied by the air, giving each molecule twice as many possible locations (states), and thus 2^N times as many states for a room filled with N molecules. The change in entropy when the partition is removed is therefore ΔS = S_final − S_initial = k ln w_final − k ln w_initial = k ln (2^N w_initial/w_initial) = N k ln 2, which is a (large!) positive number, in agreement with the Second Law. A spontaneous halving of the volume would give a negative entropy change of the same magnitude, in violation of the Second Law, and therefore would be very unlikely to occur. Here we have used two properties of logarithms, namely ln a – ln b = ln (a/b) and ln a^x = x ln a. We can understand why, as Boltzmann discovered, entropy is proportional to ln w: because ln w_1 + ln w_2 = ln (w_1 w_2) and because entropy is additive (S_total = S_1 + S_2 + ...) and joint probabilities are multiplicative (w_1 w_2: that is, the probability of a particular outcome depending on a number of factors is the product of the probabilities for each factor), entropy must then be proportional to ln w. Somewhat related to a free expansion, but importantly different, is what is known as a reversible (i.e., undergoing very, very slow changes), adiabatic (i.e., thermally insulated) expansion. In this case, no heat enters or leaves the system, so Q = 0. And because this process is reversible, there is no change in entropy: ΔS = 0. Every reversible adiabatic process is a constant-entropy process, a process called isentropic. Here, the increase in disorder due to the greater volume occupied by the gas is precisely balanced by a decrease in disorder associated with the lower temperature and hence lower molecular speeds.


This molecules-in-the-room example exhibits identical statistical properties to a coin toss because there are only two possibilities available in each case: either one or the other side of the room for air molecules, and either heads or tails for tossed coins. If there were more than two possibilities, like, for example, the eleven different numbers that can come up on a pair of thrown dice, the small cubes used in games of chance, the statistics would be more complicated, but the relation between order–disorder and entropy would still hold. If the dice are thrown a great many times, the most probable value—the number that comes up most often—is 7, the number with the most combinations (six in all: 1 + 6, 2 + 5, 3 + 4, 4 + 3, 5 + 2, 6 + 1). The next most probable values are 8 and 6, each arising from five different combinations, and the least probable values are 2 (1 + 1, having the appearance of “snake eyes”: [·] [·]) and 12 (6 + 6, looking like a pair of “boxcars”: [:::] [:::]), each arising from just one possible combination. Again, mixed—and hence disorderly—arrangements are more probable because there are more ways to be mixed and disordered than to be sorted and ordered . Successful gamblers using “honest” dice are well aware of this. And so, back to thermodynamics, the entropy increase accompanying natural processes implies an increase in disorder or “mixedupness.” It is therefore easier—because it is more probable—to transform ordered mechanical energy into heat than to transform heat, a form of disordered molecular motion, into ordered mechanical energy. The natural direction is from order to disorder, from a single degree of freedom—as in the coordinated, single direction of uniform motion undertaken simultaneously by all the atoms in a wooden block sliding along a rough surface, or in the organized, coordinated motion of an ocean wave—to many degrees of freedom—as in the random motion of the thermal (“heat”) energy produced in bringing the block to rest, or in the disordered, random, turbulent motion of the water after the wave breaks on the shore: you’re certainly not very likely to see water along the shore rise up in ordered unison to make a perfectly formed wave, just as you’re much more likely to see a wave wash Jimi Hendrix’s sandcastle into the sea than you are to see a sandcastle form spontaneously in the wake of a wave crashing on the shore. A heat engine—even an ideal heat engine—can therefore never transform (disordered) thermal energy into (ordered) mechanical energy with 100% efficiency, as Carnot realized two centuries ago in his pioneering investigations. Understood at the microscopic molecular level, the initial sorting of heat energy into hot and cold reservoirs, an ordered and hence low-entropy arrangement, is progressively lost as heat is transferred, resulting in a mixed (more disordered) state, a sorting that would be completely lost for heat
transfer from a hot to a cold body when they reach a common, equilibrium temperature and hence maximum entropy. Microscopically, even within the working substance of the engine (typically hot gas), not all of the disorderly moving atoms push against a piston (recall Fig. 3.7) or a turbine blade in ordered unison. What are the chances that every one of the multitude of atoms within the working substance of the engine moves together in the same, common direction—with a single degree of freedom—directly against the piston to do work, converting all of their motion energy into useful mechanical energy with 100% efficiency? While energetically possible, it is fantastically improbable. A simple analogy illustrates the connection between Clausius's (macroscopic, device-reliant) definition of entropy change as "energy supplied as heat divided by the temperature at which the energy transfer occurs" and Boltzmann's (microscopic, molecular-reliant) statistical interpretation of entropy. Compare a firecracker exploding on a noisy street to one going off in a quiet room. An exploding firecracker is like an input of disorderly energy, much like energy transferred as heat. The bigger the bang, the greater the disorder, whether on the street or in the room. Thus, we can understand why "energy supplied as heat" appears in the numerator of Clausius's expression for entropy change: the greater the energy supplied/extracted as heat, the greater the increase/decrease in disorder and therefore the greater the increase/decrease in entropy. Recall that the presence of the temperature in the denominator of Clausius's definition of entropy implies that for a given supply/extraction of heat, the entropy increases/decreases more if the temperature is low than if it is high (which is why, as we have seen, heat flows spontaneously from high to low, never low to high, temperature). Likewise, for the same size firecracker, one exploding in a quiet room, which corresponds to a cool object with very little thermal commotion, will be a greater disturbance—a greater increase in entropy—than one exploding on a busy street, which corresponds to a hot object in which there is a lot of disorderly thermal commotion already present. The quiet room is analogous to the cold sink in a heat engine: without it there can be no increase in entropy—indeed, no viable heat engine. More obviously, without a heat source—the firecracker in our analogy—there would be no engine. The process of combustion also captures the connection between the micro and macro versions of entropy and the Second Law. Take fuel oil, for example, a mixture of hydrocarbons, compounds consisting of long chains of hydrogen and carbon atoms. When oil burns, oxygen atoms from the air react violently with the hydrocarbon molecules, breaking them into carbon dioxide (CO2) and water (H2O), releasing a large amount of heat when the
weaker bonding of carbon and hydrogen atoms in the fuel is replaced by stronger, lower-energy—and hence more favorable—bonding in the combustion products. Indeed, hydrocarbons burn because the natural direction for change is to lower-energy, higher-entropy states. One can easily appreciate the two-fold, macro–micro contribution to the increase in entropy here: the release of energy into the environment, which raises its entropy (macro); and the dispersion of matter into more disorganized configurations, as long, orderly chains of atoms are broken up into smaller, more numerous molecules (micro). The web of metabolic processes taking place within living organisms such as ourselves is thermodynamically similar to the rapid oxidation that takes place during combustion. But in the case of organisms, the food we eat is the source of energy that stokes the hot reservoir of the biological heat engines distributed throughout our cells, making the highly orderly process called life possible as some of the ingested energy is dissipated as waste, generating enough disorder for the world to grow a little more disordered overall: the universe grows more disordered as life—local pockets of ordered complexity—keeps on living. Thus does life seem to stand apart from inert matter: life revolves around organization—that is, purposeful order. In fact, natural and spontaneous local reductions in entropy are commonplace in the natural world, in living and nonliving systems alike. Take, for example, the formation of a snowflake, a remarkably symmetrical and highly organized beautiful structure that forms naturally and spontaneously from a totally disorganized ensemble of airborne water vapor. When water in the air freezes to form a snowflake at 0 °C (T = 273 K), latent heat energy Q L is released from the freezing snowflake (in this case about 333 J/g, the energy per mass given up) into the surrounding air which is assumed to be slightly below freezing (T < 273 K) in order to drive the heat exchange. Thus, the Second Law tells us that the total entropy change for the snowflake and the ambient air is ΔS = ΔS SNOWFLAKE + ΔS AIR = −Q L /(T = 273 K) + Q L /(T < 273 K) ≥ 0,

where the minus sign denotes heat energy lost by the snowflake as the moisture in the air freezes during its formation. Thus ΔS SNOWFLAKE < 0—the entropy of the snowflake decreases as the water becomes more organized upon freezing into its highly ordered, geometrically regular, solid crystalline configuration—but the total entropy for the process including the surrounding air increases, in agreement with the Second Law. Snowflake formation is a special case of a more general “uphill” process of spontaneous self-organization—sometimes referred to as “emerging complexity”—that takes place in nature. The Belgian physical chemist Ilya
Prigogine was awarded the 1977 Nobel Prize in chemistry for his work in non-equilibrium thermodynamics demonstrating that naturally occurring gradients in temperature, pressure, or composition can drive a system into highly nonuniform configurations that eventually become highly organized dissipative structures that may even result in abiogenesis, the genesis of life from nonliving matter. The laws of nature as formulated in the physical sciences, such as physics and chemistry, are (believe it or not!) simple; the outcomes of these laws, however, as manifested in many nonliving systems such as turbulent fluid flow—a complex, chaotic motion christened "the oldest unsolved problem in physics" by quantum pioneer Werner Heisenberg—and especially in living organisms, can be exceedingly complex. Complex outcomes really do arise (whence "emerging complexity") from simple rules, all in accordance with the laws of thermodynamics. Back to life, picking just one of the thousands of different reactions involved in metabolism, consider the molecules adenosine triphosphate (ATP) and adenosine diphosphate (ADP), respectively the hot and cold reservoirs of just one of the many biological heat engines within our bodies. ATP is the "molecular unit of currency" of intracellular energy transfer, providing energy to drive a multitude of life processes, such as muscle contraction (and hence movement), nerve impulse propagation (making it possible for me to write these words and for you to read—and hopefully understand—them), as well as a variety of chemical reactions such as protein synthesis. When energy is required, ATP detaches its terminal phosphate group to become ADP, making energy available to the cell in the process (as the names suggest, ATP has three ["tri-"] phosphate groups, while ADP has only two ["di-"]). An increase in entropy accompanies the increase in thermal and material disorder as heat is produced and the number of molecules is doubled. For the cell to remain viable, a phosphate group must reattach to ADP to reform ATP, a metabolic reaction that dissipates matter and energy even more effectively. And that's why we have to eat: among other things, the food we ingest provides the fuel for the heat engine that makes ATP from ADP, sustaining life—and thought. (Had Descartes known this, he could just as well have said: "I think, therefore I eat"!) Because this reaction, like so many in the body, is non-spontaneous, if we don't eat, life ends, and our bodies decompose and putrefy. It all sounds a lot like the song The Police frontman Sting sings: Every breath you take. Every move you make.
Every bond you break… In turn, the food we eat must itself be produced by even more powerful notional heat engines, the ultimate one being the Sun, that great big heat engine in the sky that drives plant photosynthesis, the first step (known as a trophic level) in the food chain that starts with the carbohydrates produced by combining water with carbon dioxide from the air, releasing oxygen as a byproduct, the inverse of the oxidizing reactions of combustion and metabolism/respiration—taken together, the essence of the symbiotic relationship enjoyed by plants and animals. Humans have long venerated the Sun as the giver of light and life, but only recently did we appreciate it as the ultimate source of entropy and universal decay in our corner of the universe. And so, whereas the First Law tells us which processes in nature are possible—namely, only those that conserve energy—the Second Law tells us which of these possible processes are probable. You can, for example, readily rub your hands together and create heat, but you'd have to wait a long, long time—nearly forever—for the molecules in the pressed palms of your hands to spontaneously order their random thermal motion and move in unison pushing one palm across the other without any intent on your part. (If you're not smiling after reading this last sentence and imagining this actually happening, you don't get it. It is "Feynman" funny because it doesn't happen! It's much too unlikely to occur, so if it did, it would look pretty funny just because it would be so unexpected.) Both possibilities conserve energy, but rubbing your hands to generate heat is a much more likely occurrence than the reverse process: there are far too many randomly vibrating molecules making up the palms of your hands to expect coherent, large-scale, ordered motion.5 Evidently, there's more to physics—and to life—than energy and the First Law.
5 Another example I used in the classroom to emphasize this point is the extremely unlikely possibility that a blackboard eraser, resting on a chalk ledge, will suddenly and spontaneously, on its own, slide along the ledge as the trillions of trillions of atoms of felt along the bottom of the eraser conspire with an equally large number of atoms along the ledge to all together simultaneously push against each other to move the eraser, yet another laughable situation because it would be so unexpected and so unlikely to occur. (The Second Law can be used to show that such an outcome is so rare, you'd have to wait much, much longer than the age of the universe, which is some 13.8 billion years old, to see it happen. Of course, one could argue that we simply haven't lived long enough to witness such a highly improbable event.) On the other hand, it's very easy to strike an eraser with your hand to make it slide across the ledge some distance before stopping as its motion energy is dissipated as heat by the friction between the eraser and the ledge. Both processes conserve energy and hence are possible—the former converting (disordered) molecular energy into (ordered) kinetic energy of motion in violation of the Second Law, and the latter converting ordered motion energy into disordered heat energy—but only the latter is probable. As in the very unlikely case of the heat of your hands moving them when placed in contact, there are far too many randomly vibrating atoms to expect a coherent, large-scale, ordered motion of the eraser.


References
1. R. C. Tolman, The Principles of Statistical Mechanics (Oxford University Press, Oxford, 1938)
2. B. Clark, Energy Forms: Allegory and Science in the Era of Classical Thermodynamics (The University of Michigan Press, Ann Arbor, MI, 2001)

5 Implications of the Second Law of Thermodynamics: Why Things Go Wrong

Summary Although entropy originated as a thermodynamic concept, its reach extends across many other disciplines including biology, economics, and information theory, and indeed across life itself. We call for “order in the court” and despair when our devices are “out of order” or when we face a medical “disorder.” A quantitative example illustrates the tendency for a room to become disordered: there are many more ways to mess up a room than to arrange it in an orderly fashion. Importantly, more effort must be expended—work must be done—to keep a room orderly than to allow it to descend into disorder: an orderly room, an orderly house—order anywhere— all require devoted attention and regular maintenance. The tendency for the universe to slide naturally towards disorder requires the expenditure of energy to create order, stability, and structure. Without effort, things decay and are bound to go wrong. The apparent paradox of the origin and evolution of life, exquisitely ordered complex structures thriving in a background of disorder, dissipation, and degeneration—order in a universe ruled by disorder—is resolved when it is realized that living organisms exchange energy and material with their environment, and thus are not closed systems to which the Second Law applies: the entropy decrease within a particular organism is more than offset by a greater entropy increase in the rest of the universe.

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back
the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly. —Harvard psychologist, Steven Pinker [1, p. 19]

So, what does all this physics have to do with our everyday world—with life, the universe, and everything (to borrow a memorable phrase from the Hitchhiker's Guide series)? In particular, why do things always seem to go wrong and typically tend to get worse? Why do we live in a "Murphy's Law" universe where if something can go wrong, it will? Why do things rarely seem to work out in life—and why do our lives tend to become more and more complicated and disordered rather than remaining simple and structured? In short, why does sh!t happen? The answers to these questions—to why life, the universe, and everything are so full of so many annoying tendencies—can be found in the concept of entropy embedded in the Second Law of Thermodynamics: quite simply (which itself is paradoxical), there are so many more ways for things to go wrong than to go right. Like it or not, the world, overall, is running down. Order naturally decays to disorder. Wherever we find order, we find linked to it a greater amount of disorder: any increase in order is always driven by a greater increase in disorder elsewhere. Significantly, although entropy was originally a thermodynamic concept, it has been adapted in other fields of study such as biology, economics, and information theory. In information theory, the information entropy—also called Shannon entropy after the American cryptographer and mathematician Claude Shannon (1916–2001) who introduced the concept in his groundbreaking 1948 paper "A Mathematical Theory of Communication"1 in order to quantify the information content of a message—is the average amount of information conveyed by an event when considering all possible outcomes. This is essentially a measure of the degree to which the content of the message is new and surprising, with highly likely events carrying very little information and highly unlikely events carrying much more information. For example, in the case of two (fair) coin tosses, the information entropy, expressed in bits, is the base-2 logarithm of the number of possible outcomes, which is log_2 2^N = N = 2 bits for the four possible outcomes (HH, HT,
1 Shannon's paper pioneered essentially everything that would happen in communication technology, including data compression, channel capacity (which we know as "download speed," measured in bits per second), channel coding and error detection, and cryptography and codebreaking. Indeed, when his paper was reissued the following year, the only significant change made to the original 1948 paper was replacing the indefinite article "A" with the definite article "The" in the title: "The Mathematical Theory of Communication" (emphasis added; for more on Shannon's remarkable contributions to information theory, see chapter 8 of Michael Brooks's 2021 The Art of More: How Mathematics Created Civilization).

5 Implications of the Second Law of Thermodynamics …

89

TH, and TT, or in binary code, where each bit is either 0 or 1: 00, 01,10, 11). Rolling a six-sided die has a higher entropy than tossing a coin because each outcome of a die toss has a smaller probability (1 in 6) than each outcome of a coin toss (1 in 2). When information is maximized, entropy—and uncertainty—are at their very minimum; in the opposite extreme, maximum ignorance is characterized by maximum entropy. Author Hans Christian von Baeyer [2, pp. 150–151]: The acquisition of information corresponds to a decrease in entropy. The more we know, the more order we impose on our universe. Conversely, the loss of information inherent in dissipative processes like the shuffling of a deck of cards, the diffusion of a drop of ink in a glass of water, or the shattering of a pane of glass entail an increase in entropy. Boltzmann’s formula measures not only disorder, but also missing information.
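To make the coin-and-die comparison above concrete, here is a minimal Python sketch (an illustration added for this discussion, not something from the original text). It evaluates the standard Shannon formula H = −Σ p log₂ p, which for equally likely outcomes reduces to the base-2 logarithm of the number of outcomes, just as described above.

```python
import math

def shannon_entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits, for a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Two fair coin tosses: four equally likely outcomes (HH, HT, TH, TT) -> 2 bits
print(shannon_entropy_bits([0.25] * 4))     # 2.0

# One fair coin toss versus one roll of a fair die:
print(shannon_entropy_bits([0.5] * 2))      # 1.0 bit
print(shannon_entropy_bits([1 / 6] * 6))    # about 2.585 bits, higher, as noted above
```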

Thermodynamically, an equilibrium—and hence maximum entropy—distribution is the one characterized by the least amount of information (as was discussed in Example 2.1 when mixing hot and cold water and in Example 4.1 for coin tosses). Information theory is useful in calculating the minimum amount of information required to convey a message, as in data compression. The topical topics of artificial intelligence (AI) and machine learning techniques, all tasked with minimizing uncertainty, also rely on information theory. In biology, information entropy can be used to measure biodiversity and ecological richness, establishing a diversity index as a quantitative statistical measure of how many different types exist in a database, such as species in a community. Today, scientists believe information theory may be the key to understanding the complexity of the universe, quite a stretch from the steam engine. Stay tuned for further information. And so, when "we look at the essence of a steam engine," Peter Atkins reminds us [3, pp. 110–111], "we find a concept that applies across the range of all events…. [T]he roots of thermodynamics lie deep and their ramifications spread through the structure of the modern world." Indeed, the Second Law and its thermodynamic imperatives apply across the universe to a wide variety of non-thermodynamic situations.²

² To mention just one example from history, the German pioneer sociologist Max Weber (1864–1920), widely regarded as among the most important theorists of the development of modern Western society, appropriated the Second Law to explain the tendency of organizations and bureaucracies towards bloating and stultification. Today, thermodynamics helps us understand a variety of fundamental systems ranging from weather and climate to black holes and bacterial colonies and even the human brain itself.

For
example, the reason it’s much easier to bake a cake than to unbake a cake, or to scramble an egg than to unscramble it, is because there are more ways— possible states—to bake a cake or scramble an egg than there are states for the unbaked cake and unbroken egg. In 1870 Maxwell shared with his physicist friend Lord Rayleigh the “moral” of the Second Law: “The 2nd law of thermodynamics has the same degree of truth as the statement that if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again.” The likelihood of recovering all the molecules of tumbler water from the vast quantity of water filling the sea, and hence reordering the system and lowering its entropy, is infinitesimally small, essentially zero. It’s the same with a drop of perfume that will quickly fill a room with scent: your chance of gathering all those perfume molecules into a single drop to be put back in a bottle is very, very small. Like the cake and the egg, mixing (disordering) initially separate (and hence ordered) ingredients increases entropy. The same with pouring cream into a cup of coffee: starting with a highly ordered state—cream and coffee separate—mixing the two produces a mixed up and hence disordered state—no more separate and distinct orderly arrangement of cream here and coffee there—which is very unlikely to spontaneously order itself back into separate cream and coffee. As demonstrated in Chap. 4, it all has to do with disordered states being much more probable because there are more of them. Just as it’s highly unlikely to dump the pieces of a puzzle onto a table and have them all fall perfectly into place, it’s very unlikely to drop the gazillion disassembled parts of a commercial jetliner onto a runway and have them all fall into their correct ordered places to make a flyable plane. While there is only one possible state where every piece is in order, there are many more ways to have them fall into a random, disordered pile of parts: an orderly outcome is incredibly unlikely to happen at random—it takes work and effort to assemble the pieces and parts into a picture perfect puzzle or a flyable jetliner, to “fight back the tide of entropy.” The use of the technical term “gazillion,” a very large number, is significant here: just as we learned in Chap. 4 with the coin tosses, if there were only a very small number of parts to the plane, say only two or three, it would be much more likely for them to fall into place, and such a system would therefore have low entropy. Disordered states of a system, especially a large system, have a large number of possible arrangements of a large number of system components—for example, there are lots of ways to mess up your room because there are lots of things in your room—and hence a large amount of entropy; there are very few ways to arrange your room in an orderly fashion. (I hung a sign

over the door to my young daughter's bedroom warning visitors: "Danger, High Entropy Area!") This next example quantifies the entropy in a child's playroom.

Example 5.1 Playroom Entropy

In discussing the entropy of a black hole (which, it turns out, is very large due to the many different internal states of a black hole)³ in his book Black Holes & Time Warps: Einstein's Outrageous Legacy, Caltech Nobel laureate physicist Kip Thorne quantifies the entropy in a child's square playroom filled with 20 toys and having a floor made of 100 large tiles, with 10 tiles running along each side [5, p. 424]. A father cleans the room by randomly placing all the toys onto just one row of tiles at the back of the room, not concerned about which toys land on which one of the 10 tiles. A measure of the degree of randomness is the number of ways—the variety of ways—that the 20 toys can be distributed over the 10 tiles: 10 × 10 × 10 × … × 10, with one factor of 10 for each toy, which is 10^20. The child later enters her room and plays with all the toys, "throwing them around with abandon," leaving them scattered randomly across the entire floor. The number of ways the 20 toys could be distributed now over all 100 floor tiles is 100 × 100 × 100 × … × 100 = 100^20 = 10^40 (since 100^20 = [10 × 10]^20 = 10^20 × 10^20 = 10^(20 + 20) = 10^40), with one factor of 100 for each toy. Entropy, as we have seen, is proportional to the logarithm of this number—here, the number of ways the toys can be distributed—so the entropy of the messy room is double that of the clean room: ln 10^40 / ln 10^20 = 40/20 = 2. Just as was the case with coin tosses, when the number of ways a system can be assembled is high, the arrangement is recognized as typical and common; when it is low, the arrangement is viewed as exceptional and rare.
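Readers who like to check the arithmetic can verify Thorne's counts with a few lines of Python. This is only an illustrative sketch: it uses the idea quoted above that entropy is proportional to the logarithm of the number of arrangements, so the unknown proportionality constant cancels in the ratio.

```python
import math

toys = 20
clean_ways = 10 ** toys     # each of the 20 toys on one of the 10 back-row tiles: 10^20
messy_ways = 100 ** toys    # each toy on any of the 100 floor tiles: 100^20 = 10^40

# Entropy is proportional to the logarithm of the number of arrangements,
# so the messy room carries twice the entropy of the tidied room.
print(clean_ways == 10 ** 20, messy_ways == 10 ** 40)   # True True
print(math.log(messy_ways) / math.log(clean_ways))      # 2.0
```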

³ It can be shown that the entropy of a black hole, a topical—and topological—topic these days, is proportional to the surface area of its event horizon defined by the distance from the center of the black hole, a distance known as the Schwarzschild radius after the German physicist and astronomer who, in 1915, was the first to solve the Einstein field equations of general relativity. Within this distance nothing can escape its strong gravitational pull because to do so would require a speed greater than that of light, which, as Einstein showed in his theory of relativity, is impossible. In the mid-1970s the British cosmologist Stephen Hawking discovered that black holes do radiate, as particle pairs are created in the strong tidal gravitational field just outside the event horizon. Of course, to conserve energy and mass, this means that a radiating black hole shrinks and thus loses entropy, but one can show that the entropy carried away by this so-called "Hawking radiation" exceeds that lost by the black hole, so that the Second Law of Thermodynamics is satisfied even in this extreme environment. Eventually, black holes "evaporate" in an enormous burst of energy (see, for example, [4]). It is hoped that the study of the thermodynamics of black holes will point the way to the long-sought theory of quantum gravity, the "final theory," as it is optimistically called.

Importantly, more effort must be expended—work must be done—to keep your room orderly than to allow it to descend into disorder. An orderly room, an orderly house—order anywhere—all require devoted attention and regular cleaning and maintenance: in short, an expenditure of work and energy. Countering the universe's natural tendency to slide towards disorder requires the expenditure of work and energy to create order, stability, and structure. Whereas
disorder and associated problems seem to arise naturally on their own, effective solutions—like successfully assembling the pieces of a puzzle or the parts of a commercial jetliner—always require focused attention and effort (i.e., energy input). Without effort, things decay. As the title of a Neil Young album reminds us, “Rust Never Sleeps.” Maintaining cosmos (the Greek word for “order”)4 in the face of chaos is never free and easy. This insight—that disorder has a natural tendency to increase over time and that we can counteract that tendency by expending energy—reveals the underlying purpose of life: we must exert effort to create useful types of order that are resilient enough to withstand the unrelenting pull of entropy. “The ultimate purpose of life, mind, and human striving,” Harvard psychologist Steven Pinker points out in his words that open this chapter, is “to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order.” To take one life example, a successful (orderly and happy) marriage requires work. One of the most famous opening lines in literature comes from Russian author Leo Tolstoy’s 1878 novel Anna Karenina, a classic tale of love and adultery set against the backdrop of late nineteenth-century high society in Moscow and Saint Petersburg, considered by many to be the greatest novel ever written. It begins: “Happy families are all alike; every unhappy family is unhappy in its own way.” (Author Don Lemons, in his slim volume Thermodynamic Weirdness, adapts Tolstoy’s statement to mark a difference between ideal and real heat engines: “all perfect heat engines are alike; each imperfect heat engine is imperfect in its own way” [6, p. 39]. And because Carnot demonstrated, as pointed out in Chapter 3, that there is no such thing as a “perfect” [100% efficient] heat engine, Professor Lemons must mean “ideal,” not “perfect” here.) Indeed, there are many ways a marriage can fail and fall into disarray and disorder—infidelity, lack of trust, financial or parenting issues, bizarre in-laws, and so on. To be in a happy marriage, however, requires work (and maybe a good measure of luck): you need to work on many often complex issues that can arise in a marriage, any one of which can ruin a relationship. Disorder, like unhappy marriages, can occur in many ways, but order, like a happy marriage, in only a few. As with most things, there are more ways to fail than there are ways to succeed, making it far more difficult to reach perfection, for the paths to perfection are far fewer than the many paths to failure. 4

⁴ This Greek word for order gives us, not surprisingly, our word "cosmetics," products that produce order from disorder (and, in that sense, seem to fly in the face, so to speak, of the Second Law). The science of cosmology addresses the makeup of the universe, whereas cosmetology is concerned with the universe of makeup.

In fact, life is full of everyday examples of our fighting back “the tide of entropy.” Take, for instance, driving a car. It takes effort to drive safely (orderly). A lot of effort. Roads, especially heavily trafficked roads, are high entropy zones (recall that the probability of disorder increases rapidly with an increasing number of system components). Same for a job: it takes effort to do a job well (orderly), avoiding disaster (disorder) in the work environment. There are lots of ways to make mistakes on the job, just as there are lots of ways to crash a car. Just washing dishes after dinner and putting everything back in the proper (orderly) places takes effort to minimize entropy in the kitchen—this place for spoons, this place for plates, that place for this, and this place for that—compared to the many possible out-of-order places. It’s much easier to just throw all the silverware into a drawer helter-skelter, minimizing the expenditure of energy at the expense of increasing entropy. (And if you think silverware thrown helter-skelter into a drawer is not more disordered than putting each piece in its proper place, try the experiment: you’ll have to expend a lot more time and energy to pick out each desired piece when they’re randomly scattered about than if they’re put back orderly.) The same is true for packing and unpacking before and after a trip: it takes time and effort to do both with an orderly result; not much of either to just throw stuff together during packing, and to leave things scattered at random upon return. And what about life itself? It’s often called a miracle, and for good reason. The emergence of the exquisitely ordered complex structures characteristic of life against a background of disorder, dissipation, and degeneration ineluctably dragging the universe towards its final equilibrium state of maximum entropy—order in a universe ruled by disorder—certainly seems nothing short of miraculous. And yet shortly after the conditions necessary for life as we know it were in place, not long after Earth formed some four and a half billion years ago, life did arise, wondrously adaptable and remarkably resilient. However improbable it may seem, life arose and eventually transformed the planet, giving rise to what is called the biosphere, Earth’s life zone—that part of our air, land, and sea infiltrated by an amazing variety of life forms. Although the details are still uncertain, it appears that living organisms arose from pre-biotic organic compounds (recall the discussion in Chap. 4 of abiogenesis), a concept known as the “chemical evolution” theory of the origin of life. All life here on Earth is based on a common carbon chemistry, all the way down to a common genetic code, and although alternative life chemistries—a staple of science fiction, to be sure (such as Star Trek’s Horta, a highly intelligent rock-like lifeform based not on carbon but on silicon, an

element chemically similar to carbon and a major component of rock)—may be possible, life elsewhere in the universe could very well be based on the same chemistry of life found here on Earth. After all, the same chemical elements here on Earth have been detected throughout the universe. And since life has done so well here on Earth, fighting back "the tide of entropy," the odds that we are alone in the universe—that life is merely a "happy accident"—are pretty slim, particularly given the vastness of space and the abundance of stars now known to harbor planets, many of which are located in the so-called "habitable zone" of their host star—the region around a star where water can exist stably in liquid form, water being the sine qua non of life as we know it. Nevertheless, creationists and other antievolutionists argue that the existence of life on Earth is proof of special and separate creation by a deity such as is portrayed in the biblical book of Genesis: how else explain the order and organization exhibited by life in a universe governed by the Second Law of Thermodynamics where disorder, not order, is the rule? Life processes are invariably irreversible and therefore characterized by increasing entropy.⁵ But this thermodynamic argument against the natural origin and evolution of life is fundamentally flawed because it mistakenly neglects the important fact that a living organism exchanges energy, material, and information with its environment, and is therefore not a closed system. As Ira Levine notes in his 1978 textbook Physical Chemistry [8, pp. 123–124] (for more on thermodynamics and evolution, see, for example, the essay "Thermodynamics and Evolution" by Prigogine, Nicolis, and Babloyantz in Physics Today, November 1972):

Living organisms are open systems since they both take in and expel matter; further, they exchange heat with their surroundings…. The organism takes in foodstuffs that contain highly ordered, low-entropy polymeric molecules such as proteins and starch and excretes waste products that contain smaller, less ordered molecules. Thus, the entropy of the food intake is less than the entropy of the excretion products returned to the surroundings…. The organism discards matter with a greater entropy content than the matter it takes in, thereby losing entropy to the environment to compensate for the entropy produced in the internal irreversible processes.

⁵ In his 1944 essay What is Life? [7], the pioneering quantum physicist Erwin Schrödinger pointed out the thermodynamic resemblance of living organisms and clocks: like plants converting absorbed low-entropy solar energy (low entropy because this energy is packed into a single quantum of light called a photon) into re-emitted high-entropy thermal energy, energy in the form of a wound-up, spring-driven clock is irreversibly turned into high-entropy heat.

The Second Law (as well as the First Law, for that matter) applies only to closed systems, those systems completely isolated from their surroundings for
which no exchanges occur. Taking this into account, the scale of “the system” must be expanded out beyond each individual life form, beyond Earth itself, to include the Sun, the ultimate source of nearly all the energy utilized by life on Earth. (The exceptions here are nuclear energy, which derives from unstable atoms produced by exploding stars long before Earth formed from the debris—the “stardust”—of these element-forming cosmic catastrophes, and geothermal energy originating from heat within Earth produced during its formation together with heat produced by the decay of radioactive mantle and crust material, each source contributing roughly the same but far less than incoming solar radiation.) For the Earth-Sun system, it’s easy to show— as we did in Example 3.4 for the loss of body heat to the surroundings, and as discussed earlier in the example of heat conduction—that the total entropy increases for each quantity of heat radiated from the Sun and absorbed by Earth, because the heat lost from the Sun occurs at a much higher temperature (6000 K) than that at which it is gained by Earth (300 K). As Boltzmann appreciated, “The general struggle for existence of animate beings is … not a struggle for raw materials …nor for energy which exists in plenty in any body in the form of heat, but a struggle for entropy, which becomes available through the transition of energy from the hot sun to the cold earth.” Low-entropy energy in sunlight powers photosynthesis providing energy for plants, which in turn are eaten by animals (herbivores), some of which are eaten by other animals (carnivores) at higher trophic levels in the food pyramid, for their energy requirements. It is important to point out that the efficiency of energy conversion at each trophic level is typically less than 10% (which is why a pound of fresh meat costs about ten times as much as a pound of fresh fruits and vegetables), so less than 1% (i.e., 10% of 10%) of the chemical energy a plant acquires from the Sun ends up as part of an animal that eats plants—and less than 10% of that ends up in animals that eat animals—not a very efficient energy-conversion chain, and entirely consistent with the inherent inefficiency of nature articulated by the Second Law. For each single (and hence low-entropy) photon of visible light a plant absorbs during photosynthesis, a complex biochemical process that increases the order and hence decreases the entropy within the plant as composite organic molecules like glucose are assembled from a disorganized collection of smaller, constituent molecules (in particular, water and carbon dioxide, in the case of glucose production), a disordered and hence higher-entropy multitude of lower-energy photons are thermally re-emitted, guaranteeing a net increase in the entropy of the universe. The energy conversion efficiency of photosynthesis is only a few percent, much less than the roughly 25% efficiency now realized in photovoltaic solar panels. This (chemical) energy

content of food, along with hydroelectric, wind, and, obviously, solar energy, as well as all fossil fuels—the products of once-living plants and animals—all derive from the Sun (which, bright and hot as it seems, is only 0.7% efficient in converting mass into energy as 600 million tons of hydrogen are converted into 596 million tons of helium in its core every second , producing enough energy in that one second to power Planet Earth for half a million years!). The flow of entropy can be regarded as the central organizing principle in the evolution of the universe and in the existence of life. And so, although the Second Law might appear to be violated locally on the scale of an individual organism—an “open” system—it is most certainly obeyed in all closed systems throughout nature: the entropy decrease within a particular organism is more than offset by an entropy increase in the rest of the universe. Whenever a structure or a thought emerges, or any other process accompanied by a reduction of entropy, there is always a greater increase in entropy elsewhere in the universe. (My colleague Itzhak Goldman wonders if it is a coincidence that creative people seem to be less organized in other life aspects which suffer an increase in entropy as a result of the decrease in entropy associated with the creative act.) The “uphill” processes associated with life not only are compatible with entropy and the Second Law—they actually depend on them for the energy flux off of which they feed. The importance of including the entire system in entropy considerations is addressed by Kip Thorne in his comments on the entropy change that occurs when the father cleans his daughter’s room (recall Example 5.1), thereby decreasing the toys’ entropy: The toys’ entropy may be reduced by the father’s cleaning, but the entropy in the father’s body and in the room’s air has increased: It took a lot of energy to throw the toys back onto the northernmost tiles, energy that the father got by ‘burning up’ some of his body’s fat. The burning converted neatly organized fat molecules into disorganized waste products, for example, the carbon dioxide that he exhaled randomly into the room; and the resulting increase in the father’s and the room’s entropy (the increase in the number of ways their atoms and molecules can be distributed) far more than made up for the decrease in the toys’ entropy.
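The Sun-to-Earth entropy bookkeeping sketched a few paragraphs back is easy to check numerically. The short calculation below is an added illustration, not taken from the book; it uses the round-number temperatures quoted in the text, roughly 6000 K for the radiating Sun and 300 K for the absorbing Earth, together with the thermodynamic relation ΔS = Q/T for a quantity of heat Q transferred at temperature T.

```python
Q = 1000.0       # joules of sunlight, an arbitrary illustrative amount
T_sun = 6000.0   # kelvin: temperature at which the Sun loses the heat (figure from the text)
T_earth = 300.0  # kelvin: temperature at which Earth absorbs it (figure from the text)

dS_sun = -Q / T_sun       # entropy lost by the Sun
dS_earth = Q / T_earth    # entropy gained by Earth
print(dS_sun, dS_earth, dS_sun + dS_earth)   # about -0.17, +3.33, +3.17 J/K: a net increase
```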

What fathers won’t do for their daughters! Certainly, the chances for complex organisms—life, itself—to arise spontaneously are infinitesimally small. Take, for example, hemoglobin, the iron-containing, oxygen-transport protein found in red blood cells, which consists of four twisted chains of amino acids, the building blocks of proteins. Just one of these chains contains 146 amino acids, 20 different kinds of which

are found in living organisms. The number of ways of arranging 20 kinds of things in chains 146 links long is unimaginably large, easy to calculate but impossible to comprehend: 20 × 20 × 20 × … × 20, with one factor of 20 for each of the 146 links in the chain of amino acids, which is 20 multiplied by itself 146 times, or 20^146 ≈ 10^190, easily large enough that it would never be expected to assemble by chance (science fiction author Isaac Asimov called this huge number the "hemoglobin number"). And a hemoglobin molecule has only a tiny fraction of the complexity of a complete living organism.⁶ The improbability of the complexity of the living world vis-à-vis inanimate matter has been argued for some time, and one of the most powerful explanations for life's complexity long ago evolved into the most influential argument for the existence of a God and for the divinely ordained special and separate creation of each and every specific and varied life form: the "Argument from Design," more recently referred to as "Intelligent Design." The argument, elaborated most famously in Reverend William Paley's 1802 Natural Theology; or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature, a sweeping providential interpretation of the Creation, is that every aspect of every organism had been meticulously and purposely designed for its function by a Creator God. Design implied a designer. "There cannot be design without a designer," Paley asserted,

contrivance without a contriver; order without choice; arrangement, without any thing capable of arranging; subserviency and relation to a purpose, without that which could intend a purpose; means suitable to an end, without the end ever having been contemplated, or the means accommodated to it. Arrangement, disposition of parts, subserviency of means to an end, relation of instruments to a use, imply the presence of intelligence and mind.

Resurrecting the concept of a Divine Watchmaker for a Newtonian clockwork universe, Paley compared the complexity of the eye with the intricacy of a watch: both require a maker; neither, it seemed, could come into existence by chance. 6

⁶ This calculation brings to mind the oft-quoted example of the chances that a monkey, given enough time bashing away at random on a typewriter, could reproduce the works of Shakespeare (to return once again to the "other" of the two cultures; recall Fig. 1.1). Even if we restrict Shakespeare's writings to 1 million alphabetical letters and limit the typewriter to 26 keys (not counting the space bar), the chance for this happening is (1/26)^1,000,000, formidable odds, indeed, so that even if a monkey could type 1 million words per second, we could expect such an event to occur only once in 7 × 10^1,414,965 years. Not very likely at all—but not impossible: given enough time, everything that is possible is certain to occur; even a cup of coffee, sitting on the counter and left to itself long enough—longer than the age of the universe—will one day begin to spontaneously boil over. Although some have doubted the authorship of some of Shakespeare's works, he clearly didn't get any help from monkeys.
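Both of the enormous numbers above are easy to reproduce. The small Python sketch below is an added illustration, not part of the original text; it confirms that 20^146 is indeed of order 10^190 and that the raw odds against the monkey have an exponent of the same order as the footnote's estimate.

```python
import math

# The "hemoglobin number": 20 amino-acid choices at each of the 146 links in one chain
hemoglobin_number = 20 ** 146
print(math.log10(hemoglobin_number))   # about 189.95, i.e., roughly 10^190

# Monkey at a 26-key typewriter attempting a 1-million-letter text
print(1_000_000 * math.log10(26))      # about 1,414,973: odds of order 1 in 10^1,414,973
```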


But Darwin (Fig. 5.1), wondering in response to Paley how a highly complex organ like the vertebrate eye might have evolved, explained the improbable by imagining a sequence leading from a simple eye, perhaps just a few light-sensitive cells, to modern eyes by a gradual series of incremental improvements, each conferring a selective advantage. Indeed, as the British evolutionary biologist Richard Dawkins has beautifully expounded in his 1986 The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design,7 life—and its origin and evolution—are not totally random processes: advantageous mutations are selectively accumulated in an extended process of cumulative versus all-at-once, single-step selection, each step adding order to the organism at the expense of increasing—by a greater amount— the disorder and hence the entropy of the universe. Indeed, as Dawkins points out, if there were some selective advantage to accidentally forming and preserving actual words with the pattern of letters typed at random by a monkey, even a monkey, given enough time, could type all the works of Shakespeare. In fact, if Boltzmann’s interpretation of the Second Law is correct, we are lucky even to be here: the universe must have, in Boltzmann’s words, “started … from a very improbable” unbelievably orderly “initial condition” from which it continues to degenerate into disorder, “and it can be said that the reason for this is just as little known as the reason why the world in

general is precisely so and not otherwise,"⁸ a situation the late cosmologist Stephen Hawking often worried about. "My goal is simple," Hawking humbly admitted. "It is a complete understanding of the universe, why it is as it is and why it exists at all." In any case, the case for Darwinian evolution as an explanation for the wondrous diversity of life on Earth is clear, as reflected, for example, in this statement issued in 1995 by the National Association of Biology Teachers [9, p. 1]:

The diversity of life on earth is the outcome of evolution: an unsupervised, impersonal, unpredictable and natural process of temporal descent with genetic modification that is affected by natural selection, chance, historical contingencies and changing environments.

⁷ Hermann von Helmholtz, a pioneer in modern studies of vision whom we have already met, remarked that had he been the Creator, he would have been ashamed to have produced so faulty and inefficient an instrument as the eye for seeing. Indeed, as the great evolutionist Charles Darwin remarked, "What a book a Devil's Chaplain might write on the clumsy, wasteful, blundering low & horridly cruel works of nature!" Our upright posture, to take one example of "blundering" design, which is the cause for an array of problems such as degenerative vertebrae and spinal disks, wrenched knees, and aching necks, is another case in point: it was adapted from a body plan that had mammals walking on all fours, whence our evolution from sea to land to the chiropractor's office. And what good are male nipples? Another design flaw has us eat, drink, and breathe through the same orifice, so that choking and drowning are the fourth and fifth leading causes of unintentional injury death in the U.S. One recent author has asked "what comedian designer configured the region between our legs—an entertainment complex built around a sewage system?" No intelligent designer, given the opportunity to design human beings ab initio, would ever dream of designing a jury-rigged body plan such as ours. Nor would, as historian Peter Bowler points out, a competent Designer have modified the same basic (homologous) structure to serve such highly diverse functions as do the wing of a bat and the paddle of a whale; the wide range of homologies could, however, be understood as the inevitable consequence of evolutionary opportunism slowly and sequentially modifying available body plans. In their seminal 1996 book Evolution and Healing: The New Science of Darwinian Medicine, Randolph Nesse and George C. Williams describe the design of human bodies as "simultaneously extraordinarily precise and unbelievably slipshod," concluding that our inconsistencies are so incongruous as to be "shaped by a prankster."

Fig. 5.1 A life-size marble sculpture of the English gentleman-naturalist Charles Darwin made in 1885 by the medalist and sculptor Joseph Boehm. Originally placed on the platform of the grand staircase overlooking the main hall of London's Natural History Museum, it was later removed to what is now the museum cafeteria to make room for a statue of the famed anatomist and paleontologist Richard Owen, the museum's first director, only to be returned in 2009 in commemoration of the bicentennial of Darwin's birth to its place of honor in the main hall where it remains today. (Photograph by the author)

⁸ Quoted in [3, p. 143]. Boltzmann's musings on "why the world in general is precisely so and not otherwise" have been expanded—in both scope and space—in more modern times to include questioning why the universe at large seems so precisely fine-tuned to the conditions necessary for life "and not otherwise." Examples of these cosmic fine tunings that make our universe possible include the numerical values of physical constants, such as the elementary unit of charge carried by the electron and the proton, and the Newtonian gravitational constant G which sets the strength of the gravitational force. Commonly referred to as the "anthropic principle," it is really nothing more than a simple logical requirement: intelligent beings cannot find themselves in a universe uninhabitable by intelligent beings. Although surely self-evident, the reader is encouraged to ponder the cosmic significance—for life, the universe, and everything (borrowing again from the Hitchhiker's Guide series)—of these finely tuned, "precisely so," cosmic coincidences. Maybe over a drink—or two.

References

1. S. Pinker, "The Second Law of Thermodynamics," in This Idea Is Brilliant: Lost, Overlooked, and Underappreciated Scientific Concepts Everyone Should Know, ed. by J. Brockman (Harper, New York, 2018), pp. 17–20
2. H. C. von Baeyer, Warmth Disperses and Time Passes: The History of Heat (Modern Library, New York, 1999; orig. publ. as Maxwell's Demon, Random House, 1998)
3. P. W. Atkins, Galileo's Finger: The Ten Great Ideas of Science (Oxford University Press, Oxford & New York, 2003)
4. F. Adams, G. Laughlin, The Five Ages of the Universe: Inside the Physics of Eternity (The Free Press, New York, 1999)
5. K. S. Thorne, Black Holes & Time Warps: Einstein's Outrageous Legacy (W. W. Norton & Co., New York & London, 1994)
6. D. Lemons, Thermodynamic Weirdness: From Fahrenheit to Clausius (MIT Press, Cambridge, MA, 2019)
7. E. Schrödinger, What Is Life? The Physical Aspect of the Living Cell (Cambridge University Press, Cambridge, 1944)
8. I. Levine, Physical Chemistry (McGraw-Hill, New York, 1978)
9. Statement on Teaching Evolution, The American Biology Teacher 58 (1996)

6 So, What’s to Do?

Summary Simplify! Simplifying is the best antidote against the relentless rise in entropy. Simple, less complex systems have fewer components and therefore fewer states and thus fewer ways to go wrong. The disorder and misfortunes tied to the Second Law are no one's fault: it's simply an unalterable law of nature. So, simplify. As Oxford chemist Peter Atkins reminds us, "We are the children of chaos, and the deep structure of change is decay. At root, there is only corruption, and the unstemmable tide of chaos…. This is the bleakness we have to accept as we peer deeply and dispassionately into the heart of the Universe." The ultimate source of our anxiety over living uneasily in a world governed by laws of nature we cannot control—including the Second Law of Thermodynamics—can be found in René Descartes's separation of "soul" from nature and Francis Bacon's call for the subduction and control of nature, both products of the seventeenth-century Scientific Revolution.

… no change is an island of activity: change is a network of interconnected events. Although drift into degradation might take place in one location, the consequence of that drift might be to ratchet up a structure somewhere else. —Peter Atkins, Galileo’s Finger: The Ten Great Ideas of Science [1, p. 124]

Simplify! Yep, that’s it. It’s as simple as that. Simplifying is the best antidote against the relentless rise in entropy. Despite what the bumper sticker claims,

the one who dies with the most toys doesn't win. More stuff means more entropy. The notion that "simpler is better" was formalized by the fourteenth-century Oxford scholar William of Ockham, and is known in his honor as Ockham's razor (to cut away the superfluous from the essential), a precept embraced in the modern world of industrial design by Apple's Steve Jobs, whose Zen-like minimalist mantra was "Let's make it simple. Really simple." This "economy of nature"—the belief that nature operates with the fewest possible causes—is reflected in an economy of thought that can be identified as a principle of parsimony. "Natura nihil facit frustra," Newton wrote (and that's in Latin so it must be true!): "Nature does nothing in vain," a perception shared by so many others throughout the history of science.¹

¹ Long ago, Heraclides of Pontus (ca. 388–310 BC), a pupil of Plato at the Academy in ancient Athens, suggested that it would be a lot simpler, in the sense of being easier and more economical, to account for day and night by allowing Earth to rotate once per day rather than moving all the celestial spheres once per day around a stationary Earth. (Turns out, he was right about what's turning.) Two millennia later, the seventeenth-century English poet John Milton agreed, arguing in his epic poem Paradise Lost a century after Copernicus, who also favored a rotating Earth, against a "sedentary Earth, / That better might with far less compass move ...." Sounding a bit like Hermann von Helmholtz being ashamed to have produced so faulty an instrument as the eye for seeing had he been the Creator (recall Note 7 of Chapter 5), King Alfonso X of Castile, Spain, expressing his frustrations with the complications of contemporary thirteenth-century geocentric cosmological models which required nearly a hundred spheres rolling within spheres to account for the motions of the heavens, is reported, probably apocryphally, to have complained that "If the Lord Almighty had consulted me before embarking on Creation, I would have suggested something simpler!" Although the learned king probably never did sum up his frustrations with contemporary and, in his view, unnecessarily complicated cosmological models in quite these words (which, in any case, would have been blasphemous in the extreme), he nevertheless, royal status notwithstanding, did promote the simple life, claiming that only four things were necessary for good living: good books to read, good wine to drink, good wood to burn, and good friends to share these with. Not a bad combination, even in—or especially in—our modern world of distractions and trivialities.

Einstein believed that, apart from any mathematical formalism, a good theory in science should be simple enough for a child to understand; for the early twentieth-century British physicist Ernest Rutherford, it had to be simple enough for a barmaid. Complicated explanations, in science and elsewhere, are usually too complicated to be true. Of course, simplification also mitigates the universe's relentless rise in entropy. Simple, less complex systems have fewer components and therefore fewer possible arrangements and ways to go wrong—recall the example (Example 4.1) of tossing only one or two coins versus even just three coins. It's that simple. Regardless, it is important to realize that the disorder and misfortunes tied to the Second Law are no one's fault: it's simply an unalterable law of nature. Here it is worth quoting, at length, Steven Pinker again [2, p. 20]:
The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail…. More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. 2 It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

Indeed, a principal reason for all the anxiety over an ever-worsening world governed by the Second Law can be traced back to particular precepts developed during the Scientific Revolution mentioned here by Pinker, which occurred in the seventeenth century when the foundations of modern science swept away the scientific heritage of the ancient and medieval worldviews, a period that has been proclaimed "the most profound revolution achieved or suffered by the human mind," indeed "the most important 'event' in Western history."³

² Discussing thermodynamics as a cultural problem in the modeling of human history, Henry Brooks Adams, an American historian descended from two U.S. presidents, in 1910 printed and distributed to university libraries and history professors the small volume A Letter to American Teachers of History (later published posthumously), proposing a "theory of history" based on the Second Law and the principle of entropy, stating essentially that all energy dissipates, that order becomes disorder, and that Earth will eventually become uninhabitable. In his 1909 manuscript The Rule of Phase Applied to History, Adams interpreted history as a process moving towards "equilibrium," but he saw militaristic nations as tending to reverse this process, a Maxwell's Demon of history. Countering Adams's mal du siècle disillusionment—and the principle of entropy embodied within the Second Law—myths, both primitive and modern, have long provided us with a comforting sense of familiarity and "at homeness" in the world around us by organizing experience and human behavior within some aspect of cosmic order, thereby creating order from disorder in the everyday realm of human existence.

³ British philosopher Alfred North Whitehead, commenting a century ago on the erosion of ancient wisdom and the attendant rise of modern science in his book Science and the Modern World, called these revolutionary transformations "the most intimate change in outlook which the human race had yet encountered. Since a babe was born in a manger, it may be doubted whether so great a thing has happened with so little stir" [3, p. 10]. For British historian Herbert Butterfield, writing in his 1949 The Origins of Modern Science, 1300–1800, the Scientific Revolution "outshines everything since the rise of Christianity and reduces the Renaissance and Reformation to the rank of mere episodes, mere internal displacements, within the system of medieval Christendom.... [I]t looms so large as the real origin both of the modern world and of the modern mentality that our customary periodisation of European history has become an anachronism and an encumbrance" [4, p. viii]. (For more on this most revolutionary period in the history of science, see Lawrence Principe's The Scientific Revolution: A Very Short Introduction, Oxford University Press, 2011.)

⁴ Alas, with his comprehensive visionary program in hand, Bacon was a prophet, not a practitioner, of the new science. For all his influential rhetoric, this inspiring advocate of science and publicist of seventeenth-century scientific method was not a scientist. "He was a crooked chancellor in a moral sense and a crooked naturalist in an intellectual and scientific sense," historian Lynn Thorndike reminds us [6, p. 35]: he was Lord High Chancellor of England until relieved of his duties in 1623 on charges of bribery and improprieties in chancery suits. He died in the aftermath of the only "experiment" he is known to have conducted, one touching on the topic of thermodynamics: he caught "a chill" while stuffing a chicken with snow to test the preservative effect of low temperature. He rejected mathematics and both the telescope and the microscope, as well as the scientific merits of Copernicus, Kepler, and Galileo, and ironically even of two of his English contemporaries, William Gilbert and William Harvey, both of whom were notable exponents of the experimental method he espoused; in short, nearly everyone and everything that stood for the new philosophy. Nevertheless, the rather persuasive arguments of this loyal lawyer of London for the inductive scientific method and the goal-directed utilitarian motivation for doing science, as well as his proposals for the organization of knowledge and of the activity of science, were a tremendous force behind the new experimental philosophy of the Scientific Revolution, and remain influential in science even today. Indeed, the modern world is very much like the world envisioned by Francis Bacon.

Fig. 6.1 With both advocates holding degrees in law and ideas on heat (even if Bacon's "experiment" cost him his life), Cartesian rationalism (left; "truth" resides in the mind to be discovered by reason, distrust of the senses, top-down deductive a priori reasoning, abstract mathematical modeling) stands in direct opposition to Baconian empiricism (right; "truth" resides outside the mind to be discovered by observation and experiment, distrust of authority, bottom-up inductive a posteriori reasoning, practical experimental emphasis), the other dominant epistemology to emerge from the seventeenth-century Scientific Revolution, two complementary methodologies paralleling the two opposing views of understanding nature established by the ancient Greeks: Platonic mathematical idealism and Aristotelian material realism. The English polymath Robert Hooke, writing in the preface of his 1665 Micrographia, appreciated the importance of both paths to knowledge: "So many are the links upon which the true Philosophy depends, of which, if any one be loose, or weak, the whole chain is in danger of being dissolv'd; it is to begin with the Hands and Eyes, and to proceed on through the Memory, to be continued by the Reason; nor is it to stop there, but to come about to the Hands and Eyes again, and so, by a continual passage round from one Faculty to another, it is to be maintained in life and strength." (Photographs by the author: Descartes looking down pensively from the central building of the Hungarian Academy of Sciences, and Henry Weekes's 1845 marble statue of Francis Bacon in Cambridge University's Trinity Chapel)

It was then that the French philosopher-scientist René Descartes (Fig. 6.1), rightly regarded as the founder of modern philosophy and the chief architect of the century's new "mechanical" philosophy that proposed to explain everything in terms of mathematically measurable matter in motion, argued for the separate existence of mind (res cogitans: "thinking things"; immaterial thought) and body (res extensa: "extended things" of the external, material world), a dichotomy between the personal, conscious human subject and the impersonal, unconscious material universe. This Cartesian mind-body dualism contrasted the material world "out there" with the mental world "in here," an idea that remained in fashion in both philosophy and science until the twentieth century, when the principles of quantum physics made it clear that one cannot separate subject ("I") from object ("it"). Even then, echoing sentiments that reach back to Descartes, Einstein argued that science must be based on the belief in an external world independent of the perceiving subject: "The belief in an [objective] external world independent of the perceiving subject is the basis of all natural science" [5, p. 1]. Quite a pronouncement, coming from the author of the theory of relativity. "By design we set up the domain of nature as a system quite apart from ourselves" (Anthony Aveni, private communication). For Descartes, the world and everything in it is reducible to its machine-like mechanical essence. In sharp contradistinction to Renaissance naturalists and all animists of past and present times, Descartes viewed all of nature, the whole world and everything in it, as a machine. Nature, for Descartes, is wholly and completely lifeless; all matter is dead. Within this background of a mechanical universe utterly deanimated and fully detached from us, an entity that exists "such as it is"—the epitome of strict Cartesian objectivity—the Elizabethan statesman and prophet of science Sir Francis Bacon (1561–1626; Fig. 6.1)⁴ envisioned an "Interpreter of Nature" who, in a spirit of humility before God and the "facts of nature," sought to restore humankind, as a necessary precondition for Christ's earthly
rule, to that biblically sanctioned dominion over nature (Genesis 1:26, 28) that was believed lost by the Fall of Adam. Envisioning his project in terms of Christian redemptive history, Bacon spells out in his 1620 Novum organum (New Instrument [of knowledge]) his prescription to reverse the curse: For man by the Fall fell at the same time from this state of innocence and from his dominion over creation. Both of these losses however can even in this life be in some part repaired; the former by religion and faith, the latter by arts and sciences…. If a man endeavor to establish and extend the power and dominion of the human race itself over the universe, his ambition (if ambition it can be called) is without a doubt both a … wholesome and a … noble thing.

Science emerges as an affair centered on the control of the material world with humanity engaged in a constant battle against the forces of nature, a radical departure from the way other world cultures, particularly those of Indigenous peoples, have understood our relationship with nature. Our anxiety over the consequences of the laws of nature—here, in particular, the Second Law of Thermodynamics—is in large measure a direct result of our inability to control the natural world, a state of affairs realized long ago within the magical tradition which coexisted for a time with the new science. Science gives us knowledge of the world around us, not control over it.5 Control over nature, like “Man’s Control over Civilization,” to borrow from the title of Leslie White’s 1948 essay, is “An Anthropological Illusion”: Man finds himself in a universe to which he must adjust if he is to continue to live in it.... [With weather] we see the situation in terms of adjustment rather than control . We may not be able to control the weather, but adjust to it we must. And knowledge and understanding make for more effective and satisfying adjustments. It would be advantageous if we could control the weather. But if we cannot, then weather prediction is the next best thing. And for prediction we must have knowledge and understanding [7, pp. 244–245].

As with weather, so with thermodynamic systems in general, all ruled by laws of nature we have no control over—but must have knowledge and understanding of in order to accept and adjust, if this be the universe we are “to continue to live in.” 5 Just last month, as I write this, my community and others across Florida were devastated by the wind and rain accompanying Hurricane Ian. The costliest hurricane to ever hit the U.S., early estimates of the damage—much of it caused by unprecedented (and unexpected) flooding—approach the $100-billion mark. We could predict the path and the characteristics of the storm, but there was absolutely nothing we could do to control this force of nature.

6 So, What’s to Do?

109

The Second Law of Thermodynamics provides a disturbing corrective to the Baconian belief in "control over nature"—for "the relief of Man's estate," he tells us—just as Darwinian evolution provides a humbling antidote for our naïve self-love and anthropocentric cosmic arrogance. Thanks to Bacon—and the Enlightenment that followed his footsteps into the eighteenth century and beyond—we are uneasy in a world we can't control, a world that, in fact, controls us. Just as the devoutly religious feel uncomfortable in a Darwinian universe, so too do we feel uncomfortable in a universe governed by laws we cannot control—a universe ruled by the Second Law of Thermodynamics, gradually sliding naturally into a state of increasing chaos and disorder. Descartes's separation of "soul" from nature and Bacon's call for the subduction and control of nature loom as the ultimate source of our anxiety over having to live with the Second Law, and have been blamed, in the opinion of those more sensitive to the underlying interconnectedness of all things—those advocating for a religious naturalism that sees us as an interconnected, emergent part of nature—for a "disenchantment of the world" which consequently brought on the multitude of environmental problems we now face (see, e.g., Morris Berman's The Reenchantment of the World, Cornell University Press, 1981, and Carolyn Merchant's The Death of Nature: Women, Ecology and the Scientific Revolution, Harper & Row, 1980). The outcomes of the Scientific Revolution, like those of the Darwinian revolution that followed—this last being "the greatest of all intellectual revolutions" according to Ernst Mayr, one of the twentieth century's greatest evolutionary biologists—strike at the very heart of human existence and directly impact the human condition, as they introduced wholesale changes in cultural values, fearfully revealing the immensities of space and time and the seeming insignificance of humanity (recall Note 2).⁶

⁶ In his book titled Revolution in Science (Harvard University Press, 1985, p. 299), Harvard historian of science I. Bernard Cohen made the defensible claim that

The Darwinian revolution was probably the most significant revolution that has ever occurred in the sciences, because its effects and influences were significant in many different areas of thought and belief. The consequence of this revolution was a systematic rethinking of the nature of the world, of man, and of human institutions.

The French biochemist and Nobel laureate Jacques Monod, writing in his Chance and Necessity, concurred: "There is no scientific concept, in any of the sciences, more destructive of anthropocentrism than this one, and no other so rouses an instinctive protest from the intensely teleonomic creatures that we are" [8, p. 113]. Whereas the Copernican revolution was, as the evolutionary biologist Stephen Jay Gould put it, merely "about real estate," switching positions of Earth and Sun in the Solar System, the Darwinian revolution was "about essence … about who we are … it's what our life means insofar as science can answer that question." Darwin's discovery was, Gould contends, the "most discombobulating of all discoveries that science has ever made" [9].

During the Scientific Revolution we made our bed, one forever separated from a nature to be
controlled. Now we must sleep in it—if sleep we can—and in our dreams do our best to live and learn, to understand and adjust, and try our best to be happy in a world that continues to decay despite our best efforts to make it a better place. Nothing more we can do. It’s nature. The nature we created and cast asunder. And although science evolves as we get better at understanding the world around us—quantum mechanics superseded classical (Newtonian) mechanics, for example—it does not appear that the science of thermodynamics will ever be dethroned, so we must learn it and learn to live with it. Remember what Einstein once admitted about thermodynamics: “it is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown” [10, p. 33]. No one has yet found a way to cheat the Second Law. “Nothing in life is certain,” MIT professor Seth Lloyd pronounced, “except death, taxes, and the second law of thermodynamics” [11, p. 971]. Professor Lloyd continues: All three are processes in which useful or accessible forms of some quantity, such as energy or money, are transformed into useless, inaccessible forms of the same quantity.... Indeed, most of the good things in life—including life itself—arise from this gradual degradation of the useful into the useless, of order into disorder, known in physical terms as an increase in entropy.

And because of the Second Law, it is certain that the universe—and often we ourselves—will always lose when we consume the energy necessary for life itself and for all the pleasures that life affords: as we have learned, energy consumption is never 100% efficient. And, to make matters worse, the wasted energy—indeed all the energy we use, either directly or as it is eventually downgraded to waste heat—drives a planet-wide thermal pollution that makes life on Earth more difficult. The heating of the air and water contributes to undesirable changes in the climate, worsening an already dangerous warming due to the burning of fossil fuels, a warming that is even now harming life on land and in the water.

So, simplify. And then just deal with it. It's no one's fault that we can't win or even break even. That's life, and there's nothing anyone can do about it. It's simply a law of nature, the Second Law of Thermodynamics, which reflects nature's inherent inefficiencies and pronounced preference for randomness and variety, for disorder over order, for waste over efficiency. In almost all instances, there will be many unfavorable disordered possibilities and few favorable ordered ones, and in every instance, energy will be wasted.


As motivational author James Clear reminds us in concluding his take on how entropy explains why life always seems to get more complicated, "Given the odds against us, what is remarkable is not that life has problems, but that we can solve them at all" [12].

A word of warning about the words "order" and "disorder," which are, like truth and beauty, relative and in the eye of the beholder, and hence are inherently subjective descriptors that can be ambiguous and even misleading. For example, the free expansion of a gas discussed in Chap. 4 looks unbalanced when all the air is "ordered" by confinement to half the room before the partition is removed; an unbalanced state hardly seems ordered. The "disordered" final state, with the air evenly distributed throughout the entire room, certainly looks more balanced. Consider also the process of mixing two different gases, say nitrogen and oxygen, the primary constituents of the air we breathe. Separating the nitrogen to one side of a room and the oxygen to the other results, thermodynamically, in a more ordered and hence lower-entropy state, one that requires work to achieve. But the higher-entropy, mixed and hence thermodynamically disordered state looks more like the normal and natural state of the air we breathe. Clearly, care must be taken in assigning order and disorder in the world.

Indeed, Tolstoy's quip about unhappy families each being unhappy in their own way suggests a sociological slant to the Second Law, one that must be interpreted in terms of its cultural setting, which, after all, determines what use is made of the ideas and findings of science. To illustrate this, I can share a story about a college friend named Mason. Never noted for his mathematical skills, he nevertheless, after a marriage lasting only three months, worked out the odds of a happy marriage mathematically. Suppose, he reasoned, that, within the range of human happiness, the average person has a 50–50 chance of being happy at any given time. Given those odds, uniting two average people in marriage diminishes the chance of a happy couple to only 25% (= 0.50 × 0.50; the four possible microstates are HH, HU, UH, UU, with only the HH state providing undiluted happiness and marital bliss). I call it "Mason's Rule." No wonder the odds are against a happy marriage (evidently even more so for bigamists, according to "Mason's Rule"). Of course, the numbers will change depending on the personal happiness factor, but the fact remains: because joint probabilities are multiplicative, groups of people are less likely to be happy as a group than are individuals. No wonder it's so difficult to get along with the other 8 billion people mucking about this rock. Here it's important to point out that the microstate for both people being unhappy (UU) is just as ordered as the microstate for both people being happy (HH), although the "order" in the marriage will be decidedly different in the two cases; mathematical and matrimonial order are not always the same.
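To make the counting explicit, here is a minimal sketch (my own illustration, not from the book) of "Mason's Rule": enumerate the happy/unhappy microstates of a group and count the fraction in which everyone is happy at once. The function name and the numbers are mine, chosen only for illustration. The same multiplicative logic, scaled up from two moods to the enormous number of molecules in a room, is why the free expansion and gas-mixing examples above never run backward in practice.

```python
from itertools import product

def all_happy_probability(n_people, p_happy=0.5):
    """Chance that every member of a group is happy at the same time,
    assuming each person is independently happy with probability p_happy."""
    return p_happy ** n_people

# Enumerate the microstates for a couple explicitly (H = happy, U = unhappy).
couple_states = ["".join(s) for s in product("HU", repeat=2)]
print(couple_states)             # ['HH', 'HU', 'UH', 'UU'] -- only one of four is HH
print(all_happy_probability(2))  # 0.25  -- Mason's 25% for two average people
print(all_happy_probability(3))  # 0.125 -- the "bigamist" case fares even worse

# The same arithmetic applied to molecules instead of moods: the chance that
# N molecules of a freely expanding gas all wander back into one half of the
# room is (1/2)**N -- already absurdly small for N = 100, never mind a mole's
# worth of molecules (about 6 x 10**23).
print(0.5 ** 100)                # roughly 8e-31
```

Nothing here is special to marriages or to molecules; it is just the multiplication rule for independent probabilities, which is why disordered macrostates, with their vastly greater number of microstates, always win.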


Thus, one must be very cautious in applying the Second Law to sociocultural settings—indeed, anywhere outside the inanimate world, where outcomes may be contingent upon volition. Sociological factors may make us more or less susceptible to the tendency for things to go downhill, so there is hope that, by working on it, by making an effort, by trying harder—even by just thinking positive—we can, as James Clear assures us, solve problems and realize improvements in our lives and in the lives of others. But we have to work at it. "Entropy reminds us that energy is required to maintain order. You need to anticipate things falling apart and focus on prevention" [13, p. 64].

The dangers of porting principles of science into the social arena are well known, as in the case of Social Darwinism, a nineteenth-century perversion of evolutionary science marshaled in support of exterminatory political and socioeconomic opinions dressed in the guise of evolutionary progress through struggle and the "survival of the fittest"—a phrase coined in the 1860s by the English, self-taught, sometime railway engineer, philosopher, and social psychologist Herbert Spencer (1820–1903). Writing earlier, in his 1851 Social Statics, Spencer posits a "universal warfare maintained throughout the lower creation ... singl[ing] out the low-spirited, the intemperate, and the debilitated," reminding his readers that "under the natural order of things society is constantly excreting its unhealthy, imbecile, slow, vacillating, faithless members." Here was laissez-faire political economics in all its dog-eat-dog, might-is-right, every-man-for-himself glory: a law-of-the-jungle for civilized society that resonated strongly with the competitive ethos of nineteenth-century capitalism. Social Darwinism has been implicated, not always fairly, in late nineteenth-century racism and imperialism (the former often fueled by the latter), in the Nazi atrocities of the twentieth century, and in the eugenics movement that aimed to improve the genetic quality of the human population by controlled breeding, claiming on the basis of evolutionary biology, and often in overtly racist tones, that the State has a responsibility to limit the procreation of its least fit citizens. (For more on Social Darwinism, see Robert Bannister's Social Darwinism: Science and Myth in Anglo-American Thought, Temple University Press, 1979.)

In the extreme, author Arieh Ben-Naim warns readers of his Entropy Demystified: The Second Law Reduced to Plain Common Sense "that although many popular science books deal with the relationship between entropy and life, entropy and the fate of the universe, etc., all of these are pure speculations." In fact, not all of these connections are speculative. And after dismissing all standard interpretations of the Second Law, Ben-Naim, in the


epilogue titled "The Future Hypothesis" in his book Information, Entropy, Life, and the Universe: What We Know and What We Do Not Know, offers a future "far brighter and better founded than the future (hypothetical) prediction based on the Second Law," all of this being based on "logical deduction" but admittedly "only a hypothesis or a conjecture [he] cannot prove." He then (not very persuasively) posits the existence of (get ready!...) "superrobotic supercreatures" reigning over a "new world" where "the almighty entropy ... might one day become harmless. Supercreatures could control the random behavior of atomic particles, and the statistical nature of the Second Law of Thermodynamics would be an obsolete theory" [14, pp. 404–407]. Really?!

And while we're at it—lest we lose all hope (and sanity) for a bright and better future—we should reflect on the words of Princeton University professor of the history of science, Charles Gillispie, who, over half a century ago in his influential account of the history of scientific ideas, The Edge of Objectivity, warned us against blindly letting "the ideas and findings of science" unjustifiably influence culture and personal behavior. Addressing the rising pessimism at the end of the nineteenth century in the wake of the realization of the coming heat death of the universe predicted by the Second Law, Gillispie lays the blame elsewhere, outside the bounds of science: "For it can scarcely have been science which changed the style from the optimism of the eighteenth century to the callow pessimism of the nineteenth" [15, p. 404]. After all, century endings alone often elicit a surge in collective apocalyptic hysteria. Sure, there's nothing we can do about the ultimate heat death of the universe—but we don't have to worry about it (recall British philosopher Bertrand Russell's response in Note 17 of Chap. 3). And let's not forget that the last call is billions of years down the road. As the American entrepreneur Elon Musk tweeted on 9 May 2017, "If heat death is the fate of the universe, it really is all about the journey." Make it a good one.

Arguably the greatest intellectual adventure of all time, science is, admittedly, a social enterprise. But that doesn't mean the inverse—the results of science reaching back on us to influence human habits and culture—must necessarily ensue. To be sure, the history of science is filled with periods when our understanding of the natural world informed—and, conversely, was informed by—contemporary social and cultural trends: art historian John Adkins Richardson reminds us that "everything, without exception, is symptomatic of certain aspects of the milieu in which it happened" [16, p. 172]. We have only to recall the great Age of Faith when, for over a thousand years during the Middle Ages, science was relegated to being the "handmaiden of theology," and when, with the direction of influence reversed, in the eighteenth-century Enlightenment the ideas of Newton impacted, in the


words of science historian Richard Olson, "almost every aspect of elite and popular culture" [17, p. 139].7 But we survived, and life goes on. Gillispie continues [15, pp. 404–405]:

Certainly evolution and entropy were the leading novelties which nineteenth-century science offered to the pundit. Exponents of Darwinism, as is well known, interpreted competition in the most ferocious fashion, making strife the law of life, and progress the defeat of the miserable and incompetent instead of the harmonious advancement of mankind along the paths of nature. But Darwin had only found his language in the literature of political economy [specifically, that of Herbert Spencer, who coined the term "survival of the fittest"], where the mood was already fatalistic. Nature holds no such message. Nor, perhaps, is entropy really much help to the understanding of history or the social process. Nevertheless, if the robber baron and the strong man armed took license to ruthlessness from the theory of natural selection, the intellectual who shrank from such successes found in entropy the excuse to indulge his mal du siècle [malaise of the century].

Indeed, all hope for human happiness and dignity was not lost to Darwin and the apes. Nor, in more recent times, did the world give up hope of ever again being certain of anything following pioneer quantum physicist Werner Heisenberg’s announcement in 1927 of his eponymous uncertainty principle, a foundational concept in quantum mechanics, one of the most successful scientific theories of all time. We create laws, invent religions, and develop social norms and customs to prod the natural disorder of life, the universe, and everything into a semblance of order, longing to transform chaos into cosmos.

7 Newton, indeed. Here is one of the great American minds of the twentieth century, John Herman Randall, writing in his magisterial The Making of the Modern Mind nearly a century ago [18, pp. 276–]:

Never in human history, perhaps, have scientific conceptions had such a powerful reaction upon the actual life and ideals of men.... The history of thought in that age is largely the history of the spread to all fields of human interest of the method and aims of Newtonian science.... Isaac Newton effected so successful a synthesis of the mathematical principles of nature that he stamped the mathematical ideal of science, and the identification of the natural with the rational, upon the entire field of thought.... Man and his institutions were included in the order of nature and the scope of the recognized scientific method, and in all things the newly invented social sciences were assimilated to the physical sciences.... The two leading ideas of the eighteenth century, Nature and Reason, derived their meaning from the natural sciences, and, carried over to man, led to the attempt to discover a social physics. (For specific examples of how Newton’s force reached outside the bounds of science proper and into the arts, see my essay “The Scientific Revolution in Art,” Phys. Perspect. 23, 139-169 [2021].)


With these upbeat thoughts in mind, we conclude with some final fitting—if characteristically less than optimistic—words of warning from Professor Peter Atkins [1, pp. 125–126, 130]:

Although elaborate events may occur in the world around us, such as the opening of a leaf, the growth of a tree, the formation of an opinion, and disorder thereby apparently recedes, such events never occur without somehow being driven. That driving results in an even greater production of disorder elsewhere. The net effect, the sum of the entropy change arising from the reduction of disorder at the constructive event and the entropy change arising from the increase in disorder of the driving, dissipative event, is the net increase in entropy, an overall production of net disorder. So, whenever we see order emerging, we must lift the curtain and see greater disorder being produced elsewhere. We, indeed, all structures, are local abatements of chaos. … The spring of change is aimless, purposeless corruption, yet the consequences of interconnected change are the amazingly delightful and intricate efflorescences of matter we call grass, slugs, and people. … The world is driven forward by [the] universal tendency to collapse into disorder. We, and all our artefacts, all our achievements, are ultimately the outcome of this purposeless, natural spreading into ever greater disorder.

Order, Atkins continues [19, pp. 198–200]: … is intrinsically transient, and crumble[s] into incoherence when [a] structure ceases to be driven by a flow of energy. Death comes to a piston, as to a person, when dissipation ceases. Dust—incoherence—goes to dust; between dusts there is the ramified structure of life. To live we must dissipate and sustain our fleeting disequilibrium, for equilibrium is death… We are the children of chaos, and the deep structure of change is decay. At root, there is only corruption, and the unstemmable tide of chaos. Gone is purpose; all that is left is direction. This is the bleakness we have to accept as we peer deeply and dispassionately into the heart of the Universe. Here again, the flow of entropy rises up to become the central organizing principle in the universe.

Rather amazingly, as Atkins has stated elsewhere [20, p. 97], … concepts that effectively sprang from the steam engine … reach out to embrace the unfolding of a thought. This little mighty handful of laws truly drive the universe, touching and illuminating everything we know.
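To put a number on the bookkeeping Atkins describes above, the sum of a local entropy decrease and the larger increase it drives elsewhere, here is a minimal sketch (my own illustration, not taken from Atkins or from this book) using a stock textbook case: a kilogram of water freezing on a cold day and dumping its latent heat into air at minus ten degrees Celsius. The round-number values for the latent heat and the temperatures are standard, chosen purely for illustration.

```python
# A minimal sketch (not from the book) of the entropy bookkeeping Atkins describes:
# water freezing (a local increase in order) while dumping its latent heat into
# colder surroundings (a larger increase in disorder elsewhere).

L_FUSION = 334_000.0   # latent heat of fusion of water, J per kg (rounded)
T_WATER  = 273.15      # freezing point of water, K
T_AIR    = 263.15      # surrounding air at -10 degrees C, K
mass     = 1.0         # kg of water freezing

dS_water = -L_FUSION * mass / T_WATER   # entropy lost by the water as it orders into ice
dS_air   = +L_FUSION * mass / T_AIR     # entropy gained by the colder surroundings

print(f"Entropy change of water:        {dS_water:9.1f} J/K")  # about -1222.8 J/K
print(f"Entropy change of surroundings: {dS_air:9.1f} J/K")    # about +1269.2 J/K
print(f"Net entropy change:             {dS_water + dS_air:9.1f} J/K (> 0)")
```

The water's entropy drops as it orders into ice, but the colder surroundings gain more than the water loses, so the total still rises. Run the same numbers with surroundings warmer than the freezing point and the total would come out negative, which is exactly why ice never forms spontaneously in warm air.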


As the noted Nobel laureate American physicist Richard Feynman and coauthors point out in the celebrated Feynman Lectures on Physics [21, p. 442], "The science of thermodynamics … constitutes one of the few famous cases in which engineering has contributed fundamentally to physical theory." Indeed, as the American biochemist Lawrence Henderson famously remarked over a century ago, science owes more to the steam engine than does the steam engine to science, reflecting the longstanding pattern of the arts preceding their associated sciences until more modern times. "[I]n all cases, the Arts are prior to the related Sciences. Art is the parent, not the progeny of Science," the nineteenth-century Cambridge polymath and Master of Trinity College, William Whewell, could confidently claim. Little did Henderson appreciate how much more than science proper—things like life, the universe, and everything—is owed to this device that lay at the heart of the Industrial Revolution, these "iron scourges over Albion … with cogs tyrannic," as the English Romantic poet William Blake darkly and despondently described—and lived—it.

So make it simple. And make an effort to make things better. It's a natural human compulsion to find order ("cosmos") amidst chaos. And then be happy. You just might, on occasion, as the hopeful words of Steven Pinker introducing this chapter suggest, "fight back the tide of entropy and carve out refuges of beneficial order." Sometimes. After all, just as "no change is an island of activity," so too, as the English metaphysical poet and Shakespeare contemporary John Donne realized four hundred years ago (reaching back in time and across the "two cultures" barrier one last time), "no man is an island": we and everything around us are components of open systems and thus have the capability to "carve out refuges of beneficial order." But we have to work at it. All of us.

Not all will agree with the tale I tell here. While the development of the science of thermodynamics retraced here is historical fact, interpretations and applications of the Second Law can be, to a greater or lesser degree, admittedly less objective. Nevertheless, I have throughout tried to steer clear of unsupported opinion, biased judgement, and outrageous fantasy. In any case, I hope you now have a better understanding of why things often have a tendency to go wrong and sometimes seem to be getting worse, and, most importantly, I hope you understand that none of this downward spiral in nature is your fault; it's just nature, and much of the anxiety over the Second Law stems from the way we conceived of nature so many years ago. As I said at the beginning, I hope that by learning about all of this, you'll be better prepared to deal with it all.


An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly. – Harvard psychologist Steven Pinker [2, p. 19]

References

1. P. W. Atkins, Galileo's Finger: The Ten Great Ideas of Science (Oxford University Press, Oxford & New York, 2003)
2. S. Pinker, "The Second Law of Thermodynamics," in This Idea Is Brilliant: Lost, Overlooked, and Underappreciated Scientific Concepts Everyone Should Know, ed. by J. Brockman (Harper, New York, 2018)
3. A. N. Whitehead, Science and the Modern World (The Macmillan Company, New York, 1925)
4. H. Butterfield, The Origins of Modern Science, 1300–1800 (G. Bell and Sons Ltd, London, 1949)
5. A. Einstein, "Maxwell's Influence on the Evolution of the Idea of Physical Reality," Einstein Archives 65–382 (1931)
6. L. Thorndike, "Francis Bacon—A Critical View," in Origins of the Scientific Revolution, ed. by H. Kearney (Barnes and Noble, New York, 1964)
7. L. White, "An Anthropological Illusion," Sci. Monthly 66 (1948)
8. J. Monod, Chance and Necessity, trans. A. Wainhouse (Alfred A. Knopf, Inc., New York, 1971; orig. publ. as Le hasard et la nécessité, Éditions du Seuil, Paris, 1970)
9. "Stephen Jay Gould, 1941–2002," NOVA Online, pbs.org. Accessed 28 February 2023
10. P. A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist (Open Court, La Salle, IL, 1949)
11. S. Lloyd, "Going into Reverse," Nature 430, 971 (26 August 2004); https://doi.org/10.1038/430971a
12. J. Clear, "Entropy: Why Life Always Seems to Get More Complicated," https://jamesclear.com/entropy. Accessed 28 February 2023
13. R. Beaubien, S. Parrish, "Thermodynamics," in The Great Mental Models, Vol. 2: Physics, Chemistry and Biology (Latticework Publishing, Glebe, Ottawa, 2019)
14. A. Ben-Naim, Information, Entropy, Life and the Universe: What We Know and What We Do Not Know (World Scientific Publishing Co., Singapore, 2015)
15. C. C. Gillispie, The Edge of Objectivity: An Essay in the History of Scientific Ideas, chapter IX: Early Energetics (Princeton University Press, Princeton, 1990; orig. publ. 1960)
16. J. A. Richardson, Modern Art and Scientific Thought (University of Illinois Press, Urbana, 1971)


17. R. Olson, Science & Culture in the Western Tradition: Sources and Interpretations (Gorsuch Scarisbrick, Scottsdale, AZ, 1987)
18. J. H. Randall, The Making of the Modern Mind (Columbia University Press, New York, 1926)
19. P. W. Atkins, The Second Law (W. H. Freeman & Co., New York, 1994; orig. publ. 1984)
20. P. W. Atkins, The Laws of Thermodynamics: A Very Short Introduction (Oxford University Press, Oxford & New York, 2010)
21. R. P. Feynman, R. B. Leighton, M. Sands, The Feynman Lectures on Physics: Mainly Mechanics, Radiation, and Heat, Volume 1 (Addison-Wesley, Reading, MA, 1963)

Further Reading

From the list below: for a very readable, highly humanized account of the development of thermodynamics with a focus on the Second Law, Hans Christian von Baeyer's Warmth Disperses and Time Passes: The History of Heat is highly recommended; Don Lemons's Thermodynamic Weirdness: From Fahrenheit to Clausius traces the development of "macroscopic" thermodynamics up to the time of Boltzmann's introduction of an atomistic ("microscopic") interpretation of the Second Law, and includes generous and enlightening excerpts from primary (original) sources. Reaching across to the "other" culture, Bruce Clarke's Energy Forms: Allegory and Science in the Era of Classical Thermodynamics summarizes the modernist reception of, and allegorical cultural response to, thermodynamics, including the wider cultural implications of moral and social entropy. For a "math-lite" introduction to the laws of thermodynamics, the reader is invited to consult any introductory physics textbook, virtually all of which include chapters on thermodynamics.


References

1. P. W. Atkins, The Second Law (W. H. Freeman & Co., New York, 1994; orig. publ. 1984)
2. P. W. Atkins, "Entropy: The Spring of Change," in Galileo's Finger: The Ten Great Ideas of Science (Oxford University Press, Oxford and New York, 2003), pp. 109–134
3. P. W. Atkins, The Laws of Thermodynamics: A Very Short Introduction (Oxford University Press, Oxford and New York, 2010)
4. A. Ben-Naim, Information, Entropy, Life and the Universe: What We Know and What We Do Not Know (World Scientific Publishing Co., Singapore, 2015)
5. A. Ben-Naim, Entropy Demystified: The Second Law Reduced To Plain Common Sense, 2nd ed. (World Scientific Publishing Co., Singapore, 2016)
6. S. C. Brown, Benjamin Thompson, Count Rumford (MIT Press, Cambridge, MA, 1979)
7. D. Cardwell, From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age (Cornell University Press, Ithaca, 1971)
8. D. Cardwell, James Joule: A Biography (Manchester University Press, Manchester, 1989)
9. S. Carnot, Reflections on the Motive Power of Fire (Dover, New York, 1960; orig. publ. 1824)
10. C. Cercignani, Ludwig Boltzmann: The Man Who Trusted Atoms (Oxford University Press, Oxford and New York, 2010; orig. publ. 1998)
11. H. Chang, "Thermal Physics and Thermodynamics," in The Oxford Handbook of the History of Physics, ed. by J. Buchwald, R. Fox (Oxford University Press, Oxford and New York, 2013), pp. 445–472



12. B. Clarke, Energy Forms: Allegory and Science in the Era of Classical Thermodynamics (University of Michigan Press, Ann Arbor, MI, 2001)
13. J. Clear, "Entropy: Why Life Always Seems to Get More Complicated," https://jamesclear.com/entropy. Accessed 28 February 2023
14. O. Darrigol, J. Renn, "The Emergence of Statistical Mechanics," in The Oxford Handbook of the History of Physics, ed. by J. Buchwald, R. Fox (Oxford University Press, Oxford and New York, 2013), pp. 765–788
15. J. Dornberg, "Count Rumford: The Most Successful Yank Abroad, Ever," Smithsonian 25, 102–115 (1994)
16. Y. Elkana, The Discovery of the Conservation of Energy (Hutchinson, London, 1974)
17. C. C. Gillispie, The Edge of Objectivity: An Essay in the History of Scientific Ideas, chapter IX: Early Energetics (Princeton University Press, Princeton, 1990; orig. publ. 1960)
18. P. M. Harman, Energy, Force, and Matter: The Conceptual Development of Nineteenth-Century Physics (Cambridge University Press, Cambridge, 1982)
19. E. Johnson, Anxiety and the Equation: Understanding Boltzmann's Entropy (MIT Press, Cambridge, MA, 2018)
20. H. S. Kragh, Entropic Creation: Religious Contexts of Thermodynamics and Cosmology (Ashgate, Aldershot, 2013; orig. publ., Routledge, 2008)
21. H. S. Kragh, "The Many Faces of Thermodynamics," in Between the Earth and the Heavens: Historical Studies in the Physical Sciences, Chapter 1 (World Scientific Publishing Co., Singapore, 2021)
22. D. S. Lemons, A Student's Guide to Entropy (Cambridge University Press, Cambridge, 2013)
23. D. S. Lemons, Thermodynamic Weirdness: From Fahrenheit to Clausius (MIT Press, Cambridge, MA, 2019)
24. I. R. Morus, When Physics Became King (University of Chicago Press, Chicago, 2005)
25. J. W. Patterson, "Thermodynamics and Evolution," in Scientists Confront Creationism, ed. by L. R. Godfrey (W. W. Norton, New York, 1983), pp. 99–116
26. S. Pinker, "The Second Law of Thermodynamics," in This Idea Is Brilliant: Lost, Overlooked, and Underappreciated Scientific Concepts Everyone Should Know, ed. by J. Brockman (Harper, New York, 2018), pp. 17–20
27. R. D. Purrington, Physics in the Nineteenth Century (Rutgers University Press, New Brunswick, 1997)
28. K. Robertson, "The Demons Haunting Thermodynamics," Phys. Today 74, 44–50 (2021)
29. C. Smith, "Energy," in Companion to the History of Science, ed. by R. C. Olby et al. (Routledge, London, 1990), pp. 326–341
30. C. Smith, The Science of Energy: A Cultural History of Energy Physics in Victorian Britain (University of Chicago Press, Chicago, 1998)


31. C. Smith, "Force, Energy, and Thermodynamics," in The Modern Physical and Mathematical Sciences, The Cambridge History of Science, vol. 5 (Cambridge University Press, Cambridge, 2003), pp. 289–310
32. C. Smith, M. N. Wise, Energy and Empire: A Biographical Study of Lord Kelvin (Cambridge University Press, Cambridge, 1989)
33. C. P. Snow, The Two Cultures and the Scientific Revolution (Cambridge University Press, Cambridge, 1959)
34. N. Spielberg, B. D. Anderson, Seven Ideas that Shook the Universe, Chapter 5: "Entropy and Probability" (John Wiley & Sons, New York, 1985)
35. H. C. von Baeyer, Warmth Disperses and Time Passes: The History of Heat (Modern Library, New York, 1999; orig. publ. as Maxwell's Demon, Random House, 1998)
36. S. S. Wilson, "Sadi Carnot," Sci. Am. 245, 134–145 (1981)

Index

A

Abiogenesis 83, 93 Absolute temperature 23, 46–48, 73 Absolute zero 22, 23, 47, 48, 78 Adenosine triphosphate (ATP) 83 Adams, Henry Brooks 105 Adiabatic 37, 46, 79 Age of Earth 97 Age of the universe 84 Aging 50, 58, 59 Air conditioner 34, 35, 49 Alchemy 39 Amino acids 96, 97 Ampère, André-Marie 29 Anthropic principle 100 Argument from Design 97 Arrow of time 21, 57 Aristotle 9–11, 15, 25 Artificial Intelligence (AI) 89 Atomic theory 70 Avogadro 77, 78

B

Bacon, Francis 103, 106–109

Big Bang 47, 52 Bioenergetics 67 Biology 24 Black hole 52, 79, 89, 91 Blake, William 67, 116 Boltzmann constant 73 Boltzmann, Ludwig 69, 73, 75, 79, 89, 95, 98, 100 Boyle, Robert 15, 19 Brewster, David 33 Button, Benjamin 58

C

Cake 59, 90 Caloric 9, 10, 12, 14–19, 25, 39, 43, 44, 54 Calorie 9, 25, 32, 65 Calorimetry 9, 12, 43 Calorique 9, 12, 15 Carnot 92 Carnot’s cycle 46 Carnot’s Theorem 44 Carnot, Lazare 43 Carnot, Sadi 21, 39–41, 43, 54, 63


Carnot 41, 43–45 Chance 41, 77, 80, 81, 90, 96, 97, 100, 109, 111 Chaos 6, 59, 92, 103, 109, 114–116 Chemical energy 29, 66, 71, 95 Clapeyron, Émile 21, 45, 46, 54 Clausius, Rudolf 21–23, 33, 34, 36, 54–56, 62, 63, 70, 81 Climate 14, 52, 89, 110 Clinton, Bill 24 Closed system 56, 87, 94, 96 Coefficient of performance 48 Coin toss 69, 73, 75–78, 80, 88–91 Combustion 10, 13, 42, 43, 47, 81, 82, 84 Complexity 82, 83, 89, 97 Conservation of energy 26, 33, 38, 66 Conservation of mass 13, 26 Control over nature 108, 109 Copernicus, Nicolaus 10, 104, 106 Creationists 94

D

Dalton, John 74 Dante 39 Darwin, Charles 16, 33, 34, 52, 53, 98, 99, 109, 114 Davy, Humphrey 18, 29 Dawkins, Richard 98 Descartes, René 15, 25, 29, 103, 106, 107, 109 Devil’s Chaplain 98 Dice 80 Dirac, Paul 5 Directionality in nature 14, 34, 59 Directionality, see Directionality in nature natural direction, see Directionality in nature Disorder parameter 75 Dissipation 42, 49, 53, 56, 87, 93, 115

Divine Watchmaker 97 DNA 24 Donne, John 10, 116

E

Eddington, Arthur 41, 57 Efficiency, Carnot 44, 47–49, 64 Efficiency, thermal 46, 48, 64 Egg 59, 63, 69, 90 Einstein, Albert 23, 25, 26, 33, 67, 70, 72, 74, 91, 104, 106, 110 Eiseley, Loren 3 Elliot, T. S. 51 E = mc 2 26, 67 Engels, Friedrich 52 Engines 28, 29, 31, 33–35, 39, 41–44, 46, 47, 55, 62–64, 81 Enlightenment 109, 113 Entropic creation 52 Entropy 1, 21, 22, 40, 41, 54–59, 61–63, 65, 67, 69, 71, 73–84, 87–98, 103–105, 110–116 Evolution 33, 51–53, 58, 87, 96, 98, 100, 109, 114 chemical evolution 93

F

Faraday, Michael 29, 72 Feynman, Richard 17, 24, 58, 61, 79, 84, 116 First Law of Thermodynamics 9, 18, 21, 24, 26, 32, 34, 36–39, 41, 49, 54, 56, 64, 67 First Law, see First Law of Thermodynamics Flammarion, Camille 50, 51 Fleck, R. 59 Food 17, 25, 26, 58, 65, 66, 82–84, 94–96 Fourier, Jean-Baptiste Joseph 45, 57 Franklin, Benjamin 16, 33, 42 Frost, Robert 50


G

Galilei, Galileo 3, 4, 22, 39, 40, 105, 106 Gas constant 36, 55 Genesis 52, 94, 108 Genetic 24, 59, 93, 100, 112 Gibbs free energy 67 Gibbs, J. Willard 21, 67, 77 Gould, Stephen Jay 109 Gravity, gravitational 15, 17, 18, 24, 29, 39, 45, 47, 52, 56, 66, 91, 100 Gulliver's Travels 3

H

Habitable zone 94 Hall, Asaph 5 Hawking radiation 91 Hawking, Stephen 28, 91, 99 Heat 4, 6, 9–12, 14, 15, 17–19, 21, 22, 24–39, 41–59, 62–66, 69–71, 79–84, 94, 95, 107, 110 Heat death 21, 49, 50, 52, 56, 62, 113 Heat engine 19, 33–35, 38, 41–46, 48, 49, 52, 54, 63, 64, 80–84, 92 Heating 12, 30, 35, 38, 47–49, 110 Heat pump 34, 35, 48, 49, 63 Heat transfer 14, 55, 61–63, 65, 81 Heisenberg, Werner 61, 83, 114 Hemoglobin 96, 97 Hendrix, Jimi 80 Heraclides of Pontus 104 Herschel, William 29 Hitchhiker's Guide 88, 100 Hokusai, Katsushika 60 Hooke, Robert 15, 107 Humanities 1–3, 5, 6, 22, 108, 109 Humpty Dumpty 59, 63, 67 Huygens, Christiaan 29

I

Ideal gas 35, 36, 38 Imponderable fluid 9, 10, 15, 25, 39, 45 Industrial Revolution 46, 116 Infinite 23 Information theory 14, 87–89 Intelligent Design 97 Internal combustion engine 43, 61 Internal energy 36–38, 55 Irreversible 45, 47, 50, 55–59, 61, 63, 94 Isentropic 79 Isothermal 46

J

Jetliner 90, 92 Jonathan Swift 60 Joule, James Prescott 16, 21, 26, 30–33, 54, 56, 59 Joule, see Joule, James Prescott

K

Kelvin, Lord (William Thomson) 16, 21, 28, 30, 31, 33, 39, 46, 49, 72 Kepler, Johannes 3, 5, 106 Kinetic energy 29, 84 Kinetic theory 70

L

Laforgue, Jules 51 Lagrange, Joseph-Louis 13 Latent heat 82 Lavoisier, Antoine-Laurent 9–13, 15, 16 Laws of thermodynamics 5, 10, 22, 24, 83 Leibniz, Gottfried Wilhelm 29 Lemaître, Georges 52 Locke, John 9, 10, 15 Logarithm 73, 79, 88, 91


Lord Rayleigh 90 Lord Tennyson, Alfred 34

M

Macroscopic 67, 70, 73–75, 78, 81 Macrostate 73, 75–77, 79 Marriage 92, 111, 112 Mason's Rule 111 Mass energy 26, 67 Maxwell, James Clerk 16, 70–72, 90 Maxwell's Demon 71, 105 Mayer, Julius Robert 21, 26–28, 30, 33, 39, 54 Mechanical equivalent of heat 30, 32 Metabolism, metabolic 66, 69, 82–84 Microscopic 70, 71, 73–76, 80, 81 Microstate 73, 75–79, 111 Milton, John 104 Mind-body dualism 106 Mixedupness 76, 77, 80 Mixing 12, 14, 43, 89, 90, 111 Molar gas constant 36, 55 Mole 77, 78 Molecular 9, 18, 19, 24, 36, 61, 69–71, 74, 79–81, 83, 84 Monod, Jacques 109 Murphy's Law 1, 5, 88

N

Natural selection 33, 34, 53, 100, 114 Neil Young 92 Newton, Isaac 12, 15, 16, 18, 28, 29, 32, 40, 41, 45, 57, 66, 67, 70, 72, 104, 105, 113, 114 Nuclear 24, 25, 61, 95

O

Ockham's razor 104 Oersted, Hans Christian 29 Open system 94, 116 Orwell, George 39

P

Paley, William 97, 98 Pauli, Wolfgang 29 Perpetual motion 38, 39 Phlogiston 9, 12, 13, 16, 18 Photosynthesis 84, 95 Piston 37, 62, 81, 115 Plato 104 Playroom 91 Poincaré, Henri 29 Pollock, Jackson 60 Pope Pius XII 52 Potential energy 29, 66 Power 6, 19, 24, 25, 32, 34, 39, 41–44, 58, 64, 73 Pressure 36, 38, 44–46, 55, 59, 67, 70, 78, 83 Prigogine, Ilya 83, 94 Principle of parsimony 104 Probability 67, 70, 71, 73, 75, 77–79, 89, 93, 111 Protein 83, 96 Ptolemy, Claudius 17 Puzzle 90, 92 pV-diagram 45, 46

Q

Quality of energy 56, 57 Quantity of energy 56 Quantity of heat 17, 26, 30, 45, 46, 54, 55, 65, 95 Quantum 5, 19, 25, 26, 47, 52, 61, 72, 78, 83, 94, 106, 110, 114 Quantum gravity 91 Quetelet, Adolphe 71

R

Radioactivity 53 Random 19, 35, 59, 61, 77, 80, 84, 90, 93, 97, 98, 113 Refrigerator 28, 34, 35, 38, 48, 62 Relativity 5, 23, 26, 67, 72, 91, 106 Reversible 43, 46, 55, 57, 59, 61, 79 Richter scale 73 Romanticism 24 Rousseau, Jean-Jacques 25 Royal Society 16, 17, 33, 72 Rumford, Count (Benjamin Thompson) 9, 15–18 Thompson, Benjamin 11, 17 Russell, Bertrand 51

S

Sacred Tetrad 11 Schrödinger, Erwin 71, 94 Schwarzschild radius 91 Scientific Revolution 2, 103, 105–107, 109, 114 Second Law of Thermodynamics 1–7, 9, 14, 18, 21, 24, 25, 33, 34, 37, 39–41, 44, 45, 47, 50, 52–54, 55–59, 61–67, 69, 73, 74, 79, 88, 90, 91, 94, 103, 108–110, 113 Second Law, see Second Law of Thermodynamics Sedgwick, Adam 33 Shakespeare, William 1, 3–7, 10, 22, 23, 40, 55, 97, 98, 116 Shannon, Claude 88 Shannon entropy 88 Smocovitis, Vassiliki Betty 24 Snow, C. P. 1–4, 6, 55 Snowflake 82 Social Darwinism 112 Solar 43, 51, 95, 96, 109 Specific heat 14, 65, 66 Spencer, Herbert 112, 114 Star Trek 24, 93 Star Wars 24 State variable 36, 55


Statistical mechanics 69, 70, 78 Steam engine 28, 29, 34, 41–43, 46, 58, 61, 67, 89, 115, 116 Stored 39, 56, 66 Stott, Nicole 60 Swift, Jonathan 3 Swinburne, Algernon 51

T

Tait, Peter Guthrie 31 Temperature 12, 14, 22, 23, 25, 30, 32, 34–38, 43–49, 52, 53, 55, 56, 61–67, 70, 71, 77–79, 81, 83, 95, 106 Temperature conversion 48 Thermal energy 9, 35–37, 45, 48, 55–57, 80 Thermal equilibrium 22, 52, 56, 62 Thermodynamics 6, 10, 14, 21–25, 31, 36, 39, 42, 45, 46, 52–56, 61, 63, 65, 67, 69–73, 75, 77, 80, 83, 87–89, 91, 94, 105, 106, 108, 110, 116 Thermometer 22, 32 Third Law of Thermodynamics 24, 47 Third Law, see Third Law of Thermodynamics Thompson, see Rumford, Count (Benjamin Thompson) Thomson, William, see Kelvin, Lord (William Thomson) Thorne, Kip 91, 96 Time’s arrow 57, 63, 78 Tolstoy, Leo 92 Trophic level 84, 95 Turbulence 59–61 Turbulent 59–61, 80, 83 Two Cultures 2, 3, 6, 22, 55, 67, 97, 116


V

van Gogh, Vincent 60 Volta, Alessandro 29 Viscosity 60, 61, 71 von Helmholtz, Hermann 21, 27, 28, 33, 39, 49, 54, 56, 98, 104

W

Waste 42, 52, 63, 82, 94, 96, 110 Watt, James 46 Weber, Max 89 Weight 12, 22, 31, 32, 56, 57, 65–67 Wells, H. G. 51 Whewell, William 116 Wilson, Edward O. 3 Work 2, 14, 18, 23, 24, 26, 28, 29, 31–39, 41, 43–49, 52, 54–56, 61, 63–67, 71–73, 81, 87, 88, 90–92, 97, 98, 111, 112, 116

Y

Yoda 6

Z

Zeroth Law of Thermodynamics 22, 24