
APPLICATIONS OF MODERN PHYSICS IN MEDICINE

Applications of Modern Physics in Medicine

Mark Strikman
Kevork Spartalian
Milton W. Cole

PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD

Copyright © 2015 by Princeton University Press
Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TW
press.princeton.edu
All Rights Reserved
ISBN 978-0-691-12586-2
British Library Cataloging-in-Publication Data is available
This book has been composed in Times New Roman with Helvetica Neue LT Std Extended display
Printed on acid-free paper. ∞
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

Io stimo più il trovare un vero, benché di cosa leggiera, che ’l disputar lungamente delle massime questioni senza conseguir verità nissuna. (I value finding a truth, however slight the matter, more than disputing at length about the greatest questions without attaining any truth.)
—Galileo Galilei

There cannot be a greater mistake than that of looking superciliously upon practical applications of science. The life and soul of science is its practical application...
—Lord Kelvin

CONTENTS

Preface and Guide to Using This Book
Technical Abbreviations
Timeline of Seminal Discoveries in Modern Physics
Timeline of Discoveries and Inventions in Modern Medical Physics

Chapter 1  Introduction
  1.1 Overview
  1.2 The Meaning of the Term Modern Physics
  1.3 Mortality
  1.4 How to Use This Book
  Exercises

Chapter 2  When You Visit Your Doctor: The Physics of the “Vital Signs”
  2.1 Introduction
  2.2 Stethoscope
  2.3 Sphygmomanometer and Blood Pressure
  2.4 Electrocardiogram
  2.5 Physics and Physiology of Diet, Exercise, and Weight
  Exercises

Chapter 3  Particles, Waves, and the Laws that Govern Them
  3.1 What Is Modern Physics?
  3.2 Light: Particle or Wave?
  3.3 Atoms
  3.4 Lasers
  3.5 Relativity
  3.6 Nuclei
  3.7 X-Rays and Radioactivity
  Exercises

Chapter 4  Photon and Charged-Particle Interactions with a Medium
  4.1 Overview
  4.2 Mean Free Path and Cross Sections
  4.3 Photon Interactions
  4.4 Electron and Positron Interactions
  Exercises

Chapter 5  Interactions of Radiation with Living Tissue
  5.1 Introduction
  5.2 Cell Death Due to DNA Radiation Damage
  5.3 Dependence of Cell Survival on the Dose
  5.4 Low Doses of Radiation
  5.5 Radiation Dose versus Altitude
  Exercises

Chapter 6  Diagnostic Applications I: Photons and Radionuclides
  6.1 Overview
  6.2 Photons
  6.3 X-Rays and Gamma Rays
  6.4 Radionuclides
  6.5 Novel Ideas for Nuclear Imaging
  Exercises

Chapter 7  Diagnostic Applications II: MRI and Ultrasound
  7.1 Overview
  7.2 Magnetic Resonance Imaging (MRI)
  7.3 Ultrasound
  7.4 Multimodal Imaging
  Exercises

Chapter 8  Applications in Treatment
  8.1 Overview
  8.2 Treatment with Radiation
  8.3 Treatment with Particles
  8.4 Treatment with Ultrasound
  8.5 Treatment with Microwaves
  8.6 Treatment with Lasers
  Exercises

Appendix A  Constants, Powers of 10, and Conversions Mentioned in the Text
  Fundamental Constants
  Powers of 10 and Their Prefixes
  Conversion Factors and Equations

Appendix B  Mortality Modeling

Appendix C  Evaluation of the Sound Field from One Transducer
  Far-field (Fraunhofer) Region
  Near-field (Fresnel) Region

Notes

Index

PREFACE AND GUIDE TO USING THIS BOOK

Modern medicine is a rapidly evolving and diverse field that relies increasingly on technological developments. Many of these developments involve the sophisticated application of the fundamental principles of physics. As a result, there is a growing need for an educated community of scientists, engineers, and technicians who have some background in the relationship between these principles and the applications. Our book is intended to meet this need. Equally important, we hope it is useful for students majoring in diverse subject areas (psychology, natural sciences, nursing, engineering, . . .) who are curious about this field and/or intend to find careers in related subjects.

Every reader of this book has either heard of or experienced personally one or more of the ubiquitous applications in modern medicine. Have you ever wondered how such things work? For example, two of the book’s three authors have seen high-resolution ultrasound images of their children in the mother’s uterus; this is a truly remarkable technique! The reader should recognize that when the first X-ray images were obtained, just about a century ago, the observers were similarly dazzled by those pictures. Indeed, they were even more surprised because at the time no one knew what an X-ray was.

Our book is designed to provide a useful introduction to the various methods of modern medicine as well as a resource “handbook,” helpful for understanding future developments. The contents should be comprehensible to undergraduate students who have a background equivalent to a one-year introductory physics course, as taught at most colleges and universities. Parts of the book (especially chapters 1 and 2) do not require even this background. Readers with a more extensive exposure to physics may skip chapter 3 or refer to it while reading later chapters. 
The idea for this book emerged from a course, Applications of Modern Physics in Medicine, created and taught for the last 11 years by one of the authors (Mark Strikman). He found that this course’s ability to bring students to the cutting edge of science (e.g., applications that are just now entering hospitals) is particularly stimulating for teacher and students alike. The success of this course (i.e., increasing enrollment and favorable student reviews), our enthusiasm for the subject, and the recognition of the need for such a book are together responsible for our interest in writing it. While the subject is of immense scientific interest for its own sake, we hope that the book will have particular appeal for the growing number of students who are considering careers in medical physics, biophysics, medicine, or nuclear engineering. In addition, it should appeal to current practitioners of the diverse medical technologies in use today. Furthermore, it is possible that the book will be used as
a supplement to courses in nuclear physics and engineering, as well as biophysics courses. Finally, we hope that individuals with a reasonable physics background who experience medical problems and receive diagnoses and treatments will be interested in learning more about the underlying science presented here.

The overall subject content of this book corresponds closely to that of the Penn State one-semester physics course from which the book emerged. In detail, however, the book’s content differs in many ways from the course, especially in topic emphasis, in the choice of model calculations, and in the level of presentation.

Any course using this book as a text could apply it in a variety of ways, depending on the students’ backgrounds and interests. For students seeking just a qualitative overview of the subject, chapters 3 and 4 can be omitted. For students with particularly strong backgrounds in physics, the extended review of modern physics presented in chapter 3 may not be necessary, although our experience suggests that students appreciate the opportunity to fill gaps in their understanding of some topics (and most students have more than a few such gaps). Chapter 4 goes into considerable detail about the nature of particle propagation in matter; in some courses this level of detail may not be needed. Chapter 5 is concerned with the interactions between radiation and living tissues, which are important for understanding why some methods, and not others, are used in assessing or treating medical conditions in various parts of the body. The last three chapters present the principal applications of medical physics, applying the principles described in chapters 3–5. Hence these final chapters merit particular attention in any comprehensive approach to this subject. 
In writing this book, we have been excited to learn about the wide variety of techniques used in the development of the tools of modern medicine. At the same time, trying to understand these techniques is a humbling experience because the literature is not always transparent, often assuming a strong physics or medical background on the part of the reader. This problem is especially acute for the most recent discoveries, reported by scientists who are narrowly focused and eager to get their work published quickly in the most prominent scientific journals. We hope that our attempts to understand and explain these topics have been clear and accurate. Readers wishing to comment critically on the text are invited to contact the authors; their input will be appreciated.

This book provides two levels of description. In some cases, for example, chapters 1 and 2, the presentation is primarily qualitative and nontechnical. In other cases, it is fairly technical and quantitative. We believe that most of the discussion does not require a mastery of physics, as we have described the various techniques in primarily qualitative terms. We hope that nonphysicists are not intimidated by the technical discussion presented in some cases. While the book includes a relatively large number of equations, one does not need them to understand how any of the methods work.

In the medical physics course taught at Penn State, students were presented with some quantitative problems, meaning that a certain level of mathematical competence was needed. Some students had difficulty coping with the breadth of the physics subject matter, which is not surprising given how many topics the field spans. Many students found especially challenging, if not annoying, the wide variety of systems of units (such as ergs, calories, Btu, joules, and electron volts for energy), reflecting the extended history of the subject and the diverse origins of the various
methods used in medicine. In many cases, we have tried to present numerical calculations in several ways to accommodate these different sets of units.

This book includes many problems, all of which are original. They vary considerably in difficulty, so students should be aware of this variation and course instructors should take care in designing homework assignments. Solutions to all the problems are included in an Instructor’s Manual, provided by Princeton University Press to faculty teaching a course based on this textbook. There is also a companion Web page, modernphysicsinmedicine.com, which is freely available to the general public. Readers will find there a wide variety of informative resources, including Mark Strikman’s lecture notes from the most recent version of the Penn State course, Applications of Physics in Medicine; these notes will be updated frequently. Also found on the Web site is a comprehensive bibliography of the subject, likewise revised as needed. Finally, a number of publications can be found there, among them articles of primarily historical interest, such as “ancient” publications from the research literature, as well as articles from the current semipopular literature, which help to make aspects of medical physics accessible to interested individuals.

In writing this book, we have benefited from help provided by many scientists and physicians, as well as input from the students who took the Strikman course or saw early drafts of this text. We are pleased to acknowledge Mary Callahan, MD, for her very helpful guidance in the interpretation of CT and other scans. 
We are also grateful to Adriene Beltz, Béatrice Bonga, Mark Conradi, James Gerardo, MD, Paul Goldhagen, David Hammond, Roger Herman, Neal Holter, MD, Brian Horton, Hye-Young Kim, Sasha Krol, Stephen Lefrak, MD, Susan Lemieux, Dinah Loerke, Angela Lueking, Scott Luria, MD, Katrina McGinty, MD, Jessica McNutt, Igor Mishustin, Pierre-Jean Nacher, Kate Noa, Matt Noll, Igor Pshenichnov, Brian Saam, Alex Sell, Shachar Shimonovich, David Townsend, Susan Trolier-McKinstry, Paul Wagner, Elisabeth Weiss, MD, Dmitriy Yablonskiy, and Michael Zhalov. These individuals provided guidance, helpful input, and/or critical evaluations of the many drafts of the text. Nathalie Huchette, of the Marie Curie Museum, kindly provided a photograph, from 1934, used in chapter 1, which advertised the alleged health benefits of exposure to radiation. Kate and Ryan Elias helpfully provided ultrasound images of their daughter, as did Angela Lueking of her son. Dave Cole helped greatly with his photographs and with the organization of the Web page. Eleanor Coppes and Kate Noa have done a remarkable job of transferring our vague concepts and mental images into clear drawings. Mark Strikman would like to acknowledge the support of the Penn State Physics Department, which provided one semester’s release time for organizing the course that made this book possible. Our editors at Princeton University Press have been conscientious and supportive; we are most grateful to Ingrid Gnerlich and Samantha Hasey. Finally, and significantly, we are happy to acknowledge our spouses, Nonna, Denise, and Pamela, who patiently endured our absences while we wrote the book and provided occasional critical advice about how to write it.

TECHNICAL ABBREVIATIONS

ADC   Apparent diffusion coefficients   Sec. 7.2.8
APD   Avalanche photodiode   Sec. 6.2.4
BOLD (MRI)   Blood-oxygen-level-dependent (MRI)   Sec. 7.2.6
CCD   Charge-coupled device   Sec. 6.3.4
CLI   Čerenkov luminescence imaging   Sec. 6.5.2
CRT   Cathode ray tube   Sec. 6.3.2
CT   Computed tomography   Sec. 6.4.3
dB   Decibel   Sec. 7.3.1
DSA   Digital subtraction angiography   Sec. 6.3.4
DTS   Digital tomosynthesis   Sec. 6.3.6
EBT   Electron-beam tomography   Sec. 6.4.3
FEV   Forced expiratory volume   Sec. 7.2.8
FVC   Forced vital capacity   Sec. 7.2.8
fMRI   Functional magnetic resonance imaging   Sec. 7.2.6
HVT   Half-value thickness   Sec. 7.3.3
IMRT   Intensity-modulated radiation treatment   Sec. 8.2.3
LCD   Liquid crystal display   Sec. 6.3.2
LED   Light-emitting diode   Sec. 6.2.4
LET   Linear energy transfer   Sec. 5.2.3
linac   Linear accelerator   Sec. 8.2.3
LOR   Line of response   Sec. 6.4.3
MDCT   Multidetector computed tomography   Sec. 6.3.6
MET   Metabolic equivalent of task   Sec. 2.5
MRI   Magnetic resonance imaging   Sec. 7.2.1
MRS   Magnetic resonance spectroscopy   Sec. 7.2.4
NMR   Nuclear magnetic resonance   Sec. 7.2.1
OER   Oxygen equivalent ratio   Sec. 5.2.2
PEG   Phase encode gradient   Sec. 7.2.4
PET   Positron emission tomography   Sec. 6.4.4
PHA   Pulse height analyzer   Sec. 6.4.4
PMT   Photomultiplier tube   Sec. 6.2.4
PZT   Lead zirconate titanate   Sec. 7.3.4
RBE   Relative biological effectiveness   Sec. 5.2.3
RMR   Resting metabolic rate   Sec. 2.5
SNR   Signal-to-noise ratio   Sec. 6.2.2
SPAD   Single-photon avalanche diode   Sec. 6.2.4
SPECT   Single-photon emission-computed tomography   Sec. 6.3.6
SPL   Sound pressure level   Sec. 7.3.1
SSG   Slice select gradient   Sec. 7.2.4

TIMELINE OF SEMINAL DISCOVERIES IN MODERN PHYSICS

1895  Wilhelm Roentgen discovers X-rays, a mysterious form of radiant energy
1896  Henri Becquerel discovers radioactivity
1897  J. J. Thomson discovers the electron
1898  Marie and Pierre Curie discover new elements, polonium and radium, intense sources of radiation
1900  Max Planck announces a quantum hypothesis to explain the blackbody radiation spectrum
1900  Paul Villard discovers gamma radiation
1905  Albert Einstein proposes both his special theory of relativity and a theory of the photoelectric effect, employing the quantum concept of the photon for the first time
1909  Robert Millikan’s oil-drop experiment proves that electric charge is quantized in units of e, the magnitude of the electron charge
1911  Ernest Rutherford explains alpha-particle scattering data with the hypothesis that the positive charge within the atom is localized within a tiny nucleus
1911  H. Kamerlingh Onnes discovers superconductivity
1913  Niels Bohr publishes his semiclassical theory of atomic spectra
1915  Einstein proposes his general theory of relativity
1919  Arthur Eddington reports data confirming Einstein’s general theory of relativity
1919  Ernest Rutherford discovers the proton
1920  Otto Stern and Walther Gerlach demonstrate quantization of the electron’s spin
1924  Louis-Victor de Broglie hypothesizes that particles are waves
1925  Werner Heisenberg proposes a matrix formulation of quantum mechanics
1925  Wolfgang Pauli proposes the exclusion principle to explain atomic spectra
1926  Erwin Schrödinger proposes a wave theory of quantum mechanics and shows its equivalence to the matrix-mechanics approach
1927  Clinton Davisson and Lester Germer observe electron diffraction by a crystal, confirming that particles are waves
1929  Ernest Lawrence invents the cyclotron, the first high-energy particle accelerator
1930  Wolfgang Pauli proposes the neutrino, a neutral particle, to explain beta-decay data
1932  James Chadwick discovers the neutron
1932  Carl Anderson discovers the positron (antielectron), predicted by Paul Dirac
1934  Enrico Fermi produces new (transuranic) elements by neutron bombardment of lighter elements
1938  Otto Frisch and Lise Meitner hypothesize neutron-induced fission of uranium to explain data
1938  Isidor Rabi demonstrates nuclear magnetic resonance with molecular-beam experiments
1942  Enrico Fermi leads the project to create the first nuclear reactor, an artificial chain reaction
1946  Felix Bloch and Edward Purcell exhibit NMR in liquids and solids
1948  John Bardeen, Walter Brattain, and William Shockley invent the transistor
1960  Theodore Maiman builds the first functioning laser, following many prior contributions to its concept and development
1964  Murray Gell-Mann and George Zweig predict the existence of quarks

TIMELINE OF DISCOVERIES AND INVENTIONS IN MODERN MEDICAL PHYSICS

(Most of these discoveries can be credited to several scientists, of whom just one or two are mentioned.)

1895  Wilhelm Roentgen publishes an X-ray photograph of his wife’s ring finger
1896  First applications of X-rays for diagnostics and treatment
1901  Henri Becquerel and Pierre Curie publish evidence of radiation damage to the hand due to exposure to radium
1907  First applications of radium for tumor treatments
1917  Johann Radon develops the technique (Radon transform) used in tomography
1930s Development of MeV-scale electron accelerators/generators for X-ray treatments
1934  Frédéric Joliot-Curie and Irène Joliot-Curie make the first medical use of isotopes; birth of “nuclear medicine”
1937  Invention of the klystron by the Varian brothers, leading to the development of compact electron accelerators for medicine
1942, 1948  Karl Dussik and George Wild develop ultrasound imaging techniques
1946  US Atomic Energy Act permits scientists to use radionuclides produced in nuclear reactors in medicine
1946  Robert Wilson suggests the use of proton beams in oncological treatments
1949  First radionuclide imaging (by Benedict Cassen) of iodine uptake in the thyroid gland
1957  Hal Anger makes the first gamma camera
1958  Jack Kilby and Robert Noyce produce the first integrated circuit (chip), permitting miniaturized electronic circuitry
1960  David Kuhl and Roy Edwards construct the Mark IV scanner using radionuclide tomography
1961  James Robertson and coworkers make the first planar PET scan
1971  Godfrey Hounsfield makes the first commercial CT scanning device
1972  Raymond Damadian and Paul Lauterbur invent the first MRI machines
1995  William Happer and Gordon Cates produce high-resolution images of the lung with the hyperpolarized noble gas Xe

APPLICATIONS OF MODERN PHYSICS IN MEDICINE

CHAPTER 1

Introduction

1.1 OVERVIEW

Ours is an increasingly technological world. We take for granted the rapid pace of scientific progress and its consequences for our daily lives, exemplified by Moore’s law, which says that computer speeds double about every 18 months [1]. This remarkable fact is familiar to many of us because most educated people in the Western world use computers to some extent, so the speed increase with each successive computer purchase is evident. Less well known is the equally remarkable speed at which modern medicine advances. The situation is different partly because an individual patient’s medical progress is often not quantifiable and partly because most people are not aware of incremental developments in the medical field. For those individuals who are interested in understanding this progress, like this book’s readers, it is fortunate that the underlying principles of modern medicine change much more slowly than the applications themselves. Since these principles and their connection to modern applications are the focus of this book, we are optimistic that its value will outlast the lifetime of most computers.

The reader of this text will become capable of understanding the fundamental basis of many of the most important medical tools used today. While space considerations alone constrain the extent to which we can provide detailed descriptions of these devices and the principles underlying them, there are several other reasons for this limitation. One is that two of the three authors (Cole and Strikman) are theoretical physicists, who are not researchers in this field. The third (Spartalian) is an experimental physicist familiar with the underlying scientific principles, but he, too, is not a researcher in the field of medical physics. 
Another reason for the book’s focus on general principles is related to the pace of technical advance; tomorrow’s technology will not be that of today, so the details of the most recent developments are not crucially important for most readers to understand, except in those few instances when the progress is based on the application of new or different principles. Finally, we think that it is important for “students” (including the authors!) to appreciate how these developments have occurred, so some discussion of their history is provided. For example, by learning that some of the most famous pioneers of nuclear physics (e.g., three Nobel Prize winners: Pierre and Marie Curie and their daughter, Irène Joliot-Curie) suffered from radiation damage, one is reminded of the importance of laboratory precautions. By learning that António Egas Moniz won the 1949 Nobel Prize for the invention of the frontal lobotomy, a now discredited technique, we are reminded of the ephemeral nature of some scientific “discoveries.” Figure 1.1 provides another kind of learning experience, taken from an era when the perceived benefits of radiation received more attention than concern about its potential harm. The advertisement, shown in full color in plate 1, attributes to the radioactive skin product the glow emanating from the woman’s face, exuding beauty and health. The subtitle “Embellissantes parce que curatives” translates as [these products are] “beautifying because (they are) healing.”

Figure 1.1. Advertisement for Tho-Radia cream and powder from 1934. It describes this product as beautifying because it is healing, made “according to the formula of” Dr. Alfred Curie, no relation to Pierre or Marie Curie. Figure provided by the Marie Curie Museum, Paris, where the advertisement’s description labels this product as “very costly, but with such a small dose of radiation as to be harmless.”

Monographs and numerous scientific publications describe each of the tools of modern medicine. We will refer only to some of the most useful of these in the appropriate sections of this book. A reader interested in a complete, detailed survey of
either the principles or their applications will have to search beyond this book. One place to find some relevant articles is the Web site accompanying this book, http://modernphysicsinmedicine.com. At that site is a bibliography of books and papers selected from the literature of medical physics, emphasizing articles at the level intended for readers of this book. Also found there are some of the classic articles important to the history of this field, as well as a current version of the lectures accompanying the course.

The term modern physics appears in this book’s title. Its meaning, which is conventional in physics, includes the numerous revisions of our scientific understanding that began with the discovery of X-rays by Wilhelm Roentgen in 1895 and that of radioactivity by Henri Becquerel in 1896. The flood of subsequent discoveries includes that of the particle nature of matter (atoms and the zoo of subatomic particles) as well as the laws of relativity and quantum mechanics. Many of these revolutionary findings are critically important for the subsequent progress in medical technology. In describing these relationships, we will do our best to explain the basic principles underlying them.

“Think like a physicist” is an expression often quoted in our community and an attribute that we hope to foster in our readers and students. This expression refers to a thinking person’s ability to use whatever information is known to estimate a quantity of interest. Such a strategy has immense practical value, even in the age of the Internet, when many alleged “answers” are readily obtained. A famous example of this approach is attributed to the Italian-American physicist Enrico Fermi, who tried to imbue in his students an appreciation of this approach to physics and to thinking in general. His classic question to his students was more or less this: How many piano tuners work in the city of Chicago? 
Obviously, this question has very little to do with physics, but it can be answered approximately with Fermi’s approach to estimating quantities. For the sake of this book, we propose that you (the reader) address a more relevant question about medicine: How many obstetricians are there in the United States? Before reading the following “solution,” try thinking about the problem patiently and then answering it as best you can. As you will see, the Fermi way of thinking yielded different “solutions” for the three authors of this book, who addressed it independently. We will call our answers N1, N2, and N3.

Here is one author’s method of answering the question. His calculation (yielding N1) assumes “steady state,” meaning no net population growth. It also oversimplifies many other variables. Assume that there are 300 million residents of the United States. Of these, about half are female and, of those, perhaps a fourth are of childbearing age. This means that there are about 40 million women in the cohort who are capable, in principle, of bearing children. Let us suppose that each of these women has two children during her 20-year childbearing interval. Hence, there are 2/20 = 0.1 children per year borne by each woman. This results in a total of 0.1 × 40 = 4 million births per year. Now, let us suppose (and this is a completely uneducated guess!) that the average obstetrician delivers 3 babies per day for each of 300 working days of the
year, or a total of 900 babies per year. By dividing this number into the total number of births, one concludes that there are about N1 = 4 × 10⁶/900 ≈ 4000 obstetricians in the United States. Note the rounding present at each step of the calculation, in recognition of the uncertainty present in each variable. This “solution” to the problem can be written as a formula:

N1 = [(population) × (childbearing fraction) × (children per year per eligible woman)] / (delivery rate),

N1 = [(300 × 10⁶) × (1/8) × (1/10)] / 900 ≈ 4000.

This very approximate formula is based on several crude estimates as well as rounding and hand-waving assumptions. We could assign uncertainties to each quantity, resulting in an estimate of the overall uncertainty in the calculation. Suppose, for example, that each of the four factors appearing in the preceding equation has an uncertainty of ±15%. Then, adding the relative uncertainties, the final answer has an uncertainty of 4 × 15% = 60%. Thus, the actual answer could reasonably be expected to lie within the interval 1600 to 6400. That is quite an extended range of values, but perhaps even this uncertainty is an underestimate.

The answer N2 found by a second author of this book is much higher than these values: N2 = 40,000 obstetricians, based on a set of alternative assumptions, equally naïve and equally valid. Start with a population of 4 million babies born in a year, estimated as before, which is equivalent to a birth rate of 13 babies per 1000 residents, that is, (4 million babies per year)/(300 million residents). Assumptions of author 2: (a) The mothers of each of these babies see an obstetrician for prenatal care. (b) All pregnancies last 36 weeks. (c) Including delivery, a mother is seen by her obstetrician 10 times altogether. (d) An obstetrician sees 20 prospective mothers in a week and works 48 weeks per year. From the last assumption, the number of visits per year to one obstetrician is 20 × 48 = 960 visits. The number of obstetricians is then given as follows:

N2 = (total no. of maternal visits per year) / (no. of visits per obstetrician per year),

N2 = (4 × 10⁶ × 10) / 960 ≈ 40,000.

This answer is a factor of 10 higher than the previous estimate, N1, lying outside even the range of uncertainty estimated above. The second calculation (made independently of the first) evidently relies on alternative assumptions. It is not obvious where either estimate went wrong. However, a careful examination reveals that the two approaches differ in the assumed number of visits: the first corresponds to a number of visits lower by a factor of 10 than the second.
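The two back-of-the-envelope calculations above, together with the crude linear error propagation, can be restated in a few lines of code. This is only a sketch for readers who want to experiment with the assumptions; the function names and default values simply repeat the numbers used in the text.

```python
# Fermi estimates of the number of US obstetricians, restating the two
# calculations in the text. Change the defaults to test other assumptions.

def n1(population=300e6, childbearing_fraction=1/8,
       children_per_year_per_woman=1/10, deliveries_per_doc_per_year=900):
    """Author 1: annual births divided by deliveries per obstetrician."""
    births_per_year = (population * childbearing_fraction
                       * children_per_year_per_woman)
    return births_per_year / deliveries_per_doc_per_year

def n2(births_per_year=4e6, visits_per_pregnancy=10,
       visits_per_doc_per_year=20 * 48):
    """Author 2: total maternal visits divided by visits one doctor handles."""
    return births_per_year * visits_per_pregnancy / visits_per_doc_per_year

print(round(n1()))   # 4167, rounded in the text to ~4000
print(round(n2()))   # 41667, rounded in the text to ~40,000

# Crude error propagation as in the text: four factors, each uncertain
# by +/-15%, with the relative uncertainties added linearly.
uncertainty = 4 * 0.15                                  # 60%
low, high = 4000 * (1 - uncertainty), 4000 * (1 + uncertainty)
print(low, high)                                        # roughly 1600 and 6400
```

Running the sketch makes the factor-of-10 disagreement between the two sets of assumptions immediately visible, and changing any single default shows how sensitive each estimate is to that assumption.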


What about N3, the third author’s estimate? Without going into detail, we report that author 3’s answer was N3 = 4000 obstetricians, consistent with the first estimate. Does the agreement between authors 1 and 3 suggest that their answer is correct? No! Even unanimous agreement would ensure nothing; after all, most people thought the earth’s surface was flat just a few centuries ago. In point of fact, according to the American College of Obstetricians and Gynecologists, there are at present more than 30,000 board-registered obstetricians or gynecologists in the United States. Hence the low estimate made by authors 1 and 3 is much less reliable than that made by author 2.

What is the lesson here for medicine and science in general? First, to reiterate, even if everyone obtains similar answers, such a consensus does not guarantee that the consensus answer is close to correct! We may all have made one or more unwarranted approximations. This happens often in science when new discoveries contradict existing assumptions, however confidently those assumptions are held. The need to revise knowledge in the light of new evidence is especially important in the field of medicine; this is one aspect that makes the field so fascinating. At the same time, because human lives may be at stake, uncertainties have serious consequences. We must all be aware, at all times, of the provisional nature of human knowledge.

1.2 THE MEANING OF THE TERM MODERN PHYSICS

The understanding of physics achieved during the nineteenth century entailed numerous discoveries that gave the scientific community a well-justified sense of accomplishment. One of the most remarkable of these discoveries was the realization that work and heat transfer are closely related, the reason being that both involve the fundamental concept of energy. This realization led to a fundamental conclusion, embodied in the first law of thermodynamics: energy is conserved at the macroscopic scale just as it is at the microscopic scale. Arguably of equal consequence is an apparently unrelated discovery involving a connection between two other physical properties: electricity and magnetism are "unified." The term unified means that one must take into account simultaneously the presence of both electric and magnetic fields rather than considering either one in isolation: a change in one field necessarily induces a change in the other. While manifestly important as a basic principle, this realization also matters for practical applications, such as electric power generation in a turbine and voltage change in a transformer. The climax of this discovery is often identified as the mathematical summary embodied in Maxwell's equations. The solution of these equations showed for the first time that light is a manifestation of the electromagnetic field. With these equations, one could show that a fluctuating current (as in an antenna) gives rise to an electromagnetic field that propagates outward from the source. This prediction was later confirmed in the experiments of Hertz, who showed how radio waves can be created at a source and then propagate through space. Thanks to this discovery, previously inconceivable devices like telephones and cell phones (as well as electronic networking) became possible.

The confidence derived from these and many other discoveries was deserved—but surprises were yet to come. Many of these occurred within a few years of 1900, so many that the period came to be recognized as revolutionary. What emerged from this turbulent period is what we now call modern physics, a set of principles and resulting phenomena that could not have been anticipated by even the most imaginative thinkers of the day. These properties are discussed in chapter 3, where they are contrasted with the "laws" that the discoveries replaced.

1.3 MORTALITY

Having referred earlier to "medical progress," we believe that it is worthwhile to describe its consequences before embarking on a detailed account of medicine's many developments, which constitutes the remainder of this book. There are many ways to address this question of progress. The scope of the issue is much broader than can be presented here except in a cursory way; we attempt to do so nevertheless.

Figure 1.2 compares mortality data in the United States for two specific years, 1941 and 2008 (chosen in part because the relevant statistics are available). The mortality rate, denoted by λ(a) here, is defined as the probability that a person of age a will die in the coming year. The first point we note is the one most relevant to the present purpose, showing the progress of modern medicine: the mortality rate decreased by a factor of about 2.5 over the 67-year interval between these two studies. For example, the figure reveals that an 80-year-old person had a 12% probability of dying (in the coming year) in 1941, while in 2008 this probability had fallen to just 5%. The rate λ is, of course, much smaller (by about a factor of 100) for a 20-year-old person than for an 80-year-old person. Specifically, the rate λ(20) fell from about 0.14% to 0.07% during the same 67-year interval. Particularly interesting, in our view, is that the decrease in mortality is so dramatic for young children, those younger than the minimum in the function λ(a), which occurs at around a = 10 years. We believe that these longitudinal differences are attributable in large part to improved medical care,* although other variables, like better nutrition, a shorter workday, reduced manual employment, greater awareness of the relation between lifestyle and health, and improved sanitation, also contribute to the reduction in mortality rate. The abundance of statistical data permits quite focused studies of individual kinds of disease.
For example, the overall mortality rate in the United States declined by 45% between 1963 and 2010. This decrease has different contributions from the various individual diseases. For example, over this 47-year period, the death rate due to cardiovascular disease declined by 71%, while that due to noncardiovascular diseases decreased by just 5%. The difference between these percentages is usually attributed to two factors: a decrease in smoking (about a 50% reduction during this period) plus the greatly improved diagnosis and treatment of heart disease, especially the use of medications to control cholesterol and high blood pressure (even though the risk factor of obesity is increasing rapidly*). During this same interval, incidentally, the mortality rate due to asthma and chronic obstructive pulmonary disease (COPD, which includes bronchitis and emphysema) actually increased by 156%! Since smoking is one primary cause of COPD, one is tempted to attribute the difference between the large increase in COPD and the large decrease in cardiac disease to one other factor: increasing air pollution, which is the second major cause of COPD. Such comparisons belong in the realm of epidemiology, which is outside the primary domain of this book.

In appendix B we discuss the subject of mortality from a somewhat different perspective. One reason for doing so is to illustrate how simple mathematical models can capture the key ideas in a problem and yield accurate predictions of fairly complicated behavior. Such an approach, like Fermi's way of estimating physical quantities, is valuable for analyzing problems of medical physics.

Figure 1.2. Mortality rate λ(a) as a function of age for the United States in the years 1941 (light line) and 2008 (dark line), as indicated. Data from sources cited in the footnotes [2]. Note the logarithmic scale on the ordinate.

*  More specifically, a large part of this decrease in mortality is due to successes in treating and preventing heart disease and cancer, both of which came in the wake of advances in diagnostics and better treatment.

1.4 HOW TO USE THIS BOOK

This book discusses topics varying considerably in their conceptual or mathematical complexity. Among these topics are many for which we aspire to convey just a qualitative understanding of the subject matter. For example, the next chapter is concerned with the physics involved in some traditional medical instruments, like the stethoscope. Technical details are not discussed because they are unnecessary and not very helpful, the basic reason being that the physics of the stethoscope has remained essentially that of the year 1900, which marks the advent of the modern era. This, however, does not imply that the device itself has not evolved; it is now much more sensitive than when first developed.

*  During this interval, the obese fraction of the adult population in the United States has increased from about 13% to 35%, according to the National Institutes of Health. http://win.niddk.nih.gov/publications/PDFs/stat904z.pdf

In contrast to the second chapter, much of the book is concerned with techniques that require some understanding of the principles of modern physics. In some of these cases, we feel that a quantitative discussion is important. Then we present the mathematical apparatus that we regard as necessary for a quantitative understanding of the technique. Since not all readers may want or need that level of sophistication, such a presentation may be skimmed without sacrificing the qualitative side of the subject.

At the end of each chapter, we have appended a number of exercises. These are intended to consolidate one's understanding of the material by bringing together topics that are discussed in the text with knowledge already acquired in an introductory-level physics course. For some of these exercises, it is expected that one may either know or know how to find certain numerical values on the Internet. The required mathematical sophistication is at the level of a second course in calculus.

Having brought up mathematics, it is appropriate to discuss a subject not greatly appreciated: different systems of units, an unavoidable complication. While for the most part we feel obliged to use the modern SI system, there are many cases where other units are more practical and/or conventional (like electron volts in atomic and nuclear physics). In such cases, we usually present calculations in both sets of units. Appendix A presents tables of conversion between these systems, as well as values of the important constants used in this book.

EXERCISES

1.1 Let's practice thinking like a physicist, in the spirit of Enrico Fermi. We will consider three problems:
(a) You have Avogadro's number, N0 = 6 × 10^23, of grains of rice. For how long will you be able to feed everyone on the earth, assuming that people eat only rice and that the population remains constant?
(b) Demographers have estimated upper limits to future population growth; now it is your turn. How many people can be packed on the land surface of the earth, assuming that each person is about one arm's length from his or her neighbors?
(c) Ben Franklin estimated the size of an oil molecule in the eighteenth century by pouring a teaspoonful of oil on a pond of area ½ acre. The result of his experiment was that the surface became "as smooth as a looking glass." What molecular size can be deduced from this finding?

1.2 Suppose that Jorge, an elderly man, contracts a disease with a constant mortality rate λ = 0.2 per year. What is the probability that Jorge will live at least 5 years after he first gets the disease? What is the probability that Jorge will live another 5 years but will not make it to the sixth year?

1.3 Figure 1.2 shows two nearly straight lines on a semilogarithmic plot. Each curve can be approximated by the equation ln[λ(a)] = m(a – a0) + ln b, where m and b are constants, a is the age, and a0 is some "initial age" below which the mortality rate is negligibly small and above which the curves are approximately linear. Consider the 1941 data.

(a) Assuming that a0 = 40 years, estimate the slope m and the intercept b.
(b) Estimate the median lifetime (the "life expectancy") such that one-half of the cohort born in 1941 is deceased. Hint: Use appendix B.

1.4 Note that the two curves in figure 1.2 differ essentially by a lateral shift of 12 years (beyond age 40). Making few (if any) additional approximations, use this observation to argue that the life expectancy of the 2008 cohort exceeds that of the 1941 cohort by 12 years.

1.5 As reported in the CDC/NCHS National Vital Statistics Report, Vol. 61, no. 4, May 8, 2013, the overall mortality rate of black Americans has long been approximately 25% higher than that of white Americans. The data in appendix B show that approximately 50% of the 2008 cohort is predicted to die before age 82. What percentage of black Americans is predicted to die before that age?
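Exercise 1.2 involves a constant mortality (hazard) rate, for which the survival probability discussed in appendix B takes the exponential form P(t) = e^(−λt). As an illustration of how such probabilities are computed (a Python sketch of our own, not part of the text):

```python
import math

def survival(t, lam):
    """Probability of surviving at least t years under a constant hazard lam."""
    return math.exp(-lam * t)

lam = 0.2  # mortality rate per year, as in exercise 1.2
p_at_least_5 = survival(5, lam)                       # P(live at least 5 years)
p_5_but_not_6 = survival(5, lam) - survival(6, lam)   # P(live >= 5 but < 6 years)
print(p_at_least_5, p_5_but_not_6)
```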

CHAPTER 2

When You Visit Your Doctor
The Physics of the "Vital Signs"

2.1 INTRODUCTION

When you undergo a routine medical examination, your body is assessed with a set of standard probes (stethoscope, thermometer, etc.) and your vital signs (weight, blood pressure, pulse rate, temperature, etc.) are measured. Many of the techniques used in such an exam have changed little over the last century. Hence, they are not really applications of modern physics in medicine, the focus of this book. Nevertheless, we believe that it is worthwhile to consider the physics inherent in some of these traditional methods. The level of this discussion is superficial compared with that presented later in the book. This chapter is an attempt to whet your appetite for the general subject of the role of physics in methods of diagnosis and treatment in modern medicine.

In addition, this chapter has another, more subtle and indirect purpose. By addressing the familiar experience of a medical exam, it reminds the reader that much of the best science is driven by nothing more than curiosity. The classic story of Isaac Newton inventing the laws of gravity after watching an apple fall is probably apocryphal, but the story provides a useful lesson. Looking around and thinking about what you see and hear should raise questions about your surroundings. To try to answer them is a stimulating experience, even if you are not a "professional" who is actively researching these questions. So, the next time you experience even a routine medical procedure, why not give some thought to how it works?

The next three subsections of this chapter deal with some measurements and apparatus used routinely by medical personnel: the stethoscope, the sphygmomanometer, and the electrocardiogram (EKG). The final subsection discusses a set of specific medical issues (diet, exercise, and weight) that concern increasingly many people. These issues are evidently interrelated within the context of heart disease, high blood pressure, diabetes, and so on.
The physics of this relationship is intriguing, significant, and comprehensible without any advanced background in the subject. It provides a good, qualitative starting point for what follows in the rest of this book.

2.2 STETHOSCOPE

This device is the basic tool used for listening to the sounds produced by activity within the internal organs of a patient. These organs include the lungs, the intestines, and the heart, as well as prominent arteries. A fundamental question we address is this: how does the stethoscope differ from its ancient, primitive predecessor—the physician's ear positioned in contact with the patient?

Before answering this question, we need to say something about sound. In particular, we recall that there exists an intimate connection between the pitch of a sound (what you perceive) and the sound's frequency, a quantitative property of the sound wave itself. Frequency is a measure of how often the pressure on your ear rises and falls. (The frequency range of audible sound lies between 100 and 10,000 cycles per second, denoted by hertz, abbreviated Hz.) Pitch and frequency are directly related insofar as a higher-frequency sound wave is detected as a sound of higher pitch. We will discuss this relationship more extensively when describing the technique of ultrasound in chapter 7.

One answer to the question "What's new about the stethoscope?" is this: compared with the physician's ear, it provides a significantly amplified sound. More useful for medical applications is the prospect of recording, manipulating, and analyzing the stethoscope's sound quantitatively. The so-called acoustic stethoscope converts vibrations of the patient's skin directly into pressure waves, which propagate as sound waves through the stethoscope's tubes, to be heard eventually by the physician (or nurse or physician's assistant). There are many clever ways to accomplish this. A mid-twentieth-century development is the two-sided stethoscope, which has a diaphragm, often plastic, on one side and a bell shape (hollow cup) on the other side.
These two sides yield somewhat complementary information: the diaphragm emphasizes the higher-pitch sounds, while the bell preferentially captures the lower-pitch sounds. In practice, one side has an advantage for monitoring the pulmonary system, while the other often proves more useful for hearing activity of the cardiovascular system. In particular, the sound of breathing is close to white noise (called broadband sound because an extended range of pitch is present) that has been filtered in the lung and heard through the chest wall. As such, this sound is relatively high in frequency, so the diaphragm side of the stethoscope is used preferentially. However, if the practitioner applies force when using the bell side, the skin becomes stretched, so that it, too, can detect higher-frequency sounds.

The perceived sound volume is somewhat limited in the traditional acoustic stethoscope. Nevertheless, many practitioners still use this simple device, as its sensitivity has improved greatly over time. Sophisticated electronics have greatly enhanced the sound reception of the modern electronic stethoscope. High-speed electronic circuitry has enabled this stethoscope's output, an electrical signal, to be amplified, filtered to remove noise, transmitted wirelessly, and recorded. One particularly valuable component of many stethoscopes is a piezoelectric crystal, a solid material that directly converts pressure variations into electrical signals, which are easily manipulated. We shall see in chapter 7 that such a crystal is an essential component in the generation and detection of ultrasound.

2.3 SPHYGMOMANOMETER AND BLOOD PRESSURE

This device, possessing a cumbersome (six-syllable) name, is another familiar instrument in the routine medical arsenal. The name is a compound word, derived from sphygmos, which means "pulse" in Greek, and manometer, which is any device that measures pressure. In its routine use, the sphygmomanometer consists of two parts: a stethoscope and a cuff that surrounds part of the upper arm of the patient. This latter area contains the brachial artery, the major artery in the region. The stethoscope's bell is positioned at the interior elbow region (technically called the antecubital fossa) to monitor the arterial flow. The cuff is initially inflated to a very high pressure (measured with the manometer connected to it), so that no blood can flow through the constricted artery. The pressure is lowered gradually by deflating the cuff until, at some point, one hears with the stethoscope an initially loud sound (called a Korotkoff sound), due to the onset of blood flow in the artery. The pressure at this point of the procedure is identified as the systolic pressure. Upon further deflation of the cuff, a second point is reached when this sound of the flowing blood can no longer be detected. The pressure at that point is the diastolic pressure.

These two distinct pressures are attributed to corresponding events in the heart's cyclic pulsation. The systolic pressure is the peak pressure in the patient's artery, associated with the maximum arterial pressure produced by the heart. Conversely, the diastolic pressure reflects the pressure of the resting phase of the cardiac cycle. The positioning of the sphygmomanometer, mentioned above, is aimed at avoiding any gravitational correction to the pressure reading, since the upper arm's vertical position is essentially equal to the heart's height. Figure 2.1 shows schematically the sequence of operations for determining a patient's blood pressure.
The blood pressure determined with this device is usually stated as something like "110 over 75" (a fairly normal pair of values, depending on the patient's age and health). This description means that the systolic pressure is 110 mmHg (the pressure created by a 110-mm column of mercury), while the diastolic pressure is 75 mmHg. (The reader is reminded that atmospheric pressure at sea level is 760 mmHg and that blood pressure readings are relative to that benchmark value.) The blood pressure actually varies along the arterial pathways, due to both gravity and the effect of viscous resistance on the flow of blood. The gravitational variation with position can be described approximately with a basic equation (Bernoulli's principle) for the case of a fluid at rest, of assumed constant density ρ. This principle explains why pressure in the ocean increases with increasing depth below the surface. The difference in pressure between two arbitrary points, one at depth y and another at increased depth y + Δh, is given by this equation when the fluid is at rest:

P(y + Δh) − P(y) = ρg Δh.    (2.1)

Figure 2.1. Sequence for measuring blood pressure: (a) the cuff is inflated until the brachial artery is occluded and no sounds are heard; (b) the cuff pressure is released slowly until the Korotkoff sounds are heard—the systolic pressure is noted; (c) further release of the cuff pressure restores full blood flow and the Korotkoff sounds disappear—the diastolic pressure is noted.

Figure 2.2. Schematic depiction of a volume of liquid of height Δh, surrounded by other liquid. The net force from below, Fbelow = P(y + Δh)A, exceeds that from above, Fabove = P(y)A. The difference Fbelow − Fabove is the upward buoyant force.

The geometry used to derive this relationship appears in figure 2.2. Consider an imaginary rectangular volume of liquid, with vertical edge length Δh and base area A, surrounded by other liquid, as in a deep swimming pool. In equilibrium (the usual situation), this volume does not accelerate, which means that the force due to the pressure from below must exactly match the sum of two downward forces: the force due to the pressure from above and the weight of the fluid within the volume. Remembering that force is pressure times area, we can say this with an equation:

P(y + Δh)A = P(y)A + Weight.    (2.2)

Now weight is mass (m) times the acceleration of gravity (g); mass is volume (V) times density of the fluid (ρ), and volume is base (A) times height (Δh). In other words,

Weight = mg = ρVg = ρA Δh g.

Putting this in equation (2.2) and canceling the common factor of the area gives*

P(y + Δh) = P(y) + ρg Δh.    (2.3)

Equation (2.3) matches equation (2.1) if the pressure at depth y, P(y), is transposed to the left side with a change in sign. As an example, let us evaluate the pressure difference between two points separated by Δh = 16 in. ≈ 40 cm = 0.40 m. The density of blood is about 1.06 times that of water: ρ = 1.06 × 103 kg/m3, so with g = 9.80 m/s2, we find a pressure difference of magnitude |ΔP| = ρgΔh ≈ 4200 N/m2 = 4200 Pa (pascals). Since 1 atm (atmosphere) =101.3 kPa = 760 mmHg, we find that the corresponding pressure drop between vertical positions y + Δh and y is given by

ΔP = 4200 Pa × (760 mmHg / 1.01 × 10^5 Pa) ≈ 32 mmHg.
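The arithmetic here is easy to verify with a few lines (a sketch of our own, not part of the text, using the values quoted above):

```python
rho = 1.06e3   # density of blood, kg/m^3
g = 9.80       # m/s^2
dh = 0.40      # height difference, m (about 16 in.)

dP_pa = rho * g * dh              # hydrostatic pressure difference, Pa
dP_mmhg = dP_pa * 760 / 1.01e5    # convert to mmHg
print(dP_pa, dP_mmhg)             # roughly 4200 Pa, i.e. about 32 mmHg
```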

This difference is relatively small—about 4% of an atmosphere—but it is no coincidence that it is approximately equal to the normal difference between the systolic and diastolic pressures, also known as the pulse pressure. In a standing human, the pulse pressure has to be sufficiently high to deliver oxygenated blood from the heart to the brain by overcoming the pressure drop ρgΔh; here Δh is the vertical heart-to-brain distance, roughly 16 in. If the pulse pressure for a standing human drops below its normal value, the brain can no longer get its blood supply. In that case, one usually faints and falls to the ground, Δh is reduced to zero, and blood flow to the brain resumes . . . fortunately.

A second source of pressure drop in the human circulatory system is the liquid's viscosity. This property represents a source of resistance to flow, as in the extreme case of a very viscous fluid like molasses, which flows slowly. The viscous decrease of the pressure along a flow path is completely analogous to the decrease of electrical voltage in a wire; both fall linearly with distance due to the resistance (electrical or viscous) within the medium (wire or artery). The resulting pressure drop ΔPflow along an artery of length L and cross-sectional area A is given in the simplest approximation (constant density, nonturbulent flow in a straight channel) by Poiseuille's equation:

ΔPflow = 8πμLQ / A².    (2.4)

Here μ is the dynamic viscosity of the fluid and Q is the volume flow rate. It expresses the volume passing by a given point per unit time, such as gallons or liters per second. In the case of a fluid having constant velocity v, we have Q = vA. Without going into detail, we can say that a typical value of the blood's viscosity is about a factor of four larger than that of water;* μwater is roughly equal to 0.001 Pa · s = 1 cP (centipoise) at room temperature, but it is somewhat greater at body temperature. Equation (2.4) expresses a significant dependence on the area A. This means that a blockage of an artery (reducing A by a factor of two, for example) greatly increases the pressure drop ΔPflow across a given arterial segment of length L. As a result, the pressure is greatly diminished downstream from the arterial blockage relative to its upstream value. Consequently, the heart must pump blood significantly more forcefully in order to maintain flow in the downstream region. This required pumping increases the blood pressure upstream and stresses the heart. In psychological or physical situations (high temperature, heavy manual work, etc.) that are stressful in their own right, a heart attack becomes more likely due to such blockage.

A numerical example of equation (2.4) is this: assume an artery of radius 5 mm, fluid speed v = 0.2 m/s, and μ = 4 cP. The equation yields a pressure gradient ΔPflow/L = 256 Pa/m. Over an artery of length 10 cm, the pressure drop is then ΔPflow = 25.6 Pa, which corresponds to about 0.2 mmHg. This is significantly smaller than the gravitational pressure drop (~8 mmHg) over this distance, assuming that the length is vertical. If, instead, this arterial area becomes 90% constricted by atherosclerosis, to radius ~1.6 mm, then ΔPflow = 20 mmHg, which is not at all negligible. With such a constriction, the artery may need to have a stent inserted to prop it open; this procedure is called an angioplasty.

*  Perhaps a more familiar version of equation (2.3) is P(h) = P0 + ρgh, where P0 is the surface pressure (1 atm). For freestanding water, this equation implies that the pressure increases by 1 atm for each additional 10 m of depth.
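The numbers in this example are easy to reproduce. The following short script (ours, not the book's; variable names are our own) evaluates equation (2.4) for the healthy and the constricted artery and compares the result with the gravitational drop:

```python
import math

mu = 4e-3    # blood viscosity, 4 cP = 4e-3 Pa·s
r = 5e-3     # arterial radius, m
v = 0.2      # flow speed, m/s
L = 0.10     # segment length, m
PA_PER_MMHG = 1.01e5 / 760

A = math.pi * r**2                    # cross-sectional area
Q = v * A                             # volume flow rate
dP = 8 * math.pi * mu * L * Q / A**2  # equation (2.4)
print(dP)                             # 25.6 Pa, about 0.2 mmHg

# Same flow rate Q through a 90% smaller area (radius ~1.6 mm):
dP_sten = 8 * math.pi * mu * L * Q / (0.1 * A)**2
print(dP_sten / PA_PER_MMHG)          # about 20 mmHg

# Gravitational drop over the same 10 cm, if vertical:
rho, g = 1.06e3, 9.80
print(rho * g * L / PA_PER_MMHG)      # about 8 mmHg
```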

*  The saying "blood is thicker than water" is literally true!

2.4 ELECTROCARDIOGRAM

As the name implies, an electrocardiogram is a record of measurements of electrical signals emanating from the heart (kardia in Greek) or its immediate vicinity. It is abbreviated either EKG, derived from the German name Elektrokardiogramm, or ECG. The EKG version is more commonly used in order to avoid confusion with an EEG (since the sound of "EEG" is closer to that of "ECG"). The EEG is an electroencephalogram, a recorded measurement of electrical activity at or near the scalp of a patient. One other similar name is echocardiogram, abbreviated ECHO, which is essentially an ultrasound image of the heart. It is unfortunate that so many tools used for medical diagnoses have similar-sounding names.

The EKG is a record of electrical activity (charge motion and voltage change) measured at various points on the patient's skin, which provides an indirect tracking of the electrical signals due to cardiac activity. An example of an EKG output during one cycle of the heart is shown in figure 2.3. The various features labeled in this output plot have letter names (P, Q, R, S, T) and are attributed to specific aspects of the patient's cardiac activity within or near its various chambers, as a function of time. For example, the first rounded bump shown in the figure is called the P wave, which originates in electrical signals from the left atrium and the right atrium, the upper chambers of the heart, while the QRS complex (two dips surrounding a sharp peak) represents signals created when electrical impulses move through the ventricles, the lower chambers of the heart. The T wave represents a resetting of the lower chambers, in preparation for muscular contractions associated with the next heartbeat.

Figure 2.3. One time period (heart cycle) of a patient's EKG measurement, showing voltage as a function of time. Letters (P, the QRS complex, and T) denote the various wave signatures during this period. The observed pattern is repeated with the frequency of the heartbeat, typically 50 to 100 beats per minute, so that the displayed interval corresponds to about one second. The overall vertical scale, S to R, is about 0.8 mV.

What is the meaning of these traces? The answer is that a healthy heart exhibits a more or less periodic variation in electrical properties, repeated with each heartbeat. The corresponding EKG pattern displays this time variation, revealing the time-dependent electrical voltages at different parts of the heart. These signatures are interpreted in terms of the voltages and charge arrangements on the individual cells constituting the heart's muscles. As currents and voltages fluctuate in an essentially periodic way, information about them spreads outward to the patient's skin. The EKG signal is a greatly amplified measure of these surface potentials. If a measured EKG is not approximately periodic, the situation is termed arrhythmia, a deviation from the normal heart's pattern. An extreme example occurs with a myocardial infarction (heart attack), in which a blockage (occlusion) of an artery causes ischemia (reduced blood flow), resulting in the death of heart tissue cells that need nutrients, like oxygen, provided by the blood flow.

The 1924 Nobel Prize in Medicine was awarded to Willem Einthoven for his development of the EKG and his detailed analysis of the pattern indicated in figure 2.3. A key component of this instrument is the string galvanometer, which detects and amplifies very small (~millivolt) electrical potential (voltage) fluctuations at the skin of the patient. This change in potential is attributed to the action current, a flow of charge that accompanies the periodic muscular activity of the heart. Einthoven's first set of achievements included improvements in the galvanometer, which consists of a thin, stretched quartz wire placed in a magnetic field. The vibrations of this wire are due to the magnetic force, which is proportional to the driving current, the variable of interest.
Einthoven's subsequent accomplishments included his identification of the lettered features, mentioned earlier, in the EKG and his demonstration that the EKG pattern can reflect the presence of heart disease in one or another of the various regions of the heart, with a correlation between the pattern and the region of the damaged heart muscle. Finally, Einthoven developed sophisticated models of the electrical conduction process involved in EKG measurements, emphasizing the need for a set of electrical leads, at various points, in order to localize the position of the problem within the heart. This body of work represents an ingenious combination of basic science and medical application.

2.5 PHYSICS AND PHYSIOLOGY OF DIET, EXERCISE, AND WEIGHT

A prosperous society may offer its citizens both abundant, nutritious food and convenient, door-to-door transportation to work, shopping, entertainment, and so on. While this situation represents "the good life" for many people, for some individuals there is an unfortunate outcome—a sedentary lifestyle, with infrequent exercise, and a poorly chosen diet. This latter scenario has adverse consequences, including a growing population of overweight people, with increased incidence of cardiac disease and other medical problems. As a result, much attention is paid, among both these individuals and the health care community, to the relationship between diet, exercise, and weight. While this situation involves interesting and significant physics problems, it is in part also a sociological/psychological problem. For many individuals, the weight problem appears unnecessarily complex because of the contradictory claims made for would-be solutions.*

Nevertheless, in principle, the physics of this diet-exercise-weight relationship is not complicated. A key quantity called the energy balance captures the essential ingredients. As the word balance implies, this concept involves a comparison between two quantities. Specifically, the energy balance represents the net caloric excess received by an individual during some time interval, such as 1 day:

net caloric excess = (caloric intake) − (expended energy).   (2.5)

Before explaining this equation, we state the definition of a calorie:

1 cal (calorie) = energy required to raise the temperature of 1 g of water by 1°C (degree Celsius).   (2.6)

You may recall the conversion TCelsius = (5⁄9) × [TFahrenheit − 32], which shows that a temperature change of 1°C corresponds to 1.8°F (degrees Fahrenheit). In discussing calories, there is a particularly unfortunate complication: the unit calorie, as conventionally used to describe food and dieting, is actually larger than the preceding "chemistry/physics textbook" definition, equation (2.6), by a factor of 1000. This much larger unit is sometimes called the "dietary calorie" or "kilogram calorie," since it represents the energy needed to increase the temperature of 1 kg (1 kilogram) of water by 1°C. In this section, we avoid this semantic ambiguity by calling the larger unit of energy a Calorie, written with a capital C and abbreviated Cal. Thus,

1 Cal = 1000 calories = 1 kcal = 4184 J.   (2.7)
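These conversions are easy to automate. The following Python sketch (our own illustration, not the book's; the function names are ours) encodes equations (2.6) and (2.7):

```python
# Unit conversions from equations (2.6) and (2.7).
J_PER_Cal = 4184.0  # 1 Cal (dietary) = 4184 J

def Cal_to_joules(Cal):
    """Convert dietary Calories to joules."""
    return Cal * J_PER_Cal

def Cal_to_water_heated_kg(Cal, delta_T_celsius=1.0):
    """Mass of water (kg) that `Cal` dietary Calories can heat by delta_T_celsius,
    using 1 Cal = 1 kcal, which heats 1 kg of water by 1 degree Celsius."""
    return Cal / delta_T_celsius

# A 100-Cal soft drink carries 418,400 J: enough to heat 100 kg of water by 1 degree.
print(Cal_to_joules(100))           # 418400.0
print(Cal_to_water_heated_kg(100))  # 100.0
```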

For example, a "typical" soft drink, beer, or juice contains about 100 Cal in a cup (8 fl. oz ≈ 0.24 L). In order to describe the relation between exercise and energy expenditure, let us consider a specific example, involving a 175-lb (80-kg) author of this book who is walking 3 mi/h on a treadmill. The treadmill rates his output as 84 W = 84 J/s, a typical value, which, incidentally, is the power consumption of a typical incandescent lightbulb. As indicated by the units, joules per second, the power level expresses energy expenditure per unit time. From this representative example, we will now learn why it is convenient and logical to employ the "larger" (big C) Calorie. In particular, the total energy (E) produced in a treadmill exercise during a brief interval of 60 s equals a quantity somewhat greater than 1 Cal, as shown in this calculation (which uses the conversion from joules):

* For example, a cover article in the September 2013 issue of the magazine Shape has the title "Eat More, Weigh Less."

E = (84 J/s) × (60 s) × (1 Cal/4184 J) ≈ 1.2 Cal.

This tiny result—only 1.2 Cal—is discouraging, because it would seem to imply that the 100-Cal energy content of an 8-oz cola drink is energetically equivalent to about 100 Cal/(1.2 Cal/min) = 83 min of exercise. Whew! There is some very good news, however: the wattage computed earlier refers just to the useful power output of the treadmill, that which could be recovered by some ingenious device that exploited the treadmill's energy production. In fact, the same treadmill states that the rider's energy expenditure is 276 Cal/h, or about 4.6 Cal/min. This is a factor 4.6/1.2 = 3.8 greater than what was computed previously. This larger rating refers to the actual energy expended by the exercising author, which is what counts in weight-reduction programs, for example. This means that to expend the equivalent of the 100-Cal soda requires much less time than was calculated before: exercise time to expend the Calories in one soda = 100 Cal/(4.6 Cal/min) = 22 min. Even this result may be discouraging to soda or beer drinkers; is one drink worth the trouble of 22 min of exercise? We leave the answer to the reader.

People who use treadmills are acutely aware of an optional way to increase their rate of energy expenditure without moving faster: by imposing an upward slope. This strategy can be very effective, as demonstrated by the following example. At a walking speed of 3 mi/h, the treadmill mentioned previously states that the power expenditure is 4.6 Cal/min when horizontal, but the power is much higher, 8.6 Cal/min, when the treadmill is inclined at 7°. How do we account for this difference, a near doubling in exercise power? Here is the relevant calculation: the speed is v = 3 mi/h = 1.34 m/s, so the vertical component of the speed is obtained from the trigonometric relation vy = v sin 7° = 0.16 m/s. We wish now to compute the extra energy, or power, needed for the uphill motion.

Recall that the energy expenditure associated with lifting a mass m a distance Δy is given by mg Δy, the product of the vertical force (that is, the weight mg) and Δy. The acceleration of gravity is g = 9.8 m/s², and here m = 80 kg. The power (Py) required for just this additional upward effort equals the product of mg and the rate of change of height with time (Δy/Δt), which equals vy, the vertical component of the velocity:

Py = mg × vy = 80 × 9.8 × 0.16 ≈ 125 W, which converts to 125 W × (1 Cal/4184 J) × (60 s/min) ≈ 1.8 Cal/min.


However, the exercise machine's readout asserts that the additional effort is actually 8.6 − 4.6 = 4.0 Cal/min. The apparent "discrepancy" is the factor 4.0/1.8 ≈ 2.2, but this is attributable (as in the preceding example) to the fact that we expend much more energy than is actually needed to do the work of walking upward. This fact is obvious because our bodies heat up when we exercise, manifesting "wasted energy," even if we go nowhere. We may conclude that the fraction of the uphill effort that actually serves to lift our bodies is 1.8/4.0, or about 45%; the remaining 55% of our activity is expended as heat and other forms of wasted energy. In either case, we observe that walking uphill is an excellent way to burn more Calories without having to walk any faster.

We turn next to the issue of diet. The central point for qualitative purposes is that a Calorie is a direct measure of energy content, which is related to the quantity and type of food that someone eats. Equation (2.5) is completely equivalent to a statement of conservation of energy, which we write as

energy balance = (energy input) − (energy output).   (2.8)
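The treadmill arithmetic above can be collected into a short script. This is an illustrative Python sketch of the chapter's worked example (the inputs, 84 W, 276 Cal/h, 80 kg, 3 mi/h, and 7°, come from the text; the function and variable names are ours):

```python
import math

C_PER_J = 1 / 4184          # dietary Calories per joule (1 Cal = 4184 J)

def cal_per_min(power_watts):
    """Convert a power in watts to dietary Calories per minute."""
    return power_watts * 60 * C_PER_J

# Flat walking: the treadmill's *useful* mechanical output is 84 W...
useful = cal_per_min(84)          # ~1.2 Cal/min
# ...but its readout of the rider's actual expenditure is 276 Cal/h.
body_rate = 276 / 60              # ~4.6 Cal/min
soda_minutes = 100 / body_rate    # ~22 min to expend one 100-Cal drink

# Incline: extra lifting power is P_y = m * g * v * sin(theta).
m, g = 80.0, 9.8                             # kg, m/s^2 (the 175-lb author)
v = 1.34                                     # 3 mi/h in m/s
P_y = m * g * v * math.sin(math.radians(7))  # ~128 W of pure climbing power
lift_rate = cal_per_min(P_y)                 # ~1.8 Cal/min

print(round(useful, 1), round(soda_minutes), round(lift_rate, 1))
```

Note that the exact trigonometric evaluation gives about 128 W, slightly above the 125 W quoted in the text, which rounds vy to 0.16 m/s first.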

Both of these equations, (2.5) and (2.8), imply that if we consume more calories than we expend in exercise, then the left sides of these equations are positive. This means that we will gain weight. The difference (the energy excess) becomes energy that is stored as fat. In ancient times, before food was plentiful, this mechanism of energy/ fat storage provided humans with a reservoir, in anticipation of the lean times in the future when food might not be available. Nowadays, such caloric excess is usually not very desirable. These straightforward equations lead to many questions, a few of which we address. Possibly the most important question pertains to the relationship between someone’s caloric excess and the corresponding weight gain. The answer is that 3500 Cal is approximately equivalent to 1 lb (0.45 kg) of fat [1]. How much food consumption does this represent? The food equivalent depends on the individual’s diet, since some foods have a much higher caloric content (per pound) than others. Typically, for example, the caloric content per unit weight is about twice as high for fat than for protein or carbohydrates. Fortunately, this important variable is not a mystery because of laws requiring that calorie content be specified on food packages. Now that we have discussed the “input” term in the energy balance, we return to the energy “output” term, associated with exercise. What is the relationship between exercise and energy expenditure? There is no unique answer to this question because the individual’s weight is a critically important variable in the relation between caloric expenditure and exercise time. Table 2.1 demonstrates this situation. The conversion to metric units is 1 mi/h ≈ 0.45 m/s. Table 2.1. Energy expenditure (in Cal/h) as a function of the exerciser’s weight [2]. Activity Walking 2 mi/h Walking 3.5 mi/h Biking qm =

hf . exp [ hf /(k BT )] − 1

(3.4)
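The quantum mode energy of equation (3.4) is easy to explore numerically. A hedged Python sketch (our own, not the book's) confirming that the mode energy approaches the classical value kBT when hf is much smaller than kBT, and is strongly suppressed when hf is much larger:

```python
import math

h = 6.626e-34   # Planck's constant, J s
kB = 1.381e-23  # Boltzmann's constant, J/K

def mean_mode_energy(f, T):
    """Equation (3.4): average energy of one mode of frequency f at temperature T."""
    x = h * f / (kB * T)
    return h * f / math.expm1(x)   # expm1(x) = exp(x) - 1, accurate for small x

T = 300.0
# 1 GHz at room temperature: hf << kBT, so the ratio to kBT is essentially 1 (classical).
print(mean_mode_energy(1e9, T) / (kB * T))
# Visible light (~5e14 Hz): hf >> kBT, so the mode energy is exponentially suppressed.
print(mean_mode_energy(5e14, T) / (kB * T))
```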

The subscript qm refers to the quantum nature of the derivation. This expression yields the classical limit (kBT per mode, or 2kBT when both polarizations are included) in the regime hf ≪ kBT.

ℏ ≡ h/(2π).   (3.15)

The application of these ideas to the atom can be seen in figures depicting the probability density near the nucleus. We have previously discussed the case of H, for which D(r) appears in figure 3.10. Figure 3.11 depicts D(r) for a potassium atom (K), in which the electrons may be divided into a group of core electrons, close to the nucleus, and a more distant 4s valence electron. Note the logarithmic scales on both axes. The 18-electron core is labeled 1s²2s²2p⁶3s²3p⁶, where the superscripts denote the number of electrons in each shell; the other number is the principal quantum number (n = 1, 2, or 3), and the letters s and p refer to particular angular momentum values (the s state has angular momentum L = 0, p has L = 1, . . .). The three bumps in the electron density of K indicate the maximum-probability positions of the n = 1, 2, and 3 shells, respectively. The characteristic unit of length for these quantities is the Bohr radius, a0. Thus, the n = 2 shell of electrons has its maximum probability at distance 0.25a0 from the nucleus, while the n = 3 shell is farther out, near 1a0. The 4s valence electron is seen to contribute only for r-values somewhat greater than 2a0 ≈ 1 Å. Its probability extends out to almost 3 Å, consistent with the atomic volume Vatom = 76 Å³ presented in figure 3.6. (That is, if Vatom = 4πr³/3, we deduce an atomic radius r = 2.6 Å.)

Figure 3.11. The radial probability density D(r) of electrons near a K atom, as a function of distance from the nucleus, measured in Bohr radii. The arrow indicates the interface between the core region and the valence electron. Adapted from V. Sahni, Z. Qian, and K. D. Sen, J. Chem. Phys. 114, 8784–8788 (2001).

It must be emphasized that the idea of "probability" in quantum mechanics represents an intrinsic limitation of one's knowledge of the world; no such limit appeared in classical physics. The revolutionary concept of uncertainty was difficult for some of the most eminent physicists to accept, even though it may seem "obvious" from the apparently random ticks emitted by a Geiger-Müller counter near a radioactive sample. With nearly a century of successful confirmation of the quantum description of the microscopic world, the concept is now nearly unanimously accepted in the world of physics.
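Returning to the radial probability density: for the hydrogen 1s state mentioned in connection with figure 3.10, D(r) has the standard closed form D(r) = 4r²e^(−2r/a0)/a0³, which peaks at the Bohr radius. A short Python sketch (our own illustration) checking the peak position and the normalization:

```python
import math

a0 = 1.0  # Bohr radius, taken as the unit of length

def D_1s(r):
    """Radial probability density of the hydrogen 1s state:
    D(r) = 4 r^2 exp(-2r/a0) / a0^3."""
    return 4.0 * r**2 * math.exp(-2.0 * r / a0) / a0**3

# The density peaks at r = a0, the "most probable" electron-nucleus distance.
rs = [i * 0.001 for i in range(1, 10001)]   # r from 0.001 to 10 Bohr radii
peak_r = max(rs, key=D_1s)
print(round(peak_r, 2))   # 1.0

# D(r) integrates to 1 over all r (crude Riemann-sum check; the tail beyond
# r = 10 a0 is negligible).
dr = 0.001
total = sum(D_1s(r) * dr for r in rs)
print(round(total, 2))    # 1.0
```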

3.4 LASERS

By any measure of significance, the laser is one of the most remarkable and valuable inventions emerging from modern physics. The diverse applications of lasers surround us, including compact disc and DVD players, precise carpenter's levels, bar code readers, holograms, and many kinds of surgical tools. For most individuals, the laser is like the proverbial "black box"; few users of this sophisticated device know the unusual properties of laser light, and even fewer understand its underlying physics. This situation is unfortunate because the principles of the laser are not really complicated for those who understand the relationship between atoms and photons. In this section, we will describe this relationship and explain how it results in the unique properties and capabilities of lasers.

Before doing so, we clarify what a laser is not: a source of energy. The laser uses more energy than it supplies. We can summarize what a laser actually is: a device capable of converting incoherent energy from some source into a coherent and focused light beam. The word coherent means that laser light is an electromagnetic-wave analog of a phalanx of soldiers marching in perfect step; successive ranks pass a point at uniformly spaced times. To understand what this statement means and how this property occurs, as well as other laser properties, we must first elaborate on some principles of the previous section, specifically on how matter (gas or solid) absorbs or emits electromagnetic energy. This aspect of laser physics involves some subtlety associated with the required simultaneous presence of many atoms and many photons.

Let us consider a gas of independent atoms in the presence of a normal, random field of electromagnetic radiation, which may be thought of as a thermal photon "bath," that is, a confined collection of photons. We assume that no laser or other external source of photons is present. As explained in section 3.2, the spectral distribution of this photon bath—that is, the number of photons with a certain frequency, N(f)—is determined by the temperature T of the chamber walls confining the gas and is described by the Planck function. We thus assume that the photons are in thermal equilibrium with the walls, meaning that no net energy is exchanged between the photon bath and the walls. Given equation (3.4) (the energy at a given frequency f) and the energy per photon (hf), we can find a proportionality relating the number of photons in a specific mode of frequency f to the temperature of the bath:

N(f) ∝ 2 / (exp[hf/(kBT)] − 1).   (3.16)

The factor of 2 accounts for the two polarizations of the photons at any specified frequency. The proportionality in the preceding relation takes into consideration that there are many possible directions for these photons, each corresponding to a different photon state. This multiplicity is responsible for the coefficient of proportionality, which we omit.

An important atomic property is the discrete (line) spectrum of an atom, which we consider here. Three kinds of atomic transition—photon absorption, stimulated emission, and spontaneous emission—determine the possible experimental spectra. Figure 3.12 displays these three transitions schematically for the simplest useful representation of an atom in a laser. This case involves just two energy levels, separated by an energy difference ΔE = Ee − Eg. Here Eg is the ground state energy, that is, the lowest energy of the atom, and Ee is the energy of the atom in an excited state.

The preceding relation shows that the Planck distribution describes an infinite range of photon frequencies f and that the number having each frequency depends on the temperature T. As discussed in section 3.2, the energy of a typical photon is given approximately by the relation Ephoton ~ kBT. Because the energy of a photon is proportional to its frequency [equation (3.6)], low temperature means low energy and, therefore, low frequency for a "typical" photon. This means there are relatively few photons of high energy in the bath, so there will be few transitions involving such high photon energy (required to excite an atom from the ground to the excited state). Nevertheless, if an atom happens to find itself in the excited state, it will usually decay quickly to the ground state by spontaneous emission (typically in less than a nanosecond). These statements imply that at low T, virtually all of the atoms will be in the ground state.
This qualitative conclusion is consistent with the Boltzmann law of statistical probability, which says that, on average, the equilibrium ratio of Ne, the number of atoms in the excited state, to Ng, the number in the ground state, is given by

Ne/Ng = exp[−ΔE/(kBT)],   (3.17)
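Equation (3.17) can be evaluated directly. A Python sketch (our own illustration; ΔE = 2 eV is our representative choice for an optical transition, not a value from the text):

```python
import math

kB = 8.617e-5   # Boltzmann's constant in eV/K

def excited_fraction_ratio(delta_E_eV, T):
    """Equation (3.17): Ne/Ng = exp(-dE/(kB*T)) in thermal equilibrium."""
    return math.exp(-delta_E_eV / (kB * T))

dE = 2.0  # eV, a typical optical transition energy (illustrative assumption)
for T in (300, 3000, 30000):
    print(T, excited_fraction_ratio(dE, T))  # vanishingly small at 300 K; near 0.5 at 30,000 K

# Crossover temperature T ~ dE/kB, where the exponent is of order 1:
print(round(dE / kB))   # roughly 23,000 K for a 2-eV transition
```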


Figure 3.12. Schematic depiction of transitions between two atomic levels. Shown are optical absorption, spontaneous emission (downward solid arrow), and stimulated emission (downward dashed arrow). Optical absorption and stimulated emission require photons to trigger them, while spontaneous emission does not.

where ΔE is the energy difference between the excited and the ground state. Now, let's consider what happens with increasing temperature T. Since ΔE > 0, the preceding ratio is small at low T but approaches unity at very high T. The crossover between "low-" and "high-" temperature regimes occurs near a temperature we will call Tcrossover. It can be estimated by setting the ratio in the exponent of equation (3.17) equal to 1, meaning that Tcrossover ~ ΔE/kB.

Although the preceding description seems plausible, a profound problem is lurking here, which is not apparent at first sight. Einstein realized that equation (3.17) contains a paradox, based on the following considerations. Suppose that the atomic gas is heated until it comes to equilibrium at very high temperature, T >> Tcrossover. Then, the many ambient photons having sufficient energy ΔE will quickly depopulate the pool of ground state atoms by exciting them to the excited state, resulting in Ne >> Ng. However, equation (3.17) affirms that Ne cannot possibly exceed Ng (in equilibrium)—so we have arrived at a contradiction! This situation raises a deep conceptual question and necessitates the introduction of a genuinely new idea.

Einstein resolved this problem by hypothesizing the existence of a novel phenomenon, called stimulated emission. This term refers to a process, represented in figure 3.12 as a downward dashed line, in which atoms in the excited state undergo (downward) transitions to the ground state at a rate proportional to the number of ambient photons whose energy satisfies the relation hf = ΔE. With this hypothetical process included, both up- and down-transition rates have contributions that are proportional to the photon population at this transition frequency. At high T, because of the high photon density, the stimulated emission rate swamps the spontaneous emission rate, leading to equal populations in the ground and excited states.
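Einstein's resolution can be made quantitative with a toy rate balance. In steady state, the absorption rate Ng·B·n must equal the total emission rate Ne·(A + B·n), where n is the photon number and A, B are rate constants. The sketch below (our own schematic, in arbitrary units, not the book's notation) shows that the resulting population ratio saturates at 1 and never exceeds it:

```python
def steady_state_ratio(n_photons, A=1.0, B=1.0):
    """Two-level steady state with absorption, spontaneous (A) and stimulated (B)
    emission: up-rate Ng*B*n = down-rate Ne*(A + B*n), so Ne/Ng = B*n / (A + B*n)."""
    return B * n_photons / (A + B * n_photons)

# Few photons (low T): almost all atoms sit in the ground state.
print(steady_state_ratio(0.01))
# Photon-rich limit (high T): the ratio saturates at 1, never exceeding it --
# exactly the behavior that stimulated emission guarantees.
print(steady_state_ratio(1e6))
```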
This finding is consistent with the Boltzmann relation, equation (3.17), since that equation predicts that Ne/Ng → 1 at very high T. The paradox of equilibrium at high temperature is thus resolved! Not only that, but Einstein's novel hypothesis provided the seminal concept (stimulated emission) out of which emerged the laser, the existence of which was not even contemplated until some 40 years later. A scientist summarizing Einstein's contributions to the quantum theory said of this work, "Another gem is the concept of stimulated emission. To claim that Einstein almost invented the laser would be an exaggeration, but the laser's underlying mechanism, stimulated emission of radiation, was a creation of his radiation theory" [9]. Indeed, the word laser is an acronym, standing for light amplification by stimulated emission of radiation.

How do these physical principles, especially the stimulated emission process, give rise to the existence of a laser? To answer this question, we consider a gas consisting of many atoms inside a volume V. Suppose that this gas is exposed to an intense supply of electromagnetic energy, from some "normal" power source like a flash lamp. This flood of input photons permeates V, causing many more transitions from the ground to the excited state than would normally occur at the experimental temperature. As a result, one drastically alters the population ratio of the atomic states, so that Ne > Ng. This situation is called population inversion, because it is the opposite of what happens in equilibrium, according to equation (3.17). Such behavior is inherently a nonequilibrium situation, maintained by an extremely high rate of supplying energy, so that the gas does not equilibrate. Under these circumstances, the Boltzmann expression is not appropriate. (Some scientists prefer to characterize these conditions with the Boltzmann relation and use a negative temperature to describe the inverted population ratio, since if T < 0, the ratio in equation (3.17) exceeds unity.)

Figure 3.13. Schematic depiction of initial steps in a laser cascade. In (a), one photon stimulates the emission from an atom of a second photon. In (b), the two photons each stimulate other photon emissions, resulting at right in a total of four outgoing photons.

Alice expects tfront − trear > 0, since the front of the car is moving away from the photon source while the rear is moving toward it. Here is the mathematical analysis: the time trear is such that the sum of the distance moved by the train during that time (vtrear) and that moved by the photon (ctrear) is L. Thus, L = (c + v)trear, so trear = L/(c + v). A similar calculation yields tfront = L/(c − v), so that Alice finds a difference between the arrival times at the two ends given by

ΔtA = tfront − trear = 2vL/(c² − v²).   (3.18)

Figure 3.14. The path of a photon, as observed by Alice. It starts at the candle, initially at point I. The photon is reflected at M and returns to the candle's final position at F. The candle moves a horizontal distance vtA during this time.

One can summarize this situation as follows: Alice and Bob will disagree about the various time intervals between emission and detection of light at the two ends: Bob will say that the light from the candle reaches front and rear ends simultaneously, while Alice will say that light from the candle reaches the rear before it reaches the front. The rank ordering of the times measured by the two observers is summarized here:

trear,A < trear,B = tfront,B < tfront,A.
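The arrival times in this analysis are simple to verify numerically. A Python sketch (our own check of the formulas above and of equation (3.18); the numbers are illustrative):

```python
c = 3.0e8    # speed of light, m/s

def arrival_times(L, v):
    """Alice's arrival times for light emitted at distance L from each end of a
    car moving at speed v: the rear approaches the flash, the front recedes."""
    t_rear = L / (c + v)
    t_front = L / (c - v)
    return t_rear, t_front

L, v = 10.0, 0.5 * c
t_rear, t_front = arrival_times(L, v)
delta = t_front - t_rear

# Equation (3.18): Delta t_A = 2 v L / (c^2 - v^2)
assert abs(delta - 2 * v * L / (c**2 - v**2)) < 1e-15
print(t_rear < t_front)   # True: in Alice's frame the rear is lit first
```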

This finding is a straightforward application of the known properties of light, given the M&M result. However, the qualitative content of this conclusion is surprising to those unfamiliar with relativity: these two observers disagree on the times when the light reaches the two ends of the train car. It might seem strange that seemingly absolute concepts like time and simultaneous events are actually relative. It is important to emphasize that there is no "correct" answer to the question: did the photons arrive at the same time at the two ends? Each observer finds answers consistent with the laws of physics, but the two sets of answers are different.

Does this mean that any other observer gets a different answer? No; an important fact is that all observers in the same state of motion measure the same propagation times. That is, every passenger sitting in the train car will observe the simultaneous arrival of light at each end, at time ttrain, agreeing with Bob. Similarly, everyone standing on the platform watching the train pass will measure two distinct arrival times, the values observed by Alice. The values of the physical quantities measured in each frame of reference will coincide and conform to the known laws (or else the laws would have to be revised!).

To further elucidate this intriguing situation, let us consider a different thought experiment. Imagine that a candle is placed on the floor of the train car. Consider light photons moving from the candle vertically up to the ceiling, where they hit a mirror (M) and are reflected back down to the floor. Bob, on the train, measures the time (tB) required for a photon to travel on this round trip. If the car's height is h, then the time measured by Bob is tB = 2h/c, since the total distance traversed by the photon is 2h and the photon's speed is c. Standing on the platform, Alice watches this same experiment as the train passes, and observes light propagating on the path


I → M → F. This path is indicated in figure 3.14, where I denotes the candle's initial position (observed by Alice) at the moment a photon left it and F denotes the final position where Alice observes the photon hit the floor of the train car. During the time (called tA) required for the light to go up and down, the candle moves a horizontal distance vtA, as shown in the figure. This time tA can be computed with the help of simple geometry. Alice sees the light propagate from I to M to F, from floor to mirror and back.* Half of this flight time is spent on the floor-to-mirror path, so that this path length is IM = ctA/2. Now, the Pythagorean theorem tells Alice that the hypotenuse IM is given by

IM = MF = ctA/2 = [h² + (vtA/2)²]^(1/2).

Squaring this relation and rearranging yields h² = (c² − v²)(tA/2)². We found earlier that h = ctB/2. Squaring that relation and equating it to the previous expression yields

(ctB)² = (c² − v²)(tA)².

Rearranging this last equation yields an expression for the ratio of the times measured by the two observers:

tA/tB = 1/(1 − v²/c²)^(1/2) ≡ γ.   (3.19)

Here, we have introduced an important quantity, called γ, that expresses this ratio tA /tB in terms of v/c. Note that γ ≥ 1 (assuming that v
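The factor γ of equation (3.19) is easily tabulated. A quick Python sketch (our own illustration, using β = v/c):

```python
def gamma(beta):
    """Equation (3.19): time-dilation factor for speed v = beta * c."""
    return 1.0 / (1.0 - beta**2) ** 0.5

# gamma is 1 at rest and grows without bound as v approaches c.
for beta in (0.1, 0.5, 0.9, 0.99):
    print(beta, round(gamma(beta), 3))   # e.g. gamma(0.5) ~ 1.155
```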