Field Theoretic Method in Phase Transformations (Lecture Notes in Physics, 1016) [2 ed.] 3031296044, 9783031296048

This book describes a novel and popular method for the theoretical and computational study of phase transformations and


English Pages 528 [504] Year 2023


Table of contents :
Preface
What Is This Book About?
How Is This Edition Different from the First One?
Who Is This Book For?
Historical Note
Nomenclature
References
Contents
About the Author
Part I: Classical Theories of Phase Equilibria and Transformations
Chapter 1: Thermodynamic States and Their Stabilities
1.1 Laws of Thermodynamics
1.2 Thermodynamic Stability of Equilibrium States
1.3 Dynamic Stability of States
1.4 Stability of Heterogeneous States
1.5 Analysis of Dynamic Stability in Terms of Normal Modes
Exercises
References
Chapter 2: Thermodynamic Equilibrium of Phases
2.1 Definition of a Phase and Phase Transition
2.2 Conditions of Phase Equilibrium
2.3 Ehrenfest Classification of Phase Transitions
2.4 Phase Coexistence and Gibbsian Description of an Interface
Exercises
Reference
Chapter 3: Examples of Phase Transitions
3.1 Crystallization (Freezing)-Melting
3.2 Martensitic Transition
3.3 Magnetic Transitions
3.4 Ferroelectric Transition
Exercises
References
Chapter 4: Isothermal Kinetics of Phase Nucleation and Growth
4.1 JMAK Theory of Nucleation and Growth
4.1.1 Theory of Thermally Activated Nucleation and Growth
4.2 Classical Nucleation Theories
4.2.1 Gibbsian Theory of Capillarity
4.2.2 Frenkel's Distribution of Heterophase Fluctuations in Unsaturated Vapor
4.2.3 Becker-Döring Kinetic Theory of Nucleation in Supersaturated Vapor
4.2.4 Zeldovich Theory of Nucleation*
4.3 Critique of Classical Nucleation Theories
Exercises
References
Chapter 5: Coarsening of Second-Phase Precipitates
5.1 Formulation of the Problem
5.1.1 Rate Equation
5.1.2 Condition of Local Equilibrium
5.1.3 Mass Conservation Condition
5.1.4 Particle Size Distribution Function
5.1.5 Initial, Boundary, and Stability Conditions
5.2 Resolution of the Problem
5.2.1 Unchanging Supersaturation
5.2.2 Asymptotic Analysis of Bulk Properties
5.2.3 Asymptotic Analysis of the Distribution Function
5.3 Domain of Applicability and Shortcomings of the Theory
5.3.1 Quasi-Steady Diffusion Field
5.3.2 Local Equilibrium
5.3.3 Absence of Nucleation
5.3.4 Mean-Field Approximation
5.3.5 Disregard of Coalescence
5.3.6 Spherical Shape
5.3.7 Asymptotic Regime
5.3.8 Small Particles
Exercises
References
Chapter 6: Spinodal Decomposition in Binary Systems
Exercises
Chapter 7: Thermal Effects in Kinetics of Phase Transformations
7.1 Formulation of the Stefan Problem
7.1.1 Stefan Boundary Condition
7.1.2 Thermodynamic Equilibrium Boundary Condition
7.1.3 Phase Equilibrium
7.1.4 Initial Condition
7.1.5 Far Field Condition
7.2 Plane-Front Stefan Problem
7.2.1 Diffusion Regime of Growth
7.2.2 Adiabatic Crystallization
7.2.3 Critical Supercooling Crystallization
7.2.4 General Observations
7.3 Ivantsov's Theory*
7.4 Morphological Instability*
7.5 Dendritic Growth
7.5.1 Analysis of Classical Theories of Crystallization
7.6 Numerical Simulations of Dendritic Growth
7.6.1 Cellular Automata Method
7.6.2 Simulation Results
7.6.3 Conclusions
7.7 Rate of Growth of New Phase
Exercises
References
Part II: The Method
Chapter 8: Landau Theory of Phase Transitions
8.1 Phase Transition as a Symmetry Change: The Order Parameter
8.2 Phase Transition as a Bifurcation: The Free Energy
8.3 Classification of the Transitions
8.4 The Tangential Potential
8.5 Other Potentials
8.6 Phase Diagrams and Measurable Quantities
8.6.1 First-Order Transitions
8.6.2 Second-Order Transitions
8.7 Effect of External Field on Phase Transition
Exercises
References
Chapter 9: Heterogeneous Equilibrium Systems
9.1 The Free Energy
9.1.1 Gradients of Order Parameter
9.1.2 Gradients of Conjugate Fields*
9.1.3 Field-Theoretic Analogy
9.2 Equilibrium States
9.3 One-Dimensional Equilibrium States
9.3.1 General Properties
9.3.2 Classification
9.3.3 Thermomechanical Analogy
9.3.4 Type-e Solutions: General Properties
9.3.5 Type-e1 Solution: Bifurcation From the Transition State
9.3.6 Type-e3 Solution: Approach to Thermodynamic Limit
9.3.7 Type-e4 Solution: Plane Interface
9.3.8 Interfacial Properties: Gibbs Adsorption Equation*
9.3.9 Type-n4 Solution: Critical Plate-Instanton
9.4 Free Energy Landscape*
9.5 Multidimensional Equilibrium States
9.5.1 Ripples: The Fourier Method*
9.5.2 Sharp-Interface (Drumhead) Approximation
9.5.3 The Critical Droplet: 3d Spherically Symmetric Instanton
9.6 Thermodynamic Stability of States: Local Versus Global*
9.6.1 Type-e4 State: Plane Interface
9.6.2 General Type-e and Type-n States
9.6.3 3d Spherically Symmetric Instanton
Exercises
References
Chapter 10: Dynamics of Homogeneous Systems
10.1 Evolution Equation: The Linear Ansatz
10.2 Dynamics of Small Disturbances in the Linear-Ansatz Equation
10.2.1 Spinodal Instability and Critical Slowing Down
10.2.2 More Complicated Types of OPs
10.3 Evolution of Finite Disturbances by the Linear-Ansatz Equation
10.4 Beyond the Linear Ansatz
10.5 Relaxation with Memory
10.6 Other Forces
Exercises
References
Chapter 11: Evolution of Heterogeneous Systems
11.1 Time-Dependent Ginzburg-Landau Evolution Equation
11.2 Dynamic Stability of Equilibrium States
11.2.1 Homogeneous Equilibrium States
11.2.2 Heterogeneous Equilibrium States
11.3 Motion of Plane Interface
11.3.1 Thermomechanical Analogy
11.3.2 Polynomial Solution
11.3.3 Selection Principle
11.3.4 Morphological Stability
11.4 Emergent Equation of Interfacial Dynamics
11.4.1 Nonequilibrium Interface Energy
11.5 Evolution of a Spherical Droplet
11.6 Domain Growth Dynamics
11.7 *Thermomechanical Analogy Revisited
Exercises
References
Chapter 12: Thermodynamic Fluctuations
12.1 Free Energy of Equilibrium System with Fluctuations
12.2 Correlation Functions
12.3 Levanyuk-Ginzburg Criterion
12.4 Dynamics of Fluctuating Systems: The Langevin Force
12.4.1 Homogeneous Langevin Force
12.4.2 Inhomogeneous Langevin Force
12.5 Evolution of the Structure Factor
12.6 *Drumhead Approximation of the Evolution Equation
12.6.1 Evolution of the Interfacial Structure Factor
12.6.2 Nucleation in the Drumhead Approximation
12.7 Homogeneous Nucleation in Ginzburg-Landau System
12.8 *Memory Effects: Non-Markovian Systems
Exercises
References
Chapter 13: Multi-Physics Coupling: Thermal Effects of Phase Transformations
13.1 Equilibrium States of a Closed (Adiabatic) System
13.1.1 Type-E1 States
13.1.2 *Type-E2 States
13.2 Generalized Heat Equation
13.3 Emergence of a New Phase
13.4 Non-isothermal Motion of a Curved Interface
13.4.1 Generalized Stefan Heat-Balance Equation
13.4.2 Surface Creation and Dissipation Effect
13.4.3 Generalized Emergent Equation of Interfacial Dynamics
13.4.4 Gibbs-Duhem Force
13.4.5 Interphase Boundary Motion: Heat Trapping Effect
13.4.6 APB Motion: Thermal Drag Effect
13.5 Variability of the System: Length and Energy Scales
13.6 Pattern Formation
13.6.1 One-Dimensional Transformation
13.6.2 Two-Dimensional Transformation
Exercises
References
Chapter 14: Validation of the Method
14.1 Physical Consistency
14.2 Parameters of FTM
14.3 Boundaries of Applicability
Exercises
Part III: Applications
Chapter 15: Conservative Order Parameter: Theory of Spinodal Decomposition in Binary Systems
15.1 Equilibrium in Inhomogeneous Systems
15.2 Dynamics of Decomposition
15.3 Evolution of Small Disturbances
15.4 Pattern Formation
15.5 Role of fluctuations
Exercises
References
Chapter 16: Complex Order Parameter: Ginzburg-Landau's Theory of Superconductivity
16.1 Order Parameter and Free Energy
16.2 Equilibrium Equations
16.3 Surface Tension of the Superconducting/Normal Phase Interface
Exercise
References
Chapter 17: Multicomponent Order Parameter: Crystallographic Phase Transitions
17.1 Invariance to Symmetry Group
17.2 Inhomogeneous Variations
17.3 Equilibrium States
Exercises
References
Chapter 18: "Mechanical" Order Parameter
Reference
Chapter 19: Continuum Models of Grain Growth
19.1 Basic Facts About Growing Grains
19.2 Multiphase-Field Models
19.3 Orientational Order Parameter Field Models
19.4 Phase-Field Crystal
References
Appendix A: Coarse-Graining Procedure
Appendix B: Calculus of Variations and Functional Derivative
Appendix C: Orthogonal Curvilinear Coordinates
Appendix D: Lagrangian Field Theory
Appendix E: Eigenfunctions and Eigenvalues of the Schrödinger Equation and Sturm's Comparison Theorem
Appendix F: Fourier and Legendre Transforms
Fourier Transform
Legendre Transform
Appendix G: Stochastic Processes
The Master and Fokker-Planck Equations
Decomposition of Unstable States
Diffusion in Bistable Potential
Autocorrelation Function
The Langevin Approach
Appendix H: Two-Phase Equilibrium in a Closed Binary System
Appendix I: On the Theory of Absorption of Sound in Liquids
Epilogue: Challenges and Future Prospects
References
Appendix A
Appendix B
Appendix E
Appendix F
Appendix G
Index


Lecture Notes in Physics

Alexander Umantsev

Field Theoretic Method in Phase Transformations Second Edition

Lecture Notes in Physics Volume 1016 Founding Editors Wolf Beiglböck, Heidelberg, Germany Jürgen Ehlers, Potsdam, Germany Klaus Hepp, Zürich, Switzerland Hans-Arwed Weidenmüller, Heidelberg, Germany

Series Editors Roberta Citro, Salerno, Italy Peter Hänggi, Augsburg, Germany Morten Hjorth-Jensen, Oslo, Norway Maciej Lewenstein, Barcelona, Spain Luciano Rezzolla, Frankfurt am Main, Germany Angel Rubio, Hamburg, Germany Wolfgang Schleich, Ulm, Germany Stefan Theisen, Potsdam, Germany James D. Wells, Ann Arbor, MI, USA Gary P. Zank, Huntsville, AL, USA

The series Lecture Notes in Physics (LNP), founded in 1969, reports new developments in physics research and teaching - quickly and informally, but with a high quality and the explicit aim to summarize and communicate current knowledge in an accessible way. Books published in this series are conceived as bridging material between advanced graduate textbooks and the forefront of research and to serve three purposes:

• to be a compact and modern up-to-date source of reference on a well-defined topic;
• to serve as an accessible introduction to the field to postgraduate students and non-specialist researchers from related areas;
• to be a source of advanced teaching material for specialized seminars, courses and schools.

Both monographs and multi-author volumes will be considered for publication. Edited volumes should however consist of a very limited number of contributions only. Proceedings will not be considered for LNP. Volumes published in LNP are disseminated both in print and in electronic formats, the electronic archive being available at springerlink.com. The series content is indexed, abstracted and referenced by many abstracting and information services, bibliographic networks, subscription agencies, library networks, and consortia. Proposals should be sent to a member of the Editorial Board, or directly to the responsible editor at Springer: Dr Lisa Scalone [email protected]

Alexander Umantsev

Field Theoretic Method in Phase Transformations Second Edition

Alexander Umantsev Department of Chemistry, Physics, and Materials Science Fayetteville State University Fayetteville, NC, USA

ISSN 0075-8450 ISSN 1616-6361 (electronic) Lecture Notes in Physics ISBN 978-3-031-29604-8 ISBN 978-3-031-29605-5 (eBook) https://doi.org/10.1007/978-3-031-29605-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2012, 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To my father Rudolf and my mother Yuliya

Preface

What Is This Book About?

Phase transitions are significant changes in a system's properties and symmetry, which happen as a result of changes in the external conditions (temperature, pressure, chemical potential, etc.). Although various phase transitions are discussed in the book as physical phenomena, the book is more about the method of studying phase transitions than about the phenomena themselves. A lot has been written about the behavior of systems close to the critical point, which is characterized by special features such as scale invariance. However, these are rare cases, and most systems spend most of their time far away from the critical points. Rephrasing Feynman, we can say that there is plenty of room away from the critical point. Evolutions of systems that are not close to the critical points are characterized by completely different physical features, such as the rate of nucleation and growth, microstructure or pattern formation, structure modification and coarsening, etc. Physical descriptions of these features require that special consideration be given to the free energies of the phases involved in the transformations, which in many cases are known either from thermodynamic calculations or from direct experimental measurements. All that sets the stage for a different, more phenomenological, approach to phase transitions, which is the main subject of this book. One of the most convenient ways of addressing the general problem of phase transformations is the Field-Theoretic Method (FTM), which is a subclass of density functional theories (DFTs). FTM is based on the Landau paradigm, which assumes that, in addition to temperature, pressure, and composition, the free energy is a continuous function of a set of internal (hidden) parameters associated with the symmetry changes, which are usually called the long-range order parameters (OPs). Different transitions may be fit into the same framework if proper physical interpretations of the OPs are found.
Although significant strides in the rigorous derivation of the basic equations of the method from first principles have already been made (see Appendix A), this task is not finished yet. That is why the issue of physical consistency has always played and still plays a significant role in the development of FTM. This issue can be expressed as follows: whatever the equations of the method are, their implications should not contradict the basic laws of physics, see Fig. P.1.

Fig. P.1 Beams and columns configuration of the Field-Theoretic Method

In Part I, I intend to outline some of the classical, macroscopic principles of phase transformations in materials, with the emphasis on the mechanisms and their stability. I do not believe that the current trend in Materials Science of accentuating AI (artificial intelligence), whereby researchers use a combination of AI and experimentation to replace the Theory of Materials, is particularly productive. At the end of the day, nothing can replace a formula with its predictive capability. However, the theories presented in Part I required very different, at times contradictory, approaches. In Part II, I lay out the Field-Theoretic Method itself, which can be an effective theoretical instrument and an efficient computational tool. In practical applications, the method often goes under the name of Phase Field. I will try to convince the reader that FTM is capable of describing all the mechanisms of phase transformations described in Part I under the same umbrella. In a way, FTM is a grand unification of the Materials Theory. To be exact, we will discuss the six columns of the method stapled together by the beam of the requirement of Physical Consistency depicted in Fig. P.1 and conclude with a discussion of the validation of the method and its boundaries of applicability. In Part III, broader applications of FTM to specific problems of Materials Science are considered. The Epilogue is about the challenges and prospects of FTM. The chapters contain untitled abstracts that provide a brief description of the related problem and a summary of the main results. The main content of the chapters is self-sufficient for its understanding. There are worked-out examples of derivations scattered in many chapters and in a few appendices.
The chapters are peppered with questions (Why?), which are left for self-study. At the end of each chapter, there are exercises related to the subject and a few references critical for understanding the material. More difficult sections of the chapters, which may be omitted at first reading, are marked with asterisks (*). At the end of the book, there are several appendices, which provide mathematical digressions necessary for a better understanding of the material.

How Is This Edition Different from the First One?

As we learned from the previous section, this book is about the Field-Theoretic Method of studying phase transformations in materials, which was introduced some seventy years ago and has developed into a powerful method of inquiry. However, we should not think that scientists were asleep at the switch prior to that. They developed many theories of transformations under different conditions, which are of great interest to us from the standpoints of mathematical formulation and experimental verification. These theories, dubbed classical, are the subject of Part I, which was not a part of the First Edition. Unfortunately, almost all of these theories are disconnected from each other methodologically and technically. FTM deals with this problem by providing a unified physical basis for the field of study and is the subject of Parts II and III.

Who Is This Book For?

This book is for researchers who are interested in all aspects of phase transformations, especially for practitioners who are involved in theoretical studies or computer simulations of these phenomena and would like to expand their knowledge in the direction of the field theory of phase transitions. The book can be used as a textbook for a graduate or upper-level undergraduate course in the physics of phase transitions. The readers should be familiar with the basic tenets of Physics—Classical Mechanics, Thermodynamics and Statistical Mechanics—and of Mathematics—Calculus and Differential Equations. Although basic knowledge is assumed, more specific topics, critical for the subject of the book, are presented briefly in the appendices. There is an advantage in having all the components of the method collected in one place. Please feel free to send your comments to the author via: [email protected]

Historical Note

Although phase transitions are always associated with some kind of singularity, the application of continuum ideas in this scientific area has a very long history. The first attempts in contemporary history to consider phase changes using continuous methods belong to J.W. Gibbs [1], who considered coexistence of phases and, in the first paragraph of his "Theory of Capillarity," delineated the main reason for the continuity between them—the finite size of the "sphere of molecular action." Although most of his treatment of the interface between the phases uses the concept of "the surface of discontinuity," he always thought of it as a mere abstraction. Later, Van der Waals [2], in his study of the equation of state of the liquid-gas transition, introduced the gradient-of-density contribution into the system's local energy (not entropy). This allowed him to calculate the surface tension of the system and estimate the thickness of the interfacial layer.

The path of science through history is unpredictable. No matter how much we try to control it by moral and financial support, science usually chooses serendipitous routes. The Field Theory of Phase Transformations is a case in point. The fundamental basis of what this book is about was laid out not only in one year but also in one city: Moscow, Russia, in 1937. That year L. D. Landau [3, 4] published three papers on the theory of phase transitions and scattering of X-rays, which introduced the concept of the order parameter, started the new theoretical method, and explained how the theory should account for the system's heterogeneities. That same year L. I. Mandelshtam and M. A. Leontovich [5] published a paper on the theory of absorption of sound in liquids, where they developed an approach to an evolving system which was pushed away from equilibrium. As if all that were not enough of a coincidence, all four papers were published in the same, seventh volume of the same Journal of Experimental and Theoretical Physics, although in different issues. It is worth mentioning here that 1937 was a very difficult year for Russia, when Stalin's purges had started; very soon Landau himself fell victim to them. So 1937 was the theory's most successful year; the paths of science and politics are intertwined.
The papers by Landau received significant attention in the "western" scientific community (yet the third one [4] significantly less than the first two [3]), but the paper by Mandelshtam and Leontovich [5] remains virtually unknown to this community, although an interesting discussion of that paper can be found in the section "Second viscosity" of Landau and Lifshitz's "Fluid Mechanics" [6], the first edition of which appeared in 1944. Later on, in 1954, Landau and Khalatnikov adopted that idea of how to deal with disequilibrium and implemented it in their important paper [7], unfortunately without mentioning the contribution of Mandelshtam and Leontovich. It was known at the time that Leontovich was very upset over that incident (in particular with V. L. Ginzburg, who wrote a bad review of one of his works related to the subject). In their paper [5], Mandelshtam and Leontovich introduced an internal variable to describe the deviation of a system from equilibrium and motivated the dynamic equation for its evolution. They argued that it is not necessary to imbue this quantity with a special physical interpretation, but it is necessary to find its relations with the thermodynamic properties of the system. They analyzed the interaction of this variable with the internal energy of the system and discussed the role of thermal fluctuations in determining the parameters of the equations. The author of this book does not know the exact reason why this paper, although influential in the Russian scientific community, remains virtually unknown in the West. One possible explanation is that Landau's papers were translated into English (and German) while Mandelshtam-Leontovich's were not. That is why the author decided to publish a translation of that seminal paper in Appendix I of this book.

In 1950 Ginzburg and Landau [8] expanded the early phenomenological ideas of the theory and applied them to superconductivity. However, the significance of that work for the Russian reader was not in the introduction of the gradient energy term into the free energy functional (which seems to be the main novelty of that paper for the "western" readers), because that term had been introduced by Landau earlier, see Eq. (11) in Ref. [4]. The work was significant because it showed how to establish coupling of the order parameter to the magnetic field and the method of calculating the interfacial energy between the phases. In that paper, the authors looked at equilibrium problems only and did not discuss the dynamics of the system at all. Later physicists learned how to describe the dynamics of the order parameter and called the result the Time-Dependent Ginzburg-Landau Equation (TDGLE) to recognize the contribution of the approach (sometimes this equation is called Allen-Cahn's, which is a complete misnomer).

In the late 1950s, J. Cahn and J. Hilliard [9, 10] published a series of papers on the application of the continuum method to the phenomenon of spinodal decomposition. After that, the whole field of the continuum theory burst out into many different directions. Curiously, the dynamic equation that describes spinodal decomposition bears the names of Cahn and Hilliard, although it actually appeared in a paper authored by Cahn only, which was preceded by a paper by M. Hillert [11].

In the late 1970s, there was a lot of talk about calculating the speed of the solidification front taking the emerging latent heat into account. These issues were particularly prominent at Carnegie Mellon University, where faculty of the Physics and Mathematics Departments participated in the discussion. One day, J. Langer (Physics Dept.) posted a note on a discussion board (see Fig. P.2) where he suggested a name for the new quantity—the phase field, "which will identify the phase." The physical meaning of the phase field is practically identical to that of the order parameter introduced by Landau. Later the term appeared in the paper by G. Fix [12] (Mathematics Dept.), unfortunately without a reference to its source. To finish the story of "the speed of the solidification front with the latent heat": in the late 1980s, the author of this book and A. Roytburd derived the thermodynamically consistent equation of heat, which accounts for all the aspects of evolution of the phase field (see [13] and Sect. 13.2).

In the past decades, there has been an inflation of theoretical and computational research using FTM, and the method has become very popular in theoretical and computational studies of very different transformations in materials (crystallization of pure substances and alloys, precipitation in the solid state, spinodal decomposition, martensitic transformation, and grain growth), in mechanics (dislocation gliding, formation of voids and pores, plasticity, and fracturing), in cosmology and high-energy physics (phase transitions in the early universe), in biophysics (chemotaxis, protein folding), and even in the studies of human societies (revolutions and other social events). FTM is successful because it is:


Fig. P.2 J.S. Langer’s introductory note about the concept of the Phase Field (with permission from the author)

• Accurate enough to correctly predict the transition path in many different situations
• Simple enough to allow the theoretical analysis
• Comprehensive enough to be interesting for practical applications
• Computationally friendly enough to be used for numerical simulations


The success of the method is due to (i) its deep phenomenology, which manages to tie up very different phase transformations into the same framework, (ii) its computational flexibility, and (iii) its ability to transcend the constraints of spatial/temporal scales imposed by strictly microscopic and macroscopic methods, hence becoming a truly multi-scale, multi-physics method with significant predictive power. However, this book is not about the numerical methods. Several good books have been written about the method; some of them are listed below [14–21]. However, in the opinion of the author, they do not give a complete and unified approach to the method, which is the goal of the present book.
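To orient the reader, the two central objects of the method mentioned in this note can be written schematically in the notation adopted later in the book (g, κ, γ, and η are defined in the Nomenclature section). This is a sketch for orientation only, not the book's exact formulation:

```latex
% Ginzburg-Landau free energy of a heterogeneous system:
% g is the free energy density, kappa the gradient energy coefficient,
% eta the order parameter (phase field).
G[\eta] = \int_V \left[\, g(\eta, T) + \frac{\kappa}{2}\, (\nabla \eta)^2 \right] dV

% Time-Dependent Ginzburg-Landau Equation (TDGLE):
% gamma is the kinetic (relaxation) coefficient, and
% \delta G / \delta \eta is the functional derivative of G.
\frac{\partial \eta}{\partial t}
  = -\gamma\, \frac{\delta G}{\delta \eta}
  = -\gamma \left( \frac{\partial g}{\partial \eta} - \kappa\, \nabla^2 \eta \right)
```

Here -δG/δη plays the role of the thermodynamic driving force; setting the right-hand side to zero recovers the Euler-Lagrange equation (ELE) for equilibrium states.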

Nomenclature

In addition to the standard scientific-literature designation of functional dependence, y = func(x), the author uses in this book a non-traditional designation:

y = const(x)    (P.1)

when he means to say that the variable y is independent of the variable x. The author chooses to use the designation (P.1) instead of the traditional y = const because the latter designation implies that the variable y does not vary at all, while (P.1) means that the variable y does not vary with x, although it may vary with other variables of the problem. For example, for an ideal gas at fixed temperature one may write U = const(V): the internal energy does not vary with the volume, although it does vary with the temperature. Except for a few cases, the author does not discuss the units of the physical quantities considered in the book, which should not be taken to mean that they are unitless. Dimensional analysis of the relations is left to the exercises in the book; the reader should use it for self-checking also. In this book, different terms, which have practically the same meaning, are used for the phases before and after the transition: old-new, initial-final, parent (mother)-daughter, matrix-precipitate, or nutrient-incipient (emerging). The choice is merely a literary convenience. The referenced equations are numbered in each unit (chapter or appendix) separately, with the unit's number separated from the equation's number by a period, e.g., (1.1). Important equations in the examples are referenced with the chapter number preceding the letter E, e.g., (3E.7). Some details of derivations in the text have been left for self-analysis; they are marked throughout the text as (Why?). The author suggests that the reader finish the derivations on his/her own because no knowledge can be acquired without hard work. Many Latin and Greek letters have multiple uses in different chapters, which should not cause any confusion. An exception is made for the following list of letters, which retain the same meaning throughout the entire book:

G—Gibbs (or Landau-Gibbs) free energy of the whole system
g—Gibbs (or Landau-Gibbs) free energy density
V—volume of the whole system
R—radius of spherical precipitates


x, y, z—Cartesian coordinates
v—velocity
η—order parameter
κ—gradient energy coefficient
σ—interfacial energy
γ—kinetic (relaxation) coefficient

The following abbreviations are used:

APB—anti-phase domain boundary
BC—boundary conditions
CAM—cellular automata method
CNT—classical nucleation theory
ELE—Euler-Lagrange equation
FTM—Field-Theoretic Method
GB—grain boundaries
GHE—generalized heat equation
GL—Ginzburg-Landau
OP—order parameter
TDGLE—time-dependent Ginzburg-Landau equation
TDLGLE—time-dependent Langevin-Ginzburg-Landau equation
[Q]—jump of Q across a boundary line on a phase diagram
||Q||—unit or dimension of Q
St—the Stefan number or dimensionless undercooling
Ki—kinetic number
U—thermodynamic number

Fayetteville, NC, USA

Alexander Umantsev

References

1. J.W. Gibbs, The Scientific Papers, vol. 1 (Dover, New York, 1961)
2. J.D. van der Waals, Verhandel. Konink. Akad. Weten. Amsterdam (Sect. 1) 1(8), 56 pp (1893); see also English translation in J. Stat. Phys. 20, 200 (1979)
3. L.D. Landau, Phys. Z. Sowjet. 11, 26, 545 (1937); see also Collected Papers of L.D. Landau, ed. by D. Ter-Haar (Gordon and Breach, London, 1967), pp. 193, 209
4. L.D. Landau, Phys. Z. Sowjet. 12, 123 (1937); see also Collected Papers of L.D. Landau, ed. by D. Ter-Haar (Gordon and Breach, London, 1967), p. 236
5. L.I. Mandel'shtam, M.A. Leontovich, Zh. Eksp. Teor. Fiz. 7, 438 (1937) (in Russian); see also English translation in Appendix I of this book
6. L.D. Landau, E.M. Lifshitz, Fluid Mechanics, 2nd edn., Course of Theoretical Physics, vol. 6 (Pergamon Press, Reading, 1968), §81, p. 305
7. L.D. Landau, I.M. Khalatnikov, Sov. Phys. Dokl. 96, 469 (1954)
8. V.L. Ginzburg, L.D. Landau, Zh. Eksp. Teor. Fiz. 20, 1064 (1950)
9. J.W. Cahn, J.E. Hilliard, J. Chem. Phys. 28, 258 (1958)

Preface

xv

10. J.W. Cahn, J.E. Hilliard, J. Chem. Phys. 31, 688 (1959) 11. M. Hillert, A solid-solution model for inhomogeneous systems. Acta Metal. 9, 525–546 (1961) 12. G.J. Fix, Phase field models for free boundary problems, in Free Boundary Problems: Theory and Applications, vol. 2, ed. by A. Fasano, M. Primicerio (Pitman, Boston, 1983), p. 580 13. A. Umantsev, A. Roytburd, Nonisothermal relaxation in a nonlocal medium. Sov. Phys. Solid State 30(4), 651–655 (1988) 14. L.D. Landau, E.M Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1958) 15. A.Z. Patashinski, V.L. Pokrovski, Fluctuation Theory of Phase Transitions (Nauka, Moscow, 1982, in Russian) 16. J.D. Gunton, M. Droz, Introduction to the Theory of Metastable States (Springer-Verlag, Heidelberg, 1983) 17. A.G. Khachaturyan, Theory of Structural Transformations in Solids (Wiley-Interscience, New York, 1983) 18. N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group (Taylor & Francis, Boca Raton, 1992) 19. H. Emmerich, The Diffuse Interface Approach in Materials Science (Springer, Berlin-Heidelberg, 2003) 20. D.I. Uzunov, Introduction to the Theory of Critical Phenomena, 2nd edn. (World Scientific, Singapore, 2010) 21. N. Provatas, K. Elder, Phase-Field Methods in Materials Science and Engineering (Wiley, 2010)

Contents

Part I  Classical Theories of Phase Equilibria and Transformations

1  Thermodynamic States and Their Stabilities . . . 3
   1.1  Laws of Thermodynamics . . . 3
   1.2  Thermodynamic Stability of Equilibrium States . . . 5
   1.3  Dynamic Stability of States . . . 8
   1.4  Stability of Heterogeneous States . . . 10
   1.5  Analysis of Dynamic Stability in Terms of Normal Modes . . . 12
   Exercises . . . 14
   References . . . 14
2  Thermodynamic Equilibrium of Phases . . . 15
   2.1  Definition of a Phase and Phase Transition . . . 15
   2.2  Conditions of Phase Equilibrium . . . 17
   2.3  Ehrenfest Classification of Phase Transitions . . . 21
   2.4  Phase Coexistence and Gibbsian Description of an Interface . . . 22
   Exercises . . . 23
   Reference . . . 23
3  Examples of Phase Transitions . . . 25
   3.1  Crystallization (Freezing)-Melting . . . 25
   3.2  Martensitic Transition . . . 27
   3.3  Magnetic Transitions . . . 29
   3.4  Ferroelectric Transition . . . 37
   Exercises . . . 38
   References . . . 38
4  Isothermal Kinetics of Phase Nucleation and Growth . . . 39
   4.1  JMAK Theory of Nucleation and Growth . . . 39
        4.1.1  Theory of Thermally Activated Nucleation and Growth . . . 44
   4.2  Classical Nucleation Theories . . . 47
        4.2.1  Gibbsian Theory of Capillarity . . . 48
        4.2.2  Frenkel's Distribution of Heterophase Fluctuations in Unsaturated Vapor . . . 52
        4.2.3  Becker-Döring Kinetic Theory of Nucleation in Supersaturated Vapor . . . 56
        4.2.4  Zeldovich Theory of Nucleation* . . . 63
   4.3  Critique of Classical Nucleation Theories . . . 67
   Exercises . . . 67
   References . . . 68
5  Coarsening of Second-Phase Precipitates . . . 69
   5.1  Formulation of the Problem . . . 70
        5.1.1  Rate Equation . . . 70
        5.1.2  Condition of Local Equilibrium . . . 72
        5.1.3  Mass Conservation Condition . . . 75
        5.1.4  Particle Size Distribution Function . . . 76
        5.1.5  Initial, Boundary, and Stability Conditions . . . 77
   5.2  Resolution of the Problem . . . 77
        5.2.1  Unchanging Supersaturation . . . 77
        5.2.2  Asymptotic Analysis of Bulk Properties . . . 78
        5.2.3  Asymptotic Analysis of the Distribution Function . . . 85
   5.3  Domain of Applicability and Shortcomings of the Theory . . . 89
        5.3.1  Quasi-Steady Diffusion Field . . . 89
        5.3.2  Local Equilibrium . . . 90
        5.3.3  Absence of Nucleation . . . 90
        5.3.4  Mean-Field Approximation . . . 90
        5.3.5  Disregard of Coalescence . . . 90
        5.3.6  Spherical Shape . . . 90
        5.3.7  Asymptotic Regime . . . 90
        5.3.8  Small Particles . . . 91
   Exercises . . . 92
   References . . . 93
6  Spinodal Decomposition in Binary Systems . . . 95
   Exercises . . . 100
7  Thermal Effects in Kinetics of Phase Transformations . . . 101
   7.1  Formulation of the Stefan Problem . . . 102
        7.1.1  Stefan Boundary Condition . . . 103
        7.1.2  Thermodynamic Equilibrium Boundary Condition . . . 104
        7.1.3  Phase Equilibrium . . . 104
        7.1.4  Initial Condition . . . 104
        7.1.5  Far Field Condition . . . 105
   7.2  Plane-Front Stefan Problem . . . 105
        7.2.1  Diffusion Regime of Growth . . . 105
        7.2.2  Adiabatic Crystallization . . . 108
        7.2.3  Critical Supercooling Crystallization . . . 110
        7.2.4  General Observations . . . 111
   7.3  Ivantsov's Theory* . . . 111
   7.4  Morphological Instability* . . . 117
   7.5  Dendritic Growth . . . 121
        7.5.1  Analysis of Classical Theories of Crystallization . . . 123
   7.6  Numerical Simulations of Dendritic Growth . . . 124
        7.6.1  Cellular Automata Method . . . 124
        7.6.2  Simulation Results . . . 127
        7.6.3  Conclusions . . . 131
   7.7  Rate of Growth of New Phase . . . 132
   Exercises . . . 134
   References . . . 134

Part II  The Method

8  Landau Theory of Phase Transitions . . . 139
   8.1  Phase Transition as a Symmetry Change: The Order Parameter . . . 139
   8.2  Phase Transition as a Bifurcation: The Free Energy . . . 143
   8.3  Classification of the Transitions . . . 149
   8.4  The Tangential Potential . . . 152
   8.5  Other Potentials . . . 156
   8.6  Phase Diagrams and Measurable Quantities . . . 157
        8.6.1  First-Order Transitions . . . 157
        8.6.2  Second-Order Transitions . . . 160
   8.7  Effect of External Field on Phase Transition . . . 161
   Exercises . . . 168
   References . . . 169

9  Heterogeneous Equilibrium Systems . . . 171
   9.1  The Free Energy . . . 172
        9.1.1  Gradients of Order Parameter . . . 173
        9.1.2  Gradients of Conjugate Fields* . . . 175
        9.1.3  Field-Theoretic Analogy . . . 177
   9.2  Equilibrium States . . . 178
   9.3  One-Dimensional Equilibrium States . . . 182
        9.3.1  General Properties . . . 182
        9.3.2  Classification . . . 185
        9.3.3  Thermomechanical Analogy . . . 187
        9.3.4  Type-e Solutions: General Properties . . . 190
        9.3.5  Type-e1 Solution: Bifurcation From the Transition State . . . 192
        9.3.6  Type-e3 Solution: Approach to Thermodynamic Limit . . . 194

        9.3.7  Type-e4 Solution: Plane Interface . . . 194
        9.3.8  Interfacial Properties: Gibbs Adsorption Equation* . . . 198
        9.3.9  Type-n4 Solution: Critical Plate—Instanton . . . 201
   9.4  Free Energy Landscape* . . . 202
   9.5  Multidimensional Equilibrium States . . . 204
        9.5.1  Ripples: The Fourier Method* . . . 205
        9.5.2  Sharp-Interface (Drumhead) Approximation . . . 210
        9.5.3  The Critical Droplet: 3d Spherically Symmetric Instanton . . . 213
   9.6  Thermodynamic Stability of States: Local Versus Global* . . . 220
        9.6.1  Type-e4 State: Plane Interface . . . 221
        9.6.2  General Type-e and Type-n States . . . 222
        9.6.3  3d Spherically Symmetric Instanton . . . 224
   Exercises . . . 226
   References . . . 227

10  Dynamics of Homogeneous Systems . . . 229
    10.1  Evolution Equation: The Linear Ansatz . . . 229
    10.2  Dynamics of Small Disturbances in the Linear-Ansatz Equation . . . 232
          10.2.1  Spinodal Instability and Critical Slowing Down . . . 234
          10.2.2  More Complicated Types of OPs . . . 236
    10.3  Evolution of Finite Disturbances by the Linear-Ansatz Equation . . . 236
    10.4  Beyond the Linear Ansatz . . . 238
    10.5  Relaxation with Memory . . . 238
    10.6  Other Forces . . . 240
    Exercises . . . 241
    References . . . 241

11  Evolution of Heterogeneous Systems . . . 243
    11.1  Time-Dependent Ginzburg-Landau Evolution Equation . . . 243
    11.2  Dynamic Stability of Equilibrium States . . . 245
          11.2.1  Homogeneous Equilibrium States . . . 246
          11.2.2  Heterogeneous Equilibrium States . . . 248
    11.3  Motion of Plane Interface . . . 249
          11.3.1  Thermomechanical Analogy . . . 250
          11.3.2  Polynomial Solution . . . 250
          11.3.3  Selection Principle . . . 254
          11.3.4  Morphological Stability . . . 256
    11.4  Emergent Equation of Interfacial Dynamics . . . 257
          11.4.1  Nonequilibrium Interface Energy . . . 258
    11.5  Evolution of a Spherical Droplet . . . 259
    11.6  Domain Growth Dynamics . . . 260
    11.7  *Thermomechanical Analogy Revisited . . . 264
    Exercises . . . 266
    References . . . 267

12  Thermodynamic Fluctuations . . . 269
    12.1  Free Energy of Equilibrium System with Fluctuations . . . 271
    12.2  Correlation Functions . . . 275
    12.3  Levanyuk-Ginzburg Criterion . . . 277
    12.4  Dynamics of Fluctuating Systems: The Langevin Force . . . 278
          12.4.1  Homogeneous Langevin Force . . . 280
          12.4.2  Inhomogeneous Langevin Force . . . 282
    12.5  Evolution of the Structure Factor . . . 284
    12.6  *Drumhead Approximation of the Evolution Equation . . . 287
          12.6.1  Evolution of the Interfacial Structure Factor . . . 289
          12.6.2  Nucleation in the Drumhead Approximation . . . 291
    12.7  Homogeneous Nucleation in Ginzburg-Landau System . . . 294
    12.8  *Memory Effects: Non-Markovian Systems . . . 295
    Exercises . . . 300
    References . . . 301

13  Multi-Physics Coupling: Thermal Effects of Phase Transformations . . . 303
    13.1  Equilibrium States of a Closed (Adiabatic) System . . . 304
          13.1.1  Type-E1 States . . . 304
          13.1.2  *Type-E2 States . . . 313
    13.2  Generalized Heat Equation . . . 318
    13.3  Emergence of a New Phase . . . 323
    13.4  Non-isothermal Motion of a Curved Interface . . . 330
          13.4.1  Generalized Stefan Heat-Balance Equation . . . 331
          13.4.2  Surface Creation and Dissipation Effect . . . 333
          13.4.3  Generalized Emergent Equation of Interfacial Dynamics . . . 334
          13.4.4  Gibbs-Duhem Force . . . 336
          13.4.5  Interphase Boundary Motion: Heat Trapping Effect . . . 338
          13.4.6  APB Motion: Thermal Drag Effect . . . 341
    13.5  Variability of the System: Length and Energy Scales . . . 344
    13.6  Pattern Formation . . . 345
          13.6.1  One-Dimensional Transformation . . . 346
          13.6.2  Two-Dimensional Transformation . . . 348
    Exercises . . . 350
    References . . . 351

14  Validation of the Method . . . 353
    14.1  Physical Consistency . . . 353
    14.2  Parameters of FTM . . . 354
    14.3  Boundaries of Applicability . . . 356
    Exercises . . . 358

Part III  Applications

15  Conservative Order Parameter: Theory of Spinodal Decomposition in Binary Systems . . . 361
    15.1  Equilibrium in Inhomogeneous Systems . . . 361
    15.2  Dynamics of Decomposition . . . 364
    15.3  Evolution of Small Disturbances . . . 367
    15.4  Pattern Formation . . . 371
    15.5  Role of Fluctuations . . . 371
    Exercises . . . 374
    References . . . 374

16  Complex Order Parameter: Ginzburg-Landau's Theory of Superconductivity . . . 375
    16.1  Order Parameter and Free Energy . . . 375
    16.2  Equilibrium Equations . . . 378
    16.3  Surface Tension of the Superconducting/Normal Phase Interface . . . 383
    Exercise . . . 385
    References . . . 385

17  Multicomponent Order Parameter: Crystallographic Phase Transitions . . . 387
    17.1  Invariance to Symmetry Group . . . 387
    17.2  Inhomogeneous Variations . . . 389
    17.3  Equilibrium States . . . 390
    Exercises . . . 398
    References . . . 398

18  "Mechanical" Order Parameter . . . 399
    Reference . . . 405

19  Continuum Models of Grain Growth . . . 407
    19.1  Basic Facts About Growing Grains . . . 407
    19.2  Multiphase-Field Models . . . 408
    19.3  Orientational Order Parameter Field Models . . . 410
    19.4  Phase-Field Crystal . . . 412
    References . . . 413

Appendix A: Coarse-Graining Procedure . . . . . . . . . . . . . . . . . . . . . . . . 415 Appendix B: Calculus of Variations and Functional Derivative . . . . . . . 423 Appendix C: Orthogonal Curvilinear Coordinates . . . . . . . . . . . . . . . . . 431 Appendix D: Lagrangian Field Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 437 Appendix E: Eigenfunctions and Eigenvalues of the Schrödinger Equation and Sturm’s Comparison Theorem . . . . . . . . . . . . . . . . . . . . . 443


Appendix F: Fourier and Legendre Transforms . . . . . . . . . . . . . . . . . . . 449 Appendix G: Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455 Appendix H: Two-Phase Equilibrium in a Closed Binary System . . . . . . 473 Appendix I: On the Theory of Adsorption of Sound in Liquids . . . . . . . 479 Epilogue: Challenges and Future Prospects . . . . . . . . . . . . . . . . . . . . . . 495 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499

About the Author

Alex Umantsev is a Professor of Materials Physics in the Department of Chemistry, Physics, and Materials Science at Fayetteville State University in North Carolina. He earned his doctorate in 1986 in Moscow (Russia), developing the first successful computer model of dendritic growth and pioneering research on thermal effects of phase transformations and the physical consistency of the newly introduced field-theoretic method. In the early 1990s, he worked as a research associate at Northwestern University; after that he began his teaching career. His research interests span the areas of materials theory and multiscale modeling of phase transformations, from traditional small-molecule metallic or ceramic systems to crystallization of macromolecules of polymers and proteins. He has always been interested in the processing-structure-properties relations of materials, ranging from their production to the analysis of their failure.


Part I

Classical Theories of Phase Equilibria and Transformations

Chapter 1

Thermodynamic States and Their Stabilities

A cursory survey of the laws of thermodynamics relevant to the problems of phase transformations is given. Although similar information can be found in other textbooks, e.g., [1], this succinct presentation of the basic ideas and methods is helpful for understanding the general concepts in phase transformations.

1.1 Laws of Thermodynamics

Researchers agree that the thermodynamic behavior of a system can be described by four laws.

Zeroth Law Although not always recognized as a separate law, due to the lack of a formula to express it, this statement is a law because it expresses a property of a macroscopic system which is always true. Every natural process in an isolated system proceeds toward an equilibrium state, never away from it. As a consequence of the zeroth law, the concept of temperature T is introduced and used for the characterization of an equilibrium state of the system.

First Law A thermodynamic system which interacts with the surroundings can be characterized by a function U, called the internal energy, which can be changed in two different ways: (i) by doing mechanical work on the system or (ii) by transferring a certain amount of heat to it:

ΔU = ΔW + ΔQ    (1.1a)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_1

The heat ΔQ may also be understood as dissipation of the work ΔW, e.g., via dry friction or plastic deformation. However, there is a third way to change the internal energy of a thermodynamic system interacting with the surroundings: add a certain amount of the substance from the surroundings or remove some from it. Then, the first law can be expressed by the following:

ΔU = ΔW + ΔQ + μΔN    (1.1b)

where N is the number of particles in the system (we do not consider here chemical content or reactions) and μ is a quantity introduced by Gibbs to characterize the energy transfer due to mass reallocation, which he called the (electro)chemical potential.1

Second Law (Clausius Formulation) There exists a function S of the state of the system, called the entropy, such that:
(i) The entropy of a system is the sum of the entropies of its parts.
(ii) The entropy of a closed system is determined by the energy and volume of the system so that (∂S/∂U)V is always positive.
(iii) The entropy of an insulated closed system increases in any natural change, remains constant in any reversible change, and is a maximum at equilibrium:

ΔS ≥ 0    (1.2a)

(iv) For a reversible change, the entropy change is

ΔS = ΔQ/T.    (1.2b)
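Statement (iii), maximization of the entropy at equilibrium, can be illustrated with a toy symbolic calculation. The subsystem entropies and sizes below are illustrative choices of ours, not the book's; the point is only that maximizing the total entropy over the energy split equalizes ∂S/∂U (i.e., the temperature) between the parts, as invoked later in this chapter:

```python
import sympy as sp

U1, U = sp.symbols('U1 U', positive=True)
N1, N2 = 1, 2  # illustrative subsystem sizes (our choice)

# Ideal-gas-like entropies S_i = (3/2) N_i ln U_i (k = 1, constants dropped);
# the second subsystem holds the remaining energy U - U1.
S1 = sp.Rational(3, 2) * N1 * sp.log(U1)
S2 = sp.Rational(3, 2) * N2 * sp.log(U - U1)

# Maximum of the total entropy over the energy split U1:
U1_eq = sp.solve(sp.diff(S1 + S2, U1), U1)[0]
print(U1_eq)  # U/3

# At this split, dS1/dU1 = dS2/dU2: the subsystem "temperatures" are equal.
print(sp.simplify((sp.diff(S1, U1) - sp.diff(S2, U)).subs(U1, U1_eq)))  # 0
```

The maximum lands where the energy is shared in proportion to the subsystem sizes, which is exactly the equal-temperature condition.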

Combined application of the first and second laws leads to the introduction of another thermodynamic function called the Helmholtz free energy,

F = U − TS,    (1.3a)

which governs the behavior of a system open to the exchange of heat with the surroundings, or the Gibbs free energy,

G = U − TS + PV    (1.3b)

if the system is open to volume changes also. Here P is the pressure and V is the volume.

1 Characterizing a system by the number of particles does not mean assuming a microscopic approach.

The first and second laws warrant a classification of thermodynamic quantities by their relation to the size of the system, expressed by its volume V, mass M, or number of particles N. Most of the thermodynamic quantities can be divided into the following categories:

A. Extensive quantities, whose amounts are proportional to the size of the system: Q ∝ V (N, M). Examples of extensive quantities are mass, volume, energy, entropy, free energy, magnetization, etc.

B. Intensive quantities, whose amounts are independent of the size of the system: q = const(V). Intensive quantities can be broken into three categories:
(a) Densities, which are just ratios of an extensive quantity to the system size: q = Q/V. Examples of densities are mass density, energy density, magnetization density, etc.
(b) Fields, which are emergent properties, that is, properties which can be defined on the macroscale only. Examples of fields are temperature, pressure, and chemical potential (chemical potential, however, may be defined on the microscale also).
(c) Excess quantities, which may have the units of extensive ones but, like densities and fields, are independent of the size of the system. An example of an excess quantity is the free energy of a critical nucleus in nucleation of a new phase.

There are thermodynamic quantities which do not fall into either category, e.g., the surface energy of a system. However, the surface tension (interfacial energy), which is the ratio of the surface energy to the surface area, is an excess quantity.

Third Law (Nernst Heat Theorem) The entropy change associated with any isothermal reversible process of a condensed system approaches zero as the temperature approaches zero. The third law has a number of alternative formulations equivalent to the Nernst heat theorem, for instance, the principle of the unattainability of absolute zero or the statement that the entropy of the most stable phase is zero at absolute zero. The third law is more important for low-temperature phase transitions than for those whose equilibrium temperature TE is far from absolute zero.
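The role of the free energies (1.3a) and (1.3b) follows from the second law by a short standard argument, sketched here in the book's notation (the "tot" subscript for system plus reservoir is ours):

```latex
% Second law for the system plus a reservoir held at fixed T and P:
\Delta S_{\mathrm{tot}} = \Delta S - \frac{\Delta Q}{T} \ge 0 ,
% first law with \Delta W = -P\,\Delta V and no particle exchange:
\Delta Q = \Delta U + P\,\Delta V ,
% hence, at constant T and P,
\Delta G = \Delta U + P\,\Delta V - T\,\Delta S = \Delta Q - T\,\Delta S \le 0 .
```

Any spontaneous change at fixed T and P therefore lowers G, so equilibrium corresponds to a minimum of the Gibbs free energy, the criterion used throughout Sect. 1.2.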

1.2 Thermodynamic Stability of Equilibrium States

The issue of thermodynamic stability is so important because instability of a system leads to a phase transition. Rephrasing the Russian writer L. Tolstoy, one can say that "stable systems are all alike; every unstable system is unstable in its own way." There are many kinds of instabilities, some of which are considered in this and the following sections.

An equilibrium state is a state which does not change with time. This definition applies to the states of mechanical systems as well as to the states of thermodynamic systems. To make clear what is meant by thermodynamic stability and instability of an equilibrium state, let us first discuss a purely mechanical system. Figure 1.1 depicts three different equilibrium states of a block subjected to the gravitational force near the Earth. Each is a state of mechanical equilibrium because the line of action of the net gravitational force passes through the supporting edge of the block, which allows all the forces exerted on the block to balance (add up to zero); this is the condition of mechanical (static) equilibrium.

Fig. 1.1 States of stable (a), (c) and unstable (b) mechanical equilibrium

However, there is an important physical distinction between the states (a), (b), and (c) in Fig. 1.1. The distinction becomes visible if one slightly disturbs the block from its position of equilibrium: the block returns to position (a) or (c) but not to position (b); in the latter case, the block settles into position (a) or (c). We say that the states (a) and (c) are stable while the state (b) is unstable. Only stable equilibria are realizable in practice since slight disturbances are always present in reality.

There are two physical approaches that validate what we have just described intuitively from practical experience: one uses Newtonian forces, the other the concept of potential energy. From the standpoint of Newtonian mechanics, we can calculate the net force (torque) exerted on the block in the disturbed positions and see that the force returns the block to positions (a) or (c)—a restoring force—but moves the block away from the disturbed position (b)—a removing force. On the other hand, using the concept of gravitational potential energy, we can show that in positions (a) and (c) the potential energy of the block is a minimum among all slightly disturbed positions, while in position (b) it is a maximum.

We can go one step further and find that the state (c) is "more stable" than the state (a) because "more" disturbances return the block to the equilibrium state (c) than to the equilibrium state (a). One may say that the equilibrium state (c) is absolutely stable, while the equilibrium state (a) is unstable compared to (c). The same fact can be expressed by stating that in position (c) the gravitational potential energy of the block is less than in position (a).

An equilibrium state of a thermodynamic system may be absolutely stable, unstable, or unstable compared to the absolutely stable state; the latter is called metastable. As in mechanics, a truly unstable thermodynamic state is unrealizable because of the thermodynamic disturbances called thermal fluctuations. In thermodynamics, the "energy method" is significantly more convenient than the method of forces. For instance, assuming fluctuations of energy and volume, as a consequence of the second law, that is, vanishing of the first differential of the entropy of the system, one obtains equality of temperature and pressure (to be exact, of (∂S/∂U)V and (∂S/∂V)U) in all parts of the system as the condition of equilibrium. However, a

condition of stable equilibrium requires negativity of the second differential also. Then, the condition of stability of matter (phase) may be expressed as positivity of the specific heat C_P:

$$C_P \equiv \left(\frac{\partial H}{\partial T}\right)_P = -T\left(\frac{\partial^2 G}{\partial T^2}\right)_P > 0, \qquad (1.4a)$$

and of the isothermal compressibility β_T:

$$\beta_T \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T = -\frac{1}{V}\left(\frac{\partial^2 G}{\partial P^2}\right)_T > 0. \qquad (1.4b)$$

However, not all disturbances of a system are due to fluctuations of its thermodynamic parameters (energy and volume in a closed system or temperature and pressure in an open system). A system may experience disturbances while its temperature and pressure remain unchanged, for instance, a density change in a fluid (gas or liquid), magnetization in a ferromagnet, the degree of a chemical reaction in a mixture, or the proportion of two phases in a material. To characterize such changes, we introduce the set of parameters {X_i} and call them hidden, as opposed to thermodynamic ones, like temperature and pressure or energy and volume.² From the standpoint of statistical thermodynamics, a hidden parameter is a slowly varying (in space and time) function of the dynamic variables of the system—the generalized coordinates and momenta—and represents an additional restriction (first integral) of the phase space available to the system. It can be obtained through coarse-graining of the system (see Appendix A). Fixing the value of the hidden parameter removes ergodicity of the phase space. The ergodicity is restored at equilibrium, where the hidden parameter becomes a function of the thermodynamic variables, like temperature and pressure. At equilibrium, a system must be stabilized against all disturbances, including fluctuations of the hidden parameters. To find the condition of stability, we invoke the second law in application to a system open for the exchanges of heat and volume with the surroundings. As stated above, such a state should have less Gibbs free energy than nonequilibrium (disturbed) states, while an unstable equilibrium state should have more free energy than disturbed ones. Hence, to examine stability of the state, one must analyze for a minimum the Gibbs free energy of the system as a function of the hidden parameters, G{X_i; T, P}, which characterize all states—equilibrium and nonequilibrium:

$$\left(\frac{\partial G}{\partial X_i}\right)_{T,P} = 0, \qquad (1.5a)$$

and

²Do not confuse them with the hidden variables in quantum mechanics.


$$\left(\frac{\partial^2 G}{\partial X_i^2}\right)_{T,P} > 0. \qquad (1.5b)$$

Notice that in a way the condition of stability with respect to the hidden variables (1.5b) is opposite to that with respect to the thermodynamic variables (1.4a and 1.4b). This is because the hidden variables can be changed independently of the surroundings while the thermodynamic variables cannot. The concept of hidden variables in thermodynamics is similar to the concept of the degrees of freedom in mechanics, where they uniquely define the configuration of the system given the constraints on it, like temperature or pressure. Furthermore, as negativity of the second derivative in (1.5b) is a condition of instability of the state, the inflection point, at which:

$$\left(\frac{\partial^2 G}{\partial X_i^2}\right)_{T,P} = 0, \qquad (1.5c)$$

is called a spinodal point. It separates regions of stability and instability with respect to small fluctuations.3
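Conditions (1.5a)-(1.5c) can be checked mechanically for any explicit model. The sketch below is not from the book: the double-well form of G{X; T, P} is an assumed toy example, chosen only to illustrate how equilibria, their stability, and the spinodal points follow from the three conditions.

```python
# Sketch (illustrative model, not the book's): locate equilibria (1.5a),
# test stability (1.5b), and find spinodal points (1.5c) for a toy
# double-well free energy G(X) = X**4/4 - X**2/2 at fixed T, P.
import sympy as sp

X = sp.symbols('X', real=True)
G = X**4 / 4 - X**2 / 2          # hypothetical G{X; T, P}

dG = sp.diff(G, X)               # condition (1.5a): dG/dX = 0
d2G = sp.diff(G, X, 2)           # condition (1.5b): d2G/dX2 > 0

equilibria = sp.solve(dG, X)     # roots -1, 0, 1
stable = [x for x in equilibria if d2G.subs(X, x) > 0]   # -1 and 1 survive
spinodals = sp.solve(d2G, X)     # condition (1.5c): X = ±1/sqrt(3)

print(equilibria, stable, spinodals)
```

The state X = 0 sits at a maximum of G and is therefore unstable; the spinodal points separate it from the two stable wells.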

1.3 Dynamic Stability of States

Not only equilibrium states are of interest to us. Suppose that, in accordance with the equations governing the system, it is in a state where the set of hidden parameters or coordinates characterizing the state changes linearly with time, so that the rates of change of the coordinates (velocities) do not vary as functions of time. Such a state is called stationary. The stationary states can be realized only in certain ranges of the external parameters, like temperature and pressure. The reason for that lies in their inherent instability, that is, their inability to sustain themselves against the small disturbances to which any physical system is subjected. In other words, the physical reasons for instability of the equilibrium and stationary states are similar. However, the methods of their analyses are different because the laws of thermodynamics are not applicable to nonequilibrium states.⁴ Instead, the stationary states are subject to the laws of nonequilibrium thermodynamics, e.g., the principle of minimum entropy production [2]. In considering the stability of such a system, we ask a question: if the stationary state is disturbed, will the disturbance gradually die down or will it grow in such a

³Often this definition is applied to systems where X is the composition of the state; however, nothing prevents us from extending this definition to states of a general nature.
⁴Except for the case when, using the Galilean invariance, the stationary state can be reduced to the equilibrium one.


way that the system departs from the initial state and never comes back to it? In the former case, we say that the state is stable with respect to the disturbance; in the latter case, we say that it is unstable. Clearly, the state may be considered stable only if it is stable with respect to every possible disturbance, and it is unstable if there is even one special mode of disturbance with respect to which it is unstable. The same ideas may be applied to an equilibrium state; in this case we must arrive at the same conclusion as that from the thermodynamic analysis. This is called the principle of dynamic consistency. On the other hand, the same ideas of dynamic stability are applicable to nonequilibrium states (basic states or permanent regimes) of a more complicated nature than the stationary ones. For instance, a stable self-similar process in hydrodynamics is defined as one where perturbations do not grow faster than the process itself. A question of the dynamic equations governing the behavior of the hidden parameters of the system is very important for us, but we will defer it till later (see Sect. 10.2). However, mathematically, these equations can be written in the form of a system of ordinary differential equations (ODEs) of the first order:

$$\frac{dX_i}{dt} = F(X_i, t; \mathrm{EP}); \quad i = 1, \ldots, n \qquad (1.6)$$

where t is time, {X_i} is the set of coordinates (hidden parameters) characterizing the state—degrees of freedom—and (EP) is a set of the control parameters, which determine the external conditions, like temperature and pressure. If time t does not appear explicitly in the r.h.s. of (1.6), the system is called autonomous. Let X_i^0(t) be a basic dynamic state—trajectory—initiated at X_i^0(0) and {X_i(t)} be another trajectory starting at {X_i(0)}, both governed by (1.6). Then, the basic state is said to be stable (to be exact, stable in Lyapunov's sense) if the distance between the basic and the perturbed trajectories can be kept arbitrarily small for all times. More formally, if ε is given, δ can be found such that:

$$\left| X_i(0) - X_i^0(0) \right| < \delta(\varepsilon) \Longrightarrow \left| X_i(t) - X_i^0(t) \right| < \varepsilon \ \text{for } t > 0. \qquad (1.7)$$

If, in addition, the distance |X_i(t) − X_i^0(t)| tends to zero as time increases, the state is called asymptotically stable. Lyapunov also developed a method of analysis of stability of the states which are solutions of systems of differential equations like (1.6). The method, called the second Lyapunov method, is better explained for a trajectory Y_i(t) = X_i(t) − X_i^0(t), for which dY_i/dt = F̃(Y_i, t; EP) and the basic state is the origin: Y_i(t) = 0. V(Y_i) is called a Lyapunov function if it is (1) a scalar function of Y_i, continuous with continuous first-order partial derivatives; (2) positive definite in a neighborhood D of the origin, that is, V(Y_i = 0) = 0 and V(Y_i) > 0 for all Y_i ≠ 0 in D—in other words, has a minimum at the origin; and (3) decreasing as a function of time through Y_i(t), that is, V̇(Y_i) is negative semidefinite:

$$\dot{V}(Y_i) \equiv \frac{dV}{dt} = \sum_{i=1}^{n} \frac{\partial V}{\partial Y_i}\,\tilde{F}(Y_i, t; \mathrm{EP}) \le 0 \qquad (1.8)$$

If such a Lyapunov function exists, then the basic state Y_i(t) = 0 is stable. Moreover, if V̇(Y_i) is negative definite, the basic state is asymptotically stable. Geometrically, it means that the representative point of a trajectory Y_i(t), moving on the surface of the Lyapunov function, approaches the basic state Y_i(t) = 0. The method of the Lyapunov function can be applied to the analysis of stability of equilibrium states. In this case, the free energy function (Helmholtz or Gibbs) plays the role of the Lyapunov function, and the conclusions of the thermodynamic and dynamic analyses coincide. Although Lyapunov's method provides us with the analysis of the local (linear) stability against small disturbances, it can be used for the analysis of the global (nonlinear, absolute) stability against all disturbances. The described types of stability do not exhaust the problem of stability of a system or a process. For instance, in analytical mechanics stability or instability of a closed orbit is assessed by the behavior of a slightly perturbed orbit: if the latter remains bounded (not necessarily closed), the original closed orbit is called stable; otherwise (the perturbed orbit escapes to infinity) it is called unstable. Another example of stability may be found in the theory of approximation of functions, where the approximation expansion is called stable if the errors in the series do not increase compared to the magnitude of the function. However, in this book, we will concentrate on the problem of stability of thermodynamic states and stationary processes.
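The three conditions on a Lyapunov function are easy to check symbolically for a concrete system. The sketch below is an illustration, not the book's example: the one-dimensional system dY/dt = −Y³ and the candidate V(Y) = Y²/2 are assumed here only to exercise Eq. (1.8).

```python
# Sketch (assumed system, not from the book): verify Lyapunov's conditions
# for dY/dt = -Y**3 with the candidate function V(Y) = Y**2/2.
import sympy as sp

Y = sp.symbols('Y', real=True)
F = -Y**3                      # right-hand side of dY/dt
V = Y**2 / 2                   # candidate: V(0) = 0, V(Y) > 0 for Y != 0

Vdot = sp.diff(V, Y) * F       # Eq. (1.8): dV/dt = (dV/dY) * F = -Y**4
assert Vdot.equals(-Y**4)      # negative definite away from the origin,
                               # so Y = 0 is asymptotically stable
assert all(Vdot.subs(Y, y) < 0 for y in (-2, -0.5, 0.5, 2))
print("Vdot =", Vdot)
```

Since V̇ = −Y⁴ is negative definite, the origin is not merely stable but asymptotically stable, exactly the distinction drawn in the text.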

1.4 Stability of Heterogeneous States

To better understand various cases of stability in thermodynamic systems, we need first to discuss the various states which may be encountered there. We have already discussed homogeneous equilibrium states. The stable subset of those will be called phases and discussed in Sect. 2.1. Inhomogeneous states—equilibrium or not—are called structures. Essentially one-dimensional (1D) structures are called interfaces (fronts) and two-dimensional (2D) ones, surfaces, and the nonequilibrium (time-dependent) states are called flows (basic states or permanent regimes). A distinction between the homogeneous and inhomogeneous states is often framed as a difference between the discrete and continuous systems. Stability of phases and stationary homogeneous flows has been discussed above. The same ideas may be applied to the analysis of stability of structures, which is often called morphological stability. However, their implementation meets more challenges. Indeed, in a heterogeneous equilibrium system, some of the state variables turn into functions of space, and the states of the system should be mathematically described by a class of functions instead of numbers. The free energy of the whole system becomes a functional of the hidden variables (coordinates) and their spatial derivatives:


$$G\{X_i; T, P\} = \int_V \hat{g}\left[X_i(\mathbf{r}), \nabla X_i(\mathbf{r}); T, P\right] d^3x \qquad (1.9)$$

where V is the total volume occupied by the system and the integrand ĝ[X_i(r), ∇X_i(r); T, P] is the Gibbs free energy density. The equilibrium states can be found as the functions that minimize this functional, that is, (1.9) plays the role of the Lyapunov functional for a heterogeneous system. As a consequence of the inclusion of heterogeneities into the free energy functional, to find the equilibrium states of the system, we have to use the calculus of variations instead of the calculus of functions of several variables. As explained in Appendix B, the equilibrium states can be found among the solutions of the Euler-Lagrange equation (ELE):

$$\frac{\delta G}{\delta X_i}\bigg|_{T,P} \equiv \left(\frac{\partial \hat{g}}{\partial X_i}\right)_{T,P} - \nabla \cdot \left(\frac{\partial \hat{g}}{\partial \nabla X_i}\right)_{T,P} = 0, \qquad (1.10)$$

called extremals. The sufficient condition for an extremal to deliver a weak minimum to the functional G{X_i; T, P}⁵ is for its second variation to be positive definite:

$$\delta^2 G\big|_{T,P} = \frac{1}{2}\int_V \delta X_i(\mathbf{r})\, \hat{H}\, \delta X_i(\mathbf{r})\, d^3x > 0 \qquad (1.11)$$

where:

$$\hat{H} = \frac{\partial^2 \hat{g}}{\partial X_i\, \partial X_i} - \nabla \cdot \frac{\partial^2 \hat{g}}{\partial \nabla X_i\, \partial X_i} - \left(\nabla \frac{\partial^2 \hat{g}}{\partial \nabla X_i\, \partial \nabla X_i}\right)\cdot \nabla - \frac{\partial^2 \hat{g}}{\partial \nabla X_i\, \partial \nabla X_i}\,\nabla^2 \qquad (1.11a)$$

is called Hamilton's operator. Then, evaluation of stability of the extremal requires finding the spectrum of eigenvalues of the operator and analyzing them for positivity (see Sect. 9.6 and Appendix E). The method of the Lyapunov function(al) can also be used to analyze stability of a flow if an appropriate function(al) is found. An example of such is Prigogine's entropy production function, which must reach a minimum in a stable stationary state. Computers made a revolution in the area of finding equilibrium states of thermodynamic systems. If the dynamic equations of evolution of the system are known, it turns out to be more efficient to solve them numerically than to seek solutions of the equilibrium equations analytically and analyze them for stability. The downside of the numerical method is that it is not a trivial task to escape the traps of metastability.

⁵Relative to functions close to the extremal.
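The numerical strategy just described can be sketched in a few lines. The model below is an assumption, not the book's: a 1D Ginzburg-Landau functional with density ĝ = (κ/2)(dX/dx)² + (1/4)(1 − X²)², relaxed by the gradient flow dX/dt = −δG/δX, whose stationary limit is the analytic interface profile X(x) = tanh(x/√(2κ)).

```python
# Sketch (assumed model): find an equilibrium state of the functional
#   G = ∫ [ (kappa/2)(dX/dx)^2 + (1/4)(1 - X^2)^2 ] dx
# by relaxing the gradient flow dX/dt = kappa*X'' + X - X**3 numerically,
# instead of solving the Euler-Lagrange equation analytically.
import numpy as np

kappa, dx, dt = 1.0, 0.1, 0.002     # dt below the explicit stability limit
x = np.arange(-10, 10, dx)
X = np.sign(x)                      # initial guess: a sharp interface

for _ in range(20000):
    lap = (np.roll(X, 1) - 2 * X + np.roll(X, -1)) / dx**2
    lap[0] = lap[-1] = 0.0          # keep the ends fixed at -1 and +1
    X += dt * (kappa * lap + X - X**3)   # Euler step of the gradient flow

exact = np.tanh(x / np.sqrt(2 * kappa))
print(np.max(np.abs(X - exact)))    # small residual vs the analytic profile
```

The relaxed profile agrees with the tanh extremal to within discretization error, which is the sense in which numerically solving the dynamics "finds" the equilibrium state.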

1.5 Analysis of Dynamic Stability in Terms of Normal Modes

Although illuminating, the Lyapunov methods do not present a complete analysis of stability even for the equilibrium or stationary states. The problem is that, as we have explained above, stability means stability with respect to all possible modes of disturbance of a thermodynamic system, while solutions of ODE (1.6) describe only spatially homogeneous modes. Inclusion of the reaction of the system to spatially inhomogeneous modes of disturbance is usually done with the help of the method of normal modes [3, 4]. This method is very effective. Its only deficiency is that it is still a method of analysis of weak stability (with respect to small—infinitesimal—disturbances). The main steps of the method of normal modes are the following. (1) Impose a small (infinitesimal) disturbance A(r, t) on the basic state in study X_0(r, t), and linearize the governing equation. (2) Expand the disturbance in terms of some suitable set of normal modes. For such an expansion to be possible, the set must be complete and have the space and time dependencies separated, e.g., A(r, t) = Σ_k A_k(r)e^{β_k t}, where β_k is a separation constant (spectrum of the linearized operator), which must be determined, and summation may be replaced with integration over k. (3) Seek a nontrivial solution of the equation for the amplitudes A_k(r), which satisfies the necessary boundary conditions, and analyze the characteristic values of the separation constant β_k(X_0, EP) = β_k^(r) + iβ_k^(i) for various modes k. (4) The condition of stability of the basic state is for β_k^(r), which is called the growth rate or amplification factor, to be negative for all k. The mode k_m with the greatest value of the growth rate is called the "most dangerous" mode. The mode k_n for which β_k^(r) = 0 is called marginal or neutral. (5) The mode k is stationary if β_k^(i) = 0 and oscillatory if β_k^(i) ≠ 0. (6) If for the marginal mode k_n we also have ∇_k β_k^(r) = 0, that is, the spectrum lies in the left half-space of (β_k^(r), β_k^(i)) with just one point on the imaginary axis, two options are possible: the mode is marginally indifferent if ∇_k β_k^(i) = 0 or convectively unstable if ∇_k β_k^(i) ≠ 0. Although unstable in the laboratory frame, the latter mode is stable in the reference frame moving with the speed v = ∇_k β_k^(i). Modifications of the method are possible. An important one arises in the case where the inhomogeneous state is characterized by two different scales—a smaller (inner) scale and a larger (outer) scale. In this case the separation constant may be represented as a function of the low-dimensional modes, e.g., 1D modes k. An important example of this type of modification is the morphological stability of the front of crystallization of substances. The latter may be presented in the form of a thin surface (or plane) with small corrugations (disturbance), which, in the case of instability, give rise to dendrites growing on the front; see Sect. 7.4.


Example 1.1 Stability of Diffusion with Nonlinear Sources Find the equilibrium concentration X in a box with the constant diffusion coefficient D, inside which there is a nonlinear source of species F = X(a − X), where a depends on the external conditions, like temperature and pressure. The box of fixed length is centered at the origin with the sides kept at zero concentration. Distribution of the concentration X in the system is described by the diffusion equation:

$$\frac{dX}{dt} = D\nabla^2 X + F(X) \qquad (1E.1)$$

The concentrations of the basic states are as follows: X_0 = 0 or X_0 = a. Applying the method of normal modes to the analysis of their stability, we write: X = X_0 + A(r, t). Then, the linearized equation takes the following form:

$$\frac{dA}{dt} = D\nabla^2 A + \frac{\partial F(X_0)}{\partial X} A \qquad (1E.2)$$

Expanding the disturbance in the Fourier series:

$$A(\mathbf{r}, t) = \sum_k A_k \operatorname{Re}\left(e^{i\mathbf{k}\cdot\mathbf{r}}\right) e^{\beta_k t}, \qquad (1E.3)$$

which complies with the boundary conditions, and substituting the expansion into (1E.2), we obtain the characteristic equation for the separation constant:

$$\beta_k = \frac{\partial F(X_0)}{\partial X} - D|\mathbf{k}|^2 \qquad (1E.4)$$

The spectrum of the normal modes is discrete (Why?); its continuous extension is depicted in Fig. 1.2. Inspecting the spectrum and taking into account that ∂F(X_0)/∂X = a − 2X_0, we can see that, first, the separation constant is a real number, that is, there are no unstable oscillations or convective instabilities in the system; second, stability of the basic states depends on the sign of a: if a > 0, the state X_0 = a is stable and the state X_0 = 0 is unstable, while if a < 0, it is the other way around; hence, X_0 ≥ 0 for the stable state, which satisfies the physical requirement (Why?); third, the k ≠ 0 modes are "more stable" than the k_m = 0 (the "most dangerous") mode; fourth, the marginal mode k_n is as follows:

$$|\mathbf{k}_n| \equiv k_n = \sqrt{\frac{|a|}{D}}. \qquad (1E.5)$$

The method allows us to make further observations about stability of the system. Indeed, as a changes sign, the stable state switches from one branch to another one


Fig. 1.2 The growth rate of the basic states of the diffusion equation with nonlinear sources: stable, (1); marginal, (2); unstable, (3). Inset: exchange of stabilities between the basic states X0 = 0 and X0 = a; stable states—double lines; marginal state—dotted circle

(transcritical bifurcation or exchange of stabilities). Moreover, the fact that km = 0 is the “most dangerous” mode tells us that stability of the basic state will be broken almost uniformly in the system and its instability will be manifested by the waves with the wave vectors |k| < kn inside the sphere of radius kn centered at the origin of the Fourier space {k}.
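The dispersion relation of Example 1.1 can be evaluated numerically; the short sketch below (a companion to the example, with D = 1 and a = 1 assumed for illustration) checks the exchange of stabilities and the marginal mode of Eq. (1E.5).

```python
# Numerical companion to Example 1.1: growth rate beta_k of Eq. (1E.4)
# for the basic states X0 = 0 and X0 = a, with F = X*(a - X).
import numpy as np

def beta(k, X0, a, D=1.0):
    # dF/dX evaluated at X0 for F = X*(a - X) is a - 2*X0
    return (a - 2 * X0) - D * k**2

k = np.linspace(0, 3, 301)
a = 1.0                                   # a > 0: X0 = a stable, X0 = 0 unstable

assert np.all(beta(k, X0=a, a=a) < 0)     # every mode of X0 = a decays
assert beta(0.0, X0=0.0, a=a) > 0         # "most dangerous" mode k = 0 grows
k_n = np.sqrt(abs(a) / 1.0)               # marginal mode, Eq. (1E.5)
print("k_n =", k_n, "beta(k_n) =", beta(k_n, X0=0.0, a=a))
```

Flipping the sign of a in this sketch swaps the roles of the two basic states, which is the transcritical bifurcation described above.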

Exercises
1. Verify in Example 1.1 that all terms of the series (1E.3) comply with the boundary conditions of the problem.
2. Analyze stability of the nonlinear heat equation: dX/dt = λ∇²X + αX − X³.
3. Give examples of other hidden variables in thermodynamics.

References 1. E.A. Guggenheim, Thermodynamics. An Advanced Treatment for Chemists and Physicists (Elsevier, North Holland, Amsterdam, 1967) 2. S.R. de Groot, P. Mazur, Non-Equilibrium Thermodynamics (North Holland, Amsterdam, 1962) 3. P. Manneville, Dissipative Structures and Weak Turbulence (Academic Press, Inc., San Diego, CA, 1990) 4. P. Collet, J.-P. Eckmann, Instabilities and Fronts in Extended Systems (Princeton University Press, Princeton, NJ, 1990)

Chapter 2

Thermodynamic Equilibrium of Phases

In this chapter, we define a phase as a homogeneous stable state of matter and establish conditions of equilibrium between the phases. Then, we consider various classifications of phase transitions and conclude with the description of Gibbsian ideas about the interface separating phases at equilibrium.

2.1 Definition of a Phase and Phase Transition

There are many physical situations which may be called phase transitions. They are always associated with significant changes of properties in the physical system. If water is cooled below zero degrees Celsius at atmospheric pressure, it solidifies and stops flowing from one vessel to another. However, if water is heated above a hundred degrees Celsius, it turns into steam, which occupies as much space as the vessel boundaries allow. If a bar magnet is heated above a certain temperature, called the Curie temperature (approximately 770 °C for an iron magnet), it abruptly loses its ability to attract small pieces of steel. If a piece of steel is quenched rapidly, its hardness increases dramatically. Under the microscope one can see that characteristic plates, called martensite, appear which were not there before the quench. There are, however, significant differences between the examples of phase transitions above. Water can be supercooled below 0 °C and superheated above 100 °C, steam can be supercooled below 100 °C, and ice can be superheated above 0 °C.¹ However, one cannot superheat a magnet above its Curie temperature. Phase transitions may have critical points where the phase transition pattern changes from what it was away from this point.² For instance, in compression experiments, below

¹At least slightly.
²Do not confuse with critical points in mathematics, which occur when the first derivative of a function vanishes.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_2



the critical temperature of water, there are two distinct fluid phases, liquid and vapor (steam), while above this temperature the liquid changes its state (density) continuously. A phase transition may also have a triple point, where the properties of three phases equilibrate and all three can coexist. These differences bring up a distinction between various kinds of transitions discussed in Chap. 3. Although the concept of phase transitions was conceived and developed in the realm of physics and chemistry, it has long since crossed these borders and is now used in many other branches of science, e.g., biology, sociology, and cosmology. For a substantive discussion of phase transitions, we need a reliable definition of a phase. So far, we have been using an intuitive one based on the physical properties of the state of a system. However, the context of our discussion merits a rigorous definition. We suggest the following: A phase is a homogeneous part of a system distinguishable by a set of intrinsic properties, which has attained a state of thermodynamic equilibrium under the specified external conditions.

A few comments are in order here:
1. Of course, "a part of a system" may be the entire system; what is important is that it is "a homogeneous part." In the literature, one can find definitions of a phase that are based on the ability of phases to coexist.³ There are two problems with such a definition. For one, some phases do not allow coexistence, e.g., the phases of a second-order transition (see below). For another, such a definition implies that some kind of inhomogeneity is necessary for a phase to exist. This is incorrect.
2. Under certain conditions, a phase may be spatially nonuniform. For example, air in the gravitational field of earth changes density with altitude, or a dielectric material in an electrical field has an inhomogeneous distribution of polarization. But these inhomogeneities are imposed on the system from outside and may be easily removed by removing the external fields. The system may have other inhomogeneities which are harder or impossible to remove, such as internal interfaces between the coexisting phases or boundaries between domains. These inhomogeneities must not be a part of the definition of a phase.
3. Homogeneity of a system depends on the scale of observation. Therefore, the same system may be identified as a phase on a larger scale and as an inhomogeneous structure on a smaller scale of observation.
4. Small fluctuations, e.g., thermal, do not violate homogeneity of a phase.
5. "A thermodynamic equilibrium" does not necessarily mean globally stable equilibrium; a phase can be in a state of just local equilibrium—a metastable phase. However, an unstable (although equilibrium) state of a system is not a phase. For instance, the term "labile equilibrium," which is often used in chemistry to characterize a transition state of a molecule, does not correspond to any phase of matter.

³See §81 of Ref. [1] for an example.


6. To be called a phase, we do not require a system (or its part) to be at equilibrium under any "external conditions," only "under the specified" ones. Certain phases may exist under specific conditions (e.g., adiabatic) but not under others (e.g., isothermal). For example, a solid phase, which is stable in a closed system, may fall apart if the system is exposed to the environment.
In thermodynamics, we define special functions of state, such as the entropy S and the free energy (Gibbs G or Helmholtz F), which clearly identify equilibrium states and even distinguish between equilibrium states of different levels of stability (see Sect. 1.2). For example, in a system closed to thermal exchange with the environment, the entropy must attain a maximum at a stable equilibrium state. If the system is open to thermal exchanges, then the equilibrium states are identified by the minima of the free energy. One has to keep in mind that the maxima and minima of the respective thermodynamic functions are taken not with respect to the thermodynamic variables (temperature T, pressure P, volume V, energy E, enthalpy H, etc.) but with respect to the internal—hidden—variables, which characterize different states for the same set of the thermodynamic variables; see (1.5a, b, c).

2.2 Conditions of Phase Equilibrium

Now let us consider the state of equilibrium between two phases of matter, which we will be calling α and β. As is known, the condition of mechanical equilibrium is as follows:

$$P_{\text{phase } \alpha} = P_{\text{phase } \beta} \qquad (2.1)$$

The condition of thermodynamic equilibrium comes from the zeroth and second laws (see Sect. 1.1):

$$T_{\text{phase } \alpha} = T_{\text{phase } \beta} \qquad (2.2)$$

Another consequence of the second law of thermodynamics is the equality of the chemical potentials of the phases at equilibrium. The concept of the chemical potential is more complicated (less intuitive) than the concepts of temperature or pressure. As is known, the physical quantities of temperature and pressure allow dynamical interpretations as the averages of the kinetic energy and momentum flux, where the averaging can be made over time or ensemble (due to ergodicity of the system). This makes it possible to develop an instrument to measure these quantities: a thermometer or a pressure gauge. The problem with the chemical potential is that it has an entropic contribution, which is not a function of the coordinates and velocities, that is, cannot be time averaged. This renders a chemical potential meter impossible. Chemical potential can be measured relative to a standard state by integrating along a reversible path joining the standard state with the


state of interest. In different physical systems, the role of the chemical potential is played by different physical quantities. In an open, one-component system, which will be of primary interest for us in this book, the chemical potential is the molar Gibbs free energy G. Thus, the condition of phase equilibrium in an open, one-component system is expressed as follows:

$$G_{\text{phase } \alpha}(P, T) = G_{\text{phase } \beta}(P, T) \qquad (2.3)$$

It will be beneficial for the further discussion to introduce a jump quantity as follows:

$$[Q] \equiv Q_{\text{phase } \beta}(P, T) - Q_{\text{phase } \alpha}(P, T) \qquad (2.4)$$

and

$$[Q]_E \equiv [Q](P, T_E), \qquad (2.4a)$$

where the second relation is taken on the equilibrium line. Evidently, the process of transformation from one phase to another requires a "driving force" to proceed. Equation (2.3) allows us to define [G] as the driving force of a phase transformation. A convenient, geometrical way of describing phase transitions is with the help of a phase diagram—a map of regions in the plane of thermodynamic parameters where each phase of the substance is the most stable. These regions are separated by the equilibrium line(s) known as phase boundaries. Equation (2.3) describes a phase boundary in the (P, T) plane of the phase diagram; it may be expressed as the equilibrium temperature that varies with pressure:

$$T = T_E(P) \qquad (2.5a)$$

or pressure—with temperature:

$$P = P_E(T) \qquad (2.5b)$$

One can also define the latent heat:

$$L(P, T) \equiv -[H] = T\frac{\partial [G]}{\partial T} - [G] = -T[S] - [G] \qquad (2.6a)$$

and the transformation volume:

$$[V] = \frac{\partial [G]}{\partial P} \qquad (2.6b)$$


For the jumps of the specific heat and isothermal compressibility, we obtain the following, respectively (see (1.4a, b) for definitions):

$$[C_P] = -T\frac{\partial^2 [G]}{\partial T^2} = -\left(\frac{\partial L}{\partial T}\right)_P \qquad (2.7a)$$

$$[\beta_T] = -\frac{1}{V_{\text{phase } \alpha}}\left(\frac{\partial [V]}{\partial P} + \beta_{T,\text{phase } \beta}\,[V]\right) \qquad (2.7b)$$

Differentiating (2.3) along the equilibrium line (2.5a), we find the Clapeyron-Clausius equation:

$$\frac{dT_E}{dP} = -T_E\frac{[V]_E}{L_E} \qquad (2.8)$$

where (Why?):

$$L_E = T_E\left(\frac{\partial [G]}{\partial T}\right)_E = -T_E[S]_E \qquad (2.8a)$$

The slope in the left-hand side of (2.8) is positive for crystallization of most pure substances and is negative for crystallization of water, antimony, bismuth, germanium, and gallium. The latter is because the liquid phases of these substances are denser than the solid ones ([V]_E > 0). Moreover, differentiating (2.6a), first, with respect to T along the equilibrium line (2.5b):

$$\frac{dL}{dT_E} = \frac{\partial L}{\partial T} + \frac{\partial L}{\partial P}\frac{dP}{dT_E} \qquad (2.9a)$$

then, with respect to P along the T = const line:

$$\frac{\partial L}{\partial P} = T\frac{\partial [V]}{\partial T} - [V] \qquad (2.9b)$$

and applying (2.7a) to the first term and (2.8), (2.8a), and (2.9b) to the second term in (2.9a), we obtain Planck's formula:

$$\frac{dL}{dT_E} = -[C_P] + \frac{L_E}{T_E}\left(1 - T_E\frac{[\partial V/\partial T]_E}{[V]_E}\right) = -[C_P] + \frac{L_E}{T_E}\left(1 - \frac{\partial \ln [V]_E}{\partial \ln T_E}\right) \qquad (2.10)$$

where the last term, related to the jump of the thermal expansivity, may be greater or smaller than one, which reduces the importance of this formula compared to the Clapeyron-Clausius equation.
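A quick numeric illustration of the Clapeyron-Clausius equation (2.8) for the freezing of water (phase α = liquid, phase β = ice). The specific volumes and latent heat below are rounded handbook values, not taken from the book.

```python
# Illustrative check of Eq. (2.8) for water, with [Q] = Q_ice - Q_water.
T_E = 273.15          # K, melting point at 1 atm
L_E = 3.34e5          # J/kg, latent heat of freezing, L = -[H] > 0
v_ice, v_water = 1.091e-3, 1.000e-3   # m^3/kg, rounded handbook values
V_jump = v_ice - v_water              # [V]_E > 0: ice is less dense

dT_dP = -T_E * V_jump / L_E           # Eq. (2.8), in K/Pa
print(dT_dP * 101325)                 # in K/atm: about -0.0075, i.e.
                                      # pressure lowers the melting point
```

The negative slope of about −0.0075 K/atm is the anomaly of water mentioned above: since [V]_E > 0, compressing ice favors the denser liquid.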


Fig. 2.1 Gibbs free energy plots of phases α and β and the (P, T ) phase diagram of a system with constant specific heat and isothermal compressibility (see Example 2.1)

Example 2.1 Gibbs Free Energy of a System with Constant Specific Heat and Isothermal Compressibility Consider the Gibbs free energies of phases α and β presented as follows:

$$G_{\text{phase } \alpha}(P, T) = -C_P T(\ln T - 1) - \frac{1}{\beta_T}e^{-\beta_T P} \qquad (2E.1a)$$

$$G_{\text{phase } \beta}(P, T) = G_{\text{phase } \alpha}(P, T) + g - sT - \frac{v}{\beta_T}e^{-\beta_T P} \qquad (2E.1b)$$

where g, s, v are constants; see Fig. 2.1. Verify that the specific heat at constant pressure and the isothermal compressibility of the phases are equal and independent of temperature and pressure and that the phase equilibrium temperature, latent heat, and transformation volume, respectively, are as follows:

$$T_E(P) = \frac{1}{s}\left(g - \frac{v}{\beta_T}e^{-\beta_T P}\right) \qquad (2E.2a)$$

$$L(P, T) = \frac{v}{\beta_T}e^{-\beta_T P} - g \qquad (2E.2b)$$

$$[V] = v\,e^{-\beta_T P} \qquad (2E.2c)$$
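The verification requested in Example 2.1 can be done symbolically. A sketch with sympy follows; the signs of the pressure terms in the free energies are chosen here so that both volumes come out positive, which is an editorial assumption about the intended model.

```python
# Sketch of the verification in Example 2.1 (assumed signs of the
# exponential terms, chosen so that V = dG/dP > 0 for both phases).
import sympy as sp

T, P = sp.symbols('T P', positive=True)
C_P, bT, g, s, v = sp.symbols('C_P beta_T g s v', positive=True)

G_a = -C_P * T * (sp.log(T) - 1) - sp.exp(-bT * P) / bT
G_b = G_a + g - s * T - v * sp.exp(-bT * P) / bT
jump = G_b - G_a                       # [G]

# C_P = -T d2G/dT2 and beta_T = -(1/V) dV/dP are constant for both phases
for G in (G_a, G_b):
    V = sp.diff(G, P)
    assert sp.simplify(-T * sp.diff(G, T, 2) - C_P) == 0
    assert sp.simplify(-sp.diff(V, P) / V - bT) == 0

T_E = sp.solve(sp.Eq(jump, 0), T)[0]   # equilibrium: [G] = 0, Eq. (2E.2a)
assert sp.simplify(T_E - (g - v * sp.exp(-bT * P) / bT) / s) == 0
L = sp.simplify(T * sp.diff(jump, T) - jump)   # latent heat, Eq. (2.6a)
assert sp.simplify(L - (v * sp.exp(-bT * P) / bT - g)) == 0
assert sp.simplify(sp.diff(jump, P) - v * sp.exp(-bT * P)) == 0   # (2E.2c)
```

All assertions pass, confirming that in this model the jumps of the specific heat and compressibility vanish while the latent heat and transformation volume do not.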

Example 2.2 Gibbs Free Energy Jump Between Two Condensed Phases The constraints of Example 2.1 are too stringent for most condensed systems undergoing phase transitions, for instance, crystallization. Indeed, the specific heat of the solid and liquid phases usually has a significant temperature dependence. However, the temperature dependence of the latent heat may be negligible. Eq. (2.7a) shows that for such a system [S] is temperature independent also. Then, applying (2.8a) to (2.6a), we obtain that:

$$[G] = L_E\frac{T - T_E}{T_E}, \qquad (2E.3)$$

where T_E is the melting point in the case of crystallization. Relation (2E.3) can be used close to T = T_E even if the condition of negligible temperature dependence of the latent heat is not true.

2.3 Ehrenfest Classification of Phase Transitions

Clearly, the concept of a phase transition is associated with some kind of singularity in the behavior of a system. Therefore, a phase boundary is a locus of bifurcations. There are different classification systems for phase transitions; the most popular one is due to P. Ehrenfest. It is based on the discontinuity of the appropriate thermodynamic potential—G, F, or S—with respect to the appropriate thermodynamic variable, P, T, V, or E. According to this classification, if at the transition point (phase boundary) the first derivative(s) of the thermodynamic potential with respect to its variable(s) experience a discontinuity, such a transition is called first order. If the first derivatives are continuous but a second one experiences a discontinuity, the transition is called second order. Using the definition of a jump quantity (2.4), the Ehrenfest classification may be expanded further: a transition is called nth order if the nth derivative is the first one to have a nonzero jump at the transition point. Then, for a transition to be of the second order, the latent heat and transformation volume must vanish, but the jumps of the specific heat and/or isothermal compressibility cannot both be zero in the same system. Notice that for a transition to be of the first order, the jumps of the specific heat and isothermal compressibility can be zero at the same time; see Example 2.1. Also notice that the derivatives of the thermodynamic potentials must be taken with respect to the thermodynamic variables, not the hidden ones. The Ehrenfest classification did not prove particularly fruitful for transitions of order higher than the first. Another classification, which is due to M. Fisher, divides all transitions into two categories: first order and continuous, the latter being closely associated with critical phenomena. However, in this book, the Ehrenfest classification will be used due to its convenience.


2.4 Phase Coexistence and Gibbsian Description of an Interface

The old and new phases do not need to alternate; they can coexist. An important comment about the condition of phase coexistence needs to be made because it may not be identical to the conditions of phase equilibrium (2.1)–(2.3). Indeed, two phases of a substance that undergoes a first-order transition may coexist, e.g., liquid and solid may coexist under the conditions of a closed system. However, two phases of a substance that undergoes a second-order transition, e.g., para- and ferromagnetic, cannot coexist at equilibrium under any conditions. In the case of coexistence, we are dealing with a heterogeneous system, which consists of at least two bulk phases separated by a transition region called an interface. Although the phase-separating interface is a part of the greater heterogeneous system, which includes the adjoining bulk phases, it can be considered as an independent system (but not a phase) characterized by extensive and intensive quantities, which differ from those of the bulk phases. Examples of the former are volume, mass, energy, etc.; examples of the latter are mass and energy densities, temperature, pressure, and chemical potential. An illuminating approach to a heterogeneous system with a physical interface, which is due to Gibbs, is to consider the interface as a system of zero volume, a dividing surface.⁴ Then, the densities of physical quantities lose their physical sense, and the extensive quantity that characterizes the interface becomes its area. An important intensive (excess) quantity, which arises within such an approach, is the interfacial (surface) tension σ. It is defined as the excess energy per unit area of the interface compared to the two-phase heterogeneous system without the interface, or as the force which tends to minimize the area of the interface.
The interfacial tension is the most important equilibrium property of an interface, although there are others, which characterize complex relations between the interface and the contiguous phases, e.g., its anisotropy. The conditions of equilibrium for a flat interface are the same as those without it, that is, the equalities of the pressure, temperature, and chemical potential across the system, Eqs. (2.1)–(2.3). For a curved interface, the condition of mechanical equilibrium is more complicated:

$$P_{\text{phase }\beta} = P_{\text{phase }\alpha} + 2\sigma K \tag{2.11}$$

where K is the curvature of the interface. This is the celebrated Laplace pressure equation for a spherical bubble at equilibrium.

⁴ An interface does not need to be flat.
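The order of magnitude of the Laplace pressure (2.11) is easy to get a feel for numerically. Below is a minimal sketch with hypothetical values (a water/air interface; the value of σ is an illustrative assumption, not data from the text):

```python
# Laplace pressure (2.11) across a curved spherical interface.
# sigma is an assumed illustrative value for a water/air interface.
sigma = 0.072          # interfacial tension [N/m]
for R in (1e-3, 1e-6, 1e-9):       # droplet radius [m]
    K = 1.0 / R                    # curvature of a sphere
    dP = 2.0 * sigma * K           # pressure excess inside the droplet [Pa]
    print(f"R = {R:.0e} m: 2*sigma*K = {dP:.3g} Pa")
```

Note how the pressure excess becomes enormous at nanometer scales; this is the physical origin of the curvature effects in Example 2.3 below.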


Example 2.3 The Gibbs-Thomson Equation Find the equilibrium temperature of two phases separated by a curved interface.
Consider a semispherical bump on a flat phase-separating interface. Let us use the Clapeyron-Clausius Eq. (2.8) to find the difference in the equilibrium temperatures between the two surfaces. Then:

$$\frac{dT_E}{T_E} = \frac{[V]_E}{L_E}\,dP = -\frac{1}{L}\,d(2\sigma K) \tag{2E.4}$$

where L = LE/[V]E is the latent heat per unit transformed volume. Integrating this differential equation from K = 0 (flat interface) to K = 1/R (semispherical bump), we obtain the Gibbs-Thomson equation:

$$T_E(K) = T_E(0)\,e^{-2\sigma K/L} \approx T_E(0)\left(1 - \frac{2\sigma K}{L}\right) \tag{2E.5}$$

where the second, approximate equality holds for small curvature, K ≪ L/σ.
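A quick numeric sketch of (2E.5) follows, comparing the exact exponential form with its small-curvature linearization. The function name and all parameter values are illustrative assumptions of the order of those for a metallic melt, not data from the text:

```python
import math

def gibbs_thomson(T_E0, sigma, L, K):
    """Equilibrium temperature of a curved interface, Eq. (2E.5).

    T_E0  : melting point of the flat interface [K]
    sigma : interfacial tension [J/m^2]
    L     : latent heat per unit transformed volume [J/m^3]
    K     : interface curvature [1/m]
    """
    exact = T_E0 * math.exp(-2.0 * sigma * K / L)
    linear = T_E0 * (1.0 - 2.0 * sigma * K / L)  # valid for K << L/sigma
    return exact, linear

# Hypothetical values of the order of those for a metallic melt
T_E0, sigma, L = 1000.0, 0.2, 1.0e9   # K, J/m^2, J/m^3
for R in (1e-6, 1e-8):                # bump radius [m], so K = 1/R
    exact, linear = gibbs_thomson(T_E0, sigma, L, 1.0 / R)
    print(f"R = {R:.0e} m: exact {exact:.3f} K, linearized {linear:.3f} K")
```

For micron-sized bumps the two forms agree to a fraction of a millikelvin; only for nanometer curvatures does the linearization start to deviate noticeably.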

Exercises

1. From the following list of thermodynamic systems, select the ones which are examples of a phase: (a) ice water, that is, ice cubes in a glass of water; (b) a bar magnet, that is, a ferromagnetic alloy broken into magnetic domains; (c) a solution of a magnetic salt in a strong magnetic field.
2. Assuming that the latent heat and transformation volume between the liquid and solid phases of a pure substance are not functions of temperature or pressure, derive the melting point versus pressure relationship. Hint: use the results of Example 2.1. Give examples of metals which obey and disobey this relationship.
3. During the equilibrium solidification of liquid bismuth at 1 atm, the extraction of heat is stopped when 50% of the metal is solidified. What will happen if you now try to increase the pressure while holding the temperature constant?
4. At atmospheric pressure, the latent heat of water is 79.7 cal/g, the specific volume of ice is 1.090 cm³/g, and that of water is 1.000 cm³/g (by definition). (a) How much pressure do you need to apply to decrease the melting temperature of water by 1 °C? (b) What is the melting temperature on top of Mount Everest?
5. Verify that the specific heat at constant pressure and the isothermal compressibility of the phases in Example 2.1 are equal and independent of temperature and pressure.
6. Verify the Clapeyron-Clausius relation (2.8) for the system in Example 2.1.
7. Determine the domain of applicability of (2E.3) in Example 2.2 for a system with significant temperature dependence of the latent heat.
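The Clapeyron-Clausius slope for the water/ice system makes a handy order-of-magnitude check of Exercise 4. The sketch below uses only the data quoted in the exercise; the unit conversions are the sole additions:

```python
# Order-of-magnitude check of the Clapeyron-Clausius relation (2.8) for
# water/ice, using the data of Exercise 4 (a sketch, not a full solution).
CAL = 4.186            # J per calorie
ATM = 101325.0         # Pa per atmosphere

L = 79.7 * CAL         # latent heat of melting [J/g]
V_water = 1.000e-6     # specific volume of water [m^3/g]
V_ice = 1.090e-6       # specific volume of ice [m^3/g]
T_m = 273.15           # melting point at 1 atm [K]

# dP/dT = L / (T [V]), with [V] = V_water - V_ice < 0 for water
dPdT = L / (T_m * (V_water - V_ice))
dP_for_1K = -dPdT * 1.0       # pressure increase that lowers T_m by 1 K
print(f"dP/dT = {dPdT / ATM:.0f} atm/K")
print(f"About {dP_for_1K / ATM:.0f} atm lowers the melting point by 1 degree")
```

The negative slope (about -134 atm/K) reflects the anomalous sign of [V] for water: pressure lowers the melting point.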

Reference 1. L.D. Landau, E.M. Lifshitz, Statistical Physics (Elsevier, North Holland, Amsterdam, 1967)

Chapter 3

Examples of Phase Transitions

In this chapter we look at various examples of prominent phase transitions, which are essential for materials' properties. In Part II, they will help us to define important hidden parameters to characterize and quantify the systems and transitions. Special attention is paid to the magnetic transition because it will be considered in detail in Sect. 8.7.

3.1 Crystallization (Freezing)-Melting

Our experience tells us that if cooled or compressed sufficiently, water turns into a solid. The same is true of practically all liquids, including simple ones like liquid metals. In the process of changing from liquid to solid, called crystallization, properties of matter like density, specific heat, electrical resistivity, etc. do not change significantly. This tells us that the nature and magnitude of the interatomic/intermolecular forces do not change in crystallization. The most significant difference between a solid and a liquid is that a solid is rigid, that is, it does not flow easily. The rigidity of solids is due to the periodic arrangement of the atoms or molecules into a crystalline lattice, which is characterized by long-range order: atoms in solids have their preferred positions, that is, a solid is less symmetric than a liquid. In other words, a solid compared to a liquid is a state of broken symmetry. It is interesting to note that solids and liquids also possess a certain degree of short-range order, which characterizes the local atomic arrangements there. The short-range order does not change much in crystallization or melting. According to the definition introduced above, the solid and liquid states are homogeneous phases of matter. From the thermodynamic standpoint, freezing/melting is a thermally driven phase transition, which takes place at a particular temperature Tm called the melting point. It is accompanied by a discontinuous change in volume, [V] = VL - VS, which is usually positive (water and a few semimetals are exceptions), and by a latent heat, L = HL - HS, which is always positive. According to the Ehrenfest classification, the discontinuities in V and H, both of which relate to the first derivatives of the free energy, are the signatures of a first-order transition; see Sect. 2.3. If the system is away from the melting point, the transition is driven by the driving force; see Sect. 2.2:

$$[G] = L_m \frac{T - T_m}{T_m} \tag{2E.3}$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_3

Freezing of simple liquids is largely driven by entropic factors. We can see that by looking at a bunch of hard spheres as a model system. All spontaneous phase transformations in isolated systems must result in an increase of entropy. In other words, at a certain energy, the crystalline phase must have a higher entropy than the liquid phase. However, no laws of physics are violated here. Sure enough, the configurational entropy of the crystalline phase is lower than that of the liquid phase. "We can obtain a rough estimate of the difference in configurational entropy between the two phases by treating the fluid as a system of non-interacting particles moving freely in a volume V and the solid as a system of localized (and hence distinguishable) particles in which each particle is confined by its neighbors to a region of order V/N around its lattice site. It turns out that the configurational entropy per particle of the solid lies below that of the fluid by an amount equal to kB" [1]. However, this loss in configurational entropy is more than offset by the increase in translational entropy on freezing. As the particles order, their excluded volume decreases, and hence they can explore a larger fraction of the volume without overlapping with other particles; this causes the translational entropy to increase. The opposite transition of melting is of great interest for us also. Notice that there is an experimental asymmetry between melting and freezing: supercooling of a liquid is commonplace, while superheating of a crystal is next to impossible. Thermodynamically, the asymmetry is explained by the existence of a nucleation barrier for freezing and the absence of such a barrier for melting of solids due to their defect structures. Kinetically, when a solid forms, the release of the latent heat raises its temperature and the temperature of the surrounding liquid in a process called recalescence.
Obviously, the symmetric process of lowering the temperature due to absorption of the latent heat takes place during melting. However, the process of dissipation of free energy breaks the symmetry: it increases the amount of heat released in freezing (enhances recalescence) but decreases the absorption of heat in melting; both processes retard the transformation; see Sect. 13.2. Also notice that the phenomenological Lindemann criterion of melting is surprisingly effective: it says that solids melt when the root-mean-square (RMS) fluctuations of the constituent particles exceed 15 ÷ 20% of the interparticle separation, although it completely ignores the properties of liquids. As we discussed in Sect. 2.4, the solid and liquid phases do not need to alternate; they can coexist separated by an interface.

3.2 Martensitic Transition

Although many transformations can take place in the solid state, few of them are more important for the field of metallurgy than the martensitic transformation. Martensitic transformation is defined as a polymorphic phase transition in solids that involves a regular reconstruction of the crystalline lattice in which the atoms do not interchange places and are displaced only a fraction of an interatomic spacing [2]. The most studied martensitic reaction is that found in ferrous alloys; however, nickel- and copper-based alloys also display this type of transformation. The key features of the martensitic transformation are the following.

A. It is a diffusionless transformation in the sense that each crystal of the parent phase transforms into a martensitic crystal (product phase) of the same chemical composition. This means that no long-range diffusion is required; however, some short-range reshuffling may be needed.
B. Martensitic transformation is accomplished by the shearing of discrete volumes of material, which usually results in a platelike morphology with an orderly distribution of the plates with a small ratio of thickness to other linear dimensions in a structural element.
C. In a martensitic transformation, there is a definite orientation relation between the parent and product phases. Plates of the parent and product crystals lie along a particular type of plane in the matrix called the habit plane. It is usually possible to find a few crystallographically equivalent variants of the orientation relation resulting from martensitic transformations; see Fig. 3.1.

Fig. 3.1 Parent phase (gray cube) and crystallographically equivalent variants (blue, red, and black parallelepipeds) of the product (martensitic) phase

Fig. 3.2 Electrical resistance changes during the cooling and heating of an Fe-Ni alloy [2], illustrating the hysteresis between the martensite reaction on cooling and the reverse transformation on heating

D. Martensitic transformation is very rapid. A cooperative movement of many atoms in a crystal makes the parent phase mechanically unstable. The transformation proceeds with the velocity of sound waves and is practically temperature independent.
E. The martensitic transformation is a strongly temperature-dependent reaction. On cooling, martensite starts forming spontaneously at a particular temperature Ms; the amount of martensite formed depends on how far the alloy is cooled below Ms; no more martensite forms below the temperature Mf < Ms; see Fig. 3.2.
F. Martensitic transformation is "reversible" in the sense that the parent phase will be recovered on heating. The reverse reaction begins at a temperature above Ms and ends at an even higher temperature; if the parent phase is austenite, these temperatures are called As and Af. The parent and product phases can be repeatedly obtained on heating and cooling (shape memory alloys). The reversibility of martensitic transformation is accompanied by temperature hysteresis.
G. Elastic deformation above Ms may result in the formation of the martensitic phase at a temperature too high for the spontaneous transformation. The highest temperature at which martensite may be formed under applied stress is called Md > Ms. In single crystals, the direction of the applied stress makes a difference. Deformation of martensite also allows the reverse reaction to start at a temperature Ad, which is lower than As.
H. Application of plastic deformation at any temperature in the transformation range (Ms, Mf) increases the amount of the product phase and may even complete the martensitic reaction.

3.3 Magnetic Transitions

Being placed in an external magnetic field H, many substances become magnetized in the direction of the applied field, and the induced magnetization

$$M = -\left(\frac{\partial G}{\partial H}\right)_T, \tag{3.1}$$

which is the magnetic moment per unit volume of the substance, increases. If the field is reduced to zero, the induced magnetization M also decreases to zero. This phenomenon is called paramagnetism. It can be explained by the existence of permanent magnetic dipole moments of atoms in the substance, which are randomly oriented and cancel each other out in the absence of the external field. If the sample is placed in the external magnetic field, the dipole moments tend to line up with the field, which gives the sample a net magnetization. However, there are materials which possess spontaneous (permanent) magnetization in the absence of the external field. Such materials are called ferromagnetic, and the phenomenon is called ferromagnetism. The reaction of a ferromagnetic sample to the application of the external field depends strongly on the initial state of the material and its temperature. If it is completely demagnetized initially, then, on application of a weak but growing external field, its magnetization increases slowly at first, then the increase accelerates and slows down again when the magnetization reaches its saturation value Msat in strong enough external magnetic fields; see Fig. 3.3. The magnetization curve is antisymmetric with respect to the applied field and can be characterized by the isothermal susceptibility:

Fig. 3.3 Magnetization and susceptibility versus applied external field for a ferromagnetic material

Fig. 3.4 Magnetic hysteresis with the magnetic saturation Msat, permanent magnetization Mper, and coercivity Hcoer

$$\chi \equiv \left(\frac{\partial M}{\partial H}\right)_T. \tag{3.2}$$

However, if at some point of the magnetization process the applied external field decreases, the magnetization curve does not retrace itself. Instead, it passes above the original one, and when the external field is reduced to zero, the permanent magnetization Mper remains in the ferromagnetic material. To demagnetize the sample completely, one has to apply an external field in the opposite direction; its magnitude is called the coercivity Hcoer. When the field reaches large negative values, the sample will be magnetized to (-Msat). If the field again rises to large positive values, the magnetization curve comes back to the value of Msat, having described a hysteresis loop; see Fig. 3.4; cf. Fig. 3.2. Physical properties of ferromagnetic materials depend strongly on their temperatures. They are characterized by the Curie temperature TC, that is, the temperature above which (T > TC) the permanent magnetization Mper vanishes and the material turns paramagnetic. Below the Curie temperature (T < TC), the permanent magnetization grows as the temperature decreases all the way to absolute zero. In the paramagnetic domain, that is, for T > TC, the dependence of the susceptibility on temperature is expressed by the Curie-Weiss law:

$$\chi_0 = \frac{C}{T - T_C}, \tag{3.3}$$

where the isothermal zero-field susceptibility is used:

$$\chi_0 \equiv \lim_{H \to 0} \chi. \tag{3.4}$$


In the ferromagnetic domain, that is, for T < TC, it is more difficult to measure the susceptibility because the substance can have finite magnetization even at H = 0. The coercivity Hcoer also decreases with increasing temperature and vanishes above the Curie point. The specific heat of a ferromagnetic material at constant field, CH, is greater than that at constant magnetization, CM:

$$\chi\,(C_H - C_M) = T\left(\frac{\partial M}{\partial T}\right)_H^2 \tag{3.5}$$

and has a singularity (sharp increase) at the Curie point. All these features of the thermal behavior of the ferromagnetic materials are schematically depicted in Fig. 3.5.

Fig. 3.5 Schematic depiction of the experimental temperature dependence of the magnetic properties of a ferromagnetic material: magnetization M, coercivity Hcoer, inverse susceptibility, and specific heat CH versus temperature T

The Weiss mean-field theory of ferromagnetism says that a ferromagnetic substance consists of atoms or molecules with nonzero magnetic moments, each of which experiences the orienting influence not only of the external field H but also of the combined effect of all other magnetic moments of the substance, which is proportional to the magnetization M. Then, the effect of the external field is replaced by that of the effective field:

$$H_{\rm eff} = H + \lambda M, \tag{3.6}$$

where the mean-field constant λ > 0 is due to the exchange interactions between the atoms (antisymmetric property of the electronic wave function). The orienting influence of the effective field is opposed by the thermal agitation, which grows

with temperature. Now consider a ferromagnet with a concentration N of magnetic atoms per unit volume. In the magnetic field Heff, such a system has the following energy levels:

$$U = -M H_{\rm eff} = -g m_J \mu_B H_{\rm eff} \tag{3.7}$$

where μB is the Bohr magneton, g is the electronic gyromagnetic factor, and mJ is the azimuthal quantum number. The latter is a projection of the electronic magnetic moment on the effective magnetic field Heff and has the values J, J-1, . . ., -J. The equilibrium populations of these levels can be calculated by the Boltzmann formula:

$$N(m_J) \propto e^{-U/k_B T} \tag{3.8}$$

with the normalization condition $N = \sum_{m_J=-J}^{J} N(m_J)$. Mostly, magnetization of ferromagnets is due to the spins of the atoms, not their orbital momenta. For electronic spins, J = ½; hence, there are two energy levels, g = 2, and the resultant magnetization of N atoms is M = μB{N(-½) - N(+½)}. Then, using (3.6)–(3.8), we arrive at the celebrated Curie-Weiss equation for magnetization:

$$M = N\mu_B \tanh\frac{\mu_B (H + \lambda M)}{k_B T}, \tag{3.9}$$

which describes many characteristic features of the ferromagnetic behavior of substances. For an in-depth analysis of this equation, we present it in the scaled form:

$$m = \tanh\frac{m + h}{t}, \tag{3.10}$$

where the scaled magnetization m, temperature t, and magnetic field h are as follows:

$$m = \frac{M}{N\mu_B};\quad t = \frac{k_B T}{\lambda N\mu_B^2};\quad h = \frac{H}{\lambda N\mu_B}. \tag{3.11}$$

As max tanh x = 1, we can see from (3.11) that the physical meaning of the scale factor Msat = NμB is the saturation magnetization, that is, the magnetization where all the spins are oriented in the same direction. To find the physical meaning of the scale factor for temperature, we solve (3.10) at h = 0 using the properties of tanh x:

$$-\tanh(-x) = \tanh x \approx \begin{cases} x - \dfrac{x^3}{3}, & \text{for } 0 < x \ll 1 \\[4pt] 1 - 2e^{-2x}, & \text{for } x \gg 1 \end{cases}$$


For t > 1, there is only one solution, m = 0; for t < 1, there are three solutions, two of which are the following:

$$m_{\rm per} \approx \pm\sqrt{3(1 - t)}, \quad \text{for } t \to 1 - 0. \tag{3.12}$$
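The zero-field root of (3.10) can be found by simple fixed-point iteration. Below is a minimal sketch (the solver, tolerance, and sample temperatures are illustrative choices, not the book's method) that also checks the asymptote (3.12):

```python
import math

def m_per(t, m0=0.9, tol=1e-12, itmax=10000):
    """Spontaneous (zero-field) magnetization from m = tanh(m/t), Eq. (3.10)."""
    m = m0
    for _ in range(itmax):
        m_new = math.tanh(m / t)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for t in (0.5, 0.9, 0.99):
    asymptote = math.sqrt(3.0 * (1.0 - t))   # Eq. (3.12), valid for t -> 1-0
    print(f"t = {t}: m_per = {m_per(t):.4f}, sqrt(3(1-t)) = {asymptote:.4f}")
```

Starting from a positive seed, the iteration converges to the positive branch of (3.12); the agreement with the asymptote improves as t approaches 1 from below.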

This can be interpreted as spontaneous magnetization of the substance in zero field at:

$$T < T_C \equiv \frac{\lambda N \mu_B^2}{k_B} \tag{3.13}$$

where TC is the Curie temperature. For (h = 0, t → 0), the permanent magnetization approaches its saturation value of msat = ±1. For the entire domain (h = 0, t < 1), the solution of (3.10) can be obtained numerically. It is shown in Fig. 3.6 and has a temperature dependence similar to the experimental one; see Fig. 3.5. For nonzero fields, (3.10) can be resolved as an implicit expression for the magnetization versus the field strength at various temperatures:

$$h = \frac{t}{2}\ln\frac{1 + m}{1 - m} - m. \tag{3.14}$$

This formula correctly describes ferro-paramagnetic magnetization in the external fields; see Fig. 3.7. To obtain the susceptibility of the material, we differentiate (3.14) to yield the following:

Fig. 3.6 Zero-field magnetization and susceptibility of the ferromagnetic state

Fig. 3.7 Magnetization versus the field strength at various temperatures: above (black), at (red), and below (blue) the Curie temperature

$$\frac{1}{\chi} \equiv \frac{dh}{dm} = \frac{t}{1 - m^2} - 1. \tag{3.15}$$

For (h → 0, t > 1), taking into account that m → 0, we obtain the Curie-Weiss law:

$$\frac{1}{\chi_0} = t - 1, \quad \text{for } t > 1. \tag{3.16}$$

Comparing (3.16) with (3.3), we find an expression for the Curie constant:

$$C = \frac{T_C}{\lambda} = \frac{N\mu_B^2}{k_B}. \tag{3.17}$$

Notice that this formula works for the paramagnetic materials also. For (h → 0, t < 1), using (3.12) in (3.15), we obtain:

$$\frac{1}{\chi_0} \approx 2(1 - t), \quad \text{for } t \to 1 - 0. \tag{3.18}$$
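The factor-of-two relation between (3.16) and (3.18) near the Curie point can be checked directly from (3.15). Below is a minimal sketch; the fixed-point solver and the parameter values are illustrative assumptions, not from the text:

```python
import math

def m_per(t, itmax=20000):
    """Zero-field magnetization from m = tanh(m/t), Eq. (3.10), for t < 1."""
    m = 0.9
    for _ in range(itmax):
        m = math.tanh(m / t)
    return m

def inv_chi0(t):
    """Inverse zero-field susceptibility 1/chi_0 from Eq. (3.15)."""
    m = 0.0 if t > 1.0 else m_per(t)
    return t / (1.0 - m * m) - 1.0

eps = 0.01
above = inv_chi0(1.0 + eps)   # Curie-Weiss law (3.16): equals t - 1 = eps
below = inv_chi0(1.0 - eps)   # Eq. (3.18): approximately 2(1 - t) = 2*eps
print(f"1/chi0 at t = 1 + eps: {above:.4f} (Curie-Weiss predicts {eps})")
print(f"1/chi0 at t = 1 - eps: {below:.4f} (about twice as large)")
```

At the same "distance" eps from the Curie point, the ferromagnetic side gives an inverse susceptibility close to twice the paramagnetic value, as (3.18) predicts.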

Compare (3.16) with (3.18), and notice that due to the preexisting magnetization in the ferromagnetic region the susceptibility for T < TC is exactly one-half of that for T > TC at the same “distance” from the Curie point. For the entire domain (h = 0, t < 1), the zero-field susceptibility can be obtained numerically from (3.14) and


Fig. 3.8 Susceptibility versus field strength above (black) and below (blue) the Curie temperature

(3.15); it is shown in Fig. 3.6. In Fig. 3.8 is depicted the nonzero-field susceptibility at temperatures above and below the Curie temperature. Compare it with Fig. 3.3, and notice that it has a field dependence similar to the experimental one, except for small fields. The latter is due to the domain structure of the ferromagnetic materials. For the specific heat, applying (3.5), we obtain the following:

$$c_0 - c_m \approx \begin{cases} 0, & \text{for } t > 1 \\ 3/2, & \text{for } t \to 1 - 0 \end{cases} \tag{3.19}$$

which is also observed experimentally; see Fig. 3.5. An important observation can be made from Fig. 3.7. For t ≤ 1, there is a region of coexistence of multiple (two or three) orientations of magnetization for the same applied external field. Equations (3.6) and (3.7) provide a hint on how to select the correct orientation observed experimentally. According to these equations, the magnetization, which has the same orientation as the external field and the largest magnitude, has the least amount of energy and, hence, is preferred.¹ Thus, the magnetization parallel to the applied field represents the stable branch (of solutions), while out of the two magnetizations in the opposite direction the greater one is the metastable branch and the smaller one is the unstable branch. The metastable branches terminate at the spinodal points, which correspond to the condition χ → ∞. Then, from (3.15), it follows that:

¹ In fact, the free energy should be used in (3.7), but the result is the same.

Fig. 3.9 Phase diagram of a ferromagnetic material in the (t, h) plane, with the ferro- and paramagnetic regions. CP: critical (Curie) point

$$m_{\rm spi} = \pm\sqrt{1 - t} \tag{3.20}$$

and from (3.14) and (3.20), we can find that:

$$h_{\rm spi} = \pm\left[\frac{t}{2}\ln\frac{1 - \sqrt{1 - t}}{1 + \sqrt{1 - t}} + \sqrt{1 - t}\right] \approx \pm\frac{2}{3}(1 - t)^{3/2}, \quad \text{for } t \to 1 - 0. \tag{3.21}$$
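A numeric check of (3.20) and (3.21) is straightforward: at the spinodal magnetization the slope (3.15) vanishes, and the spinodal field computed from (3.14) approaches the stated asymptote near the Curie point. A sketch (the sample temperature is an arbitrary illustrative choice):

```python
import math

def h_of_m(m, t):
    """External field versus magnetization, Eq. (3.14)."""
    return 0.5 * t * math.log((1.0 + m) / (1.0 - m)) - m

def spinodal(t):
    """Spinodal magnetization (3.20) and field (3.21) for t < 1."""
    m_spi = math.sqrt(1.0 - t)
    h_spi = h_of_m(-m_spi, t)    # field at which the metastable branch ends
    return m_spi, h_spi

t = 0.9
m_spi, h_spi = spinodal(t)
# dh/dm, Eq. (3.15), must vanish at the spinodal point:
dhdm = t / (1.0 - m_spi**2) - 1.0
asym = (2.0 / 3.0) * (1.0 - t)**1.5   # small-(1-t) form of Eq. (3.21)
print(f"dh/dm at m_spi: {dhdm:.2e}")
print(f"h_spi = {h_spi:.5f}, asymptote (2/3)(1-t)^(3/2) = {asym:.5f}")
```

The slope vanishes identically (t/(1 - (1 - t)) = 1), and already at t = 0.9 the exact spinodal field agrees with the asymptotic form to a few percent.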

In Fig. 3.9 are depicted the spinodal lines together with the boundary (h = 0, t < 1). According to the Ehrenfest classification (see Sect. 2.3), the boundary (h = 0, t < 1) is the line of the first-order phase transition points because the magnetization (the first derivative of the free energy with respect to the field, see (3.1)) experiences a jump when crossing this line. The point (h = 0, t = 1) is a critical point (lambda point, to be exact) of a second-order phase transition because crossing it in the (h = 0) direction the magnetization stays continuous, see (3.12), but the susceptibility (the second derivative of the free energy with respect to the field) experiences a jump; see (3.2), (3.16), and (3.18). The lines of the spinodal points “destroy” the ferromagnetic states and turn them into paramagnetic ones. Hence, these lines present boundaries between the weak (|h| < |hspi|) and strong (|h| > |hspi|) fields. Thus, Fig. 3.9 is a phase diagram of a ferromagnetic substance. Furthermore, the region of coexistence of opposite orientations between the spinodal lines on the phase diagram of Fig. 3.9 hints at the possibility for ferromagnetic substances to have domain structure, which consists of the domains of uniform magnetization separated by the interfaces where the magnetization changes orientation over a small distance. In this case the magnetization loop on the (h, m) diagram, Fig. 3.7, which consists of the stable and metastable branches and the vertical drops


at the spinodal points, is indicative of the hysteresis loop of the real materials. Then, as the lines of the spinodal points on the phase diagram terminate the coexistence region, the spinodal field (3.21) plays the role of the coercivity, and the scale factor of the applied field in (3.11) represents the zero-temperature coercivity of the material:

$$H_C = \lambda N \mu_B. \tag{3.22}$$

The dependence of the spinodal field hspi on the temperature t shown in Fig. 3.9 is qualitatively similar to the experimentally observed dependence of the coercivity on temperature in Fig. 3.5.

3.4 Ferroelectric Transition

A ferroelectric crystal exhibits an electric dipole moment even in the absence of an external electric field. In the ferroelectric state, the center of positive charge of the primitive cell does not coincide with the center of its negative charge. Since the dielectric behavior of these materials is in many respects analogous to the magnetic behavior of ferromagnetic materials, they are called ferroelectrics. The first solid recognized to exhibit ferroelectric properties was the sodium-potassium double salt of tartaric acid with the chemical formula NaKC4H4O6·4H2O. The salt was first prepared in 1672 by an apothecary, Pierre Seignette of La Rochelle, France; hence, the name Rochelle salt. It has the peculiar property of being ferroelectric only in the temperature region between -18 °C and 23 °C [3]. A necessary, but not sufficient, condition for a solid to be ferroelectric is the absence of a center of symmetry because a centrosymmetric crystal does not possess polar properties, that is, cannot spontaneously polarize. Based on the rotational symmetry of crystals, there are 21 classes in total which lack a center of symmetry. Of these 21 classes, 20 are piezoelectric, i.e., these crystals become polarized under the influence of external stress. Ten of the 20 piezoelectric classes exhibit pyroelectric properties, that is, spontaneous polarization. The value of the spontaneous polarization, however, depends on temperature. The ferroelectric materials are a part of the group of the pyroelectrics. However, they have the additional property that the polarization can be reversed by an application of the electric field. Thus, we can define a ferroelectric crystal as a pyroelectric crystal with reversible polarization. As mentioned above, ferroelectrics draw their name from ferromagnetics due to the similarity of their thermodynamic behavior (see Sect. 3.3). Indeed, a ferroelectric can be spontaneously polarized, that is, polarized in the absence of the external field.
The direction of the spontaneous polarization may be altered by application of an external electric field in the direction opposite to the polarization. In general, a ferroelectric crystal consists of a number of domains where the polarization has a specific direction, while it varies from one domain to another. Because of the domain structure, the ferroelectric polarization versus external field relationship displays hysteresis, which has an explanation similar to that of the ferromagnetic one; see


Fig. 3.4. That is, when an electric field is applied to the crystal, the domains with polarization components along the applied-field direction grow at the expense of the "antiparallel" domains; thus, the polarization increases. When all domains are aligned in the direction of the applied field, the polarization saturates, and the crystal becomes a single domain. A further increase in the polarization with increasing applied field may occur as a result of the regular crystal polarization effects. Rotation of the domain polarization vector may also take place if the external electric field does not coincide with one of the possible directions of spontaneous polarization. When the applied field is reduced, the polarization of the crystal decreases, but at zero applied field the remanent polarization remains. In order to remove the remanent polarization and make the polarization zero, a field in the opposite direction is required; it is called the coercive field. The ferroelectric properties depend on temperature. They disappear above a critical temperature, called the ferroelectric Curie temperature. The ferroelectric/non-ferroelectric transition at this temperature may be of the first or second order, where the former is characterized by the latent heat and the latter by the specific heat discontinuity. A theory similar to the Weiss mean-field theory of ferromagnetism (see Sect. 3.3) can be worked out for ferroelectricity. However, such a theory is less successful in describing the experimental effects because there are important differences between the ferroelectric and ferromagnetic phenomena. One of them is that the spontaneous polarization in the ferroelectric state is associated with a spontaneous electrostrictive strain in the crystal. At the transition temperature, a change in crystal structure is therefore observed.
This difference stems from the fact that magnetization of ferromagnets is due to the spins of the atoms while polarization of ferroelectrics is due to the ionic displacements in the primitive cells of the crystalline lattices.

Exercises

1. Derive Eq. (3.9).
2. Verify Eq. (3.21).

References

1. J.-P. Hansen, I.R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th edn. (Elsevier Academic Press, Oxford, 2009)
2. G.B. Olson, W.S. Owen, Martensite (ASM International, 1992)
3. F. Jona, G. Shirane, Ferroelectric Crystals (Dover, New York, 1993)

Chapter 4: Isothermal Kinetics of Phase Nucleation and Growth

So far, we have looked at the cases of thermodynamic equilibrium of phases. As we learned in Chap. 2, if the conditions in the system change (e.g., temperature drops), then a previously stable homogeneous state may become metastable (or even unstable) and transform into a stable one. In the transformation process, the initial parent phase is progressively replaced by a product phase. In his seminal treatise, Gibbs [1] identified two different scenarios of how a metastable (or unstable) phase may transform into a stable one. Both scenarios involve the system's response to infinitesimal changes induced by thermal fluctuations. In the first scenario, the critical role is played by fluctuations large in degree but small in extent; this scenario is called nucleation. There are good reasons to separate this scenario into three consecutive steps of nucleation, growth, and coarsening. The first two steps of this scenario are considered in this chapter, and the third step is the subject of Chap. 5. In the second scenario, the critical fluctuations are infinitesimal in degree but large in extent; this is called spinodal decomposition and will be considered in Chap. 6.

4.1 JMAK Theory of Nucleation and Growth

Many cases of transformations during crystallization can be described within the framework of the rather simple but very effective Johnson-Mehl-Avrami-Kolmogorov (JMAK) theory [2–4]. However, the theory works under certain conditions, which are seldom fully realized in experimental settings. The specific assumptions of the theory are the following.
• Transformation occurs by nucleation and growth mechanisms in an infinite medium, that is, the linear size of the system is much larger than the distance between the nuclei of the new phase.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_4


Fig. 4.1 Time evolution of the nucleation and growth of the particles of the new phase. (i), initial stage of free growth; (ii), later stage with impingement

• Nucleation occurs randomly and homogeneously over the entire untransformed portion of the material. Once appeared, a nucleus, which is a site of nucleation, cannot disappear. The initial size of a nucleus is zero.
• Growth is isotropic, that is, it occurs at the same rate in all directions, independent of the extent of transformation.
• Impingement of particles takes place when the growing centers compete for transformation of the same geometrical space. Once the old phase transforms into the new one, there are no further transformations in that region of space.

We will consider the process in the volume V of 3D space where V_α(β) is the volume of the old (new) phase in the system, so that V = V_α + V_β = const; see Fig. 4.1. We wish to obtain a relation which shows how the volume of the new phase changes with time. All equations of the theory will be derived with the help of the concept of extended volume V_β^e, which is the volume of the new phase that would form if the entire sample were still untransformed. Notice that although the actual transformed volume changes between zero and V, the extended volume may grow to infinity (Why?). The concept of extended volume will allow us to derive the equations without using the more complicated concept of the particle size distribution function (cf. Sect. 4.2.2). First, let us ask a question: what portion of the new extended volume dV_β^e will contribute to the actual new β-phase volume dV_β? One can see that this portion is equal to the fraction of untransformed material in the total volume, V_α/V, at that time, that is, dV_β/dV_β^e = V_α/V. Then:

dV_β = [(V − V_β)/V] dV_β^e    (4.1)

If the above reasoning did not convince you that this equation is correct, you can try the concept of virtual volume V_β^v = V_β^e − V_β, that is, the portion of the extended


Fig. 4.2 Time evolution of the nucleation and growth of the group of particles of the new phase, which nucleate during the same time interval between τ and τ + dτ (the sketch marks the nucleation rate j(τ), the growth rate v(t), and the current time t)

volume, which lies in the previously transformed material and, hence, is not actually contributing to the transformation process; see Fig. 4.1(ii). Then, ask the question: what is the ratio of the virtual and extended volumes? It is the same as the ratio of the volume of the new phase to the total volume of the system: dV_β^v/dV_β^e = V_β/V. And again you will arrive at (4.1). Equation (4.1) can be solved as follows:

V_β = V[1 − exp(−V_β^e/V)]    (4.2)

This equation shows that in the beginning of the transformation the extended volume is equal to the actual one, but at the end, the extended volume approaches infinity, while the actual one approaches the total volume of the system V. Now we need to find the rate of unobstructed nucleation and growth of the particles of the new phase, which is a simpler problem than the original one. It can be described by the rate of nucleation j(t) at time t, which is the number of new nuclei per unit volume and time, and the rate of linear growth v(t) of the β-phase nuclei surrounded by the untransformed material, Fig. 4.2. The number of new nuclei, which actually appear during (τ, τ + dτ), is dN = V_α j(τ)dτ. However, the number of the new nuclei, which would have appeared in the extended volume during the same time period, is greater: dN^e = Vj(τ)dτ. The particles of the extended volume of the new phase grow in the form of spheres of radius R with the isotropic uniform growth rate v ≡ dR/dt. At time t, the group of particles nucleated during (τ, τ + dτ) will contribute the following amount to the extended volume:

dV_β^e(τ, t) = (4π/3) [∫_τ^t v(τ′)dτ′]³ dN^e = (4π/3) Vj(τ) [∫_τ^t v(τ′)dτ′]³ dτ    (4.3)

where the integral in brackets is the radius of the particles at the current time t. To add the contributions of all groups of nuclei that appeared at all time moments, we integrate (4.3) between 0 and t and obtain the following expression:

V_β^e(t) = (4π/3) V ∫_0^t j(τ) [∫_τ^t v(τ′)dτ′]³ dτ.    (4.4)


Equations (4.2) and (4.4) solve the posed problem in a very general setting. Let us consider a special case closely related to experiments on crystallization. We assume that the nucleation occurs uniformly over the entire transformation time, that is, j is a constant (stationary nucleation), and that the growth rate v is also constant in time. Then, (4.4) can be readily integrated to yield the following:

V_β^e(t) = Kt⁴    (4.5)

where K is called the Avrami constant:

K = (π/3) jv³.    (4.5a)
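As a quick numerical sanity check (a small Python sketch, not part of the text), one can integrate (4.4) for constant j and v with a midpoint rule and compare the result with Kt⁴ of (4.5):

```python
# Numerical check: for constant nucleation rate j and growth rate v,
# the extended-volume integral (4.4) reduces to K*t**4 with K = (pi/3)*j*v**3.
from math import pi

def extended_volume_fraction(t, j, v, steps=4000):
    """Integrate (4.4)/V = (4*pi/3) * int_0^t j * (v*(t - tau))**3 dtau by the midpoint rule."""
    if t == 0.0:
        return 0.0
    dtau = t / steps
    total = 0.0
    for i in range(steps):
        tau = (i + 0.5) * dtau
        total += j * (v * (t - tau)) ** 3 * dtau
    return (4.0 * pi / 3.0) * total

j, v, t = 2.0, 0.5, 1.3          # arbitrary illustrative rates and time
numeric = extended_volume_fraction(t, j, v)
K = pi / 3.0 * j * v ** 3
print(numeric, K * t ** 4)       # the two values agree to high accuracy
```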

Substituting (4.5) into (4.2), we obtain the desired relation for the actual volume of the new phase:

V_β(t) = V[1 − exp(−Kt⁴)]    (4.6)

This function has a typical sigmoidal shape, see Fig. 4.3, easily recognizable on experimental plots of many transformations. It can help us identify some of the essential steps of the transformation process. (i) The initial slow rate is due to the time required to form a significant number of nuclei of the new phase. (ii) During the intermediate period, the transformation is rapid as the nuclei grow into particles and consume the old phase, while the new nuclei continue to form in the remaining

Fig. 4.3 Volume fraction of the transformed (new) phase V_β/V as a function of time t for a system with K = 1; the regions (i), (ii), and (iii) mark the initial, intermediate, and final stages of the transformation


parent phase. (iii) Once the transformation nears completion, there is little untransformed material left for nuclei to form in. The already existing particles begin to impinge on one another, forming regions where growth stops. Hence, production of the new phase slows down. Formula (4.6) can be applied to obtain quantitative information about the transformation process. Indeed, the Avrami constant in (4.5a) is a product of the nucleation rate and the cube of the growth rate. It can be obtained if the experimental data are plotted in the double logarithmic coordinates:

ln ln [1/(1 − V_β/V)] = ln K + 4 ln t    (4.6a)
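The fitting procedure implied by (4.6a) can be sketched in a few lines of Python (synthetic "experimental" data generated from (4.6) with a known K; a least-squares line in the double-logarithmic coordinates recovers the slope 4 and the intercept ln K):

```python
# Recovering the Avrami constant and exponent from transformation data using
# the double-logarithmic form (4.6a): ln ln[1/(1 - Vb/V)] = ln K + 4 ln t.
import math

K_true = 2.7
times = [0.1 * i for i in range(1, 15)]                       # sampling times
fractions = [1.0 - math.exp(-K_true * t ** 4) for t in times]  # JMAK curve (4.6)

xs = [math.log(t) for t in times]
ys = [math.log(math.log(1.0 / (1.0 - f))) for f in fractions]

# ordinary least-squares slope and intercept
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope, math.exp(intercept))   # recovers the Avrami exponent 4 and K = 2.7
```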

Further quantitative separation of the nucleation from the growth requires additional data. Now let us look at the constraints of the theory and think how we can relax them. First, the case when the nuclei grow not in the form of spheres but in the form of disks can be resolved easily and is left for self-study; see Exercise 2. Second, we can consider a case where the nucleation occurs instantaneously over the entire volume and the subsequent growth is time independent. Then, j = qδ(t) where δ(t) is Dirac's delta function and q determines the number of nuclei in a unit volume. Then:

V_β(t) = V[1 − exp(−Lt³)]; L = (π/3) qv³.    (4.7)

Third, the case where the growth rate is not isotropic can also be resolved by the JMAK theory. Indeed, in the case of anisotropic growth, its rate is a vector, which can be presented as a product v_n(t) = c(n)k(t) of the directional c(n) and temporal k(t) terms. Then, instead of (4.4), we obtain the following:

V_β^e(t) = (1/3) V ∮_S c³(n)dσ ∙ ∫_0^t j(τ) [∫_τ^t k(τ′)dτ′]³ dτ.    (4.8)

where integration in the first integral is over the surface of a unit sphere S. Fourth, in the JMAK theory, we assume that the nuclei have zero size, while in fact the sizes of the nuclei are finite. It is more complicated to account for this effect; it will be considered in Sect. 4.2. Fifth, the nucleation and growth processes considered by the JMAK theory are isothermal, that is, we assume that the latent heat of the transformation is either zero or dissipated instantly by the heat conduction in the material. Overall, JMAK is a local theory, which does not take into account long-range fields, e.g., temperature or concentration, which may affect the transformation. However, this obstacle can be overcome by considering the diffusion-controlled Stefan regime, when the growth rate follows a simple power-law relationship k(t) = k∙t^(−θ) where θ = ½ (see Exercise 3 and Sect. 7.1). Another pathway to overcome this limitation is described in the next section.
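For the Stefan regime just mentioned, inserting k(t) = k∙t^(−1/2) into (4.4) changes the time scaling of the extended volume; a short numerical check (a sketch, not from the text) shows that the effective Avrami exponent drops from 4 to 4 − 3θ = 5/2:

```python
# Avrami exponent for diffusion-controlled (Stefan) growth k(t) = k * t**(-1/2):
# plugging this growth law into (4.4) with constant j gives V_e ~ t**(4 - 3*theta).
import math

def extended_volume(t, j=1.0, k=1.0, steps=20000):
    """Integrate (4.4) with radius R(tau,t) = int_tau^t k*s**-0.5 ds = 2k(sqrt(t)-sqrt(tau))."""
    dtau = t / steps
    total = 0.0
    for i in range(steps):
        tau = (i + 0.5) * dtau
        radius = 2.0 * k * (math.sqrt(t) - math.sqrt(tau))
        total += j * radius ** 3 * dtau
    return (4.0 * math.pi / 3.0) * total

t1, t2 = 1.0, 4.0
exponent = math.log(extended_volume(t2) / extended_volume(t1)) / math.log(t2 / t1)
print(exponent)   # close to 2.5, i.e. the exponent 4 - 3*theta with theta = 1/2
```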

4.1.1 Theory of Thermally Activated Nucleation and Growth

Temperature dependence of the process of nucleation and growth can be considered in the framework of a naïve theory of reaction rates developed by Eyring et al. [5]. The basic assumptions of the theory are the following:
1. The reaction is characterized by some initial configuration of atoms or molecules, which proceeds to the final configuration by a continuous change of a coordinate, called the reaction coordinate.
2. All configurations may be characterized by the free energy such that the initial and final configurations correspond to the free energy minima, that is, to the stable equilibrium states.
3. There is a critical configuration intermediate between the initial and final states along the reaction coordinate. It corresponds to the free energy maximum, called the energy barrier; see Fig. 4.4. This configuration is called the activated state.
4. The activated state is in the state of partial equilibrium with the initial and final states.
5. The activated state may decompose into the final or initial state along the reaction coordinate path with a probability that depends on the free energies of the states.

According to these assumptions, the molecules surmount the free energy barrier independently of each other, and the rates of reaction from the initial α to the final β state and back are equal to the number of attempts to reach the activated state per unit time ν (frequency of the molecular vibration along the reaction path) multiplied by the probability of the fluctuation. If the latter is calculated by Einstein's relation and, for simplicity, the frequencies of molecular vibrations in the α and β states are assumed to be equal, then:

f_α→β = ν exp(−Δ_a g*/k_B T)    (4.9a)
f_β→α = ν exp[−(Δ_a g* + Δ_a g^βα)/k_B T]    (4.9b)

Fig. 4.4 Schematic depiction of the free energy variation along the reaction coordinate path in the theory of thermally activated nucleation and growth

where Δ_a g* is the height of the free energy barrier and Δ_a g^βα is the chemical potential difference between the states α and β, that is, the driving force of the transformation. Then, for the rate of nucleation from the α state, we obtain the following:

j = (N^(1)/V) f_α→β = (ν N^(1)/V) exp(−Δ_a g*/k_B T)    (4.10)

where N^(1) is the total number of molecules in the system, ~10²³. The difference of the two expressions in (4.9a, b) gives the net frequency at which the molecules transfer from α to β. Hence, the rate of advancement of a region of the stable state β surrounded by the metastable state α is the following:

v = a(f_α→β − f_β→α) = aν exp(−Δ_a g*/k_B T) [1 − exp(−Δ_a g^βα/k_B T)]    (4.11a)

where a is the molecular diameter. In the approximation of small driving force Δ_a g^βα, this relation takes the following form:

v ≈ aν exp(−Δ_a g*/k_B T) (Δ_a g^βα/k_B T)    (4.11b)

Then, for the Avrami constant, (4.5a), we obtain the following:

K = (π/3) (N^(1)/V) a³ f_α→β⁴ (1 − f_β→α/f_α→β)³
  = 2v₁ (N^(1)/V) ν⁴ exp(−4Δ_a g*/k_B T) [1 − exp(−Δ_a g^βα/k_B T)]³
  ≈ 2v₁ (N^(1)/V) ν⁴ exp(−4Δ_a g*/k_B T) (Δ_a g^βα/k_B T)³    (4.12)

where v₁ = πa³/6 is the molecular volume. According to Eyring et al. [5], the frequency of the molecular vibration may be found by equating its energy to the average energy of thermal fluctuations: hν = k_B T. At 300 K it is ν ≈ 10¹³ s⁻¹. Then, using the thermodynamic relation:

Fig. 4.5 The Avrami constant in the approximation of the theory of reaction rates (reduced Avrami constant K/K₀ versus reduced temperature T/T_m)

Δ_a g^βα = L_a (T_m − T)/T_m,    (2E.3)

where L_a is the latent heat per molecule and T_m is the melting point, in the approximation of small driving force, we obtain the following expression:

K = 2 (v₁N^(1)/V) (L_a³ k_B T/h⁴) exp(−4Δ_a g*/k_B T) [(T_m − T)/T_m]³    (4.12a)

In Fig. 4.5 this expression is presented in the reduced coordinates with the scaling factors:

K₀ ≡ 2 (v₁N^(1)/V) (L_a³ k_B T_m/h⁴),    (4.12b)
Ξ ≡ 4Δ_a g*/(k_B T_m).    (4.12c)

For a condensed matter system, N^(1) ≈ V/v₁ and L_a = L_M/N_A, where L_M is the latent heat per mole and N_A is the Avogadro number. Also, we may assume that the interface separating the α- and β-phases consists of a single row of molecules in the activated state. Then, for the interfacial energy, we obtain the following relation: σ = 4Δ_a g*/(πa²). Thus, the scaling factors can be expressed through measurable quantities as follows:

Fig. 4.6 Temperature-time-transformation diagram of a thermally activated system (reduced temperature T/T_m versus ln(time)). The numbers are the percentage of the transformed phase

K₀ = 2 (k_B/h⁴) (L_M/N_A)³ T_m; Ξ = πa²σ/(k_B T_m)    (4.12d)

Plotting (4.12a) in the double logarithmic coordinates of (4.6a), see Fig. 4.6, we obtain the TTT diagram of a thermally activated transformation:

ln ln [1/(1 − V_β/V)] − ln K₀ − ln [x(1 − x)³ exp(−Ξ/x)] = 4 ln t; x = T/T_m,    (4.13)

which has been experimentally confirmed in many systems.
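The characteristic C-shape of the TTT curve in Fig. 4.6 follows directly from (4.13): the time to reach a fixed transformed fraction diverges both near T_m (vanishing driving force) and near T = 0 (frozen kinetics). A small sketch, with illustrative reduced values of K₀ and Ξ (not taken from the book):

```python
# Sketch of the TTT "C-curve" implied by (4.13): time to reach a given
# transformed fraction versus reduced temperature x = T/Tm.
import math

K0, Xi = 1.0e5, 5.0        # assumed reduced scaling factors, cf. (4.12b,c)
fraction = 0.5             # 50% transformed

def time_to_transform(x):
    """Invert Vb/V = 1 - exp(-K t^4) with K = K0 * x * (1-x)**3 * exp(-Xi/x)."""
    K = K0 * x * (1.0 - x) ** 3 * math.exp(-Xi / x)
    return (math.log(1.0 / (1.0 - fraction)) / K) ** 0.25

xs = [0.1 * i for i in range(2, 10)]        # x = 0.2 ... 0.9
ts = [time_to_transform(x) for x in xs]
nose = xs[ts.index(min(ts))]
print(nose)   # the "nose" of the C-curve sits at an intermediate undercooling
```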

4.2 Classical Nucleation Theories

One of the limitations of the JMAK theory was the assumption that nuclei have zero size. In this section we consider the more advanced theories of nucleation, which overcome this limitation.

4.2.1 Gibbsian Theory of Capillarity

In Chap. 2 we established that if large pieces of phases are at equilibrium with each other, then the chemical potentials take on the same value in all phases; see (2.3). In the case of an open system of a monatomic (one-component) substance, where the two phases α and β are separated by a plane interface, the role of the chemical potential is played by the specific (per unit mass) Gibbs free energy:

g_α(P, T) = g_β(P, T)    (4.14)

If we need to calculate the total Gibbs free energy G of the entire two-phase system, a naïve resolution of the problem may be presented by the following formula:

G = g_α m_α + g_β m_β = g_α(β) M    (4.15)

where m_α(β) is the mass of the phase α(β) and M is the total mass:

M = m_α + m_β    (4.16)

However, there are two problems with this formula. First, we assume that the specific Gibbs free energies of the phases α and β at equilibrium are the same as those of the noninteracting homogeneous bulk phases α and β. When transformations affect finite amounts of matter, the thermodynamic calculations become more complicated because we have to take into account variations of pressure in the transformed region. Second, the two contiguous phases are separated by an interface, which makes a contribution to the total free energy of the system. In the classical thermodynamics of macroscopic objects, the interface between the phases represents a sheath of finite area S and zero thickness (M_int = 0, V_int = 0), which "wraps up" a phase and separates it from the contiguous ones [1, 6]. The free energy contribution of the interface is equal to the product of its area and the interfacial energy σ, which is the excess of the Gibbs free energy of the system compared to that of the homogeneous one, per unit area of the interface; see Sect. 2.4. To find the conditions of equilibrium of the minority phase β in the majority phase α, we present the total Gibbs free energy of the system as follows:

G(P, T) = F(V, T) + PV + σS    (4.17)

where P and T are the external pressure and temperature and F is the Helmholtz free energy of the system:

F(V, T) = m_α f_α(v_α, T) + m_β f_β(v_β, T)    (4.18)


Here v_α(β) is the specific volume (v = V/m), and f_α(β) is the specific Helmholtz free energy of the α(β)-phase, respectively. Because the interface is "massless," the α→β transformation does not increase or decrease the mass of the system; hence, M in (4.16) remains unchanged. However, the total volume of the system:

V = V_α + V_β = m_α v_α + m_β v_β    (4.19)

may change because v_α ≠ v_β. For a system to be at equilibrium, its total Gibbs free energy must reach a minimum with respect to all independent internal variables, which are (v_α, v_β, m_α, m_β) for the system of Eqs. (4.17)–(4.19).¹ However, due to the condition (4.16), not all of them are independent. We can choose (v_α, v_β, m_β) as independent. Then, the following partial derivatives of G are equal to zero:

∂G/∂v_α = m_α (∂f_α/∂v_α + P) = 0    (4.20a)
∂G/∂v_β = m_β (∂f_β/∂v_β + P) + σ ∂S/∂v_β = 0    (4.20b)
∂G/∂m_β = f_β(v_β, T) + Pv_β + σ ∂S/∂m_β − f_α(v_α, T) − Pv_α = 0    (4.20c)

Notice that out of the three variations of G with respect to the independent variables, only the one with respect to m_β is entirely due to the phase transition; the other two are due to the pressure changes in the respective phases. Minimization of the total free energy also includes minimization of the interfacial energy contribution. In an isotropic system, this leads to the minimization of the surface area for constant volume of the nucleus.² As is known, the solution of this problem is a sphere. Hence, the nucleus of the minority phase takes on the shape of a sphere of radius R with the following characteristics:

S = 4πR²; V_β = (4π/3) R³    (4.21)

Using (4.17) and (4.18) and the Legendre transform, see (F.16–18):

g(P) = f(v) + Pv; df/dv = −P; dg/dP = v    (4.22)

we obtain from (4.20a) that the pressure in the majority phase is equal to the external pressure:

¹ Do not confuse these variables with (P, T), which are external variables of the system.
² We may think of the shape of the nucleus as an independent variable.


P_α = P    (4.23)

Then, taking into account that V_β = m_β v_β, from (4.20b), (4.21), and (4.23), we obtain the celebrated Laplace pressure equation for a spherical bubble, cf. (2.11):

P_β = P + 2σ/R    (4.24)

Furthermore, (4.20c) together with the Laplace Eq. (4.24) tells us that the specific Gibbs free energies of the majority and minority phases are equal at equilibrium:

g_β(P_β, T) = g_α(P_α, T),    (4.25)

which makes this quantity the chemical potential of the system. However, there is a difference between (4.25) and (4.14): in (4.25), the pressure in the two phases is not the same. Taking into account (4.22)–(4.24), differentiation of g_β with respect to the pressure in (4.25) yields a relation:

(2σ/R) v_β = g_α(P, T) − g_β(P, T)    (4.26)

which is called the Gibbs-Thompson effect. This relation allows us to find the radius of the critical nucleus:

R* = 2σ/Δg̃(P, T),    (4.27a)

that is, the nucleus of the β-phase which is at equilibrium with the "sea" of the α-phase. Here:

Δg̃(P, T) ≡ [g_α(P, T) − g_β(P, T)]/v_β    (4.27b)

is called supersaturation. Notice that R* > 0 only if gα(P, T ) > gβ(P, T ), that is, the minority phase is stable. In this case, the majority phase is said to be supersaturated (or supercooled). Now we can ask a question: how much work needs to be done on the homogeneous α-phase in order to create in it a critical nucleus of the β-phase? The answer provided by thermodynamics is that, although the specific amount of work depends on the process, the smallest amount equals the difference in the Gibbs free energies between the final and initial states:


Fig. 4.7 ΔG_n of a spherical nucleus of the minority phase in the "sea" of the unsaturated (Δg̃ < 0) or supersaturated (Δg̃ > 0) majority phase as a function of its radius R

ΔG_n = G(P, T; v_α, v_β, m_α, m_β) − Mg_α(P, T) = m_α g_α(P, T) + m_β g_β(P, T) + σS − (m_α + m_β) g_α(P, T) = σS(R) − Δg̃ V_β(R)    (4.28)

The last expression in (4.28) is plotted in Fig. 4.7 as a function of R for different values of the supersaturation Δg̃. In the unsaturated α-phase (Δg̃ < 0), ΔG_n is a monotonic function of R, meaning that the β-phase nuclei are not favored. In the supersaturated α-phase (Δg̃ > 0), ΔG_n→0 for R→0 and ΔG_n < 0 for R→∞, meaning that thermodynamically the system favors large nuclei of the β-phase. However, as σ > 0, the function ΔG_n(R) has a maximum at the critical radius R = R*:

dΔG_n/dR (R*) = 0    (4.29)

which means that although the critical nucleus is in the state of equilibrium with the "sea" of the α-phase, this is a state of unstable equilibrium. Using (4.21), (4.27a), and (4.28), we obtain the following:

ΔG* ≡ ΔG_n(R*) = (1/3) σS(R*) = (1/2) Δg̃ V_β(R*) = 16πσ³/(3Δg̃²)    (4.30)

This means that the favorable (R→∞, β-phase) and unfavorable (R = 0, α-phase) states of the system are separated by a "barrier" of the height ΔG*, which needs to be overcome for the transformation to happen. The second law of thermodynamics (see Sect. 1.1) tells us that the critical nucleus cannot appear in the system as a result of a natural process because the free energy of


its creation is positive. This led Gibbs to conclude that thermodynamic fluctuations are needed in order to account for the appearance of the critical nucleus in a supersaturated system.

Example 4.1 Crystallization of Ice Derive expressions for the radius of the critical nucleus and its Gibbs free energy excess for the process of crystallization of ice. Expressing the supersaturation through the latent heat per unit volume L and the melting point T_m, see (2E.3):

Δg̃ = L (T_m − T)/T_m    (2E.3)

we obtain the following:

R* = 2σT_m/[L(T_m − T)]    (4E.1)

and

ΔG* = 16πσ³T_m²/[3L²(T_m − T)²]    (4E.2)

4.2.2 Frenkel's Distribution of Heterophase Fluctuations in Unsaturated Vapor

In the previous section, we considered the Gibbsian theory of capillarity, which establishes the existence of a critical nucleus in the α→β transformation, which serves as a barrier for the transformation and can be formed by the thermal fluctuations in the system. Let us test this assumption here by considering an ensemble of particles of different sizes and calculating the number of the critical ones there. As an example of a transformation process, instead of crystallization, we choose condensation of vapor into the liquid phase and assume that clusters of molecules may appear in the vapor. Instead of its radius R, we will characterize a cluster by the number of its molecules n, which is related to R as follows:

n = 4πR³/(3v₁)    (4.31a)

where v₁ is the molecular volume. Then, the cluster has an excess of free energy compared to that of the monomers:


ΔG(n) = −nΔμ + σA(n)    (4.31b)

where:

A(n) = sn^(2/3)    (4.31c)

is the number of monomers on the surface of the cluster,

s = (36πv₁²)^(1/3)    (4.31d)

is the surface-to-volume ratio, and

Δμ ≡ μ_v − μ_l    (4.31e)

is the chemical potential difference between the vapor and liquid phases. For a crystallization process, we saw in Sect. 2.2 how one can express this quantity through the melting point temperature and the latent heat of the substance. For condensation, taking into account that μ_ideal = k_B T ln P + χ(T) [6] and considering the vapor as a slightly nonideal gas, it is more convenient to express the chemical potential difference between the phases through the pressures, the actual p_v and that of the saturation point p_sat:

Δμ(p_v, T) ≈ k_B T ln [p_v/p_sat(T)].    (4.31f)

The ensemble of clusters in the system of volume V and total number of molecules N̄ will be characterized by the number of clusters of size n at time t, N(n, t); see Fig. 4.8. The largest member of the ensemble is the group of monomers; its number N(1, t) is approximately equal to the total number of molecules in the system, ~10²³. The total number of clusters in the system is as follows:

Σ_{n=1}^∞ N(n, t) = N(t)    (4.32a)

Fig. 4.8 Heterophase fluctuations in unsaturated vapor

while the total number of molecules is as follows:

Σ_{n=1}^∞ n N(n, t) = N̄    (4.32b)

Notice a simple relation of this quantity to the total volume of the liquid V: N̄ = V/v₁. Another way to characterize the ensemble of clusters is by the probability distribution of clusters:

ρ(n, t) ≡ N(n, t)/N(t)    (4.33a)

so that:

Σ_{n=1}^∞ ρ(n, t) = 1.    (4.33b)

The quantity ρ is the number density, but not in the real three-dimensional space but in the one-dimensional cluster space specified by the quantity 0 < n < ∞ […]

Δμ(p_v, T) > 0    (4.38)

Differentiating (4.31b) with respect to the number of particles, we obtain that for the cluster of the critical size the number of monomers is as follows:

Fig. 4.11 Free energy excess ΔG/k_B T of clusters in supersaturated vapor with σs = 10k_B T and Δμ = 2k_B T, as a function of the cluster size n (the inset magnifies the vicinity of the maximum ΔG*/k_B T at n*, where dissociation and growth compete)

n* = [2σs/(3Δμ)]³    (4.39a)

and the free energy excess is as follows:

ΔG* = (1/3) σs n*^(2/3) = (1/2) Δμ n* = (σs/3)³/(Δμ/2)²    (4.39b)

Cf. (4.30). Notice that the chemical potential difference between the vapor and liquid phases Δμ is twice the excess free energy per particle in the cluster, and the interfacial energy σ is three times the excess free energy per particle on the surface of the cluster. Also notice that the excess free energy of any cluster can be conveniently expressed as follows; see Fig. 4.11:

ΔG(n) = ΔG* [3(n/n*)^(2/3) − 2(n/n*)]    (4.39c)

Strictly speaking, these equations are correct only in the cases where the critical clusters contain a large number of monomers, n* ≫ 1, because the right-hand side of (4.39a) may not be an integer. Before considering the state of a system that leads to experimentally observed condensation of vapor into liquid, we will consider a state, sometimes called the state of constrained equilibrium, which may set in the vapor for a short, transient period


of time after the saturation has been switched from the negative value of (4.34) to the positive value of (4.38). After the supersaturation is established in the vapor, the supercritical clusters find themselves in advantageous conditions where their growth is thermodynamically motivated, while not much has changed for the subcritical clusters. In the state of constrained equilibrium, the cluster size distribution is governed by the same relations as in the case of the unconstrained equilibrium in unsaturated vapor, (4.35a) or (4.35b), and the rates of attachment and detachment by (4.36) and (4.37a, b, c), although now Δμ(p_v, T) > 0. In addition to (4.37c), another relation, which clearly expresses the essence of the critical cluster, can be found between the rates of attachment and detachment:

b(n) > f(n) for n < n*
b(n) = f(n) for n = n*    (4.40)
b(n) < f(n) for n > n*

Example 4.2 The State of Constrained Equilibrium Verify (4.40) for the state of constrained equilibrium. Another form of the condition of the detailed balance, (4.36), is the following:

f(n − 1) N_eq(n − 1) = b(n) N_eq(n)    (4E.3)

Dividing (4E.3) by (4.36), we obtain the following:

[b(n)/f(n)]² ≈ b(n)b(n + 1)/[f(n − 1)f(n)] = N_eq(n − 1)/N_eq(n + 1) = exp{[ΔG(n + 1) − ΔG(n − 1)]/k_B T}
  > 1 for n < n*
  = 1 for n = n*    (4E.4)
  < 1 for n > n* ∎
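The sign pattern of (4E.4) can be checked directly with the reduced cluster model of Fig. 4.11 (σs = 10 k_B T, Δμ = 2 k_B T; the attachment rate f(n) ∝ n^(2/3), proportional to the cluster surface, is an assumed kinetic model, not prescribed by the text):

```python
# Numerical check of (4E.4): with N_eq(n) ~ exp(-dG(n)) (energies in k_B*T
# units) and b(n) defined through detailed balance, the ratio
# b(n)*b(n+1) / (f(n-1)*f(n)) crosses 1 at the critical size n*.
import math

sigma_s, dmu = 10.0, 2.0
dG = lambda n: -n * dmu + sigma_s * n ** (2.0 / 3.0)
f = lambda n: n ** (2.0 / 3.0)                 # attachment ~ surface area (assumed)
Neq = lambda n: math.exp(-dG(n))
b = lambda n: f(n - 1) * Neq(n - 1) / Neq(n)   # detailed balance, (4E.3)

def ratio(n):
    return b(n) * b(n + 1) / (f(n - 1) * f(n))

n_star = (2.0 * sigma_s / (3.0 * dmu)) ** 3    # ~37, from (4.39a)
print(ratio(20), ratio(37), ratio(60))
# above 1 for n < n*, near 1 at n ~ n*, below 1 for n > n*
```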

Furthermore, examine Fig. 4.9, and notice that ρ_con(n) is practically identical with ρ_eq(n) for n ≪ n*, has a minimum at n*, and rises exponentially for n > n* (Why?). To preserve the physical relevance of the constrained equilibrium state, we, after Volmer [7], assume that there exists a size n̂ > n* such that:

N_con(n ≥ n̂) = 0    (4.41)

As we stated above, the state of constrained equilibrium may exist for a short period of time after the supersaturation has been set in the vapor. Although physically feasible, such a state does not describe the process of nucleation because it does not produce a population of large clusters of the new phase; see (4.41). Now we will consider the nonequilibrium state of supersaturated vapor, which may condense into bulk liquid. In order to do that, let us ask the question: what are the processes that can change the number N(n, t) of the clusters of size n in the ensemble? According to the


rules of the model, a cluster of that size can be created either by attachment of a monomer to the (n − 1)-cluster or by detachment of a monomer from the (n + 1)-cluster; see Fig. 4.10b. At the same time, the n-clusters can be destroyed by adding or losing a monomer. Other processes of cluster growth or decay are prohibited by the rules of the model. Taking into account the rates of attachment and detachment, adding up all the contributions, and approximating a difference with a differential, we obtain an equation:

∂N(n, t)/∂t = f(n − 1)N(n − 1, t) − b(n)N(n, t) − f(n)N(n, t) + b(n + 1)N(n + 1, t).    (4.42)

Mathematically speaking, our model represents a Markov process where (4.42) is called the Kolmogorov or master equation. It is a gain-loss equation for the number of n-clusters where the first and fourth terms are the gain due to transitions from other states and the second and third terms are the loss due to transitions into other states. A very effective way of analyzing the master equation is by introducing a new quantity:

J(n, t) = f(n)N(n, t) − b(n + 1)N(n + 1, t),    (4.43)

which is called the net rate. It is equal to the excess of the number of clusters that transfer per unit time in the entire volume V from size n to size (n + 1) over the number of clusters that in the same time period and volume transfer back from size (n + 1) to size n. With the help of the net rate, the master equation can be written as follows:

∂N(n, t)/∂t = J(n − 1, t) − J(n, t).    (4.44)

The set of Eqs. (4.43) and (4.44) for clusters of all sizes was proposed by Becker and Döring [8]. In a way, it represents the continuity equation for a flux of clusters in the 1D space of cluster sizes; see Fig. 4.10b. It should be supplemented with the boundary conditions on both ends of the space: N ðn, t Þ → N eq ðnÞ for n → 1

ð4:45aÞ

for n ≥ n

ð4:45bÞ

N ðn, t Þ = 0

The first boundary condition says that the population of small clusters, being “so far away” from the critical one, equilibrates very quickly to the constrained equilibrium distribution. The physical meaning of the second (Volmer) boundary condition for the nonequilibrium state, cf. (4.41), was explained by Szilard: he imagined that every cluster that succeeds in growing to size n̂ is removed from the ensemble, broken down into n̂ monomers, and returned into the ensemble.
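The finite Becker-Döring set (4.42)-(4.45b) can be integrated directly. The sketch below is a minimal illustration, not the book's calculation: it assumes a reduced free-energy excess ΔG(n)/k_BT = −θn + Σn^(2/3) with the hypothetical values Σ = 10 and θ = 2, an attachment rate f(n) ∝ n^(2/3) with a unit prefactor, detachment rates fixed by detailed balance (4.36), and a hypothetical Szilard cutoff size:

```python
import numpy as np

SIGMA, THETA = 10.0, 2.0            # hypothetical reduced parameters (cf. Example 4.3)
NMAX = 120                          # hypothetical Szilard cutoff, well above n* ~ 37

n = np.arange(1, NMAX + 1, dtype=float)
dG = -THETA * n + SIGMA * n ** (2.0 / 3.0)   # free-energy excess / kT, cf. (4.39c)
Neq = np.exp(-dG)                            # constrained-equilibrium numbers per monomer
f = n ** (2.0 / 3.0)                         # attachment rate, proportional to cluster area
b = np.zeros_like(f)
b[1:] = f[:-1] * Neq[:-1] / Neq[1:]          # detailed balance: b(n+1)Neq(n+1) = f(n)Neq(n)

N = np.zeros(NMAX)
N[0] = Neq[0]                                # start from monomers only
dt = 2e-3
for _ in range(200000):
    J = f * N                                # net rate (4.43): J(n) = f(n)N(n) - b(n+1)N(n+1)
    J[:-1] -= b[1:] * N[1:]
    N[1:] += dt * (J[:-1] - J[1:])           # gain-loss master equation (4.42)/(4.44)
    N[0] = Neq[0]                            # boundary condition (4.45a) at n = 1
    N[-1] = 0.0                              # Szilard boundary condition (4.45b)

print(J[5:-5].std() / J[5:-5].mean())        # small: the net rate is uniform in n
```

After the transient stage, the net rate J(n, t) becomes nearly independent of both t and n, which is the quasi-stationary state analyzed next.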


First, let us study the properties of the net rate in the equilibrium state of unsaturated vapor and the state of constrained equilibrium in the supersaturated vapor. Due to the condition of detailed balance, (4.36), which applies to both cases, we have the following:

$$J_{eq}(n,t) = 0. \quad (4.46)$$

To solve the set of Becker-Döring equations, we will use the trick suggested by McDonald [9]. Let us divide all terms of the nonequilibrium flux Eq. (4.43) by the condition of the detailed balance for the constrained equilibrium state, (4.36):

$$\frac{J(n,t)}{f(n)N_{eq}(n)} = \frac{N(n,t)}{N_{eq}(n)} - \frac{N(n+1,t)}{N_{eq}(n+1)} \quad (4.47)$$

Two observations are in order here. First, notice that we introduced the characteristics of the constrained equilibrium state into the equation for the nonequilibrium flux. Second, notice that the right-hand side of (4.47) is a difference of two consecutive terms of similar structure. Using this observation, we can sum up Eq. (4.47) for all members of the ensemble and arrive at the relation:

$$\sum_{n=1}^{\hat n - 1} \frac{J(n,t)}{f(n)N_{eq}(n)} = \frac{N(1,t)}{N_{eq}(1)} - \frac{N(\hat n,t)}{N_{eq}(\hat n)} = 1 \quad (4.48)$$

where the last equality is obtained after application of the boundary conditions, (4.45a) and (4.45b). Physically, we can think of the process of condensation from a supersaturated vapor as consisting of the following stages. (1) After establishing the supersaturation in the vapor, there is a short period of time during which the cluster size distribution function rapidly grows (inflates), mainly in the domain of supercritical clusters, up to the level of the constrained equilibrium function. The flux of clusters during this stage is nearly zero. (2) The next, transient stage is the establishment of the nonequilibrium cluster size distribution function with a nonzero flux of clusters. It lasts for a time period called the induction time. (3) After that, the nonzero flux may exist for a longer period of time, during which we may speak of a quasi-stationary state with a constant flux of clusters. (4) Finally, the quasi-stationary state will be terminated when the supercritical clusters start depleting the population of monomers in the vapor. This stage may be described by the JMAK theory, which we considered in the previous section. In the quasi-stationary state of nucleation, the cluster size distribution function is independent of time. The flux J is not only independent of time but, due to (4.44), is also uniform, that is, independent of the size. Then, we obtain the following equation:

$$J \sum_{n=1}^{\hat n - 1} \frac{1}{f(n)\,N_{eq}(n)} = 1 \quad (4.49)$$

Two observations can be made from (4.49). First, one can see that the characteristics of the stationary cluster size distribution function N_st(n) do not appear in the flux equation. However, this function can be recovered from the definition of the flux, (4.43). Second, (4.49) specifies the flux J in terms of the known functions f(n) and N_eq(n). We can use the last observation for the calculation of the flux. Indeed, (4.49) shows that the flux depends on the finite sum of reciprocals of the products of f(n) and N_eq(n). We can estimate the sum by its largest term, which is the one that belongs to the cluster of the critical size n*. Then:

$$J = f(n^*)\,N_{eq}(n^*) \quad (4.50)$$
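How rough is the largest-term estimate (4.50)? The snippet below compares it with the full sum (4.49), using a reduced model with Σ = 10 and θ = 2 (the values of Example 4.3) and an assumed attachment rate f(n) ∝ n^(2/3) in arbitrary units, which cancel in the ratio:

```python
import numpy as np

SIGMA, THETA = 10.0, 2.0                     # reduced parameters: Sigma = sigma*s/kT, theta = dmu/kT
n = np.arange(1, 200, dtype=float)
dG = -THETA * n + SIGMA * n ** (2.0 / 3.0)   # free-energy excess / kT, cf. (4.39c)
Neq = np.exp(-dG)                            # constrained-equilibrium distribution per monomer
f = n ** (2.0 / 3.0)                         # attachment rate, arbitrary units

J_sum = 1.0 / (1.0 / (f * Neq)).sum()        # flux from the full sum, Eq. (4.49)
n_star = round((2 * SIGMA / (3 * THETA)) ** 3)   # critical size, ~37
J_top = f[n_star - 1] * Neq[n_star - 1]      # largest-term estimate, Eq. (4.50)
print(J_top / J_sum)                         # the single term overestimates the flux
```

The single-term estimate overshoots by a factor of roughly 20, which is about the inverse of the Zeldovich factor Z computed in Example 4.3; this motivates the more careful summation that follows.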

One may think that estimating a sum by its largest term is not a very accurate approximation. Another way of summing up the terms of the series in (4.49) is to use the method of numerical calculations. However, there is a clever analytical method called the Laplace method or the method of steepest descent, which was used by Zeldovich in his theory [10]. It consists of four short steps. First, notice that if n̂ ≫ 1, the sum in (4.49) can be replaced by an integral:

$$\sum_{n=1}^{\hat n - 1}(\circ) \simeq \int_1^{\hat n - 1}(\circ)\,dn.$$

Second, inspect Fig. 4.12, and notice that 1/N_eq(n) is very small everywhere except in the neighborhood of n*, where it passes through a sharp maximum. Thus, little error will be made by taking f(n) as a constant evaluated at n*. Third, the term 1/N_eq(n) can be expanded in a Taylor series about n*, where it passes through its maximum. To do that, we expand the free energy excess (4.39c):

Fig. 4.12 ρ_eq^{−1}(n) and f(n) for the same system as in Fig. 4.11

$$\Delta G(n) = \Delta G^* + \frac{\partial \Delta G}{\partial n}\Big|_{n^*}(n-n^*) + \frac{1}{2}\frac{\partial^2 \Delta G}{\partial n^2}\Big|_{n^*}(n-n^*)^2 + \cdots$$

and notice that ΔG′(n*) = 0 (Why?) and $\Delta G''(n^*) = -\,2\Delta G^*/(3 n^{*2}) = -\,2\sigma s/(9 n^{*4/3}) < 0$. For the quasi-stationary cluster size distribution function this yields:

$$\rho_{st}(n) = \tfrac{1}{2}\,\rho_{eq}(n)\,\operatorname{erfc}\!\left[\sqrt{\pi}\,Z\,(n-n^*)\right] \quad (4.59a)$$

$$j_{st} = \left[\int_0^{\infty} \frac{dn}{D(n)\,\rho_{eq}(n)}\right]^{-1} = Z\,D(n^*)\,\rho_{eq}(n^*). \quad (4.59b)$$

Not surprisingly, the stationary flux in the Zeldovich theory, (4.59b), is a continuous analog of the discrete expression, (4.51). The property ρ_st(n*) = ½ρ_eq(n*) in (4.59a) is apparent if we notice that the lower limit in (4.59b) can be replaced with (−∞) (Why?). As we can see from (4.59a and 4.59b), the stationary probability distribution function and the flux can be calculated if the diffusion coefficient D(n) is known. To find this coefficient, instead of the microscopic modeling, which was used in the Becker-Döring theory, see (4.37b), Zeldovich suggested matching the drift velocity in the configuration space to the macroscopic growth velocity of large spherical particles in the real space (the correspondence principle). To do that, we use (4.39c) in (4.57a) and obtain an expression for the drift velocity:

$$w(n) = \frac{\Delta\mu}{k_B T}\left[1 - \left(\frac{n^*}{n}\right)^{1/3}\right] D(n) \quad (4.60)$$

Expressing the particle radius R through the number of molecules n and the molecular volume v₁, see (4.31b and 4.31c), and assuming for the particles a uniform isotropic linear growth rate v ≡ dR/dt, we obtain the relationship for the macroscopic growth velocity: $w(n) = v\,A(n)/v_1$. This expression matches (4.60) in the domain of large clusters n ≫ n* if the diffusion coefficient is as follows:

$$D(n) = v\,\frac{k_B T}{\Delta\mu}\,\frac{A(n)}{v_1} \quad (4.61)$$

Then, the stationary nucleation flux, (4.59b), can be expressed conveniently through the macroscopic growth rate as follows:

$$j_{st} = 2\rho(1)\,v\,\frac{\sqrt{\sigma k_B T}}{\Delta\mu}\,e^{-\Delta G(n^*)/k_B T} \quad (4.62)$$

Moreover, the diffusion coefficient in (4.61) matches the microscopic rate of attachment in the Becker-Döring theory (4.37), that is, D(n) = f(n), if:

$$v = \frac{p_v\,v_1}{\sqrt{2\pi m_1 k_B T}}\,\frac{\Delta\mu}{k_B T} = \frac{p_{sat}(T)\,v_1}{\sqrt{2\pi m_1 k_B T}}\,e^{\Delta\mu/k_B T}\,\frac{\Delta\mu}{k_B T} \quad (4.63a)$$

where in the second expression the vapor pressure was replaced with a more general quantity, the chemical potential difference, using (4.31e). Then, for the flux of growing clusters per particle in the ensemble, we obtain an expression:

$$j_{st} = 2\rho(1)\,\sqrt{\frac{\sigma}{k_B T}}\,\frac{p_{sat}(T)\,v_1}{\sqrt{2\pi m_1 k_B T}}\,e^{-\Delta\mu\,(n^*/2\,-\,1)/k_B T} \quad (4.63b)$$

where ρ(1) ≈ 1 and n* ≫ 1. A qualitative comparison of these expressions with their counterparts from the theory of thermal activation, which we used with the JMAK theory, see (4.10) and (4.11b), reveals important features of the processes of nucleation and growth: (i) the critical dependence of the stationary nucleation flux on the free energy of the barrier state and (ii) the nearly linear dependence of the macroscopic growth rate on the chemical potential difference between the phases. However, a complete match of these approaches is not possible because they are based on incompatible sets of assumptions. The Zeldovich approach can tell us much more about the nucleation process than just providing us with the parameters of the stationary state. Indeed, recalling that w(n) in (4.56a) is the rate of cluster size change and expanding the free energy excess about the critical size, we obtain the following equation:

$$\frac{d\Delta n}{dt} = -\frac{D(n^*)}{k_B T}\,\frac{d^2\Delta G(n^*)}{dn^2}\,\Delta n \quad (4.64)$$

where Δn = n − n*. This equation can be resolved to yield the following relation:

$$n = n^* + \Delta n_0\,e^{t/\tau_Z}, \qquad \tau_Z^{-1} = 2\pi Z^2 D(n^*) \quad (4.65)$$

where Δn₀ is the initial deviation of the cluster size from the critical value and τ_Z is called the Zeldovich time. Moreover, Eqs. (4.54a and 4.54b) can be resolved for the nonstationary flux at the critical size [11]:

$$j(n^*,t) = j_{st}\left(1 - e^{-2t/\tau_Z}\right). \quad (4.66)$$

This relation tells us that the induction time of the transient stage, or time lag of nucleation, which we discussed above, is twice the Zeldovich time (2τ_Z).

Example 4.3 Condensation of Vapor. Calculate the quasi-stationary probability distribution function and the flux of the critical clusters for condensation of vapor at T = 273 K = 0 °C and p_v = 4.5 kPa. For water molecules, v₁ = 3.0 × 10⁻²⁹ m³; m₁ = 3.0 × 10⁻²⁶ kg; s = 4.67 × 10⁻¹⁹ m². From tables we find that p_sat = 0.61 kPa ≈ 0.006 atm and σ = 80 mN/m. Then, $\Sigma \equiv \sigma s/k_B T = 9.92 \approx 10$ and $\theta \equiv \Delta\mu(p_v,T)/k_B T \approx \ln\left(p_v/p_{sat}(T)\right) = 2$. This allows us to calculate the size $n^* = (2\Sigma/3\theta)^3 \approx 37$ and free energy $\Delta G^*/k_B T = (\Sigma/3)^3/(\theta/2)^2 \approx 37$ of the critical clusters.

For the rate of attachment of monomers, we get D(n*) = f(n*) = 8.78 × 10⁸ s⁻¹, and for the Zeldovich factor, $Z = \frac{1}{3}\sqrt{\Sigma/\pi}\,\frac{1}{n^{*2/3}} \approx 0.0533$. Then, $\rho_{eq}(n^*) = \rho(1)\,e^{-\Delta G(n^*)/k_B T} \approx 8.53 \times 10^{-17}$. For the quasi-stationary probability, do not forget that ρ_st(n*) = ½ρ_eq(n*). Then, the flux of the supercritical clusters per monomer particle is j_st = Z D(n*) ρ_eq(n*) ≈ 4.0 × 10⁻⁹ s⁻¹.
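The arithmetic of Example 4.3 can be replayed in a few lines; the script follows the example's own rounding (Σ ≈ 10, θ ≈ 2) and takes the quoted attachment rate D(n*) = f(n*) = 8.78 × 10⁸ s⁻¹ as given:

```python
import math

kB = 1.380649e-23              # J/K
T = 273.0                      # K
pv, psat = 4.5e3, 0.61e3       # Pa: vapor and saturation pressures
s = 4.67e-19                   # m^2, monomer surface parameter
sigma = 0.080                  # N/m, surface tension of water

Sigma_exact = sigma * s / (kB * T)     # ~ 9.91
theta_exact = math.log(pv / psat)      # ~ 2.00
Sigma, theta = 10.0, 2.0               # round, as the example does

n_star = (2 * Sigma / (3 * theta)) ** 3            # critical size ~ 37
dG_star = (Sigma / 3) ** 3 / (theta / 2) ** 2      # barrier / kT ~ 37
Z = math.sqrt(Sigma / math.pi) / (3 * n_star ** (2.0 / 3.0))   # Zeldovich factor
rho_eq = math.exp(-dG_star)                        # equilibrium probability at n*
j_st = Z * 8.78e8 * rho_eq                         # flux per monomer, 1/s
print(round(n_star), round(Z, 4), j_st)            # -> 37, 0.0535, ~3.9e-9
```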

4.3 Critique of Classical Nucleation Theories

At this juncture, it is important to recognize the main assumptions made in CNT in order to assess the feasibility of relaxing them. We will look at that in Part II.

1. The minority phase is assumed to be homogeneous all the way to the interface, having the specific free energy of the transformed bulk phase, although at a pressure different from the external one.
2. The interface, which separates the phases, is assumed to be infinitely thin.
3. The interfacial energy is assumed to be isotropic and independent of the radius of the nucleus.
4. The critical nucleus is assumed to be large (n* ≫ 1).
5. The attachment/detachment kinetics of the nuclei is assumed to proceed by monomers only.
6. The latent heat is assumed to be zero, or the thermal conductivity is assumed to be infinite.

Exercises

1. What is the dimension of the Avrami constant?
2. Derive the JMAK equation for the case when the nuclei grow in the form of 2D disks.
3. Derive the JMAK equation for the case when the nuclei grow in the form of 3D spheres in the diffusion-controlled Stefan regime.
4. Calculate the scaling factors of the thermally activated JMAK relation in (4.12c) for iron.
5. Prove that the critical nucleus is in the state of unstable equilibrium with the “sea” of the α-phase.
6. Compare Fig. 4.8 with Fig. 4.1 and discuss the differences!
7. Verify Eq. (4.37c).
8. Verify Eqs. (4.51) and (4.52).
9. Verify Eq. (4.59a).
10. Verify the formula $w(n) = v\,A(n)/v_1$ using (4.60) and (4.31b and 4.31c).
11. Derive Eq. (4.64).
12. Derive expressions for the size and total free energy of the 2D critical nucleus in the shape of (a) a long cylinder, (b) a thin disc, and (c) a semispherical cap.

13. Find the radius of the critical nucleus in crystallization of pure Ni at 1000 K if Ni has the following properties: Tm = 1725 K (melting point); L = 3 × 10⁹ J/m³ (latent heat); σ = 0.4 J/m² (interface tension); vμ = 6.59 cm³/mol (molar volume). How many atoms does the nucleus have?
14. Calculate the Zeldovich factor for pure Ni at 1000 K; see Exercise 13. What is the physical significance of this factor?
15. Calculate the homogeneous nucleation rate in water vapor at 80 °C.
16. Calculate the homogeneous nucleation rate of ice from water at −10 °C. (The required properties of water and ice can be found on the Internet.)

References

1. J.W. Gibbs, The Scientific Papers, vol. 1 (Dover, New York, 1961), p. 219
2. A.N. Kolmogorov, Bull. Acad. Sci. USSR, Ser. Math. 3, 355–359 (1937) (in Russian)
3. M. Avrami, J. Chem. Phys. 7, 1103–1112 (1939); 8, 212–224 (1940); 9, 177–184 (1941)
4. W.A. Johnson, R.F. Mehl, Trans. Metall. Soc. AIME 135, 416–442 (1939)
5. S. Glasstone, K.J. Laidler, H. Eyring, The Theory of Rate Processes, 1st edn. (McGraw-Hill, New York and London, 1941)
6. L.D. Landau, E.M. Lifshitz, Statistical Physics (Elsevier, North Holland, Amsterdam, 1967)
7. M. Volmer, Kinetik der Phasenbildung (T. Steinkopff, Dresden, 1939)
8. R. Becker, W. Döring, Ann. Phys. 24, 719 (1935)
9. J.E. McDonald, Am. J. Phys. 30, 870 (1962); 31, 31 (1963)
10. Y.B. Zeldovich, Acta Physicochim. USSR 18, 1 (1943)
11. V.I. Kalikmanov, Nucleation Theory (Springer, Heidelberg, 2013)

Chapter 5

Coarsening of Second-Phase Precipitates

In Chap. 1 we established that a system is unstable thermodynamically until its free energy has reached a minimum. On the other hand, a particle of the second phase has a substantial amount of excess free energy associated with its interfacial area, a conclusion which we reached in the Gibbsian theory of capillarity; see Sect. 4.2.1. Therefore, the two-phase system consisting of precipitate particles of various sizes embedded into a sea of parent phase (matrix) is not stable. Hence, the decrease of the total area of the particles may serve as a driving force of the process of stabilization of the system. It is advantageous for smaller particles to dissolve and to transfer their mass to the larger ones since smaller particles contribute, on a per volume basis, more to the interfacial energy of the system than do larger particles. Such a process was termed Ostwald ripening or coarsening after the physical chemist Ostwald, who originally described the process qualitatively [1]. Since the excess energy associated with the total surface area is usually small, coarsening typically manifests itself as the last stage of a first-order phase transformation process, at the end of which the system will have reached its lowest energy state of a single particle in the matrix. In 1957–1958 Lifshitz and Slezov (LS) presented a theory which describes this process in detail [2]. In 1961 Wagner, following in the footsteps of Lifshitz and Slezov (contrary to the accounts in many textbooks), applied their method to the case of interface-controlled kinetics and obtained similar results. The most intriguing predictions of the LS theory were (i) the universal, self-similar nature of the coarsening process at long times (asymptotically) and (ii) the compactness (localization) of the particle size distribution function (pdf). The former means that the late stage of the process is independent of the initial conditions (is an attractor), and the latter that at every stage there is a largest particle.
In what follows we will take an approach which is slightly different from that of LS; however, for simplicity of comparison with the original work, we will retain the original LS nomenclature as much as possible. One of the important points of this chapter is the consideration of self-similarity in the evolution of the system, that is, the growth regimes when the state of the system at a later moment is similar to the state of the system at an earlier moment in time. Self-similarity is related to the change of scale in the system F(x, t), which may be examined by dimensional analysis:

$$F(x,t) = \varphi(t)\,\psi[x/f(t)], \quad (5.1a)$$

or the shift in the reference frame:

$$F(x,t) = \varphi(t)\,\psi[x - f(t)]. \quad (5.1b)$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_5

5.1 Formulation of the Problem

Let us consider coarsening in the framework of a binary alloy where the α-phase matrix consists mostly of A-atoms, with B-atoms serving as the solute, while the β-phase precipitates are enriched in B-atoms. If a certain number of B-atoms moves from a particle of a smaller radius to one of a larger radius, the total amount of surface area of the two particles will be reduced. However, for the particles to grow, they need a flux of B-atoms from the matrix, the supply of which is limited. Hence, it must be replenished from elsewhere. The system may use the subset of its smaller particles as a source of the solute for the larger ones and still have the total interfacial energy reduced.

5.1.1 Rate Equation

Consider the precipitate phase β as an ensemble of spherical particles of radius R and solute concentration C_β(R, t); see Fig. 5.1. These particles are surrounded by the matrix phase α of concentration C_α(R, t). Coarsening can be described as the process of growth of particles by incorporating solute into them, which causes an associated depletion of the matrix, and concurrent dissolution of the particles, which causes increase of the matrix concentration field. Redistribution of the solute species in the matrix satisfies the Fickian law:

$$\frac{\partial C_\alpha(\mathbf{x},t)}{\partial t} = -\nabla\!\cdot\!\mathbf{J}_\alpha, \qquad \mathbf{J}_\alpha = -D\,\nabla C_\alpha(\mathbf{x},t) \quad (5.2)$$

ð5:2Þ

where Jα is the flux of solute transported across unit area in unit time and D is the intrinsic diffusivity of solute. The requirement of solute conservation on the interface of a particle of radius R yields the first boundary condition:


Fig. 5.1 Ensemble of spherical particles of various radii and solute concentrations surrounded by matrix phase α with the far field concentration C_α(∞). R_m is the largest particle of the ensemble

$$\left[C_\beta(R,t) - C_\alpha(R,t)\right]\frac{dR}{dt} = -J_\alpha(R,t) \quad (5.3a)$$

where the flux into the phase β is ignored. Since coarsening predominantly occurs after a near-equilibrium state has formed, the concentration gradients, which are responsible for the diffusive growth of particles, are small. Therefore, the concentration field in the matrix can be approximated by the solution of the quasi-steady centrosymmetric diffusion Eq. (5.2) with ∂C_α/∂t = 0. A great deal of the success of the theory is due to the idea of a mean field (far field) of concentration C_α(∞, t), that is, the concentration in the matrix phase far from the particle, which allows us to consider the local rate of growth/dissolution of each particle separately. Then, the flux of the solute on the spherical surface of radius R can be expressed as follows:

$$J_\alpha(R,t) = \frac{D}{R}\left[C_\alpha(R,t) - C_\alpha(\infty,t)\right] \quad (5.3b)$$

Combining (5.3a) and (5.3b), we obtain an equation, which describes the rate of growth or dissolution of a particle:

$$\left[C_\beta(R,t) - C_\alpha(R,t)\right]\frac{dR}{dt} = \frac{D}{R}\left[C_\alpha(\infty,t) - C_\alpha(R,t)\right] \quad (5.4)$$

We see that a particle of radius R will grow or shrink depending on the difference between its surface concentration and that of the far field. To evaluate the former, a few approximations must be introduced.

5.1.2 Condition of Local Equilibrium

In the coarsening regime, when the growth rates are very small, there is plenty of time for the condition of chemical equilibrium to be established at the interface between the particle and the matrix. It can be obtained theoretically by equating the chemical potentials of both species, solute and solvent, on both sides of the interface; see Sect. 2.2. For a binary mixture we can consider the solute species only. The condition of global (complete) equilibrium, which implies a flat interface between the α- and β-phases, is expressed as follows:

$$\mu_\beta\{\bar C_\beta\} = \mu_\alpha\{\bar C_\alpha\} \quad (5.5a)$$

where $\bar C_{\alpha,\beta}$ are the equilibrium concentrations, which are functions of temperature and pressure only; see Fig. 5.2. Across the curved interface, the condition of equilibrium takes the following form:

$$\mu_\beta^R\{C_\beta(R,t)\} = \mu_\alpha\{C_\alpha(R,t)\} \quad (5.5b)$$

Due to the Gibbs-Thomson effect, see (2.11) and (4.26), the chemical potential of the precipitate particle of radius R relates to the chemical potential of the bulk precipitate phase as follows:

$$\mu_\beta^R\{C_\beta\} = \mu_\beta\{C_\beta\} + \frac{2\sigma V^\beta}{R} \quad (5.6)$$

where σ is the interfacial tension and $V^\beta$ is the partial molar volume of the bulk precipitate phase, which may depend on the concentration.

5.1

Formulation of the Problem

73

Fig. 5.2 Thermodynamics of the matrix phase α and precipitate particle phase β

Let us assume that the precipitate phase is chemically stiff, that is¹:

$$\mu_C^\beta \equiv \frac{\partial\mu_\beta}{\partial C}\{\bar C_\beta\} \gg \mu_C^\alpha \equiv \frac{\partial\mu_\alpha}{\partial C}\{\bar C_\alpha\} > 0; \qquad \frac{\partial\mu}{\partial C} = G_{CC}. \quad (5.7a)$$

Then, the concentration changes due to the Gibbs-Thomson effect are small and $C_\beta(R,t) \approx \bar C_\beta$. As coarsening is a late stage of transformation when the system is close to equilibrium, we may also assume that $C_\alpha - \bar C_\alpha \ll \bar C_\beta - \bar C_\alpha$ everywhere in the matrix; see Fig. 5.2. Then, expanding the chemical potential of the matrix phase in a Taylor series about the equilibrium reference state $\bar C_\alpha$, restricting the expansion to the first two terms, and using (5.5a) and (5.6), the condition of equilibrium (5.5b) takes the following form:

$$\mu_C^\alpha\left[C_\alpha(R,t) - \bar C_\alpha\right] = \frac{2\sigma V^\beta}{R}, \quad (5.7b)$$

Although (5.7b) is an acceptable approximation in the right-hand side of (5.4), it is not acceptable in the left-hand side; C_α(R, t) ≈ C_α(∞, t) is a better one.² Applying these approximations to (5.4), we obtain the particle rate equation in the following form:

¹ $\mu_C^\alpha > 0$ means that $\bar C_\alpha$ is outside the spinodal region of the matrix if such exists; see Fig. 5.2.
² See Sect. 5.3.8 for a discussion.


$$(1-\Delta)\frac{dR}{dt} = \frac{D}{R}\left(\Delta - \frac{\alpha}{R}\right) \quad (5.8a)$$

where

$$\Delta(t) \equiv \frac{C_\alpha(\infty,t) - \bar C_\alpha}{\bar C_\beta - \bar C_\alpha} \quad (5.8b)$$

is the supersaturation of the matrix and

$$\alpha \equiv \frac{2\sigma V^\beta}{\mu_C^\alpha\left(\bar C_\beta - \bar C_\alpha\right)} \quad (5.8c)$$

is the characteristic size of particles, which is the ratio of the energy of addition of a solute atom to the surface (2σ) to the chemical potential difference of moving a β atom from the matrix to the particle: $\mu_C^\alpha(\bar C_\beta - \bar C_\alpha)/V^\beta > 0$. Usually on a nanometer scale, the characteristic size α is the only property which cumulatively determines the thermodynamics of the alloy. The kinetic properties are determined by the diffusion coefficient D. Inspecting Eq. (5.8a), one can see that the particles with R > R* grow while those with R < R* dissolve, where:

$$R^* \equiv \frac{\alpha}{\Delta} \quad (5.9)$$

is the critical radius, such that the particles with R = R* neither grow nor dissolve; see Fig. 5.3. Compare (5.9) with (4.27a) and Fig. 5.3 with Fig. 4.7, and notice that this is the same critical size which controls the nucleation process and determines the flux of nucleating particles. However, now it plays a different role. Indeed, as the supersaturation (5.8b) is low now, the size and energy of the critical particle are large. Therefore, the probability of obtaining it by fluctuations is very low. Hence, the critical radius plays the role of a divider between growth and dissolution. For an ODE of the first order to be resolvable, it needs an initial condition. For the problem of coarsening, the initial condition corresponds not to the time of the overall initiation of the transformation (see Chap. 4) but to the beginning of the asymptotic stage, that is, the moment t₀ when, due to low supersaturation, all the nucleation events have subsided and the only two processes remaining are growth and dissolution of the existing particles. Hence, we set Δ(t = t₀) = Δ₀, R(t = t₀) = R₀ and look for a general solution of (5.8a) in the form R = R(t; Δ₀, R₀). An important feature of the coarsening problem, which makes it rather complicated, is that Δ is not a constant but a function of time, which must be found self-consistently with other requirements of the problem.
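The sign structure of the rate Eq. (5.8a) is easy to verify numerically; the snippet below works in the scaled units of Fig. 5.3 (α = 1, D = 1):

```python
def growth_rate(R, Delta):
    """dR/dt from Eq. (5.8a) in scaled units alpha = D = 1."""
    return (Delta - 1.0 / R) / (R * (1.0 - Delta))

Delta = 0.25
R_star = 1.0 / Delta                       # critical radius R* = alpha/Delta = 4, Eq. (5.9)
print(growth_rate(R_star, Delta))          # 0.0: a critical particle neither grows nor dissolves
print(growth_rate(3.0, Delta) < 0.0)       # True: R < R* dissolves
print(growth_rate(6.0, Delta) > 0.0)       # True: R > R* grows
```

The zero crossing at R* = α/Δ is exactly the divider between growth and dissolution described above.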


Fig. 5.3 Rate of growth dR/dt of the particles in the polydisperse ensemble as a function of particle radius R for Δ = 0.25. Radius and time are in the scaled units (α, α²/D), respectively

5.1.3 Mass Conservation Condition

As we stated above, Δ does not remain constant but decreases with time as more solute leaves the matrix and accumulates in the growing particles than arrives from the dissolving ones. To account for the decreasing supersaturation in the matrix, we introduce the volume fraction of the precipitate phase by ψ = V_β/V and take into account that most of the matrix phase is far from the particles; see Fig. 5.1. Then, we write down the solute conservation condition in the following form:

$$C_0 = \psi\,\bar C_\beta + (1-\psi)\,C_\alpha(\infty,t) \quad (5.10a)$$

where C₀ is the overall concentration of the system, equal to the initial concentration of the matrix when it was void of the precipitate phase. Notice from Fig. 5.2 that C₀ > C_α(∞, t₀). For convenience, the solute conservation condition can be rewritten in the following form:

$$\psi + (1-\psi)\,\Delta(t) = \frac{C_0 - \bar C_\alpha}{\bar C_\beta - \bar C_\alpha} \equiv Q \quad (5.10b)$$

or

$$\psi = \frac{Q - \Delta(t)}{1 - \Delta(t)}, \quad (5.10c)$$

which shows that the final volume fraction of the precipitate phase (Δ → 0) approaches the initial supersaturation of the system: ψ(t → ∞) → Q.
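The equivalence of the two forms of the solute balance, (5.10b) and (5.10c), and the limit ψ → Q can be checked directly; the value of Q below is a hypothetical initial supersaturation:

```python
def volume_fraction(Delta, Q):
    """Precipitate volume fraction from the solute balance, Eq. (5.10c)."""
    return (Q - Delta) / (1.0 - Delta)

Q = 0.05                                   # hypothetical initial supersaturation
for Delta in (0.04, 0.01, 0.001, 0.0):
    psi = volume_fraction(Delta, Q)
    # the equivalent form (5.10b): psi + (1 - psi)*Delta = Q
    assert abs(psi + (1.0 - psi) * Delta - Q) < 1e-12
print(volume_fraction(0.0, Q))             # 0.05: psi -> Q as Delta -> 0
```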

5.1.4 Particle Size Distribution Function

As we saw in Sect. 4.2.2, an efficient way to characterize an ensemble of particles is to introduce a particle size distribution function. To describe the ensemble of evolving spherical particles of radii R, we introduce the particle radius distribution function f(R, t) normalized on the total number of the particles per unit volume of the system, cf. (4.32a and 4.32b) and (4.33a and 4.33b):

$$\int_0^\infty f(R,t)\,dR = \frac{N(t)}{V} \quad (5.11)$$

With the help of this function, the average particle radius, total surface area of the precipitate phase, and its volume fraction are expressed as follows:

$$\bar R(t) = \frac{V}{N(t)}\int_0^\infty R\,f(R,t)\,dR \quad (5.12a)$$

$$S(t) = 4\pi V \int_0^\infty R^2 f(R,t)\,dR \quad (5.12b)$$

$$\psi(t) = \frac{4\pi}{3}\int_0^\infty R^3 f(R,t)\,dR \quad (5.12c)$$

where the latter obeys the global solute conservation condition (5.10c). In Sect. 4.2.3 we saw that the particle size distribution function obeys the Fokker-Planck equation, which describes fluctuations and thermodynamic drift of the ensemble in the 1D space of its size. In the late stage of transformation such as coarsening, when the system is close to its equilibrium, the fluctuations, which were critical for the stage of nucleation, cease to be important. Then, the evolution equation for the particle radius distribution function takes the following form, cf. (4.55):

$$\frac{\partial f(R,t)}{\partial t} + \frac{\partial}{\partial R}\left[v_{Rt}\,f(R,t)\right] = 0 \quad (5.13a)$$

where the thermodynamic drift is described by the rate Eq. (5.8a), cf. (4.56a):

$$v_{Rt} \equiv \frac{dR}{dt}. \quad (5.13b)$$
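A minimal numerical sketch of the drift Eq. (5.13a) is a first-order upwind advection of f(R, t). Here, for illustration only, the supersaturation is frozen at Δ = 0.25 (in the full problem Δ(t) must be found self-consistently from (5.10b)), the initial distribution is a hypothetical Gaussian, and scaled units α = D = 1 are used:

```python
import numpy as np

R = np.linspace(0.5, 30.0, 600)              # radius grid in scaled units (alpha = 1)
dR_grid = R[1] - R[0]
Delta = 0.25                                 # frozen supersaturation, illustration only
f = np.exp(-0.5 * ((R - 6.0) / 1.5) ** 2)    # hypothetical initial distribution
mean0 = (R * f).sum() / f.sum()

v = (Delta - 1.0 / R) / (R * (1.0 - Delta))  # drift dR/dt from Eq. (5.8a), D = 1
dt = 0.4 * dR_grid / np.abs(v).max()         # CFL-limited explicit time step

for _ in range(2000):
    F = v * f                                # advective flux v*f of Eq. (5.13a)
    dF = np.zeros_like(f)
    dF[1:-1] = np.where(v[1:-1] > 0.0,
                        F[1:-1] - F[:-2],    # upwind difference where v > 0
                        F[2:] - F[1:-1])     # and where v < 0
    f[1:-1] -= dt * dF[1:-1] / dR_grid
    f[0] = f[-1] = 0.0                       # f -> 0 at both boundaries

mean1 = (R * f).sum() / f.sum()
print(mean0, mean1)                          # the mean radius drifts upward
```

Particles below R* = 1/Δ = 4 drift toward dissolution and those above grow, so the mean radius of the distribution increases, as the LS analysis below predicts.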

5.1.5 Initial, Boundary, and Stability Conditions

Equations (5.8a), (5.10b), and (5.13a), which describe the evolution of the system, should be supplemented with the initial and boundary conditions. As we discussed above, we set up the initial conditions for R and Δ in the particle growth/dissolution Eq. (5.8a) at the moment of initiation of the asymptotic regime t₀. For the particle radius distribution function, we set it equal to f₀(R) at the same moment. Furthermore, although (5.13a) is of the first order with respect to R, we request that f(R, t) → 0 at both boundaries R → 0 and R → ∞. We will see later that, in fact, the boundary conditions on f(R, t) must be even more restrictive, see (5.18). To describe the physical process delineated above, we are seeking a solution of Eqs. (5.8a), (5.10b), and (5.13a), with the set of initial and boundary conditions, where, as time goes by, that is, for t → ∞, the far field matrix concentration approaches its equilibrium value $C_\alpha(\infty,t) \to \bar C_\alpha$; the number of particles decreases, while their average size increases such that the volume fraction of the precipitate phase ψ grows but remains finite. Thus:

$$\Delta(t) \to 0 \quad (5.14a)$$

$$N(t) \to 0 \quad (5.14b)$$

To be precise, $N(t) \to 1 \ll N_0 \equiv V\int_0^\infty f_0(R)\,dR$.

$$\bar R(t) \to \infty \quad (5.14c)$$

$$\psi(t) \sim N\,\bar R^3/V \to Q. \quad (5.14d)$$

Finally, if the problem allows multiple solutions, the one with the fastest rate of supersaturation decrease must be selected because the domain with the least saturation corresponds to the largest critical radius; hence, more droplets will be dissolving there, and it will dominate domains with larger saturations and smaller critical radii.

5.2 Resolution of the Problem

5.2.1 Unchanging Supersaturation

To educate the eye, one may integrate Eq. (5.8a) for the case of unchanging supersaturation, that is, Δ = const(t):

$$\frac{D\Delta^3}{\alpha^2(1-\Delta)}\,t = \frac{1}{2}\left(\frac{R\Delta}{\alpha}-1\right)^2 + 2\left(\frac{R\Delta}{\alpha}-1\right) + \ln\left(\frac{R\Delta}{\alpha}-1\right) + \text{const} \quad (5.15)$$

Fig. 5.4 Particle radius versus time trajectory for constant supersaturation of Δ = 0.25. Radius is scaled with α, time with α²/D, and the integration constant in (5.15) is arbitrarily set to zero

This solution (see Fig. 5.4) shows that particles with R₀ > R* grow and asymptotically approach the square-root law $R \approx \sqrt{2D\Delta t}$, while those with R₀ < R* shrink. However, one needs to take into account that decreasing Δ(t) slows down the growing particles and allows the critical radius R* to ‘catch up’ with R, which changes the growth to dissolution.
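The trajectory of Fig. 5.4 can be reproduced by integrating (5.8a) directly. The sketch below, in the scaled units of the figure (α = D = 1), checks the result against the implicit solution (5.15) and the square-root asymptote:

```python
import math

Delta = 0.25

def implicit_time(R):
    """Time from the implicit solution (5.15), scaled units, const = 0."""
    u = R * Delta - 1.0                  # u = R*Delta/alpha - 1, valid for R > R*
    return (1.0 - Delta) / Delta ** 3 * (0.5 * u * u + 2.0 * u + math.log(u))

R = 6.0                                  # start above R* = 1/Delta = 4
t = implicit_time(R)                     # place the start on the (5.15) curve
dt = 1e-3
while t < 500.0:
    R += dt * (Delta - 1.0 / R) / (R * (1.0 - Delta))   # Euler step of Eq. (5.8a)
    t += dt

print(abs(implicit_time(R) - t) / t)     # small: (5.15) is reproduced
print(R / math.sqrt(2 * Delta * t))      # near 1: square-root growth law
```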

5.2.2 Asymptotic Analysis of Bulk Properties

In what follows we divide the solution of the problem into two parts. First, we analyze the long-time (asymptotic) behavior of the average (bulk) properties of the system, which is basically a dimensional analysis and is different from the original LS analysis. Then we conduct the asymptotic analysis of the distribution function of the ensemble, which is identical to that of LS. To analyze the long-time evolution of the system, we introduce the nth integral moment of the particle size distribution function [3]:

$$I_n(t) \equiv \int_0^\infty R^n f(R,t)\,dR \quad (5.16)$$

and relate it to the bulk physical properties of the ensemble:

$$I_0(t) = \frac{N(t)}{V}; \quad (5.17a)$$

$$I_1(t) = \bar R(t)\,I_0(t); \quad (5.17b)$$

$$I_2(t) = \frac{S(t)}{4\pi V}; \quad (5.17c)$$

$$I_3(t) = \frac{3}{4\pi}\,\psi(t) \quad (5.17d)$$

The evolution equation for the nth moment takes the following form:

$$\frac{dI_n}{dt} = -\int_0^\infty R^n\,\partial_R\left(f\,v_{Rt}\right)dR$$

Integrating it by parts, we obtain that:

$$\frac{dI_n}{dt} = n\int_0^\infty R^{n-1} v_{Rt}\,f\,dR - \left(R^n v_{Rt} f\right)(R\to\infty,t) + \left(R^n v_{Rt} f\right)(R\to 0,t)$$

Let us require that for any j:

$$R^j f(R\to\infty,t) \to 0 \quad (5.18)$$

Moreover, taking (5.14a) into account, we present the growth rate Eq. (5.8a) as follows:

$$v_{Rt} = \frac{D}{R}\left(\Delta - \frac{\alpha}{R}\right) \quad (5.19)$$

and obtain the nth moment evolution equation in the following form:

$$\frac{dI_n}{dt} = D\left[n\left(\Delta I_{n-2} - \alpha I_{n-3}\right) - \alpha\left(R^{n-2}f\right)(R\to 0,t)\right] \quad (5.20)$$

For n = 0, using (5.17a), we have the following:

$$\frac{dN}{dt} = -\alpha D V\left(R^{-2}f\right)(R\to 0,t)$$

that is:

$$f(R\to 0,t) \approx -\frac{R^2}{\alpha D V}\,\frac{dN}{dt} \quad (5.21)$$

Then, for n > 0, we obtain an infinite hierarchy of equations:

$$\frac{dI_n}{dt} = Dn\left(\Delta I_{n-2} - \alpha I_{n-3}\right) \quad (5.22)$$

Applying it at n = 3 and using (5.14d) and (5.17d), we obtain a relation:

$$\frac{dI_3}{dt} = 3D\left(\Delta I_1 - \alpha I_0\right) \to 0, \quad \text{for } t\to\infty$$

which yields that:

$$\bar R(t) \to R^*(t), \quad \text{for } t\to\infty. \quad (5.23)$$

Furthermore, the hierarchy of Eq. (5.22) motivates the following ansatz:

$$I_n(t) = F(n)\,(Kt)^{\nu(n)}, \quad \text{for any } n \quad (5.24a)$$

and

$$\Delta(t) = \alpha\,(Kt)^{\theta} \quad (5.24b)$$

Then, we arrive at the functional equations:

$$\nu(n) - 1 = \nu(n-3) = \nu(n-2) + \theta \quad (5.25)$$

and a homogeneous linear recurrence relation of the third order:

$$\delta\,\nu(n)\,F(n) = n\left[F(n-2) - F(n-3)\right], \quad \text{for } n > 0 \quad (5.26)$$

where F(n) is the minimal solution (due to a condition of stability of recurrence relations, e.g., see [4]) and

$$\delta = \frac{K}{\alpha D} \quad (5.27)$$

The first functional equation in (5.25) is resolved by the following: ν ð nÞ =

n - 1: 3

Then the second functional equation in (5.25) yields the following:

ð5:28Þ

$$\theta = -\frac{1}{3}. \qquad (5.29)$$

Equations (5.14d), (5.17d), (5.24a), and (5.28) provide the following initial values for the recurrence relation (5.26):

$$F(1) = F(0), \qquad (5.30a)$$

$$F(3) = \frac{3}{4\pi}\,Q \approx 0.24\,Q. \qquad (5.30b)$$

One may try to solve the recurrence relation by the characteristic root technique, where F(n) = Ax^n, and arrive at the cubic characteristic equation:

$$\frac{n}{\delta\,\nu(n)}(x-1) - x^3 = 0 \qquad (5.31)$$

which may be resolved by the Cardano formula [5]. However, this method does not allow us to identify the stable solution. Instead, we resolve the recurrence relation (5.26) for large n by treating n as a continuous variable. In this case we obtain the differential equation:

$$\frac{d\ln F(n)}{dn} = \delta\,\frac{\nu(n)}{n}$$

which can be resolved as follows:

$$F(n) = B\left(\frac{e^{n/3}}{n}\right)^{\delta} \qquad (5.32a)$$

where B is a constant. Given that this function has a minimum at n = 3, see Fig. 5.5, (5.32a) can be applied at n = 3 to yield the following:

$$B \approx \frac{3}{4\pi}\left(\frac{3}{e}\right)^{\delta} Q. \qquad (5.32b)$$

Hence, for large n:

$$I_n(t) \approx \frac{3Q}{4\pi}\left(\frac{3\,e^{n/3-1}}{n}\right)^{\delta}(Kt)^{\nu(n)}. \qquad (5.33)$$

On the other hand, let us assume that:

$$f(R,t) \propto e^{-\beta(t)R}, \quad \text{for } R \ge R_m > \bar R \qquad (5.34a)$$

Fig. 5.5 Recurrence relation coefficients F(n)/Q, scaled with the initial supersaturation Q, as a function of the integral moment number n. Dashed line—solution (5.32a) of the differential equation

where β(t) is a decreasing function of time (this functional form complies with the asymptotic (5.14b) and boundary (5.18) conditions), and estimate the tail of the nth moment:

$$I_n^{tail}(t) = \int_{R_m}^\infty R^n f(R,t)\,dR \propto n!\,\frac{e^{-\beta R_m}}{\beta^{n+1}}\sum_{i=0}^{n}\frac{(\beta R_m)^{n-i}}{(n-i)!}, \qquad n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \qquad (5.34b)$$

The last approximation arises from the application of Stirling's formula. Comparing (5.34b) with (5.33), we see that the proposed tail increases with n significantly faster than the full integral moment. Hence, the particle size distribution function must have a terminal point R_m beyond which it vanishes. Physically, this means that there is one and only one particle in the ensemble which will survive the coarsening evolution; see Fig. 5.1. This particle was the largest at the inception of the asymptotic regime at t₀ and remains such throughout the entire duration of the process. Now let us ask the question: how did the largest particle grow in the asymptotic regime? The answer is that its radius must have always been a fixed multiple (greater than one) of the critical radius. Indeed, if the particle grew at a slower rate, it would eventually have been overtaken by the critical size; if it grew at a greater rate, other particles (of slightly smaller sizes) would also have survived the evolution. Hence:

$$R_m = kR^* \qquad (5.35a)$$

and we can find the rate of growth of the critical size. Indeed, from (5.9) and (5.19) we obtain that:

$$\frac{dR^{*3}}{dt} = 3\alpha D\left(\frac{1}{k^2} - \frac{1}{k^3}\right) \qquad (5.35b)$$

Then, applying the principle of maximum growth to (5.35b), see Sect. 5.1.5, we obtain that:

$$k = \frac{3}{2} \qquad (5.36)$$

and using (5.24b) for (5.35b) and (5.36), we obtain that:

$$\delta = \frac{4}{9}. \qquad (5.37)$$
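The maximum-growth selection in (5.35b)–(5.37) can be checked numerically. The sketch below is my own illustration (not the book's code); it sets α = D = 1 so that the coarsening rate constant K and δ = K/(αD) coincide numerically, scans k, and recovers k = 3/2 and K = 4αD/9:

```python
import numpy as np

# Growth rate of the critical volume, dR*^3/dt = 3*alpha*D*(1/k^2 - 1/k^3),
# Eq. (5.35b), with alpha = D = 1 for illustration.
def growth_rate(k):
    return 3.0 * (1.0 / k**2 - 1.0 / k**3)

# Scan k > 1 and locate the maximum (principle of maximum growth).
k = np.linspace(1.01, 5.0, 400000)
k_star = k[np.argmax(growth_rate(k))]

K = growth_rate(k_star)   # coarsening rate constant; equals 4/9 in these units
delta = K                 # delta = K/(alpha*D), Eq. (5.27), with alpha*D = 1

print(k_star, K, delta)
```

With α and D restored, the same maximization gives K = 4αD/9, which is the rate constant entering the t^(1/3) law (5.39a).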

Finally, applying (5.32) and (5.28) to (5.26) at n = 4, we can find the remaining coefficients of the recurrence relation:

$$F(2) = F(0) + \frac{1}{27}\left(\frac{3\,e^{1/3}}{4}\right)^{4/9}F(3) \approx F(0) + 0.038\,F(3), \qquad (5.38a)$$

$$F(n) = \frac{3Q}{4\pi}\left(\frac{3\,e^{n/3-1}}{n}\right)^{4/9}, \quad \text{for } n > 3. \qquad (5.38b)$$

This concludes the evolutionary analysis of the polydisperse ensemble of second-phase particles and allows us to describe the asymptotic behavior of the bulk properties:

$$\bar R(t) \approx \left(\frac{4}{9}\alpha D t\right)^{1/3}, \qquad (5.39a)$$

$$\psi(t) \approx Q, \qquad (5.39b)$$

$$\Delta(t) \approx \alpha\left(\frac{4}{9}\alpha D t\right)^{-1/3}, \qquad (5.39c)$$

$$S(t) \approx 4\pi V F(2)\left(\frac{4}{9}\alpha D t\right)^{-1/3}, \qquad (5.39d)$$

$$N(t) \approx V F(0)\left(\frac{4}{9}\alpha D t\right)^{-1}, \qquad (5.39e)$$

$$f(R\to 0,t) \approx F(0)\left(\frac{3R}{2\alpha D t}\right)^{2}, \qquad (5.39f)$$

$$f(R,t) = 0, \quad \text{for } R \ge R_m = \frac{3}{2}\left(\frac{4}{9}\alpha D t\right)^{1/3}. \qquad (5.39g)$$

Equations (5.39a–g) show that the increase of the average particle size is balanced by the decrease of the total number of particles, so that the volume fraction of the second phase remains practically unvarying. The driving force of the process, measured by the total surface area of the particles of the ensemble, decreases with time as t^(-1/3). It is important to note that the driving force may be effectively expressed by the supersaturation Δ.

Example 5.1 Ideal Solution Model
Find the characteristic size of particles α in the ideal solution model for the matrix phase. The free energy of an ideal solution has the following form:

$$G^\alpha(C) = G_A + (G_B - G_A)C + R_g T\,[C\ln C + (1-C)\ln(1-C)] \qquad (5E.1)$$

where G_{A(B)} are the partial molar Gibbs free energies of the species A and B and R_g is the universal gas constant. Then, using (5.7a) and (5.8c), we obtain the following:

$$\mu^\alpha_{CC} = \frac{R_g T}{\bar C_\alpha(1-\bar C_\alpha)} \qquad (5E.2)$$

and

$$\alpha = \frac{2\sigma \bar V^\beta}{R_g T}\,\frac{\bar C_\alpha(1-\bar C_\alpha)}{\bar C_\beta - \bar C_\alpha} \qquad (5E.3)$$
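As a hedged numerical illustration of (5E.3) — the parameter values below are assumptions for illustration only (σ and V̄^β are borrowed from Exercise 5 at the end of the chapter, while T, C̄_α, and C̄_β are made up) — the capillary length α comes out in the picometer range:

```python
# Capillary length alpha from Eq. (5E.3):
#   alpha = (2*sigma*V_beta/(Rg*T)) * C_a*(1 - C_a)/(C_b - C_a)
# All numbers below are illustrative assumptions, not data from the book.
Rg = 8.314           # J/(mol K), universal gas constant
sigma = 10e-3        # J/m^2, interfacial energy (assumed)
V_beta = 8e-6        # m^3/mol, molar volume of the precipitate (assumed)
T = 1273.0           # K (assumed)
C_a, C_b = 0.4, 0.9  # equilibrium matrix / precipitate concentrations (assumed)

alpha = (2 * sigma * V_beta / (Rg * T)) * C_a * (1 - C_a) / (C_b - C_a)
print(f"alpha = {alpha:.2e} m")
```

The smallness of α relative to typical particle radii is what justifies the small-supersaturation expansion used throughout the theory.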

Yet, some of the properties of the ensemble are not calculated exactly because the quantity F(0) remains undetermined. Also, although we found all the moments of the pdf of the ensemble, we did not find its functional expression, which would be advantageous because it could be compared with experimentally available data. In the next section, we will find the functional expression of the pdf by transforming from the dimensional size-time space (R, t) to the dimensionless space (u, τ) suggested by Lifshitz and Slezov.

5.2.3

Asymptotic Analysis of the Distribution Function

In the dimensionless LS size-time space:

$$u \equiv \frac{R(t)\Delta(t)}{\alpha} = \frac{R(t)}{R^*(t)} \qquad (5.40a)$$

is the dimensionless particle radius and

$$\tau \equiv 3\ln\frac{\Delta_0}{\Delta(t)} \qquad (5.40b)$$

is the dimensionless time, and we define the particle size distribution function φ(u, τ) such that it is also normalized with respect to the number of particles per unit volume, cf. (5.11):

$$\int_0^\infty \varphi(u,\tau)\,du = \frac{N(t)}{V} \qquad (5.41)$$

Then:

$$f(R,t)\,dR = \varphi(u,\tau)\,du$$

which tells us that:

$$f(R,t) = \varphi(u,\tau)\,\partial_R u = \varphi(u,\tau)\,\frac{\Delta}{\alpha} \qquad (5.42)$$

Then, the evolution equation for the pdf φ(u, τ) takes the following form (see Example 5.2 for the derivation):

$$\partial_\tau \varphi + \partial_u(v_{u\tau}\varphi) = 0 \qquad (5.43)$$

where we introduced the dimensionless drift velocity³:

$$v_{u\tau} \equiv \frac{1}{3}\left[\gamma(\tau)\,\frac{u-1}{u^2} - u\right], \qquad (5.44a)$$

and the auxiliary function:

$$\gamma(\tau) \equiv -\frac{D}{\alpha^2}\,\Delta^4\left(\frac{d\Delta}{dt}\right)^{-1}. \qquad (5.44b)$$

³ The similarity between the right-hand side of (5.44a) and the characteristic equation of the recurrence relation (5.31) is not accidental. The LS method of resolving the problem is based on finding the stationary points in the dimensionless size-time space.

Example 5.2 Derivation of the Evolution Equation in LS Space
To transform (5.13) into the new space, we recall the transformation relations:

$$\partial_R\,\cdot\, = \partial_R u\;\partial_u\,\cdot\, + \partial_R\tau\;\partial_\tau\,\cdot \qquad \text{and} \qquad \partial_t\,\cdot\, = \partial_t u\;\partial_u\,\cdot\, + \partial_t\tau\;\partial_\tau\,\cdot$$

The coordinate transformation relations in the LS space are the following:

$$\partial_R u = \frac{\Delta}{\alpha}; \qquad \partial_R\tau = 0 \qquad (5E.4a)$$

$$\partial_t u = \frac{R}{\alpha}\frac{d\Delta}{dt} = \frac{u}{\Delta}\frac{d\Delta}{dt}; \qquad \partial_t\tau = -\frac{3}{\Delta}\frac{d\Delta}{dt} \qquad (5E.4b)$$

Then, Eq. (5.13) transforms as follows:

$$\partial_t u\,\partial_u(\varphi\,\partial_R u) + \partial_t\tau\,\partial_\tau(\varphi\,\partial_R u) + \partial_R u\,\partial_u(v_{Rt}\,\varphi\,\partial_R u) = 0$$

Using the coordinate transformation relations (5E.4) and taking into account that ∂_R u is a function of τ only, we obtain the following equation:

$$\frac{1}{\alpha}\frac{d\Delta}{dt}(u\,\partial_u\varphi + \varphi) - \frac{3}{\alpha}\frac{d\Delta}{dt}\,\partial_\tau\varphi + \left(\frac{\Delta}{\alpha}\right)^2\partial_u(v_{Rt}\varphi) = 0 \qquad (5E.5a)$$

or

$$-3\,\partial_\tau\varphi + \partial_u\!\left[\left(\frac{\Delta^2}{\alpha}\left(\frac{d\Delta}{dt}\right)^{-1}v_{Rt} + u\right)\varphi\right] = 0 \qquad (5E.5b)$$

Expressing the rate Eq. (5.19) in the LS space (u, τ):

$$v_{Rt} = \frac{D}{\alpha}\,\Delta^2(\tau)\,\frac{u-1}{u^2} \qquad (5E.6)$$

we obtain Eq. (5.43). Finally, let us show that $v_{u\tau} = du/d\tau$. Indeed:

$$\frac{du}{d\tau} = \left(\frac{\Delta}{\alpha}\frac{dR}{dt} + \frac{R}{\alpha}\frac{d\Delta}{dt}\right)\frac{dt}{d\tau} = -\frac{1}{3}\left[\frac{\Delta^2}{\alpha}\left(\frac{d\Delta}{dt}\right)^{-1}v_{Rt} + u\right] = v_{u\tau}$$

The beauty of the LS space is that in the asymptotic regime γ a = 27/4 is independent of τ. This allowed LS to resolve Eq. (5.43) by the method of characteristics. Indeed,

two independent first integrals of (5.43), which are called characteristics, are as follows:

$$\tau - \int\frac{du}{v^a_{u\tau}} = \text{const}_1, \qquad v^a_{u\tau}\,\varphi = \text{const}_2 \qquad (5.45)$$

where:

$$v^a_{u\tau} = -\frac{\left(\frac{3}{2}-u\right)^2(u+3)}{3u^2} \le 0, \qquad (5.46a)$$

which means that in LS space all particles "dissolve," that is, shrink compared to R*, except the largest one with u_m = 3/2, and for u > 0:

$$\int_0^u \frac{du'}{-v^a_{u'\tau}} = \frac{5}{3}\ln\!\left(\frac{3}{2}-u\right) + \frac{4}{3}\ln(u+3) + \frac{1}{1-\frac{2u}{3}} - \ln\frac{3^3 e}{2^{5/3}}. \qquad (5.46b)$$

Introducing a relationship between the constants in (5.45), the solution of (5.43) can be presented as follows:

$$\varphi = -\frac{1}{v^a_{u\tau}}\,\chi\!\left(\tau - \int\frac{du}{v^a_{u\tau}}\right) \qquad (5.47)$$
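A quick numerical check — my own sketch, not part of the book's derivation — confirms that the drift velocity (5.44a) at γ = γ_a = 27/4 coincides with the factored form (5.46a), which is nonpositive everywhere with a double root at u_m = 3/2:

```python
import numpy as np

gamma_a = 27.0 / 4.0   # asymptotic value of gamma(tau); cf. Exercise 3

def v_utau(u):
    # Drift velocity (5.44a) evaluated at gamma = gamma_a
    return (gamma_a * (u - 1.0) / u**2 - u) / 3.0

def v_factored(u):
    # Factored form (5.46a)
    return -((1.5 - u)**2) * (u + 3.0) / (3.0 * u**2)

u = np.linspace(0.05, 3.0, 1000)
max_diff = float(np.max(np.abs(v_utau(u) - v_factored(u))))
print(max_diff)   # ~0: the two algebraic forms agree
```

The double root at u = 3/2 is the stationary point on which the LS method of characteristics hinges.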

where χ is an arbitrary function. Finally, applying this expression to the volume fraction of the precipitate phase:

$$\psi(t) = \frac{4\pi}{3}\left(\frac{\alpha}{\Delta_0}\right)^3 e^{\tau}\int_0^{u_m} u^3\,\varphi(u,\tau)\,du \approx Q$$

we obtain that, for the integral to be time independent, the arbitrary function must have the following form:

$$\chi(z) = Ae^{-z} \qquad (5.48a)$$

Thus:

$$\varphi = Ae^{-\tau}P(u)$$

where (see Fig. 5.6):

Fig. 5.6 The u-dependent part of the particle size probability distribution function P(u), (5.48b), plotted vs u = R/R_crit

$$P(u) \equiv -\frac{1}{v^a_{u\tau}}\exp\!\left(\int_0^u\frac{du'}{v^a_{u'\tau}}\right) = \begin{cases}\dfrac{3^4 e}{2^{5/3}}\,\dfrac{u^2\,e^{-1/(1-2u/3)}}{\left(\frac{3}{2}-u\right)^{11/3}(u+3)^{7/3}}, & u < u_m\\[2mm] 0, & u \ge u_m\end{cases} \qquad (5.48b)$$

It is normalized as follows:

$$\int_0^{u_m} P(u)\,du = 1 \qquad (5.48c)$$

and the normalization constant is the following:

$$A = \frac{3Q}{4\pi}\left(\frac{\Delta_0}{\alpha}\right)^3\left[\int_0^{u_m} u^3 P(u)\,du\right]^{-1} \approx 0.22\,Q\left(\frac{\Delta_0}{\alpha}\right)^3 \qquad (5.48d)$$
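The normalization (5.48c) and the result R̄ → R* (i.e., ⟨u⟩ = 1, cf. (5.23)) can be verified by direct quadrature of (5.48b). The sketch below is my own check, not part of the book; it uses a plain trapezoidal rule, since the integrand vanishes smoothly at both ends of [0, u_m):

```python
import numpy as np

def P(u):
    # LS distribution (5.48b) for 0 < u < u_m = 3/2
    pref = 3.0**4 * np.e / 2.0**(5.0 / 3.0)
    return pref * u**2 * np.exp(-1.0 / (1.0 - 2.0 * u / 3.0)) / \
           ((1.5 - u)**(11.0 / 3.0) * (u + 3.0)**(7.0 / 3.0))

u = np.linspace(1e-9, 1.5 - 1e-9, 2_000_000)
dx = u[1] - u[0]

def integrate(y):
    # Trapezoidal rule on a uniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dx)

norm = integrate(P(u))         # should be 1, Eq. (5.48c)
mean_u = integrate(u * P(u))   # should be 1: the mean radius equals R*
print(norm, mean_u)
```

The same quadrature applied to u³P(u) gives the constant ≈ 0.22 entering (5.48d).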

Application of the solution (5.48a–d) to (5.41) gives another way to find the normalization constant:

$$N(t) = AVe^{-\tau} = AV\left(\frac{\Delta}{\Delta_0}\right)^3 \qquad (5.48e)$$

Also notice that the pdf in (5.48a) may be expressed in the following form:

$$\varphi \approx 0.22\,Q\left(\frac{\Delta(\tau)}{\alpha}\right)^3 P(u) \qquad (5.48f)$$

which is completely independent of the initial conditions (Δ₀). This constitutes a regime of self-similarity (5.1a) ("hydrodynamic regime" in the terminology of LS) of the evolution of the ensemble of particles. Finally, application of (5.48a) to (5.42) and comparison with (5.39f) allows us to find that:

$$F(0) \approx 0.22\,Q, \qquad (5.49)$$

which concludes our analysis of the problem.

5.3

Domain of Applicability and Shortcomings of the Theory

The main results of the theory are expressed by (5.39a–g), (5.42), (5.48a–d), and (5.49). To derive these equations, a number of assumptions were made. To validate the theory, these assumptions must be reviewed and critically assessed.

5.3.1

Quasi-Steady Diffusion Field

The assumption of a quasi-steady state of the diffusion field surrounding the particles resulted in the vanishing of the derivative ∂C_α/∂t in (5.3a). It is valid if the characteristic time of diffusion of species over a distance comparable to the particle size is much less than the characteristic time over which the particle significantly increases its size; see (5.3a):

$$t_{dif} \sim \frac{R^2}{D} \ll t_{gro} \sim \frac{C_\beta(R,t) - C_\alpha(R,t)}{|J_\alpha(R,t)|}\,R \sim \frac{R^2}{D\Delta_0}$$

that is, if:

$$\Delta(t) < \Delta_0 \ll 1. \qquad (5.50)$$

This tells us that the theory applies to systems of small supersaturation and, hence, to the latest stage of the transformation process.

5.3.2

Local Equilibrium

The assumption of chemical equilibrium on the particle/matrix boundary, which allowed us to derive (5.8a), is likewise valid if the characteristic time of diffusion of species over distances of the particle size is much less than the characteristic time of particle growth or dissolution. Hence, it also applies to systems of small supersaturation.

5.3.3

Absence of Nucleation

The evolution equation for the particle radius distribution function (5.13a) does not account for the nucleation of new particles. The assumption that the nucleation has ceased also works on the late stage of the process when the supersaturation is small.

5.3.4

Mean-Field Approximation

The mean-field approximation and disregard of the spatial correlations between the particles, which allowed us to derive the flux Eq. (5.3b), are applicable to a system of small volume fraction of the second phase.

5.3.5

Disregard of Coalescence

Coalescence—merging of particles—is not considered in the problem. This assumption also works in systems of small volume fraction of the second-phase particles.

5.3.6

Spherical Shape

The precipitate particles are considered to be spherical. This is not a critical assumption; it can easily be replaced by a more realistic particle shape.

5.3.7

Asymptotic Regime

All results of the theory apply to the asymptotic regime of coarsening. To estimate the time to reach the asymptotic stage t0, we may assume the following timetable of

Fig. 5.7 Schematic evolution of the system (average particle size vs time) through the regimes of nucleation (up to t_inc), growth (~t, up to t₀), and coarsening (~t^(1/3), up to t_end)

the process. The system of supersaturation Q and N₀ particles of practically equal sizes was prepared in volume V at the initial moment t = 0. Although the process of diffusion was going on, the average volume of the particles did not change until t = t₀, when the asymptotic regime kicked in. The latter proceeded until t = t_end, when only one particle was left; see Fig. 5.7. Then, using (5.39a), we obtain the following estimates:

$$t_0 \sim \frac{QV}{\alpha D N_0}; \qquad t_{end} \sim \frac{QV}{\alpha D} \qquad (5.51)$$

The times estimated by formula (5.51) are of the order of years, which may be why the asymptotic regime of coarsening is seldom observed experimentally.
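A hedged order-of-magnitude illustration of (5.51) follows; all numbers below are assumed for illustration and are not taken from the book:

```python
# Order-of-magnitude estimate of the coarsening time scales (5.51),
# t_0 ~ Q*V/(alpha*D*N0) and t_end ~ Q*V/(alpha*D).
# All parameter values are illustrative assumptions.
Q = 0.05        # initial supersaturation (assumed)
V = 1e-18       # m^3, a (1 micron)^3 sample volume (assumed)
alpha = 1e-9    # m, capillary length (assumed)
D = 1e-20       # m^2/s, solid-state diffusivity (assumed)
N0 = 1e3        # initial number of particles in V (assumed)

t_end = Q * V / (alpha * D)
t_0 = t_end / N0
year = 3.156e7  # seconds in a year
print(t_0 / year, t_end / year)
```

With these (plausible but assumed) values, t_end comes out above a century, consistent with the remark that the fully asymptotic regime is rarely reached in experiments.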

5.3.8

Small Particles

The LS theory does not treat small particles correctly, which results in the infinite speed of their dissolution by Eq. (5.8a). It is important to examine the source of this mistreatment because it may be a source of discrepancies between experimental and theoretical results. The problem is that, in passing from the local Eq. (5.4) to the field Eq. (5.8a), a number of approximations have been made.

First, we made the approximation of chemical stiffness of the precipitate phase (5.7a), which did not allow us to account for variations of the particle concentration that may take place during coarsening. Second, notice that to obtain the closed-form particle rate Eq. (5.8a), we made a few approximations regarding the field quantity C_α(R, t), about which we know that:

$$\bar C_\alpha < C_\alpha(R,t) < C_0 < \bar C_\beta \qquad (5.52)$$

Equation (5.7b), which we used, is just a linear approximation of the equilibrium conditions (5.5b) and (5.6). Its first-order nonlinear approximation:

$$C_\alpha(R,t) = \bar C_\alpha - \frac{\mu^\alpha_C}{\mu^\alpha_{CC}} + \sqrt{\left(\frac{\mu^\alpha_C}{\mu^\alpha_{CC}}\right)^2 + \frac{4\sigma\bar V^\beta}{\mu^\alpha_{CC}\,R}} \qquad (5.53)$$

yields a weaker dependence on the particle radius and may be a better approximation for the right-hand side of (5.4). However, even this approximation is not good for the left-hand side of (5.4) because it still violates (5.52) in the domain of small sizes. That is why we used C_α(R, t) ≈ C_α(∞, t) as an approximation for the left-hand side of (5.4).

Exercises

1. Derive relation (5.6).
2. What does the condition C₀ > C_α(∞, t₀) mean for (5.10a)?
3. Verify that in the asymptotic regime γ_a = 27/4 [Hint: apply (5.39c) to (5.44b)].
4. Prove that the most frequent particles in the asymptotic regime of coarsening are 1.13 times larger than the average-size particles.
5. Calculate the characteristic particle size α, rate constant K, average radius, and volume fraction of the β-Cr precipitate particles after 10 hours of annealing of a Ni-55 wt% Cr alloy at 1000 °C. Take Ni-Cr as an ideal solution with σ = 10 mJ/m², D = 6 × 10⁻²¹ m²/s, V̄^β = 8 cm³/mole, and other parameters from the phase diagram in Fig. 5.8. Take t₀ = 0 and think about what it means.

Fig. 5.8 Phase diagram of Ni-Cr binary alloy

References

1. W. Ostwald, Z. Phys. Chem. 37, 385 (1901)
2. I.M. Lifshitz, V.V. Slezov, J. Phys. Chem. Solids 19, 35 (1961)
3. P.W. Voorhees, J. Stat. Phys. 38, 231 (1985)
4. P.M. Batchelder, An Introduction to Linear Difference Equations (Dover Publications, 1967)
5. A.G. Kurosh, Higher Algebra (MIR, 1972) (Translated from Russian)

Chapter 6

Spinodal Decomposition in Binary Systems

Spinodal decomposition is a process of unmixing, i.e., spatial separation of species, which takes place in thermodynamically unstable solutions, solid or liquid. In the case of a system that consists of different species, it is advantageous to deal with the partial molar thermodynamic quantities instead of the densities (see Appendix H). Consider a binary solution that contains n_A moles of species A and n_B moles of species B, isolated from the environment so that the total number of moles, n = n_A + n_B, and the molar fractions (concentrations) of the species A(B), X_A = n_A/n (X_B = n_B/n), do not change. By definition, the variables X_A and X_B are not independent because:

$$X_A + X_B = 1 \qquad (6.1)$$

A regular solution is a popular model of a binary system. Its molar Gibbs free energy takes the following form:

$$G^S = X_A G_A + X_B G_B + R_g T\,(X_A\ln X_A + X_B\ln X_B) + \Omega X_A X_B \qquad (6.2)$$

Here R_g is the gas constant, G_A and G_B are the partial molar Gibbs free energies of the species A and B, and the total Gibbs free energy of the system is, of course, nG^S. The first two terms in (6.2) represent the free energy of a mechanical mixture of the components A and B. Since after mixing the atoms are in a much more random arrangement, there is a positive entropy of mixing; the third term is this entropy multiplied by −T and is therefore a negative contribution to the free energy. The fourth term represents the free energy excess (positive or negative) due to interactions of pairs of atoms of the same or different kinds. This contribution is proportional to the number of A-B pairs, expressed by the product of the molar fractions, with the coefficient of proportionality:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_6

$$\Omega = ZN_{Av}\,\omega, \qquad \omega = \omega_{AB} - \frac{1}{2}(\omega_{AA} + \omega_{BB}) \qquad (6.3)$$

where Z is the coordination number of A and B, that is, the number of nearest neighbors to A or B, N_Av is the Avogadro number, and ω_ij designates the interaction energy of the i-j pair. Positive ω corresponds to a mixture where the like atoms attract more strongly than unlike atoms (the potential energy of attractive forces is negative, so that stronger attraction between like atoms means ω_AB is a smaller negative number than the average of ω_AA and ω_BB). It is a good approximation when the atoms A and B have nearly the same radius. This model can be used for solids and liquids, although in liquids one has to use average values instead of Z and ω. Using condition (6.1), the free energy (6.2) may be expressed as a function of one variable, e.g., X ≡ X_B; see Fig. 6.1a, b:

$$G^S(X) = G_A + (G_B - G_A)X + R_g T\,[X\ln X + (1-X)\ln(1-X)] + \Omega X(1-X) \qquad (6.4)$$

To estimate the roles of the different contributions, first notice that if Ω = 0 (ideal solution), then for all compositions and all finite temperatures the free energy of the solution is less than that of the mechanical mixture of the pure components because:

$$-\ln 2 \le X\ln X + (1-X)\ln(1-X) < 0 \quad \text{for } 0 < X < 1 \qquad (6.5)$$

This means that the first minute addition of solute to a pure substance will always dissolve to form a solution (not necessarily an ideal one). Owing to the downward convexity of the free energy function of the ideal solution, a single ideal solution is more stable than any mixture of ideal solutions. Second, if Ω < 0 (ordering solution), the situation is much the same because the type of convexity of the free energy does not change. However, if Ω > Ω_C > 0 (phase-separating solution), the situation is more complicated because the free energy is no longer convex definite; see Fig. 6.1b. The critical value of the interaction parameter Ω is defined as the one that changes the state of convexity of the function G^S(X), (6.4), that is, the value of Ω which allows the function

$$\frac{d^2G^S}{dX^2} = \frac{R_g T}{X(1-X)} - 2\Omega \qquad (6.6)$$

to vanish at exactly one point. This point is called the critical point. Its concentration is as follows:

$$X_C = \frac{1}{2} \qquad (6.7)$$

and the critical interaction parameter value is Ω_C = 2R_gT. It is customary to express the critical condition through the critical temperature of the system with the given strength of interactions Ω:

Fig. 6.1 Regular solution model of a binary system. (a) Contributions of the entropy of mixing (negative) and the energy of interactions (positive) of the species into the Gibbs free energy. (b) Gibbs free energy of the system with G_A ≈ G_B at T < T_C; dashed lines are the free energies of mixtures of the solutions. (c) Solid line, miscibility gap, (6.14); dashed line, spinodal curve, (6.13a); arrow, critical quench

$$T_C = \frac{\Omega}{2R_g} = \frac{Z\omega}{2k_B} \qquad (6.8)$$

Notice that T_C can be expressed through microscopic characteristics only. Although serving as a system's coordinate, the mole fraction X is not a hidden variable as defined in Sect. 1.2. Indeed, as the overall composition of the solution X₀ is set by the initial condition, the molar fraction of a stable state may not correspond to a local minimum of G^S(X). This does not mean that the principle of minimum of the total free energy does not apply to this system, but only that this principle must be supplemented with another condition or conditions specific to the physical nature of the variable X. Most importantly, a hidden variable can change freely while "searching" for the free energy minimum, but the value of the variable X is constrained by mass conservation. The latter means that in a closed system the total number of moles of each species does not change. In a binary system of A and B species, mass conservation can be expressed as follows:

$$n_A,\, n_B = \text{const}(t) \qquad (6.9)$$

It applies equally to a system that was initially set up as heterogeneous or as completely homogeneous. To examine the nature of the spinodal instability, let us consider an initially homogeneous system at constant temperature T < T_C and concentration X₀. Because of the movement of the atoms in the solution, there are local composition fluctuations of very small amplitude. Through these fluctuations, the system or any part of it may decompose into a mixture of solutions characterized, at least initially, by compositions very close to that of the initial homogeneous solution. If the Gibbs free energy of the mixture is lower than that of the homogeneous solution, then the latter is unstable, and further decomposition may occur to produce states of still lower free energy. Otherwise, the decomposition will not occur. Suppose that the molar numbers and fractions of the neighboring solutions are n_α, n_β and X_α, X_β. Then:

$$X_0 = \frac{n^\alpha_B + n^\beta_B}{n} = \frac{n^\alpha_B}{n^\alpha}\frac{n^\alpha}{n} + \frac{n^\beta_B}{n^\beta}\frac{n^\beta}{n} = X_\alpha x + X_\beta(1-x) \qquad (6.10)$$

where x is the relative proportion (fraction) of α in the solution. The graphical representation of this result is called the lever rule. The molar Gibbs free energy of the two-solution mixture is as follows:

$$G^m(X_0) = G^S_\alpha x + G^S_\beta(1-x) = G^S_\alpha\,\frac{X_0 - X_\beta}{X_\alpha - X_\beta} + G^S_\beta\,\frac{X_\alpha - X_0}{X_\alpha - X_\beta} \qquad (6.11)$$

This is an equation of a straight line passing through the two points with coordinates (X_α, G^S_α) and (X_β, G^S_β); a point on this line represents the free energy of the mixture G^m of overall composition X₀; see Fig. 6.1b. It may be seen that if the free energy vs concentration curve G^S(X₀) is convex downward, the free energy of a two-solution mixture of neighboring compositions G^m(X₀) is always higher than the free energy of the homogeneous solution G^S(X₀). Similarly, if G^S(X₀) is convex upward, G^m(X₀) is lower than G^S(X₀). Consequently, the solution is stable if d²G^S/dX² > 0 and unstable if d²G^S/dX² < 0. The inflection point:

$$\frac{d^2G^S}{dX^2} = 0 \qquad (6.12)$$

separates the regions of stability and instability of the homogeneous concentration X₀ with respect to small fluctuations. Hence, spinodal decomposition is the decomposition of a homogeneous solution resulting from infinitesimal fluctuations of its composition, and d²G^S/dX² may be called the modulus of stability of the solution. According to our definition in Sect. 1.2, (6.12) expresses the condition of the spinodal point. However, there is a significant difference: (6.12) depends essentially on the conservation condition (6.9). Equations (6.4) and (6.12) yield an expression for the spinodal curve, that is, the locus of the spinodal concentrations at different temperatures:

$$X^S_{\alpha(\beta)}\left(1 - X^S_{\alpha(\beta)}\right) = \frac{T}{4T_C} \qquad (6.13a)$$

Obviously, the same equation may be interpreted as an equation for the spinodal temperature T_S as a function of the overall concentration X₀; see Fig. 6.1c:

$$T_S = 4T_C\,X_0(1 - X_0) \qquad (6.13b)$$

Notice that for each T < T_C there are two spinodal concentrations, X^S_α < X_C and X^S_β > X_C, but T_S(X₀) is a single-valued function. The analysis of stability presented above may be extended beyond small fluctuations of neighboring regions (local stability) into the domain of global stability, that is, stability with respect to any compositional changes in the entire system. Indeed, applying the results of the analysis of two-phase equilibrium (Appendix H) to our system, we find that the condition of global stability is the common tangency of the free energy curve at the molar fractions X^E_α and X^E_β. Then, inspecting Fig. 6.1b and the calculations of (6.10) and (6.11), we see that if the overall composition of the solution X₀ lies between the points of common tangency, X^E_α and X^E_β, then a mixture of the two solutions with these molar fractions and the relative proportion of the α solution

$$x^E_\alpha = \frac{X_0 - X^E_\beta}{X^E_\alpha - X^E_\beta}$$

is more stable (has lower total free energy) than the homogeneous one. For a regular solution with G_A ≈ G_B, the condition of common tangency, see (H.12), is as follows:

$$\frac{dG^S}{dX} = R_g T\,\ln\frac{X^E_{\alpha(\beta)}}{1 - X^E_{\alpha(\beta)}} + 2R_g T_C\left(1 - 2X^E_{\alpha(\beta)}\right) = 0 \qquad (6.14)$$

The temperature vs concentration graph of this condition, called the solubility curve or miscibility gap, is depicted in Fig. 6.1c. From the standpoint of Sect. 2.2, this is a phase diagram, and the solubility curve represents the equilibrium phase boundary because at the points of this curve the equilibrium states, one-phase and two-phase, exchange their stabilities.

Example 6.1 Critical Point of a Binary System
Show that near the critical point:

$$X_C - X^E_{\alpha(\beta)} \approx \sqrt{3}\left(X_C - X^S_{\alpha(\beta)}\right) \qquad (6E.1)$$

Let us represent the solubility and spinodal curves as follows:

$$X^E_{\alpha(\beta)} = X_C\,(1 + \varepsilon_E(T)) \qquad (6E.2a)$$

$$X^S_{\alpha(\beta)} = X_C\,(1 + \varepsilon_S(T)) \qquad (6E.2b)$$

Then, taking (6.7) into account, in the vicinity of the critical point, (6.13) and (6.14) turn into the following:

$$\frac{2\varepsilon_E}{\ln(1+\varepsilon_E) - \ln(1-\varepsilon_E)} = \frac{T}{T_C} \approx 1 - \frac{1}{3}\varepsilon_E^2 \qquad (6E.3a)$$

$$\frac{T}{T_C} = 1 - \varepsilon_S^2 \qquad (6E.3b)$$

Comparing these equations, we obtain the desired result. Notice that Eqs. (6E.3a, b) apply to both the α and β sides of the curves.
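The near-critical relation (6E.1) can also be verified numerically, by solving (6.14) for the binodal composition with bisection and comparing it with the spinodal (6.13a). The sketch below is my own illustration (the value of T/T_C is an assumption), not part of the book:

```python
import math

# Near-critical check of (6E.1) for the regular solution with G_A ≈ G_B.
# Binodal (6.14): ln(X/(1-X)) + 2*(Tc/T)*(1-2X) = 0, nontrivial root X < 1/2;
# spinodal (6.13a): X*(1-X) = T/(4*Tc).
t = 0.999  # T/Tc, close to the critical point (assumed)

def binodal_eq(X):
    return math.log(X / (1.0 - X)) + (2.0 / t) * (1.0 - 2.0 * X)

# Bisection on [0.3, 0.499]: negative far from Xc = 1/2, positive near it.
a, b = 0.3, 0.499
for _ in range(200):
    m = 0.5 * (a + b)
    if binodal_eq(m) < 0.0:
        a = m
    else:
        b = m
X_E = 0.5 * (a + b)                       # binodal (solubility) composition
X_S = 0.5 * (1.0 - math.sqrt(1.0 - t))    # alpha branch of the spinodal

ratio = (0.5 - X_E) / (0.5 - X_S)
print(ratio)   # close to sqrt(3), Eq. (6E.1)
```

Moving t further from 1 shows the ratio drifting away from √3, as expected from the higher-order terms dropped in (6E.3a).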

Exercises

1. Find when a single ideal solution is more stable than any mixture of ideal solutions.
2. Prove that the locus of points defining the spinodal line reaches its maximum temperature at the critical point.

Chapter 7

Thermal Effects in Kinetics of Phase Transformations

All stages of phase transformations are subject to thermal effects, which stem from the necessity to redistribute internal energy in the material that undergoes the transformation. Consider solidification of a homogeneous one-component liquid; think of water changing to ice or molten metal turning into solid. To be crystallized, the atoms of the liquid must be incorporated into the crystal, and the released latent heat must be transported away from the interface between the phases. The thermal effects may change the dynamical course of the transformation and even its outcome. Some of the effects are caused by the variation of material properties as a result of temperature gradients that develop during phase transformations, e.g., thermal stress due to thermal expansion. These effects are not considered in this book. Instead, we concentrate on the thermal effects related to latent heat release and redistribution. In this chapter they will be analyzed in the framework of a free-boundary (Stefan) problem. The Stefan problem has a special place in the theory of phase transformations because it was the first significant, mathematically rigorous, and physically realistic problem resolved exactly. It belongs to the class of so-called free-boundary problems, whose essential new feature is the existence of a phase-separating interface whose motion must be determined. One prominent thermal effect of phase transformations related to latent heat release and redistribution is called recalescence, that is, the temperature increase at the interface and in the adjacent domains of the phases. Recalescence reduces the rate of transformation; it will be considered in Sect. 7.2.2. The most prominent thermal effect of this nature is the appearance of dendrites, which are treelike microstructures common during crystallization of pure substances and alloys; see Sect. 3.1.
Dendrites are formed when the liquid phase is supercooled below the melting point of a pure substance or liquidus temperature of an alloy. The supercooling of the liquid violates stability of a smooth interface (Sect. 7.3) and gives rise to the dendritic morphology (Sect. 7.4). The latter turns out to be the most efficient in removing the latent heat away from the interface.

7.1

Formulation of the Stefan Problem

The solid and liquid phases are separated by a boundary (interface, or front of solidification), which may be described by its implicit equation:

$$\Omega(x,y,z,t) = 0 \qquad (7.1)$$

Its evolution may be described by the normal component of the front velocity:¹

$$v_n \equiv \frac{\partial n}{\partial t} \qquad (7.2)$$

If there is a preferred direction of motion, the normal component of the front velocity may be expressed as follows:

$$v_n = v\cos\theta \qquad (7.2a)$$

where v is the velocity in that direction and θ is the angle between that direction and the normal. The front moves in accordance with the laws of physics (thermodynamics). In our problem, the main thermodynamic process that controls solidification is heat conduction. To describe it, we can use the equation of continuity for the enthalpy H:

$$\frac{\partial H}{\partial t} = -\nabla\cdot J \qquad (7.3)$$

where H = C(T − T_ref), T_ref is a reference temperature, and J is the heat flux, for which we assume the Fourier law:

$$J = -\lambda\nabla T \qquad (7.4)$$

with the thermal conductivity λ. Finally, the heat conduction equation takes the following form:

$$C\frac{\partial T}{\partial t} = \lambda\nabla^2 T \qquad (7.5)$$

where the Laplacian in the Cartesian system takes the following form:

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \qquad (C.14)$$

¹ The tangential component remains unspecified by the Stefan problem; see discussion after (7.6).

Fig. 7.1 Propagation of a front of solidification (velocity v, fluxes J_s into the solid and J_l into the liquid, isotherms (i)–(iii)) in the presence of a supercooled melt. Enhancement of a bulge on a plane front constitutes its morphological instability

As a solution of the problem, we want to know the temperature distributions in the solid T(Ω_s) and liquid T(Ω_l) phases, the location of the front (7.1), and the rate of its evolution (7.2). As is known, to be solvable, a PDE of the type (7.5) must be supplemented by boundary and initial conditions. However, conditions on the outer boundary of the system alone are not sufficient to solve the problem of the motion of the front, because the front's position in time is not known. All we know is that it is free to move subject to the thermodynamic constraints. By specifying the conditions on the front (interface), we arrive at the free-boundary, or Stefan, problem.

7.1.1 Stefan Boundary Condition

In time Δt, as a result of the phase change in a layer of thickness Δn, an amount of heat LΔn is liberated at the phase boundary (recall that L is the latent heat). This heat must be conducted away from the boundary by thermal fluxes into the solid and liquid: LΔn = (J_l − J_s)Δt; see Fig. 7.1. Then, taking into account (7.4), we obtain the following:

λ_s (∂T/∂n)_s − λ_l (∂T/∂n)_l = L v_n  (7.6)

where ∂/∂n = n·∇ is the derivative in the direction normal to the front and pointing into the liquid. Notice that only the normal components of the flux take part in the conduction process; the tangential components help spread the heat along the interface but do not drive it away.

7.1.2 Thermodynamic Equilibrium Boundary Condition

We assume that on the phase boundary (7.1) the condition of thermodynamic equilibrium (2.2) sets in, that is:

T_f ≡ T(Ω_s = 0 − 0) = T(Ω_l = 0 + 0).  (7.7)

7.1.3 Phase Equilibrium

The precise magnitude of the temperature T_f on the front is subject to discussion. On the one hand, one may assume that there is enough time to establish the condition of phase equilibrium (2.3):

T_f = T_E.  (7.8a)

T_E is the local equilibrium temperature, which, as we have seen in Sect. 2.4, depends on the curvature K of the interface and the surface tension σ according to the Gibbs-Thomson equation:

T_E = T_m (1 − 2σK/L)  (2E.5)

On the other hand, as we found in Sect. 4.2.4, advancement of a phase-separating interface requires a "driving force," which characterizes the departure of the interface from the state of phase equilibrium. It may be described by the following relation between the normal component of the front speed v_n and the temperature T_f of the front [1]:

v_n = μ(T_E − T_f)  (7.8b)

where μ is called the kinetic coefficient; it is a measurable quantity in the experiments on the kinetics of phase transformations, e.g., crystallization.
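The kinetic boundary condition (7.8b), combined with the Gibbs-Thomson relation (2E.5), can be illustrated with a short numerical sketch (the function names and sample numbers are ours, not from the text):

```python
def equilibrium_temperature(Tm, sigma, L, K):
    """Gibbs-Thomson relation (2E.5): T_E = Tm*(1 - 2*sigma*K/L)."""
    return Tm * (1.0 - 2.0 * sigma * K / L)

def front_speed(mu, TE, Tf):
    """Kinetic boundary condition (7.8b): v_n = mu*(TE - Tf)."""
    return mu * (TE - Tf)

# A flat front (K = 0) held exactly at Tm does not move, while a curved
# front at the same temperature retreats (v_n < 0), since curvature
# lowers the local equilibrium temperature T_E.
v_flat = front_speed(1.0, equilibrium_temperature(1.0, 0.1, 1.0, 0.0), 1.0)
v_curved = front_speed(1.0, equilibrium_temperature(1.0, 0.1, 1.0, 1.0), 1.0)
```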

7.1.4 Initial Condition

Although many different problems of solidification may be laid out in the framework of a Stefan problem, in the opinion of the author, the most interesting and physically challenging one is the problem of an initially supercooled pure liquid:

T(x, y, z, t = 0) = T_0 < T_m.  (7.9)

7.1.5 Far Field Condition

According to (7.3), heat in a solidifying system does not spread infinitely fast. Hence, far from the front, the liquid retains the initial temperature. This argument makes us adopt the following boundary condition:

T(Ω → ∞) → T_0 < T_m.  (7.10)

7.2 Plane-Front Stefan Problem

For a one-dimensional Stefan problem (Sect. 7.1), the equation for the front (7.1) turns into an equation for a plane:

x = x_f(t)  (7.11)

and the relation (7.2) between its position and the interfacial velocity v(t) takes the following form:

v = dx_f/dt  (7.12)

7.2.1 Diffusion Regime of Growth

It follows from the general properties of the heat equation (7.3) that the Stefan problem has a similarity solution of the type (5.1a). Specifically, we are looking for a solution in the following form:

T_s(l)(x, t) = A_s(l) + B_s(l) erf(ξ)  (7.13)

where:

ξ = x/(2√(αt))  (7.13a)

is the similarity variable,

α = λ/C  (7.13b)

is the thermal diffusivity,² and the coefficients A_s(l), B_s(l) must be specified by the free-boundary (7.6, 7.7, 7.8a), far field (7.10), and initial (7.9) conditions. The motion of the front is specified by the following:

x_f = 2β√(αt)  (7.14a)

v = β√(α/t)  (7.14b)

where β is a constant that needs to be determined. Satisfying the initial (7.9) and boundary³ (7.8a) and (7.10) conditions, we obtain the solution in the following form:

T_s(x, t) = T_0 + (T_m − T_0) erf(ξ)/erf(β)  (7.15a)

T_l(x, t) = T_0 + (T_m − T_0) erfc(ξ)/erfc(β)  (7.15b)

Satisfying the Stefan boundary condition (7.6), we find that the coefficient β obeys the following solvability condition:

√π β e^{β²} erf(β) erfc(β) = (T_m − T_0)/(L/C)  (7.16)

Such a regime is often called diffusion (heat)-controlled growth. The temperature fields (7.15a, b) and the solvability condition (7.16) are depicted in Fig. 7.2. Examining the solution of the Stefan problem, we come to the following conclusions. First, we find an important combination of the parameters of the system called the Stefan number:

St ≡ (T_m − T_0)/(L/C)  (7.17)

which determines the rate of growth in the diffusion-controlled regime. It expresses the relative supercooling of the melt and relates to the chemical potential difference between the phases:

² For simplicity, we consider the thermal conductivities of the solid and liquid to be equal.
³ T(x = 0, t) = T(x → ∞, t) = T_0.

Fig. 7.2 Diffusion regime of growth: the temperature fields in the solid and liquid phases. Inset: the solvability condition of the supercooled liquid problem

St = Δg̃/(Q L)  (7.17a)

cf. (2E.3), where:

Q ≡ L/(C T_m)  (7.17b)

is the ratio of the latent heat L to the thermal energy density CT_m. Second, for the diffusion-controlled growth to take place, the Stefan number must satisfy the following:

St < 1 or T_0 > T_m − L/C.  (7.18)

Thus, only a limited amount of supercooling (undercooling) results in the diffusion-controlled growth of the second phase.
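The solvability condition (7.16) is transcendental in β but monotone, so it is easily inverted numerically. A minimal sketch, assuming the condition in the form √π β e^{β²} erf(β) erfc(β) = St (our reading of (7.16)); math.erf and math.erfc are standard-library functions:

```python
import math

def solvability_lhs(beta):
    """Left-hand side of (7.16): sqrt(pi)*beta*exp(beta**2)*erf(beta)*erfc(beta).

    Behaves as 2*beta**2 for small beta and approaches 1 from below as
    beta grows, consistent with the constraint St < 1 of (7.18)."""
    return (math.sqrt(math.pi) * beta * math.exp(beta**2)
            * math.erf(beta) * math.erfc(beta))

def growth_coefficient(St):
    """Invert (7.16) for beta by bisection; requires 0 < St < 1, cf. (7.18).

    The bracket [0, 20] covers St up to about 0.999 without overflow."""
    if not 0.0 < St < 1.0:
        raise ValueError("diffusion-controlled growth requires 0 < St < 1")
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if solvability_lhs(mid) < St:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As St → 1 the computed β diverges, reproducing the limit (7.18); for small St, β ≈ √(St/2).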

7.2.2 Adiabatic Crystallization

One must wonder what would happen if we supercooled the liquid below the limit allowed by the diffusion-controlled growth, that is, to T_0 < T_m − L/C. To identify the problem and remove the constraint (7.18), let us recall that the latent heat is equal to the enthalpy change as a result of the old phase changing into the new one. This means that even if all the latent heat emanated at the front were to stay where it was released—no heat conduction involved—the temperature of the new phase would be less than the melting point. This conclusion brings us into conflict with the equilibrium interface condition for a plane front (7.8a). Then, we replace it with the kinetic BC (7.8b), which in combination with (7.7) takes the following form:

T_f ≡ T_s = T_l = T_m − v/μ  (7.19)

Notice that we still assume thermal equilibrium between the old and new phases. However, we do not assume equality of the chemical potentials of the phases, which is achieved at the specific temperature T_m only. Instead, we assume that the chemical potential difference at the interface serves as the local driving force of the interface motion. Also notice that now two free-boundary conditions, (7.6) and (7.19), depend on the speed of growth. This implies a relationship between the temperature at the boundary and its gradients; however, the latter cannot be used for finding the temperature field (see Exercise 3). For convenience, we also change the boundary condition at x = 0 and make this boundary adiabatic—no heat transfer:

∂T(x = 0, t)/∂x = 0  (7.20)

However, the last modification could have been avoided. The problem (7.5, 7.6, 7.8b, 7.10, 7.19, 7.20) has a traveling wave solution of the type (5.1b) with the similarity variable:

u = x − vt  (7.21)

representing the Galilean transformation of the reference frame, and the stationary rate of growth of the new phase:

x_f = vt  (7.22)

at the speed of:

Fig. 7.3 The temperature fields in the conditions of adiabatic crystallization

v = μ(T_m − T_0 − L/C) = v_0 θ = const(t)  (7.23)

The temperature fields are (see [2] and Fig. 7.3) as follows:

T_s(x, t) = T_0 + L/C = const  (7.24a)

T_l(x, t) = T_0 + (L/C) exp(−u/l_T) = T_0 + (L/C) exp(−θu/l_μ)  (7.24b)

where:

l_T = α/v  (7.25a)

l_μ = λ/(μL)  (7.25b)

v_0 = μL/C = α/l_μ  (7.25c)

θ = St − 1 = (C/L)(T_m − T_0) − 1  (7.25d)

are, respectively, the thermal and kinetic lengths, velocity scale, and hypercooling of the melt. Compare (7.23) with (7.14b), and notice that now the speed of growth is independent of the heat conductivity λ but depends on the kinetic coefficient μ.

That prompted researchers to call such a growth regime kinetics-controlled. It is important to notice that in this regime the Stefan number must satisfy the following:

St > 1 or T_0 < T_m − L/C.  (7.26)

That is, the kinetics-controlled growth can be realized in conditions of strong supercooling (hypercooling) of the liquid.
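The consistency of the interface condition (7.19) with the traveling-wave solution (7.23, 7.24b) can be checked directly: at u = 0 the liquid field gives T_0 + L/C, which must equal T_m − v/μ. A short sketch with hypothetical parameter values chosen so that St = 2 (a hypercooled melt):

```python
import math

def front_speed(mu, Tm, T0, L, C):
    """Kinetics-controlled growth speed (7.23): v = mu*(Tm - T0 - L/C)."""
    return mu * (Tm - T0 - L / C)

def liquid_temperature(u, T0, L, C, alpha, v):
    """Temperature ahead of the front (7.24b): T = T0 + (L/C)*exp(-v*u/alpha)."""
    return T0 + (L / C) * math.exp(-v * u / alpha)

# Hypothetical parameter set with St = C*(Tm - T0)/L = 2 > 1, cf. (7.26)
mu, Tm, T0, L, C, alpha = 0.5, 1.0, 0.0, 0.5, 1.0, 1.0
v = front_speed(mu, Tm, T0, L, C)                      # 0.5*(1 - 0.5) = 0.25
T_front = liquid_temperature(0.0, T0, L, C, alpha, v)  # T0 + L/C
# (7.19): T_front must coincide with Tm - v/mu
```

Far from the front, the liquid temperature decays exponentially back to T_0, in agreement with the far field condition (7.10).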

7.2.3 Critical Supercooling Crystallization

One may ask a question: what would happen if the supercooling was exactly such that:

St = 1 or T_0 = T_m − L/C?  (7.27)

The problem is that neither the diffusion-controlled nor the kinetics-controlled growth regime provides an adequate solution, because the former yields divergence of the constant β, while the latter yields zero velocity at this condition. It turns out [3] that at the critical supercooling, the crystallization problem has a solution where:

v = (αv_0/(3t))^{1/3}  (7.28)

x_f ∝ t^{2/3}  (7.29)

and the temperature field is depicted in Fig. 7.4.

Fig. 7.4 The temperature fields in the conditions of critical supercooling
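The scaling (7.28, 7.29) can be verified numerically. Integrating (7.28) in time gives x_f = (3/2)(αv_0/3)^{1/3} t^{2/3}; the 3/2 prefactor follows from our choice x_f(0) = 0 and is not stated in the text. A sketch:

```python
def tip_speed(t, alpha, v0):
    """Front speed at critical supercooling (7.28): v = (alpha*v0/(3*t))**(1/3)."""
    return (alpha * v0 / (3.0 * t)) ** (1.0 / 3.0)

def front_position(t, alpha, v0):
    """x_f obtained by integrating (7.28) with x_f(0) = 0; note x_f ~ t**(2/3),
    cf. (7.29)."""
    return 1.5 * (alpha * v0 / 3.0) ** (1.0 / 3.0) * t ** (2.0 / 3.0)
```

A central-difference derivative of front_position recovers tip_speed, and doubling the time twice (t → 8t) multiplies x_f by 4, as t^{2/3} requires.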

7.2.4 General Observations

Notice that in all regimes of growth: (∂T/∂n)_l ≫ (∂T/∂n)_s. This is a result of the limited size of the growing crystal and the unlimited amount of the liquid phase. The heat flow into the solid phase is small for the solidification of strongly supercooled liquids.

7.3 Ivantsov's Theory*

In his celebrated paper [4], Ivantsov solved the three-dimensional Stefan problem. Compared to the general formulation of (7.1–7.10), Ivantsov considered a one-sided thermal problem where the heat flux is allowed in the liquid phase only, while the solid phase remains isothermal. As we found from the one-dimensional analysis, such an approximation can be made if we are looking for a limited-size solid growing in an unlimited-size liquid.⁴ In the framework of the one-sided thermal problem, the heat equation (7.5) applies to the liquid phase only, and the heat balance (7.6) and thermodynamic equilibrium (7.7), (7.8a) boundary conditions take the following form:

−λ (∂T/∂n)_l = L v_n  (7.30)

T_f ≡ T_l(Ω_l = 0) = T_m.  (7.31)

To solve the problem, Ivantsov adopted the following approach. As the front of crystallization is isothermal, all the points on the surface are connected by the following relation:

dT = (∂T/∂t)dt + (∂T/∂n)dn = 0  (7.32)

which transforms (7.2) into the following relation:

v_n = −(∂T/∂t)/(∂T/∂n)  (7.33)

Then, the Stefan BC (7.30) turns into the following:

⁴ In his next work [5], Ivantsov "discovered" the phenomenon of constitutional undercooling, which was later rediscovered by Tiller, Jackson, Rutter, and Chalmers.

λ ∂T/∂n = L (∂T/∂t)/(∂T/∂n)  (7.34a)

or

(∂T/∂n)² = (L/λ) ∂T/∂t  (7.34b)

Finally, Ivantsov replaced the constant on the right-hand side of (7.34b) with a general function of T:

(∂T/∂x)² + (∂T/∂y)² + (∂T/∂z)² = f(T) ∂T/∂t  (7.35)

and declared this equation valid for the entire liquid field. Notice that (7.34b) and (7.35) reveal the truly nonlinear nature of the Stefan problem (7.1, 7.2, 7.5, 7.6, 7.7, 7.8a, 7.8b, 7.9, 7.10), which can easily be overlooked because all its equations and conditions look linear. Ivantsov's idea was that sometimes it is easier to find a solution of a first-order PDE, although nonlinear, see (7.35), than of a linear second-order PDE, see (7.5). Now, the strategy to solve the problem looks as follows:

(i) Find a general integral of (7.35), which includes an arbitrary function and arbitrary constants.
(ii) By prescribing relations between some of the arbitrary constants and eliminating them (envelope formation), the general solution should be reduced to a specific one.
(iii) The arbitrary function must be determined from the heat equation (7.5).
(iv) The arbitrary constants must be determined from the boundary conditions (7.10, 7.30, 7.31).

The general solution of (7.35) can be found by the method of characteristics.⁵ It is as follows:

((C₁² + C₂² + C₃²)/C₄) ω(T) = C₁x + C₂y + C₃z + C₄t + C₅  (7.36a)

ω(T) ≡ ∫ dT/f(T)  (7.36b)

The general solution has many particular ones, which present interest in their own right. The interface advancement according to many of them is proportional to the square root of time, similar to (7.14a). However, there is one particular solution where the interface advancement is linearly proportional to time, similar to (7.22). This solution is of special interest for us; it has the following envelope relation:

C₅ = 0;  C₄ = mC₃  (7.37)

⁵ Known to Ivantsov as the Lagrange-Charpit method.

A particularly interesting method of enveloping was suggested by Horvay and Cahn [6]. It consists of introducing an auxiliary function:

Υ(C₁, C₂, C₃) ≡ ((C₁² + C₂² + C₃²)/(mC₃)) ω = C₁x + C₂y + C₃z + mC₃t  (7.38)

where ω is considered a function of (x, y, z, t), and differentiating the new function with respect to its variables:

∂Υ/∂C₁ = 2(C₁/C₃)(ω/m) = x;  ∂Υ/∂C₂ = 2(C₂/C₃)(ω/m) = y  (7.39a)

∂Υ/∂C₃ = (1 − (C₁² + C₂²)/C₃²)(ω/m) = z + mt  (7.39b)

Substituting (7.39a) into (7.39b), we obtain an equation:

z + mt + (m/(4ω))(x² + y²) = ω/m  (7.40)

which can be resolved as follows (see Exercise 4):

ω(T) = (m/2)[ z + mt + √(x² + y² + (z + mt)²) ]  (7.41)

According to the outlined strategy, to find the explicit function T(x, y, z, t), we need to satisfy (7.5). For that, we derive from (7.41) the following expressions:

ω′ ∂T/∂t = (m²/2)(1 + (z + mt)/√·)  (7.42a)

ω′ ∂T/∂z = (m/2)(1 + (z + mt)/√·)  (7.42b)

ω′ ∂T/∂x = (m/2)(x/√·)  (7.42c)

ω″(∂T/∂z)² + ω′ ∂²T/∂z² = (m/2)(x² + y²)/√·³  (7.42d)

ω″(∂T/∂x)² + ω′ ∂²T/∂x² = (m/2)(y² + (z + mt)²)/√·³  (7.42e)

where the acute accent (′) means differentiation with respect to T, √· stands for the square root expression in (7.41), and the y-coordinate expressions are similar to those of the x-coordinate. Then, (7.5) turns into a closed ODE for ω(T):

ω(ω″/(ω′)² + 1/α) = 1,  (7.43)

which has a first integral:

K ω′ e^{ω/α}/ω = 1  (7.44)

where K is an integration constant. Using the rules of differentiation of inverse functions, (7.44) yields an expression:

dT/dω = K e^{ω/α}/ω  (7.45)

which may be integrated further to obtain the following solution:

T(ω) = K ∫ (e^{ω/α}/ω) dω  (7.46)

Now we can perform step (iv) of the strategy. Considering ω a constant, (7.40) is an equation of a two-parameter family of paraboloids of revolution z = z(x, y, t; ω, m) with the tip radius:⁶

r = −2ω/m  (7.47a)

advancing with the velocity:

dz/dt = −m  (7.47b)

Hence, the isotherms of the temperature field are confocal paraboloids of revolution, which may be labeled by the constants ω and m or the tip radius R and velocity v of the tip. Choosing the family that advances in the positive z-direction (see Fig. 7.5) and using (7.2a), we obtain that:

⁶ Recall that the mean curvature of a surface is the average of the two principal curvatures.

Fig. 7.5 Ivantsov's temperature distribution (7.53) as a function of coordinate u (7.48) at Pe = 0.5 (7.49). Inset: Ivantsov's solvability relation. Inset of the inset: moving paraboloid of revolution representing the front of a growing solid

v = −m  (7.47c)

Notice that we do not seek to satisfy the initial condition (7.9) because we are looking for a stationary solution. At this juncture, to present the results of the study in the most illuminating form, it is advantageous to convert to dimensionless variables. First, we introduce the dimensionless coordinate:

u = (1/R)[ z − vt + √(x² + y² + (z − vt)²) ]  (7.48)

and the Péclet number:

Pe ≡ vR/(2α)  (7.49)

which leads to the following:

ω = −α Pe u  (7.49a)

Then, noticing that at the front u_f = 1, we reformulate the boundary conditions. The isothermal front condition (7.31) turns into the following:

T(1) = T_m.  (7.50)

The heat balance condition (7.30) may be accounted for as follows. Using (7.36b), we obtain from (7.45) and (7.49a) the following:

L/λ = f(T_f) = (dT/dω)|_{ω_f} = −K e^{−Pe}/(α Pe)  (7.51)

The far field boundary condition (7.10) turns into the following:

T(u → +∞) → T_0 < T_m.  (7.52)

Finally, we can present the results as follows:

T(u) = T_0 + (L/C) Pe e^{Pe} E₁(Pe u)  (7.53)

where:

E₁(x) ≡ ∫_x^∞ (e^{−t}/t) dt

is the exponential integral function. Then, the celebrated Ivantsov solvability relation takes the following form:

St = Pe e^{Pe} E₁(Pe)  (7.54)

The temperature distribution (7.53) and the solvability relation (7.54) are depicted in Fig. 7.5.
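Ivantsov's relation (7.54) fixes only the product vR through the Péclet number. A sketch of its numerical inversion using a simple standard-library quadrature for E₁ (the function names are ours):

```python
import math

def exp_integral_E1(x, n=20000, span=60.0):
    """E1(x) = integral of exp(-t)/t from x to infinity, by composite Simpson
    after the shift t = x + s; the tail beyond s = span (~1e-26) is dropped.
    Assumes x > 0 and not extremely small."""
    h = span / n
    def g(s):
        return math.exp(-(x + s)) / (x + s)
    acc = g(0.0) + g(span)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * g(i * h)
    return acc * h / 3.0

def ivantsov_rhs(Pe):
    """Right-hand side of the solvability relation (7.54)."""
    return Pe * math.exp(Pe) * exp_integral_E1(Pe)

def peclet_number(St):
    """Invert (7.54) for Pe by bisection; requires 0 < St < 1."""
    lo, hi = 1e-6, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ivantsov_rhs(mid) < St:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since Pe e^{Pe} E₁(Pe) increases monotonically from 0 toward 1, the inversion is well defined only for St < 1.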

7.4 Morphological Instability*

The plane front of crystallization, which we analyzed in Sect. 7.2, is not stable with respect to the appearance of undulations on its surface [7–9]. The physical reason for the morphological instability of the plane front is the presence of the supercooled melt in front of it. Another indicator of the morphological instability is the opposing directions of the crystal growth and the temperature gradient. Indeed, let us assume that a small bulge appears on the plane interface; see Fig. 7.1. At the bulge, in order to maintain continuity of the temperature field, its isotherms—lines of equal temperature—become squeezed (the intervals between them become smaller); hence, the temperature gradient becomes greater. Then, according to (7.6), this leads to an increase of the growth rate, which propels the bulge even further. Another way to see that a bulge on the front grows faster than the rest of the front is to look at the kinetic boundary condition (7.8b) if it applies to the problem. Indeed, according to this relation, as the tip of the bulge gets into melt of temperature lower than that at the plane front, it starts growing faster. Apparently, enhancement of the bulge constitutes the "morphological" instability of the front of crystallization, which is the physical origin of dendrites.

To analyze the problem of stability of the front of crystallization in detail, we will use the method of normal modes described in Sect. 1.5. To simplify the analysis, we consider the case of the hypercooled liquid, which, as we saw above (Sect. 7.2.2), has a stationary solution [10]. Specifically, we examine the stability of a plane front (7.11) moving at constant speed (7.22) with the temperature distribution defined by (7.24). For the morphological stability analysis, we must extend our formulation to at least one more dimension, while extension to the third dimension is optional.
Then, the front equation (7.1) turns into x = x_f(y, t), and the heat conduction in the liquid is governed by (7.5) where:

∇² = ∂²/∂x² + ∂²/∂y²  (7.55)

For the case of a strongly supercooled liquid, we may consider the so-called one-sided model, which accounts for the heat conduction into the liquid phase only, because the heat flow into the solid phase is insignificant. Then, the heat balance condition (7.6) turns into (7.30). Furthermore, we transfer to the comoving reference frame, that is, the reference frame moving with the speed v of (7.23). The interface position in this frame we denote as follows:

x_f = h(y, t)  (7.56)

and define the perturbation Ψ(x, y, t) of the temperature field T as follows:

T(x, y, t) = T_0 + (L/C)[ e^{(v/α)h} + Ψ ] e^{−(v/α)x}  (7.57)

Such a representation accounts for all possible alterations of the position of the front and variations of the temperature. Then, we can formulate the free-boundary problem for Ψ(x, y, t) (see Exercise 5):

Ψ_t + vΨ_x − α(Ψ_xx + Ψ_yy) = (v/α)(α h_yy + v h_y² − h_t) e^{(v/α)h},  x > h(y, t)  (7.58)

αΨ_x − vΨ − α h_y Ψ_y = (v h_y² − h_t) e^{(v/α)h},
Ψ = [ θ − (l_μ/α)(v + h_t)(1 + h_y²)^{−1/2} + l_C h_yy/(1 + h_y²)^{3/2} ] e^{(v/α)h},  x = h(y, t)  (7.59)

Observe Eq. (7.59) and notice the appearance of a new length scale—the capillary length:

l_C = C T_m σ/L²  (7.60)

According to the method of normal modes, the linear instability of the basic state (7.22, 7.23, 7.24a, 7.24b, 7.25), which is expressed now as Ψ₀ = 0, h₀ = 0, v = αθ/l_μ, is examined by superposing on the basic state perturbations represented in terms of normal modes:

(Ψ, h) = (Ψ̃(x), h̃) e^{βt + iky},  (7.61)

substituting them into the system of Eqs. (7.58, 7.59), and linearizing the equations in the perturbation quantities. The solution for the perturbations is as follows:

Ψ̃(x) + v h̃/α = ψ₀ e^{−px}  (7.62)

where the constant ψ₀ is arbitrary and the constant p > −v/α. The latter condition guarantees boundedness of the temperature field (7.57) for x → ∞. This solution has a compatibility condition:

v²/α + β = (v/α + p)(v − l_μ β − α l_C k²),  α p² + v p = β + α k²  (7.63)

Fig. 7.6 The linear spectrum of perturbations of the basic state (7.22, 7.23, 7.24a, 7.24b, 7.25). (a) The amplification rate β as a function of the wave vector k; (b) the marginal stability curve in the plane (θ, k)

which may be analyzed for the function β(k, θ), called the linear spectrum of the perturbations. It has two branches, β₁ and β₂. The branch β₂ is always negative except for k = 0 and θ = 0, where it is equal to zero. For instance:

β₂(0, θ) = −(α/l_μ²) θ³ (1 − 3θ + ...) ≤ 0.  (7.64)

β₁ may be positive or negative but is always equal to zero at k = 0. The marginal stability curve (β = 0) also has two segments, k₁ = 0 and the cutoff wave number k₂, where:

k₂² = (θ/(l_C l_μ))(1 − √(θ l_C/l_μ)).  (7.65)

The linear spectrum (7.63, 7.64, 7.65) can be summarized by plotting the amplification rate β versus the wave number k for fixed values of θ, as shown in Fig. 7.6a, and the marginal stability curve in the plane (θ, k), as shown in Fig. 7.6b. If (7.65) yields a real value for k₂, then all the normal modes of the branch β₁ with k₁ < k < k₂ grow with time; that is, the front is unstable. However, if k₂ is imaginary, then there are no unstable modes; that is, the basic state (7.22, 7.23, 7.24a, 7.24b, 7.25) is stable. Equation (7.65) shows that k₂ is imaginary for the hypercoolings θ > Ki, where the kinetic number is defined as follows:

Ki ≡ l_μ/l_C = αL/(μσT_m)  (7.66)

Thus, at large magnitudes of hypercooling, the front experiences restoration of its morphological stability. The criterion of the absolute (or second) stability—there are no domains of instability for larger supercoolings—can be expressed as follows:

θ_a = l_μ/l_C or St_a = 1 + Ki  (7.67a)

T_a = T_m − L/C − (L/C)² λ/(T_m σ μ)  (7.67b)

or

v_a = α/l_C  (7.68)

Equation (7.65) also shows that for θ = 0 there are no unstable modes. In that case both branches of the spectrum coincide, with β₁,₂ ~ −2l_C k³ for small k; see Fig. 7.6a. However, notice that for θ = 0 the stationary basic state (7.22, 7.23, 7.24a, 7.24b, 7.25) is not valid and must be replaced by (7.27, 7.28, 7.29). It is advantageous to analyze the absolute stability problem as an interplay of the length scales. This can be done by calculating the amplification rate of the "most dangerous" mode as the limit of absolute stability is approached. As the wave number k_m of this mode is small, from (7.63) we obtain the following:

β₁/α = ((l_T − l_C)/l_μ) k² − (l_T³/l_μ)[ 2 + 3(l_T − l_C)/l_μ + ((l_T − l_C)/l_μ)² ] k⁴ + O(k⁶).  (7.69)

Now one can see that the absolute stability boundary is approached when the thermal length l_T = α/v becomes equal to the capillary length l_C. A typical value of the kinetic number Ki is 10² for metals and 2 for organic liquids and white phosphorus. Thus, to reach the absolute stability boundary, one needs to hypercool the liquid strongly below the melting point. Equation (7.67b) shows that increasing the latent heat and thermal conductivity increases the hypercooling (promotes the instability), while increasing the specific heat, surface energy, and kinetic coefficient decreases the hypercooling (retards the instability). Another problem with large hypercoolings comes from the constraint that the temperature of the liquid state cannot fall below the limit of its thermodynamic stability.⁷ Besides, the capillary length must be much larger than the thickness of the interfacial transition region (see Chap. 9). Otherwise, when l_T ≤ l_C, the boundary conditions (7.6, 7.7, 7.8) are no longer valid, and the concept of the front ceases to be meaningful. Yet, all the abovementioned restrictions can be fulfilled for substances with low values of the latent heat.

⁷ For example, against amorphization.
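The marginal-stability results of this section can be explored numerically. A sketch, assuming the cutoff wave number in the form k₂² = (θ/(l_C l_μ))(1 − √(θ l_C/l_μ)) (our reading of (7.65)); the sample lengths are hypothetical:

```python
import math

def kinetic_number(l_C, l_mu):
    """Ki = l_mu/l_C, cf. (7.66)."""
    return l_mu / l_C

def cutoff_wavenumber_sq(theta, l_C, l_mu):
    """k2**2 from (7.65); a negative value means k2 is imaginary,
    i.e., there is no band of unstable modes."""
    return (theta / (l_C * l_mu)) * (1.0 - math.sqrt(theta * l_C / l_mu))

# Hypothetical lengths with Ki = 100, a metal-like value: a moderate
# hypercooling leaves a band of unstable modes, the band closes exactly
# at theta = Ki, and theta > Ki is absolutely stable (7.67a).
l_C, l_mu = 1.0, 100.0
Ki = kinetic_number(l_C, l_mu)
```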

7.5 Dendritic Growth

Dendrites are treelike structures, which appear very often during solidification of pure substances and alloys. Morphologically, they consist of primary stems and secondary arms (branches). The primary stem grows with constant velocity and develops a nearly parabolic tip shape. Occasionally, tertiary branches appear during dendritic growth. Combined, the system of secondary and tertiary arms is called the side-branch structure. In the process of growth, the side-branch structure forms a prominent envelope inside which structural coarsening takes place. In this chapter we are concerned with the so-called thermal dendrites, which are driven by the distribution of heat. Practically important dendritic structures of alloys have another important feature—microsegregation—uneven distribution of solute between the stems and arms of the dendrites.

An illuminating approach to the experimental study of dendritic structures is crystallization of transparent materials. In Fig. 7.7 is depicted the much-celebrated result of crystallization of succinonitrile by Huang and Glicksman [11], which captures many of the features of dendritic structure formation.

Fig. 7.7 Dendrite of succinonitrile. (From [11] with permission from the publisher)

In Fig. 7.8 is presented the experimentally observed morphological diagram of dendritic growth in supercooled cyclohexanol [12]. At small supercoolings a rounded crystal grows from the melt (Fig. 7.8a); at larger supercoolings a rounded crystal develops arms growing in crystallographic directions (Fig. 7.8b); at still larger supercoolings, full-fledged dendrites grow from the melt forming a prominent envelope (Fig. 7.8c); at greater supercoolings the dendritic structure becomes denser (Fig. 7.8d) and denser (Fig. 7.8e) and disappears completely at a rather high level of supercooling, which manifests restoration of the stability at large supercoolings (Fig. 7.8f).

Fig. 7.8 Morphological diagram of dendritic growth in cyclohexanol at different levels of supercooling (Tm − T0): (a) -0.005 °C; (b) -0.2 °C; (c) -0.4 °C; (d) -2.35 °C; (e) -3.55 °C; (f) -10 °C. (From [12] with permission from the publisher)

7.5.1 Analysis of Classical Theories of Crystallization

Theoretical analysis of the problem helps us understand that the physical reason for the dendritic growth is the morphological instability of the plane front (see Sect. 7.2), hence, the presence of the supercooled melt in front of it (see Sect. 7.4). As we found from our analysis in Sect. 7.4, the level of morphological instability depends very much on the magnitude of the supercooling in the liquid. As an outcome of the plane-front instability, a needlelike paraboloidal crystal may start growing in the melt (see Sect. 7.3). The smooth surface of the paraboloid is subjected to the same conditions of morphological instability, which is the reason for the appearance of the secondary and tertiary arms.

Although the theoretical analysis of the problem of dendritic growth is illuminating, it is limited to simplified situations. Indeed, analysis of the morphological instability of the front is limited to small perturbations. Ivantsov's theory of growth of a paraboloidal crystal provides a solution for a stationary advancing, shape-preserving front of crystallization. But it has the following deficiencies:

1. The theory allows us to find the product of the speed of the crystal and the radius of its tip, see (7.49) and the inset of Fig. 7.5, but does not allow us to determine these quantities separately, while the experiments always result in definite values of the speed and tip radius—the operational point of a dendrite.
2. The theory applies to cases of limited magnitude of the supercooling (St < 1), see the inset of Fig. 7.5, while experimentally it is possible to observe dendritic growth at larger supercoolings (St ≥ 1).
3. The theory does not have any description of the side-branch structure of a dendrite.

The deficiencies of the classical theories suggest two ways of resolving the problem of dendritic growth: either other physical principles must be included in the description, or one needs to find a way of obtaining a more accurate solution.
In the next section, we describe the method of numerical simulations, which follows the latter route.

7.6 Numerical Simulations of Dendritic Growth

7.6.1 Cellular Automata Method

Essentially, a cellular automata method (CAM) [13] consists of dividing the whole system into cells of linear size l_CA and volume Ω_CA and averaging the field quantities in each cell; see Fig. 7.9. For the method to work, the cell size l_CA should be much smaller than the essential length scales of the problem in question, e.g., the radius of curvature of the front. To derive the CAM dynamic equations for the problem of a moving front of crystallization, we compute the thermal (enthalpy) balance in each cell. For this we integrate the heat conduction equation (7.3) over the volume of the cell:

C (∂/∂t) ∫_{Ω_CA} T d³x = −∮_{∂Ω_CA} J·ds  (7.70)

The integral on the left-hand side we can present as T̄Ω_CA, where T̄ is the temperature averaged over the volume of the cell. The integral on the right-hand side is the sum of the fluxes on all boundaries of the cell, Σ_{∂Ω_CA} J. However, (7.70) does not present the full enthalpy balance of a cell with the front in it, because it does not take into account the latent heat release due to the advancement of the front. The latter can be accounted for by including the Stefan BC (7.6) into (7.70). Indeed, as the front moves through the cell, it releases the amount of heat:

Q_{Ω_CA} = v_n S_0 L  (7.71)

Fig. 7.9 Cellular automata method. A simulation cell, which contains the front that separates the solid and liquid phases

where S_0 is the area of the piece of interface in the cell. Then, adding (7.71) to (7.70) and introducing a new variable, the portion of solid in the cell:

ω ≡ Ω_S/Ω_CA  (7.72)

where Ω_S is the volume of the solid phase in the cell Ω_CA, the heat balance equation can be rewritten as follows:

C (∂T̄/∂t) Ω_CA = −Σ_{∂Ω_CA} J + L (∂ω/∂t) Ω_CA  (7.73a)

∂ω/∂t = (S_0/Ω_CA) v_n  (7.73b)

v_n = μ(T_m − T̄ − 2σT_m K_0/L)  (7.73c)

where the equation that relates vn to the average temperature in the cell T̄ and the local curvature of the piece of interface K0 was obtained by combining (7.8b) with (2E.5). Then, (7.72, 7.73) is a system of simultaneous equations for the new cell variables T̄ and ω. However, the system is not complete: the missing information has been lost "during" the averaging-over-the-cell (coarse-graining) procedure. First, we need to compute the quantities S0 and K0; they can be found using the values of the function ω in the cell of interest (0) and the surrounding cells (front tracking). Second, we need a transition rule, which indicates when (on what time step) the interface appears in the 0-cell if before that the cell was void of the interface. This rule can also be established based on the knowledge of the function ω in the surrounding cells. Hence, we can see that CAM depends heavily on the neighborhood of the 0-cell, {ωi}. The extent (nearest neighbors, next-to-nearest, etc.) and symmetry (e.g., square or triangular lattice) of the neighborhood enrich the method at the expense of computational resources (there is no free lunch!). The rules themselves depend on the physics of the transition process (nucleation, etc.) and the properties of the system (e.g., anisotropy). Let us consider a 2D system (you can think of it as being uniform in the third dimension) with crystallographic symmetry that corresponds to the square cell grid (⟨…⟩ is the fastest growth direction), for which we will be using the nearest-neighbor (left, right, up, down) neighborhood; see Fig. 7.9. One may consider anisotropy of the system more complicated than its crystallography, but for the sake of simplicity, we will not be doing that. Furthermore, we will rule out nucleation in front of the interface, which leaves only one mechanism of transformation: advancement of the phase interface.
Then the transition rules may be realized in the CAM by introducing one more, subsidiary variable—the cell type W, which multiplies the right-hand side of (7.73b). By definition, W0 = 1 if 0 < ω0 < 1 (two-phase


cell, i.e., with the interface inside it) or if ω0 = 0 (liquid cell) but ωi = 1 for at least one of (i = l, r, u, d); all other cells, that is, those with ω0 = 1 or 0, have W0 = 0; see Fig. 7.9. There are a number of ways in which one can calculate the functions S0 and K0 given the state of the neighborhood {ωi}. For instance, one can assume that the interface describes an arc of a circle in the neighborhood and use {ωi} to deduce its length, radius, and angle of inclination in the center cell. Below, we will be using the following formulae for the curvature and slope:

K0 = [d(dx/dz)/dz]0 / [1 + (dx/dz)0²]^(3/2);  (dx/dz)0 = (W0/2) Σi=l,0,r (ωiu − ωid)    (7.74)
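The cell-type rule and the finite-difference geometry of (7.74) can be sketched in a few lines of Python. This is an illustration, not the code of [13, 14]; in particular, the centered stencil for the z-derivative of the slope is an assumption, since the exact stencil is not given in the text.

```python
def cell_type(omega0, neighbors):
    """Cell type W0 for the transition rule described above.
    neighbors: values of ω in the left, right, up, down cells."""
    if 0.0 < omega0 < 1.0:                 # two-phase cell: interface inside
        return 1
    if omega0 == 0.0 and any(w == 1.0 for w in neighbors):
        return 1                           # liquid cell touching a solid one
    return 0                               # bulk solid or bulk liquid: inactive

def slope(omega_up, omega_down, W0):
    """(dx/dz)_0 = (W0/2) Σ_{i=l,0,r} (ω_iu − ω_id), cf. (7.74).
    omega_up/omega_down: ω in the upper/lower neighbors of cells l, 0, r."""
    return 0.5 * W0 * sum(wu - wd for wu, wd in zip(omega_up, omega_down))

def curvature(slope_minus, slope_plus, dz=1.0):
    """K0 from (7.74), with an assumed centered difference for d/dz of the slope."""
    dxdz = 0.5 * (slope_minus + slope_plus)
    return (slope_plus - slope_minus) / (2.0 * dz) / (1.0 + dxdz**2) ** 1.5

assert cell_type(0.5, [0, 0, 0, 0]) == 1   # two-phase cell is active
assert cell_type(0.0, [1, 0, 0, 0]) == 1   # liquid cell next to solid is active
assert cell_type(1.0, [1, 1, 1, 1]) == 0   # bulk solid stays inactive
assert curvature(0.0, 0.0) == 0.0          # flat interface has zero curvature
```

Note that a liquid cell is activated only once the front reaches one of its four neighbors, which is exactly the transition rule stated above.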

Now the system (7.72, 7.73a, 7.73b, 7.73c, 7.74) can be nondimensionalized using, respectively, the following length, time, velocity, and temperature scales:

lμ = λ/(μL),  τμ = lμ²/α,  v0 = lμ/τμ = μL/C,  L/C    (7.75)

and completely “cellularized,” that is, discretized in space and time:

T0^(+1) = T0 + a(Σi=l,r,u,d Ti − 4T0) + Δω0    (7.76a)
ω0^(+1) = ω0 + Δω0    (7.76b)
Δω0 = b W0 ξ0 (St − T0 − K0/Ki)    (7.76c)

where a = λΔt/(CΔX²), b = v0Δt/ΔX, and ξ0 = S0/lCA are the numerical parameters, and the supercooling St and kinetic number Ki are defined, respectively, by (7.17) and (7.66). For simplicity, we will use for ξ0 its isotropic average value of one. The method may be expanded to include the effects of anisotropy and the presence of other components. The system of equations (7.76) must be supplemented with boundary conditions: the no-flux condition was imposed on the left and right boundaries, which correspond to i = 0 and i = I, and on the boundary j = 0, which remains in the solid state at all times of the simulations, while the condition of constant temperature T = T0 was imposed on the boundary that remains in the liquid state far away from the front of solidification. Initially, the system was assumed to consist of a small amount of solid phase in the shape of a thin plane slab with a small perturbation on its surface, immersed into a large volume of liquid phase supercooled to the level of St > 0. For the initial perturbation, two cases were considered: (i) ripples of random amplitude and wavelength and (ii) a bump of the size of a single cell. Calculations by


(7.76) must be performed for each time step tk; however, one does not need to retain them all: for the next forward step, one needs just the current time-step data. So, to save computer memory, one can overwrite the current time-step data with the newly computed values and continue the cycle of computations. Periodically, structural measurements must be conducted for future analysis.
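The update cycle of (7.76) can be condensed into a short sketch. This is a hedged illustration, not the original program: it works in the dimensionless variables of (7.75), uses periodic boundaries via np.roll instead of the no-flux/fixed conditions described above, and assumes the cell type W and curvature K are supplied by a separate front-tracking routine.

```python
import numpy as np

def cam_step(T, omega, W, K, a, b, St, Ki, xi=1.0):
    """One explicit step of the cellularized system (7.76).
    T, omega, W, K: 2D arrays of cell temperature, solid fraction,
    cell type, and local front curvature (front tracking assumed done).
    a = λΔt/(CΔX²), b = v0Δt/ΔX as in the text."""
    # (7.76c): solid fraction formed in active (W = 1) cells this step;
    # clipping keeps ω within [0, 1] (a simplification of this sketch)
    d_omega = b * W * xi * (St - T - K / Ki)
    d_omega = np.clip(d_omega, 0.0, 1.0 - omega)
    # (7.76a): explicit heat diffusion plus the latent-heat source Δω
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
    return T + a * lap + d_omega, omega + d_omega     # (7.76a), (7.76b)
```

Stability of the explicit scheme requires a ≤ 1/4; with W = 0 everywhere, the step reduces to plain heat conduction and, on the periodic grid of this sketch, conserves the total heat content.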

7.6.2 Simulation Results

Figures 7.10 and 7.11 present numerical calculations using the system (7.76) for different initial conditions and supercoolings St [14].

A. Wavelength Selection
In the case of ripples on the surface of a thin crystal, Fig. 7.10(i)a, they grew in amplitude and quickly started changing their wavelength, Fig. 7.10(i)b, due to the strong thermal interaction between the growing bulges. Eventually, the growing crystal selected a dominant wavelength, which varied with the magnitude of the supercooling, Fig. 7.10(i)c.

Fig. 7.10 Simulations of solidification by (7.76) with different initial conditions. (i) Ripples of random amplitude and wavelength; (ii) a bump of the size of a single cell


Fig. 7.11 Morphological diagram of the dendritic growth at different magnitudes of supercooling St: (i) 0.6, (ii) 0.75, (iii) 0.85, (iv) 0.95, (v) 1, (vi) 1.5

B. Stationary Non-parabolic Shape of the Tip
In the case of a small bump on a flat surface of a crystal growing into a supercooled liquid, Fig. 7.10(ii)1, simulations showed that the surface is not stable: the bump turned into a needle crystal growing in the direction perpendicular to the surface, with the shape of the tip independent of the initial size of the bump, Fig. 7.10(ii)2. After an initial transient, a stationary state settles in the system, characterized by the tip advancing at constant speed with unchanging shape, Fig. 7.10(ii)3. At later stages of growth, the needle turned into a dendrite, with a neck attaching the needle to the bulk and with side branches, which did not change the shape of the tip, Fig. 7.10(ii)4. The latter was found to be described by the function x = κz^(1/n) with n < 2 and to be stable with respect to the perturbations of the side branches. The temperature distribution in the crystal was found to be almost uniform and below the melting point, while in the liquid phase it was quickly dropping and


approaching the initial temperature of the melt, with the highest gradient found at the tip, cf. Fig. 7.5.

C. Morphology of the Dendritic Structure
Later, on the surface of the needle, there appeared a periodic system of secondary branches growing in the perpendicular direction, Fig. 7.11(i). The first branch appeared at a distance zs from the tip of the needle, which may be called the stationary length. Near the tip, the secondary branches appear at equal distances ds from each other and are connected to the stem of thickness 2δs, which practically did not change with the distance from the tip. The newly formed branches thickened slowly and continuously grew sideways, speeding up to vs but not reaching the speed of growth of the tip of the needle, v. The side branches had a morphology similar to that of the main crystal, that is, consisting of a tip, a stem, and a neck connecting to the main stem, with the difference that the branches are not symmetric: they are more developed in the direction of growth of the main stem than in the opposite direction, Fig. 7.11(ii). On well-developed secondary branches, we observed the appearance of branches of higher order: tertiary branches growing in the direction parallel to the direction of growth of the main stem.

D. Evolution of the Side-Branch Structure
As a result of interactions of the thermal fields, the neighboring branches settled into a state of competitive growth, which leads to the coarsening of the structure of the side branches. This phenomenon consists of the slowing down and subsequent halting of the growth of every other side branch; see Fig. 7.11(iii). The neighboring branches, on the contrary, increased their speed of growth as a result of the increased separation. The "perished" branches did not remelt (their surface temperature did not exceed the melting point) but eventually were incorporated into one of the two adjacent branches.
In simulations, such evolution of the side-branch structure was observed for 0.6 ≤ St < 1; see Fig. 7.11(iv, v).

E. Dendritic Envelope Formation
Evolution of the side-branch structure can be described by the function χ(t, z), the x-coordinate of the dendrite interface at the distance z from the tip at time t. In the reference frame of the tip, evolution of the side-branch structure takes the form of a travelling wave of increasing amplitude, with phase velocity equal to the rate of growth of the tip v, which starts at some point away from the tip. With time, the system settles down into a state (let us call it asymptotic) where the crystal breaks down into a set of characteristic domains. In the first domain, the function χ does not depend on time (stationary domain) and increases from zero at z = 0 (the tip) to χ = δs at z = zs (the boundary of the stationary domain), Fig. 7.11(i). Treating z as a parameter, we conclude that at this point the stationary state χ(t, zs) loses stability and turns into a periodic state with the period ts = ds/v. It is said that a Hopf bifurcation takes place at z = zs. The envelope χs1(z) is drawn by the maxima of χ(t, z), the tips of the branches, while χm1(z) is drawn by the minima, the stem of the dendrite, Fig. 7.12. As z increases, the former grows significantly, while the latter grows insignificantly,


Fig. 7.12 Diagram of period doubling in the dendritic side-branch coarsening

which corresponds to the practically constant thickness of the stem, 2δs. Their difference lx = χs1(z) − χm1(z) constitutes the length of the side branches at the point z, Fig. 7.12.

F. Period Doubling in Dendritic Side-Branch Structure Coarsening
At z = z1 the side-branch structure coarsens. The coarsening happens when, out of two neighboring branches, one increases the speed of its growth (the active branch) and the other one decreases it (the passive branch), Fig. 7.11(ii). As a result, the spatial period of the structure doubles. The function χ(t, z) does not self-reproduce during ts; instead, two periods are required. In other words, the time period doubles and becomes 2ts. One can say that at z = z1, a bifurcation of period doubling (a Feigenbaum bifurcation) takes place. Now, the function χs1(z) loses stability and splits into an ascending branch χs2(z), corresponding to the tips of the active branches of the dendrite, and the branch χm2(z), which corresponds to the passive branches of the dendrite, whose lengths practically do not change, Fig. 7.12. According to the Feigenbaum theory, an increase in the complexity of a system occurs by way of successive bifurcations of period doubling until the system reaches the state of chaos. To verify this scenario, a dendritic crystal was grown at St = 0.85. It turned out that, after reaching the asymptotic state, the side-branch structure underwent a second bifurcation of period doubling at some point z = z2; see Fig. 7.11(iii). On the diagram of χ(z), Fig. 7.12, this bifurcation is accompanied by a splitting of the branch χs2(z), from which emerge the ascending χs4(z) and non-ascending χm4(z) branches. However, the dendritic envelope remains a smooth curve. The further stages of coarsening proceed, apparently, by way of repeated bifurcations of period doubling until the side branches gain branches of their own (the tertiary branches) and the entire dendritic morphology acquires the properties of a chaotic structure.

G. Absolute Stability
With the increase of the supercooling from 0.4 to 1, the rate of growth of the side branches vs increased, and the distance between their tips ds decreased; that is, the


side-branch structure became denser, Fig. 7.11(iv–vi). The latter was a result of the decrease of the thermal length lT = 2/vs of the growing branches. At St = 1, the structure of the side branches changed its character: instead of the system of long branches attached to the stem, a cellular structure appeared on the side surface of the growing crystal. In fact, the whole concept of a dendritic stem disappears. Further increase of the supercooling to 1.5 and above leads to the growth of crystals with morphologically smooth interfaces, Fig. 7.11(vi). Such a change of the morphology of growing crystals is called absolute stability or restoration of stability at large supercoolings; see (7.67) in Sect. 7.4. The transition from the dendritic to smooth morphologies at deep supercoolings of St > 1 can be explained as follows. At small supercoolings (St < 1), a purely diffusion-limited regime of growth of the new phase sets in, which depends significantly on the removal of heat. At large supercoolings of St ≥ 1, a significant amount of the latent heat emitted at the front can be neutralized in a small volume of the melt near the front, thus removing the need for branched dendritic morphologies, which promote rapid heat removal from the front at smaller supercoolings.

H. Comparison with Experiments
Many of the simulated features of dendritic growth have been observed experimentally. For instance, the mechanism of side-branch structure coarsening was observed in experiments on dendritic solidification, e.g., crystallization of succinonitrile by Huang and Glicksman [11]; see Fig. 7.7. The morphological changes of the dendritic structures with the change of the supercooling were observed experimentally in crystallization of supercooled cyclohexanol [12]; see Fig. 7.8.

7.6.3 Conclusions

The simulation results allow us to draw a few important conclusions regarding the formation of the dendritic structure.
1. Little bumps on a smooth interface turn into long stems covered by side branches. The reason for that is the interaction of perturbations through the thermal field. The wavelength selection of the perturbations is a result of a strong long-range interaction between the structural units through the temperature field.
2. The stationary tip does not have a parabolic shape.
3. The side-branch structure is highly periodic, with the period depending strongly on the supercooling.
4. The side-branch structure has a well-defined envelope.
5. At large supercoolings (St ≥ 1), the side branches disappear altogether: absolute stability is attained.
6. Moving away from the tip, the side-branch structure coarsens by doubling of its period.


Importantly, coarsening of the dendritic side-branch structure does not follow the Ostwaldian scenario, in which the structural units of the new phase amount to a small fraction of the old one and the diffusional (thermal) interaction between them is weak; see Chap. 5. Coarsening of the dendritic side-branch structure is an example of the high-volume-fraction limit of the coarsening mechanism, in which the interaction of the diffusion (thermal) fields is very strong. The main characteristic feature of this mechanism is the doubling of its structural period, as opposed to the self-similar regime of the particle-ensemble evolution in the process of Ostwald ripening, where the average size is proportional to the cube root of time. We can also draw a few conclusions regarding the cellular automata method itself. It efficiently captures the instability of a smooth interface to capillary wave perturbations and "translates" this instability into long needles with rather smooth tips. To produce the side branches, CAM has, so to speak, built-in fluctuations of sufficient amplitude to trigger them. These fluctuations are related to the coarseness of the CAM grid, which also brings about a grid-related anisotropy of the method. These features of the CAM do not allow us to resolve the subtle small-scale issues of dendritic tip stability but make it an effective tool in large-scale microstructural modeling.

7.7 Rate of Growth of New Phase

At this juncture, we have accumulated a number of theories of the rate of growth of a new phase from the supersaturated old one. To summarize, we plot in Fig. 7.13 the speed of growth of the new phase from an unlimited amount of the supercooled old phase, using the Stefan number (supercooling) as a measure of the driving force of the process. The mechanism of thermally activated growth considered in Sect. 4.1.1 describes isothermal crystallization as a process of crossing of a potential barrier by the molecules.⁸ Introducing the dimensionless parameter

Ω = v1L/(kBTm)    (7.77)

which relates the latent heat per molecule (v1L) to the average thermal fluctuation (kBT), the rate of the isothermal growth, see (4.11a) and (4.12c), can be expressed as follows:

v = aν e^(−Ξ/4) (1 − e^(−QΩ·St))    (7.78)

⁸The mechanism of condensation from a nearly ideal gas considered in Sect. 4.2.2 is less theoretically motivated and will not be considered here.

Fig. 7.13 Sketch of the speed of growth of a new phase versus the supercooling by various mechanisms. Green curve—thermally activated isothermal growth; black lines—thermally effected growth; black curve—dendritic branch. Solid lines—stable regimes; dash-dotted lines—unstable regimes of growth: diffusion-controlled for St < 1, kinetics-controlled for St > 1, critical supercooling for St = 1 (black-filled circle [3]). Brown circle—Mullins-Sekerka-Voronkov instability [8, 9]; red circle—the point of restoration of (absolute) stability [10]

where a is the molecular diameter and ν is the frequency of the molecular vibration along the reaction path. Release of the latent heat and its conductive redistribution gives rise to two important effects. First, removal of the heat from the growing front significantly retards the rate of growth, so that for small driving forces (St < 1) the possibility of stationary motion of the front is eliminated completely, see (7.14b, 7.16), while for large driving forces (St > 1) it is restored:

v = v0(St − 1)    (7.23)

but at a significantly lower level than under isothermal conditions. Comparing (7.78) with (7.14b, 7.16, 7.23), see Fig. 7.13, we find that, while much slower at small supercoolings (St ≪ 1), the rate of the thermal growth approaches that of the isothermal growth at large supercoolings (St > 1). This happens because at large supercoolings the latent heat is neutralized by the initial supercooling of the old phase, and the need for conductive removal of the heat vanishes. Second, the presence of a supercooled melt in front of the moving interface destroys its stability and gives rise to dendritic growth, which is represented in Fig. 7.13 by the speed of its tip (computed numerically). However, at rather high supercoolings (St > 1 + Ki), see (7.67a), the stability of the smooth front is restored, and we observe the disappearance of the dendritic morphologies.


Moreover, comparing (7.78) with (7.8b) and using the principle of correspondence, we find an estimate of the kinetic coefficient:

μ = (aνΩ/Tm) e^(−Ξ/4)    (7.79)
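The comparison summarized in Fig. 7.13 is easy to reproduce numerically. In the sketch below, the parameter values (aν, Ξ, Q, Ω, v0) are assumed, illustrative numbers that set the scale of the curves, not material data from the text.

```python
import numpy as np

a_nu, Xi, Q, Omega, v0 = 1.0, 2.0, 0.5, 4.0, 0.2   # assumed, dimensionless

St = np.linspace(0.0, 2.0, 201)
v_iso = a_nu * np.exp(-Xi / 4) * (1.0 - np.exp(-Q * Omega * St))  # (7.78)
v_front = np.where(St > 1.0, v0 * (St - 1.0), 0.0)                # (7.23)

# The thermally limited plane front cannot move steadily for St < 1 and,
# once restored at St > 1, stays below the isothermal rate in this range.
assert v_front[St <= 1.0].max() == 0.0
assert np.all(v_front <= v_iso + 1e-12)
```

Plotting v_iso and v_front against St recovers the qualitative layout of Fig. 7.13: the green isothermal curve rises from the origin, while the black thermally effected branch only starts at St = 1.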

Exercises
1. Verify that (7.13) is a solution of (7.5).
2. Verify that the solution of (7.16) diverges for St → 1.
3. Quick and dirty. A simplified resolution of the problem of adiabatic crystallization may look like the following. Combining the boundary conditions (7.6) and (7.19), we arrive at the equation for the temperature field: −λ ∂T/∂n = Lμ(Tm − T), which may be resolved to T = Tm − (L/C) e^(−n/lμ). Compare this expression with the exact solution (7.24b) and explain the difference.
4. Determine the thickness of the crystal after 10 s of plane-front crystallization of pure Ni if the initial and boundary temperature of the melt is T = 1500 K.
5. Molten lead is cooled down to 125 °C. What type of crystallization regime should you expect? What is the expected rate of crystallization in this case?
6. Prove that the second root of (7.40) represents a family of paraboloids of revolution orthogonal to (7.41).
7. Verify the free-boundary problem for Ψ(x, y, t) expressed by (7.58) and (7.59).
8. Hot melt at a temperature above the melting point is solidifying on a cold wall at a temperature below the melting point. Do you expect dendritic morphology in this case or not? Explain your answer.

References

1. M.E. Glicksman, R.J. Schaefer, Investigation of solid/liquid interface temperatures via isenthalpic solidification. J. Cryst. Growth 1, 297–310 (1967)
2. J. Rosenthal, The theory of moving source of heat and its application to metal treatment. Trans. Am. Soc. Mech. Eng. 68, 849–866 (1946)
3. A. Umantsev, Motion of a plane front during crystallization. Sov. Phys. Crystallogr. 30(1), 87–91 (1985)
4. G.P. Ivantsov, Temperature field around spherical, cylindrical, and needle-like crystal growing in supercooled melt. Dokl. Akad. Nauk SSSR 58, 567–569 (1947)
5. G.P. Ivantsov, Growth of spherical and needle-like crystals in binary alloys. Dokl. Akad. Nauk SSSR 83, 573–576 (1952)
6. G. Horvay, J.W. Cahn, Dendritic and spheroidal growth. Acta Metall. 9, 695–705 (1961)
7. W.W. Mullins, R.F. Sekerka, Morphological stability of a particle growing by diffusion or heat flow. J. Appl. Phys. 34, 323–329 (1963)


8. W.W. Mullins, R.F. Sekerka, Stability of a planar interface during solidification of a dilute binary alloy. J. Appl. Phys. 35, 444–451 (1964)
9. V.V. Voronkov, Condition of formation of cellular front of crystallization. Solid State Phys. 6(10), 2984–2988 (1964)
10. A. Umantsev, S.H. Davis, Growth from a hypercooled melt near absolute stability. Phys. Rev. A 45(10), 7195–7201 (1992)
11. S.-C. Huang, M.E. Glicksman, Fundamentals of dendritic solidification, II: Development of sidebranch structure. Acta Metall. 29, 717–734 (1981)
12. D.E. Ovsienko, G.A. Alfintsev, V.V. Maslov, Kinetics and shape of crystal growth from the melt for substances with low L/kT values. J. Cryst. Growth 26(2), 233–238 (1974)
13. A. Umantsev, V. Vinogradov, V. Borisov, Mathematical model of growth of dendrites in a supercooled melt. Sov. Phys. Crystallogr. 30(3), 262–265 (1985)
14. A. Umantsev, V. Vinogradov, V. Borisov, Modeling the evolution of a dendritic structure. Sov. Phys. Crystallogr. 31(5), 596–599 (1986)

Part II: The Method

Chapter 8: Landau Theory of Phase Transitions

We start this Chapter by introducing the concept of an order parameter (OP) as a hidden variable responsible for the symmetry changes during the transition. To identify equilibrium thermodynamic states in an open system, we choose the Gibbs free energy as a function of temperature, pressure, and order parameter (the Landau potential). Using the concept of the order parameter, the phase transition is considered as a mathematical bifurcation, or a catastrophe, of the Landau potential. The "catastrophic" approach helps us classify the phase transitions and see how different forms of the Landau potential are applicable to different cases of phase transitions. We also look at the special lines and points of the phase diagram from the point of view of the "catastrophic" changes of the order parameter. With this Chapter, we start building the beams-and-columns configuration of the field-theoretic method (FTM) in Fig. P.1. We conclude the Chapter with an analysis of the effect of an external field on the phase transition, using the properties of conjugation between the field and the order parameter. Although it is important to know the shortcomings of the Landau theory, e.g., that it excludes the effects of fluctuations, it is imperative to say here that this theory not only presents a correct general picture of the phase transitions, but will also provide for us a reasonable approach to the complicated dynamical problem, which, in many cases, has not been surpassed by other approaches.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_8

8.1 Phase Transition as a Symmetry Change: The Order Parameter

As we concluded in Chap. 2, phase transitions are always associated with significant changes of physical properties, that is, discontinuities, which are always problematic for a theoretical description. Landau [1] suggested an approach to phase transitions which avoids this problem. His approach uses the concept of "hidden variables" in thermodynamics; see Sect. 1.2. A hidden variable is an internal variable which


affects the properties of the system, hence its thermodynamic functions, even when the thermodynamic variables, e.g., P, T, V, E, are set by the conditions outside the system. A hidden variable is a measure of the deviation of the system from the state of equilibrium. For instance, it may be the reaction progress variable in a system where a chemical reaction occurs or the composition of a mixture of phases in a system of given pressure, temperature, and number of particles. In a way, Landau's idea aims to clarify that a phase transition looks discontinuous only in the space of the thermodynamic variables; if another, inner or hidden, variable is introduced, then the thermodynamic potentials of the system are continuous (even analytic) functions of the set of thermodynamic and inner variables. In the theory of phase transitions, it is customary to call this variable an order parameter (OP), in recognition of the fact that many phase transitions are associated with some kind of ordering. It is important to understand that the OP is not equivalent to the thermodynamic variables because the latter can be set in the system arbitrarily, while the former, at thermodynamic equilibrium, becomes a function of the latter and takes on the value which delivers an extremum to the respective potential at the given values of the thermodynamic variables P, T, V, and E; see (1.5a, b). The concept of the OP is very helpful in defining a phase, see Sect. 2.1, because the "homogeneity" and "equilibration" should be understood in the sense of the OP. Then, a phase is defined as a locally stable part of a system that is homogeneous in the OP. The OP may represent a physical quantity, that is, be described by a dimensional variable. For mathematical convenience, it may be scaled so that it loses its original physical interpretation. Nevertheless, in many cases, the mathematical simplicity outweighs the physical transparency.
Landau realized that many phase transitions are associated with symmetry changes in the system. Hence, the OP can be defined as a measure of the symmetry difference between the phases before and after the transition. Actually, the physical meaning of the OP matters less than its symmetry properties. The OP can be defined such that it is zero in the high-symmetry phase and nonzero (positive or negative) in the low-symmetry phase. The OP should describe the phase transition, and its changes should reflect the symmetry changes essential for the transition. The Landau approach allows us to bridge the gap between the atomistic theories and macroscopic observations. Indeed, the OP of a specific system can be derived as the volume average of a microscopic quantity which characterizes the transition, the coarse-grained quantity (see Appendix A). In that regard, the concept of the OP in thermodynamics is similar to the concept of the wave function in quantum mechanics. The OP can be a scalar, a vector, or a tensor; the tensorial properties of the OP should reflect the tensorial properties of the microscopic quantity that characterizes the transition. The OP can be a complex or a multicomponent quantity, depending on the physical nature of the transition. Identification of the physical nature of the OP is the first step in the description of a phase transition. The OP is apparent for some transitions and unclear for others. For instance, for a martensitic transition (Sect. 3.2), which has a "mechanical" origin, the OP is a crystalline-lattice parameter change. For the ferromagnet/paramagnet transition (Sect. 3.3), the OP represents


spontaneous magnetic moment of the sample. The OP of a crystal/melt transition (Sect. 3.1) is not so obvious: it is the amplitude of the lattice-periodic component of the density function. For a liquid-gas transition, the difference in densities of the phases plays the role of the hidden variable; however, it is not an order parameter because the symmetry of the phase does not change after the transition. Another reason not to consider density as an OP is that density dynamically obeys equations which differ from the dynamical equations that we will derive for the OP in Chaps. 10 and 11. The thermodynamic properties of the system will be described by the coarse-grained thermodynamic potential, which is a function of the OP (η) and whose minima are associated with the equilibrium states of the system. Which thermodynamic potential is the most appropriate for the theory that describes the phase transition? One can choose any appropriate thermodynamic potential: the Gibbs free energy G as a function of (P, T, η), the Helmholtz free energy F as a function of (V, T, η), or the entropy of the system S as a function of its internal energy E and (V, η). For the thermodynamic consistency of the description of the system, the potential must obey the laws of thermodynamics (Sect. 1.1). In their most natural forms, the first and second laws are expressed through the internal energy E and entropy S of the system, which may seem to indicate that these potentials should be chosen for the phase-transition description. In fact, this is not the case because, first, thermodynamics provides an easy recipe for converting one thermodynamic potential into another: the Legendre transformation (see Appendix F). Second, intuitive clarity of the potential is paramount for the Landau theory because it is a phenomenological theory.
From the standpoint of the second argument, the Gibbs free energy, G(P, T, η), has a significant advantage in that it naturally represents an open system, as opposed to S(V, E, η), which represents a closed one. Then, according to the laws of thermodynamics, an equilibrium state of an open system with given P and T corresponds to the minimum of the Gibbs free energy. Hence, it will be found among the critical points of the free energy as a function of the hidden variable η:

(∂G/∂η)P,T = 0    (8.1a)

As is known, the stability of such an equilibrium is determined by the sign of the second derivative of the free energy with respect to the hidden variable:

(∂²G/∂η²)P,T > 0    (8.1b)

Another advantage of identifying the OP with a hidden variable is that the first derivative of the free energy of a nonequilibrium state with respect to the OP plays the role of the thermodynamic driving force toward equilibrium, which will help us derive the dynamical equations for the OP in Chaps. 10 and 11.
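Conditions (8.1a, b) are easy to illustrate on the standard quartic potential G(η) = (a/2)η² + (g/4)η⁴, an assumed illustrative form rather than a specific expansion from this book:

```python
import math

def critical_points(a, g):
    """Solve ∂G/∂η = aη + gη³ = 0, cf. (8.1a), and test the stability
    condition ∂²G/∂η² = a + 3gη² > 0, cf. (8.1b), for the assumed
    quartic potential G(η) = (a/2)η² + (g/4)η⁴ with g > 0.
    Returns a list of (η, is_stable) pairs."""
    etas = [0.0]
    if a < 0:                    # nonzero extrema appear below the transition
        etas += [math.sqrt(-a / g), -math.sqrt(-a / g)]
    return [(eta, a + 3.0 * g * eta**2 > 0) for eta in etas]

# a > 0: the symmetric state η = 0 is the only equilibrium, and it is stable.
assert critical_points(2.0, 1.0) == [(0.0, True)]
# a < 0: η = 0 loses stability; η = ±√(-a/g) become the stable phases.
pts = critical_points(-4.0, 1.0)
assert pts[0] == (0.0, False)
assert pts[1] == (2.0, True) and pts[2] == (-2.0, True)
```

The sign change of the coefficient a thus converts the single minimum of G into a pair of minima, which is the bifurcation picture of a phase transition developed in the rest of the chapter.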

8 Landau Theory of Phase Transitions

The ability to differentiate a thermodynamic potential is related to its analyticity, which means that, in the space of the thermodynamic variables plus OP, in the vicinity of the transition point, the potential can be represented by a Taylor polynomial. According to Taylor's theorem, if the first n + 1 derivatives of the function G(η) exist at the point η = η̄(P, T), then:

G(η) = G(η̄) + G′(η̄)(η − η̄) + (1/2!)G″(η̄)(η − η̄)² + (1/3!)G‴(η̄)(η − η̄)³ + ⋯ + (1/n!)G⁽ⁿ⁾(η̄)(η − η̄)ⁿ + R_n(η)  (8.2a)

where R_n(η) is the nth degree remainder, which is of the order of the (n + 1)st derivative of G estimated at some point between η and η̄. For small values of (η − η̄), the remainder may be dropped. How small must the difference (η − η̄) be for this approximation to be valid? As known, the accuracy of the approximation depends on the degree of the Taylor polynomial. Although the accuracy of the approximation increases with the degree of the polynomial, its "convenience" decreases. Thus, one must find a rational compromise to the accuracy/convenience trade-off. This problem is resolved only if the terms that make comparable contributions into the free energy approximation remain in the polynomial. For the point η = η̄ to be associated with the symmetry change, it must be a critical point (G′(η̄) = 0); see (8.1a). Therefore, the linear term in the expansion (8.2a) must vanish. What is the highest degree of the polynomial n_m and which terms in the expansion (8.2a) should be retained are the core questions of the phase transition modeling. Obviously, they cannot be covered here completely; however, some of them may be answered based on the following criterion. Often one sees in the literature a statement that "the expansion (8.2a) is valid only for small values of η" without specification of what this means. That is not accurate and must be corrected. According to Taylor's theorem, the requirement must be that the remainder R_{n_m}(η) is smaller than any of the nonvanishing terms of the polynomial in the interval of interest of the OP. The boundaries of the interval (η_l, η_r) are either known from the physical constraints of the problem or should be found from the mathematical analysis of the free energy. Once we have them, the criterion takes the following form:

max_{η_l<η<η_r} |(1/n_m!)G^{(n_m)}(η̄)(η − η̄)^{n_m}| ≫ max_{η_l<η<η_r} |R_{n_m}(η)|  (8.2b)

To take full advantage of Taylor's formula, the polynomial expansion should be truncated at the lowest possible order n_m. The number n_m determines many properties of the transition, most notably its kind, first or second; see Sect. 2.3. For instance, a martensitic transition is recognized as first order. Then, one can use the relation G(η) = G₀ + (1/2)G″(η̄)η² and interpret it as Hooke's law with G″(η̄) playing the role of the bulk modulus.

8.2 Phase Transition as a Bifurcation: The Free Energy

Many properties of the expansion (8.2) can be understood on the basis of the mathematical theories of bifurcation and catastrophe [2, 3]. Let's identify the minimal set of properties of a function G(η) that can be used as the Landau-Gibbs free energy to describe a phase transition from a phase α to phase β. It must have at least two minima of different magnitudes, the local (lmin) and the global (gmin), separated by a local maximum (lmax), which represents the free energy barrier state t:

G_β ≡ gmin_η G(η) ≤ G_α ≡ lmin_η G(η) ≤ G_t ≡ lmax_η G(η)  (8.3)

A polynomial of degree not less than four possesses these properties. Without any loss of generality, we can assume that (8.2) is an expansion near the high-symmetry phase with η̄ = 0. Then, the OP can be scaled such that G⁗(0) = 6, and we obtain the Landau potential [1]:

G(P, T, η) = G₀(P, T) + (1/2)A(P, T)η² − (2/3)B(P, T)η³ + (1/4)η⁴  (8.4)

where G₀(P, T) is the Gibbs free energy of the high-symmetry phase and the model coefficients A and B are assumed to be smooth functions of P and T. Then, A(P, T) and B(P, T) determine all the properties of the transition and can be used as the system's thermodynamic variables instead of (P, T). Notice that the Landau potential is symmetric with respect to B = 0; indeed, if B changes sign, the simple reflection η → −η will restore the potential. The equilibrium values of the OP can be found among the critical points of the Landau potential (8.4):

(∂G/∂η)_{A,B} = η(A − 2Bη + η²) = 0  (8.5)

Unsurprisingly, η₀ = 0 is a solution of this equation. In addition to this root, the equilibrium set contains two more solutions:

η± = B ± √(B² − A)  (8.6)

Equation (8.6) has real solutions (equilibrium states) only if A ≤ B². At:

A = B²  (8.7)

Fig. 8.1 Phase diagram of the system described by the Landau potential (8.4). The triple symbols identify levels of stability of the states in the respective regions of the phase diagram: lower symbol, globally stable; middle symbol, locally (meta)stable; upper symbol, unstable. Curves: red, spinodals: (1) low symmetry, (8.7); (2) high symmetry, (8.9); blue, phase boundaries: (3) (8.12); (4) (8.14); purple, constraint of the tangential potential, (8.27). LCP, the Landau critical point (8.16)

η₊ and η₋ are identical, and for A > B² the solutions η± are complex. Thus, in the (B, A) plane, the curve (8.7) separates the regions with one and three equilibrium states; see Fig. 8.1. As known, equilibrium states may be stable or unstable; see Sect. 1.2. The thermodynamic principle of the minimum Gibbs free energy provides the recipe for the analysis of stability of the states in open systems. For the Landau potential (8.4), the condition of stability, (8.1b), turns into the following:

(∂²G/∂η²)_{A,B} = A − 4Bη + 3η² > 0  (8.8)

Analyzing (8.8) we can see that the stability of the state η₀ depends on the sign of A only: η₀ is stable if A > 0 and unstable if A < 0. Hence, in the (B, A) plane, the region of stability of the state η₀ is separated from the region of instability by the following line:

A = 0  (8.9)

For B > 0, the state η₊ is always stable when it exists, i.e., for A < B²; the state η₋ is unstable when:

0 < A < B²  (8.10)


and stable when A < 0. For B < 0, the stability conditions for the states η₊ and η₋ switch places. The regions of existence and stability of the states η are shown in Fig. 8.1. The analysis that we have conducted so far identifies only the local stability of the equilibrium states η, i.e., stability with respect to small perturbations. However, the locally stable states may differ by the amount of the free energy: the one with the least amount is called globally stable. To determine which state, η₀ or η±, is globally stable, we need to calculate the following:

G± ≡ G(A, B, η±) = G₀ − (1/4)(A − (2/3)Bη±)(A − 2Bη±)
   = G₀ + AB² − (1/4)A² − (2/3)B⁴ ∓ (2/3)B(B² − A)^{3/2}  (8.11)

and compare it to G₀. Depending on the magnitude of B, three different cases are possible.

Case 1 B ≡ 0, the Landau condition. This condition may be required by the symmetry constraints of the transition. In this case, there is only one (real) solution η₀ = 0 for A > 0 and three real solutions η₀ = 0, η± = ±√(−A) for A < 0; see (8.6). From (8.8) we can see that the state η₀ is stable for A > 0 and unstable for A < 0; the states η± are stable in the domain of their existence, i.e., A < 0. From (8.11) we can see that G± < G₀, that is, the global stability of the equilibrium states is identical with the local one.

Case 2 B > 0. The bifurcation structure of the equilibrium diagram of the system is different from that of case 1. The most important change is the appearance of the domain of coexistence of the equilibrium states, η₀ and η±, (8.10). Outside this domain, case 2 is similar to case 1 with only one state, η₀, existing for A > B² and three states (η± and η₀) for A < 0. Inside the domain of coexistence, (8.10), an exchange of the global stability between the states takes place. To analyze this "process," we need to equate G₊ to G₀ in (8.11). Representing the solution in the following form:

A = kB²  (8.12a)

we transform (8.11) into the following:

(1/4)k² + (2/3)(1 − k)^{3/2} − k + 2/3 = 0  (8.12b)

which has the only solution:

k = 2³/3² = 8/9  (8.12c)

Thus, on the boundary A = (8/9)B², (8.12a, 8.12c), the states η₀ and η₊ exchange their stabilities so that η₀ is globally stable for A > (8/9)B² and η₊ for A < (8/9)B², although both states are locally stable on both sides of the boundary. On the boundary itself, we have the following:

η₊ = 2η₋  (8.12d)
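Both the closed form (8.11) and the root k = 8/9 of (8.12b) can be recovered numerically. A stdlib Python sketch (the test values of A and B are arbitrary, not from the text):

```python
# Check (8.11) against direct evaluation of the Landau potential (8.4),
# then find the root of (8.12b) by bisection to confirm (8.12c).
import math

def G(eta, A, B):                 # G - G0 for the potential (8.4)
    return 0.5*A*eta**2 - (2/3)*B*eta**3 + 0.25*eta**4

A, B = -1.0, 0.8                  # arbitrary test point with A < B**2
s = math.sqrt(B**2 - A)
for sign in (+1, -1):
    closed = (A*B**2 - 0.25*A**2 - (2/3)*B**4
              - sign*(2/3)*B*(B**2 - A)**1.5)        # Eq. (8.11)
    assert abs(G(B + sign*s, A, B) - closed) < 1e-10

def f(k):                          # left-hand side of Eq. (8.12b)
    return 0.25*k**2 + (2/3)*(1 - k)**1.5 - k + 2/3

lo, hi = 0.5, 0.999                # f(lo) > 0 > f(hi); f is monotone here
for _ in range(60):                # plain bisection
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
assert abs(0.5*(lo + hi) - 8/9) < 1e-9   # Eq. (8.12c)
```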

Let us analyze this case further. Using (8.11) we can also find the condition for the states η₊ and η₋ to exchange the global stability:

G₋ − G₊ = (4/3)B(B² − A)^{3/2} = 0  (8.13)

This equation has a true root:

B = 0, A ≤ 0  (8.14)

and a spurious root, (8.7), because at A = B² the states η± merge into a single unstable state. Hence, (8.14) represents the true boundary of stability.

Case 3 B < 0. This case is analogous to case 2 if the state η₋ replaces the state η₊. The substitution (8.12a) and the solution (8.12c) apply to case 3 also (Why? Hint: √(x²) = |x|). Eq. (8.14) represents the boundary between cases 2 and 3.

The separating curves in the (B, A) plane, see Fig. 8.1, constitute an example of a phase diagram, see Sect. 2.2, in the Landau theory. In the process of analyzing stabilities of the equilibrium states, we found two types of bifurcation loci: Type I, the states exchange their stabilities, global versus local; Type II, at least one of the states loses (or gains) the local stability, that is, becomes absolutely unstable (locally stable). The first type is called the equilibrium phase boundary; see Sect. 2.2. The second type is known as the spinodal (to be precise, the mean-field spinodal); see Sect. 1.2. In the case of two parameters (B, A), these loci are curves; in the case of multidimensional parameters, the loci are surfaces or multidimensional hypersurfaces. In the Landau theory, the equilibrium phase boundaries are obtained by equating expressions (8.11) of the respective phases; the spinodals are found by equating ∂²G/∂η² in (8.8) to zero and solving for the state of interest. Thus, for the Landau potential (8.4), the curve (8.12) is the η₀/η± phase equilibrium boundary, and (8.14) is the equilibrium boundary between the phases η₊/η₋; (8.7) and (8.9) are the spinodals of the low-symmetry η± and high-symmetry η₀ phases, respectively. At the spinodal boundary (8.7), the state η₊ loses its stability and its real value, while at the boundary (8.9) the state η₀ loses its stability only. Furthermore, comparing ∂²G/∂η² for the stable and metastable phases:

(∂²G/∂η²)_{A,B} = { A for η = η₀;  2[(B² − A) + B√(B² − A)] for η = η₊ }  (8.15)

we find that ∂²G/∂η² of the stable phase is always greater than that of the metastable phase. Compared to cases 2-3, case 1 has additional symmetry, which is revealed in the degeneracy of the equilibrium. For the system described by the Landau potential (8.4), at A > 0, the stable state (phase) may have only one value η₀, while at A < 0 the stable state is represented by two different values: η₊ and η₋. These conditions, A > 0 and A < 0, are separated by the "Landau critical point"; see Fig. 8.1:

B ≡ 0, A = 0  (8.16)

Relevance of this point to real physical transitions has not been confirmed yet. The Landau potential G(A, B; η) (8.4), as a function of the OP, is shown in Fig. 8.2 for different values of the parameters A and B. In many situations it may be useful to express the conditions of equilibrium and boundaries of stability in the plane of the OP and one of the model parameters, A or B. Eq. (8.5) expresses the first one and (8.8) (equated to zero) the second one. These conditions are depicted in Fig. 8.3. Thus, we constructed a thermodynamic potential (8.4), which satisfies all the basic properties of a free energy describing a phase transition in the system. We managed to accomplish this by guessing a polynomial with the properties satisfying (8.3). However, there is a way to arrive at the same end point from another direction.

[Fig. 8.2 The Landau potential G − G₀, (8.4), as a function of the OP for several values of A: (a) B = 0; (b) B = 1]

Table 8.1 Jumps of the derivatives at the phase transitions of different orders for the special values of the parameters A and B of the Landau potential, (8.4). Initial state is η₀; the state after transition is η_f = η₊ or η₋

|              | A = B², B > 0 | A = (8/9)B², B > 0 | A = 0, B = 0 | A = 0, B > 0 |
|--------------|---------------|--------------------|--------------|--------------|
| η_f          | η₊ = B        | η₊ = 2²B/3         | η± = 0       | η₋ = 0       |
| [G]          | B⁴/(3·2²)     | 0                  | 0            | 0            |
| [∂G/∂A]      | B²/2          | 2³B²/3²            | 0            | 0            |
| [∂G/∂B]      | −2B³/3        | −2⁷B³/3⁴           | 0            | 0            |
| [∂²G/∂A²]    | −∞            | −2                 | −1/2         | 0            |
| [∂²G/∂A∂B]   | +∞            | 2⁴B/3              | 0            | 0            |
| [∂²G/∂B²]    | −∞            | −2⁷B²/3²           | 0            | 0            |
| [∂³G/∂A³]    |               |                    | 0            | 1/(2²B²)     |

8.3 Classification of the Transitions

8.4 The Tangential Potential

One of the problems of the Landau potential (8.4) is that its OP is not unitless. Indeed, from (8.6, 8.11), one can see that:

‖η‖ = ‖B‖ = ‖A‖^{1/2} = ‖G‖^{1/4}  (8.25)

where ||Q|| is the unit of Q. Another problem is that the equilibrium magnitude of the OP varies together with the parameters A and B, see (8.6) and Fig. 8.3b, while the physical nature of many phase transitions is such that the OP magnitudes of the globally stable states on both sides of the phase boundary are nearly constant, that is, do not change much while the parameter A or B varies. Crystal/melt transition is one of the examples, see Sect. 3.1. In this case, it is convenient to associate η0 with the liquid phase and ηþ with the solid phase and require that the OP on the branch ηþ does not change its magnitude in the domain of its stability: ηþ = constðA, BÞ  C ≠ 0 for A < B2

ð8:26Þ

Requirement (8.26) destroys independence of the parameters A and B and yields the envelope constraint: ð8:27Þ

A = 2CB - C2

Then, for the free energy jump between the equilibrium states η₊ and η₀, we obtain the following:

[G] = G₊ − G₀ = (1/6)C²(A − (1/2)C²) for A < C²  (8.28)

A great advantage of the constraint (8.27) is the simplicity of the relation (8.28) for [G](A, C) as compared to (8.11). Then, (8.27, 8.28) can be immediately resolved for the parameters A and B expressed through C and [G]. Notice that (8.27) is the equation of the tangent line to the low-symmetry spinodal of the Landau potential (8.7) at the point B = C, see Fig. 8.1; hence, the name tangential potential. Rescaling the OP as follows:

η = Cη̃,  C⁴ = 2W  (8.29)

we can rewrite the free energy, (8.4), in the following form [4]:

G(W, [G], η̃) = G₀ + (1/2)Wω²(η̃) + [G]ν(η̃)  (8.30)

where:


ω(x) ≡ x(1 − x)  (8.30a)

is the generating function and

ν(x) ≡ ∫₀ˣ ω(x′)dx′ / ∫₀¹ ω(x′)dx′ = x²(3 − 2x)  (8.30b)

is the biasing potential. As we realized, (8.28) is more convenient for the phase transition analysis than (8.11). But convenience comes at a price. First, the constraint (8.27) does not allow for the Landau critical point condition, (8.16). This means that the potential (8.30) cannot describe a second-order or weak first-order transition. Second, as we can see on the diagram in Fig. 8.1, the purple straight line, (8.27), is tangent to the red curve 1, (8.7), at (B = C, A = C²), that is, the former never crosses the latter. Hence, both branches, η±, exist as stable or unstable states (i.e., are real) for all values of the parameter A. In the domain of stability of the branch η₊ (B > 0, A < B²), the branch η₋ = A/C is unstable. At the point of tangency, the branches η₊ and η₋ bifurcate (exchange their values) so that for (B > C, A > C²), the branch η₋ = C is unstable and the branch η₊ = A/C is locally stable. The branch η₀ = 0, however, remains globally stable above and below the bifurcation point. For the purposes of phase transition modeling, it is more convenient to reconnect the branches η₊ and η₋ at the point of tangency and relabel them as follows:

Cη̃₁ ≡ C = { η₊ for A ≤ C²;  η₋ for A ≥ C² },  Cη̃_t ≡ A/C = { η₋ for A ≤ C²;  η₊ for A ≥ C² }  (8.31)

Also, we redefine the free energy jump for the reconnected branches:

D ≡ [G] = G₁ − G₀,  G₁ ≡ G(W, D, η̃₁)  (8.32)

This relation applies in the entire domain of variation of the parameters of the potential (8.30). Eqs. (8.11), (8.27)-(8.29), (8.32) establish the relations between the coefficients (W, D) and (A, B). For the potential (8.30)-(8.32), the condition of equilibrium of the phases η̃₁ and η₀ takes the following form:

D_E = 0  (8.33)

It identifies the parameter D as the “driving force” for the transition. In the vicinity of the equilibrium, the two phases (stable and metastable) are separated by the unstable—transition—state:

η̃_t = 1/2 + 3D/W  (8.34)

At the equilibrium:

G_t^E ≡ G(W, D_E, η̃_t) = G₀ + W/32  (8.35)

which shows that W is related to the free energy barrier height between the equilibrium phases. This relation helps us interpret the second term in the right-hand side of (8.30) as the barrier term and the third one as the biasing term, which shifts the balance in favor of one phase or the other. For ∂²G/∂η² of the stable and metastable phases, we obtain the following:

(∂²G/∂η²)_{W,D} = { W + 6D for η = η₀;  W − 6D for η = η̃₁ }  (8.36)

Similar to the Landau potential (8.4), ∂²G/∂η² of the stable phase is always greater than that of the metastable one. The spinodal conditions for the low-symmetry η̃₁ = 1 and high-symmetry η₀ = 0 phases are as follows:

D_{S0} = −W/6 and D_{S1} = +W/6  (8.37)
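The relations (8.34)-(8.37) lend themselves to a quick numerical check. A stdlib Python sketch (not from the book), with arbitrary W and D inside the spinodal region:

```python
# Numerical check of (8.34)-(8.37) for the tangential potential (8.30):
# G - G0 = (W/2)*w(x)**2 + D*v(x), w(x) = x*(1 - x), v(x) = x*x*(3 - 2*x).
# W and D are arbitrary test values with |D| < W/6 (inside the spinodals).
W, D = 1.0, 0.05

def G(x, d=None):                 # free energy measured from G0
    d = D if d is None else d
    w = x*(1 - x)
    return 0.5*W*w*w + d*x*x*(3 - 2*x)

def d2G(x, h=1e-5):               # centered finite-difference curvature
    return (G(x + h) - 2*G(x) + G(x - h)) / h**2

x_t = 0.5 + 3*D/W                 # transition (barrier) state, Eq. (8.34)
h = 1e-6
assert abs((G(x_t + h) - G(x_t - h)) / (2*h)) < 1e-8   # x_t is an extremum
assert abs(d2G(0.0) - (W + 6*D)) < 1e-4                # Eq. (8.36), eta0
assert abs(d2G(1.0) - (W - 6*D)) < 1e-4                # Eq. (8.36), eta1
assert abs(G(0.5, d=0.0) - W/32) < 1e-12               # barrier, Eq. (8.35)
```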

Fig. 8.4 depicts the potential (8.30) as a function of the rescaled OP and driving force D. Figs. 8.5a, b depict projections of the special lines of the surface from Fig. 8.4, respectively, on the (η, G) and (η, D) planes. The latter represents the equilibrium state diagram for the tangential potential. The special points of this diagram are the intersections of the t state line with the η̃₁ = 1 line, the η̃₁/η̃₀ equilibrium line, and the η̃₀ = 0 line. On the diagram of Fig. 8.1, they correspond, respectively, to the points of tangency and intersections of the tangential potential line with lines 3 and 2. Thus, Fig. 8.5b is equivalent to Fig. 8.3b and Eqs. (8.33) and (8.37) to the diagram in Fig. 8.1.

Example 8.1 The Barrier and the Bias Verify that inside the spinodal region (−W/6 < D < W/6) for the values of the OP (0 ≤ η̃ ≤ 1) the barrier and biasing terms of the tangential potential (8.30) are of the same order of magnitude. This means that the remainder in the expansion (8.2a) may be made arbitrarily small, which validates the truncation at n_m = 4.


Fig. 8.4 Gibbs free energy of a system with the tangential potential, (8.30), as a function of the order parameter η̃ and driving force D

[Fig. 8.5 Projections of the special lines of the surface of Fig. 8.4 on (a) the (η̃, G) plane, for D < 0, D = 0, and D > 0, and (b) the (η̃, D) plane]

Expanding the free energy about the transition (barrier) state η_t, we write:

Δη ≡ η − η_t  (8.38a)

G(η) = G_t + (1/2)G″_t Δη² + (1/6)G‴_t Δη³ + (1/24)G⁗_t Δη⁴  (8.38b)

G″_t < 0,  G⁗_t > 0  (8.38c)

The equation of equilibrium ∂G(η)/∂η = 0 has an obvious root Δη = 0 that corresponds to the unstable state. The other two roots, Δη_α < 0 and Δη_β > 0:

G″_t + (1/2)G‴_t Δη_{α(β)} + (1/6)G⁗_t Δη²_{α(β)} = 0  (8.38d)

correspond to the local and global minima. If G‴_t = 0, then:

G_α = G_β = G_t − (3/2)(G″_t)²/G⁗_t  (8.38e)
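Relation (8.38e) is easy to confirm numerically for a symmetric quartic expansion; in this sketch (an illustration, not from the text) the coefficient values are arbitrary, with G″_t < 0 < G⁗_t:

```python
# Checking (8.38e): with G'''_t = 0 the two minima of the quartic expansion
# G_t + (1/2)*G2*d**2 + (1/24)*G4*d**4 (G2 < 0 < G4) sit at the level
# G_t - (3/2)*G2**2/G4. The coefficient values below are arbitrary.
import math

Gt, G2, G4 = 1.0, -2.0, 6.0
d_min = math.sqrt(-6*G2/G4)             # root of G2 + (1/6)*G4*d**2 = 0
G_min = Gt + 0.5*G2*d_min**2 + (1/24)*G4*d_min**4
assert abs(G_min - (Gt - 1.5*G2**2/G4)) < 1e-12
```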

Resolving (8.38d) for Δη_{α(β)} and relating them to η₊ and η₀ of the Landau potential (8.4), we can establish a relationship between the coefficients (A, B) and G_t^{(n)}:

G″_t = 2[(B² − A) − B√(B² − A)]  (8.39a)

G‴_t = 2[B − 3√(B² − A)]  (8.39b)

G⁗_t = 6  (8.39c)

If the last relation does not hold, the OP Δη can always be rescaled for this to be true. The free energy expansion, (8.4), does not necessarily need to be polynomial; it can be an expansion in harmonic or logarithmic functions. An example of the latter is the Gibbs free energy (6.4), which was analyzed in association with the spinodal decomposition. The critical point of that expansion is more suitable for the whole group of phenomena called phase separation. An example of the harmonic expansion was analyzed in [4] where the tangential potential (8.30) was built based on a harmonic generating function ω(x) ≡ cos(πx); see Example 8.2.


Polynomial potentials of order higher than the fourth can also be used for phase transition modeling. For instance, the fifth-order "10-15-6" potential has the same property as the tangential potential of preserving the values of the OP of the stable phases around the equilibrium point. However, this potential cannot be used in cases of large values of the OP variation or the driving force because the state with infinitely large OP value becomes the globally stable one. The free energy potentials of the sixth order have more complicated phase diagrams and many new, interesting properties [5]. For instance, such a system may have a tricritical point where the lines of the first- and second-order transitions cross.

Example 8.2 Harmonic Potential The following function:

ω(x) ≡ cos(πx)  (8E.1a)

can be used as the generating function of the free energy potential if it is known that −1 ≤ x ≤ 1. In this case the biasing potential has the following form:

ν(x) ≡ (1/2) sin(πx)  (8E.1b)

8.6 Phase Diagrams and Measurable Quantities

8.6.1 First-Order Transitions

In order to verify a theory of phase transitions, we need to identify in the theory the quantities which can be compared with the experimentally measurable ones. The thermodynamic functions, such as free energy, enthalpy, and entropy, are not measurable quantities, but their jumps across the phase transition boundary are. The real-world experiments are not controlled by the model parameters A, B, W, and D, but by the temperature and pressure. That is why we must establish relations between (A, B) or (W, D) and (P, T ). Let us concentrate, first, on the description of a first-order transition by the Landau potential (8.4). The phase equilibrium condition in the (P, T ) plane describes a line (2.5), which is isomorphous to the line (8.12a, c) in the (B, A) plane. To resolve for A(P, T ) and B(P, T ), we need the relations for these quantities near the equilibrium line, (2.5) and (8.12). Such relations can be provided by (8.11) where G± and G0 are functions of P and T. Unfortunately, parameters A and B are convoluted in this equation, making it difficult to resolve for either one even if we assume that the other one is a constant. Of course, one can work backward by assuming the functional dependence A(P, T ) and B(P, T). But in this case, it is difficult to obtain a function equal to G+ - G0 = func(P, T ) of the real substance in question; see Sect. 14.1.


Example 8.3 Phase Diagram of a System with Landau Potential Think of a system described by the Landau potential with:

A = a(T − T_C)/T_C  (8E.2)

where a and T_C are const(T, P) and B = B(P). Then, according to (8.12), the equilibrium phase boundary of the system, see (2.5), is expressed as follows:

T_E = T_C[1 + (8/(9a))B²(P)]  (8E.3)

Its slope:

dT_E/dP = (16T_C/(9a)) B (dB/dP)  (8E.4)

is a measurable quantity. It can also be obtained from the Clapeyron-Clausius Eq. (2.8). Indeed, using (2.6) and (8.24a), we find the following:

L_E(P) = T_E [∂G/∂T]_E = (8/9)B²[a + (8/9)B²]  (8E.5)

and

[V]_E(P) = [∂G/∂P]_E = −(128/81)B³ (dB/dP)  (8E.6)

Substitution into (2.8) proves the point. ∎

For the purposes of quantitative analysis of transitions, it is more convenient to use the tangential potential of (8.30), for which the resolution problem for the parameters W and D decouples and the equilibrium line in the (P, T) plane, (2.5), is isomorphous to the straight line, (8.33), in the (W, D) plane. The functional dependence D = D(P, T) may be resolved as follows. Substituting (8.32) into (2.6a), we obtain a differential equation:

T (∂D/∂T) − D = L(P, T)  (8.40a)

which, using (8.33) as the boundary condition, can be resolved as follows:

D(P, T) = T ∫_{T_E}^{T} [L(P, T′)/(T′)²] dT′  (8.40b)

For a system with L = const(T), we obtain the following:


D(P, T) = L(P) [T − T_E(P)]/T_E(P)  (8.40c)

Then, substituting (8.32) into (2.6b), we obtain that:

∂D/∂P = [V](P, T)  (8.41a)

which, again using (8.33) as the boundary condition, can be resolved as follows:

D(P, T) = ∫_{P_E}^{P} [V](P′, T) dP′  (8.41b)

For a system with [V] = const(P), we obtain the following:

D(P, T) = [V](T) {P − P_E(T)}  (8.41c)
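As a numerical illustration of (8.40c) and (8.41c), a stdlib Python sketch; the values of L, [V], T_E, and P_E below are made up for the illustration, not data for any real substance:

```python
# Driving force D from measurable data: Eq. (8.40c) (temperature branch,
# L = const) and Eq. (8.41c) (pressure branch, [V] = const).
# All numbers are illustrative only, in arbitrary units.
L   = 2.0      # latent heat
dV  = -0.1     # volume jump [V]
T_E = 1000.0   # equilibrium temperature at the given pressure
P_E = 1.0      # equilibrium pressure at the given temperature

def D_of_T(T):                    # Eq. (8.40c)
    return L * (T - T_E) / T_E

def D_of_P(P):                    # Eq. (8.41c)
    return dV * (P - P_E)

assert D_of_T(T_E) == 0.0 and D_of_P(P_E) == 0.0   # Eq. (8.33) on the line
assert D_of_T(900.0) < 0    # undercooling favors the low-symmetry phase
```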

Compare (8.40c, 8.41c) to (8E.5, 8E.6), and see that D(P, T) of the tangential potential can be easily found from the experimentally available data on L(P), [V](T), and T_E(P), while it is not so easy to find B(P) and a of the Landau potential from the same data. Moreover, on a phase diagram in the (P, T) plane, there are two more special lines: the spinodals of the high-symmetry T = T_{S0}(P) and low-symmetry T = T_{S1}(P) phases. Using (8.37, 8.40b) we obtain the equations for them:

W(P, T_S^{0/1}) = ∓6 T_S^{0/1} ∫_{T_E}^{T_S^{0/1}} [L(P, T)/T²] dT  (8.42a)

For the system with L = const(T), (8.42a) turns into the following:

W = ±6L [T_E − T_S^{0/1}]/T_E  (8.42b)

where all quantities are functions of pressure. Eqs. (8.42) may be used to find W(P, T ). Unfortunately, in most experiments, the mean-field spinodals are not attainable, which renders (8.42) impractical. In this case W(P, T ) can be found from the measurements of either the interfacial quantities (Chap. 9) or thermal fluctuations (Chap. 12); see also Sect. 14.1. As we can see from (8.40)–(8.42), the tangential potential has an advantage over the Landau potential for the purposes of the phase transition modeling because it provides simple relations between its internal parameters and measurable quantities. A word of caution is in order regarding the specific heat. For a system with variable OP, the definition of the specific heat (1.4a) takes the following form:

C_{P,η} ≡ (∂H/∂T)_{P,η} = −T (∂²G/∂T²)_{P,η}  (8.43a)

Applying (8.23b) we obtain a formula:

C_P(T, η̄) = C_{P,η}(T) + T (∂²G/∂η²)_{P,T}(η̄) [(dη̄/dT)_P]²  (8.43b)

where on the left-hand side we have the specific heat of a phase along the line of equilibrium with another phase, while on the right-hand side we have the specific heat of the same phase at the same equilibrium point η̄, but along the line of constant OP value: η = const(P, T). As (8.43b) indicates, in general, these quantities are not equal (cf. Figs. 8.3b and 8.5b; see also (J.5) and Appendix J). However, for a system described by the tangential potential, these quantities are equal because (dη̄/dT)_P = 0 for the phases "0" and "1." Also, for a system with L = const(T), the specific heat jump, (2.7a), is zero at the equilibrium line and beyond.

8.6.2 Second-Order Transitions

As we can see from Table 8.1, a second-order transition can be described by the Landau potential (8.4) with B ≡ 0. The constraint (8.27) on the tangential potential does not allow for the Landau critical point; hence, it is not adequate to describe it. The Landau critical point may be interpreted as the temperature T_C at which A(P, T_C) changes sign. Taking into account that the critical temperature may depend on pressure, T_C = func(P), we obtain that A(P, T) > 0 for T > T_C(P) and A(P, T) < 0 for T < T_C(P). Since A is a continuous function of (P, T) and we are interested in the properties of the system in the vicinity of T_C(P), it is adequate to treat A(P, T) as a linear function of temperature with a zero at T_C(P):

A(P, T) = aτ,  τ = [T − T_C(P)]/T_C(P),  a(P) > 0  (8.44)

Cf. (8E.2) and see the difference. Then, using (8.6, 8.11), the OP value and the free energy of the equilibrium states are as follows:

η± = ±√(−aτ)  (8.45a)

G±(P, T) = G₀(P, T) − (1/4)a²τ²  (8.45b)


Equations (8.45) show that the states η₊ and η₋ are completely equivalent. Using (8.24, 8.45) for (2.7a, 2.8a), we find that, at the second-order transition point A = 0, the latent heat is zero and the jump of the specific heat is as follows:

[C_P] = −T_C (∂A/∂T)_P² (∂²G/∂A²) = a²/(2T_C)  (8.45c)
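The jump (8.45c) can be checked by finite differences on the equilibrium branch (8.45b); in this stdlib Python sketch (not from the text) a and T_C are arbitrary numbers:

```python
# Finite-difference check of the specific heat jump (8.45c). Below T_C the
# equilibrium free energy is G0 - (1/4)a^2*tau^2, Eq. (8.45b); above T_C it
# is G0 = const, so [C_P] = a^2/(2*T_C). a and T_C are arbitrary values.
a, T_C = 3.0, 500.0

def G_below(T):                    # equilibrium branch for T < T_C, G0 = 0
    tau = (T - T_C) / T_C
    return -0.25 * a**2 * tau**2

h = 1e-3
d2G = (G_below(T_C - h) - 2*G_below(T_C) + G_below(T_C + h)) / h**2
dC = -T_C * d2G                    # C_P jump: the branch above T_C is flat
assert abs(dC - a**2/(2*T_C)) < 1e-6
```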

However, another interesting transition may be considered in the system described by the potential (8.4, 8.14, 8.44)—transition between the states with ηþ and η - . Although, from the thermodynamics point of view, these states are completely equivalent, they will be recognized as different if they occupy contiguous regions. Such regions are called antiphase domains or phase variants. There are many different situations when the OP transition between the variants is possible. One of them—introduction of a conjugate field—is considered in the next section. Another one—existence of a curved boundary between them—is considered in Sects. 9.5.2 and 11.5. It is possible to show that the linear-in-temperature assumption of (8.44) is equivalent to the assumptions of the mean-field theory (see Sect. 3.4), which states that every atom of the system “operates” in the average local field produced by all the neighbors. This approach excludes effects of the fluctuations and, as a result, makes predictions which are not confirmed precisely by experimental measurements, e.g., the finite jump [CP] (8.45c) or the temperature dependence of the OP near the transition point (8.45a).

8.7 Effect of External Field on Phase Transition

If an external field conjugate to the OP of a phase transition is applied to the system, the transition will be affected by the field. For instance, we may think of the effect of electric field on a ferroelectric transition or stress on the martensitic transition. A particularly interesting example is the application of the field to a system that, without the field, can undergo a second-order transition. In Sect. 3.4 we considered influence of the magnetic field on a ferromagnetic transition and found that the reaction of the magnet depends strongly on its temperature. Namely, if the temperature of the magnet is above the Curie point, it has paramagnetic properties, which can be explained by existence of permanent magnetic dipole moments of atoms. However, if the temperature of the magnet falls below the Curie point, more complicated ferromagnetic behavior is manifest. In the ferromagnetic phase, the magnetization is much stronger even under weak field and reaches saturation at the application of the strong external field. Reversal of the direction of the field brings up effects of permanent magnetization, coercivity, and hysteresis. Quantitatively, in addition to the magnetization, temperature, and field strength, the phenomenon of


ferromagnetism is described by the susceptibility and specific heat differences between the ferro- and paramagnetic phases. To describe the ferromagnetic behavior, we can use the Landau potential (8.4) where B = 0 and A depends on temperature as in (8.44). In addition, we need to find the contribution of the external field into the free energy, which is equal to the work done by the field on the system (the contribution of the field in vacuum, which is proportional to the field strength squared, is not considered here). According to Curie's principle, conjugation of the field and OP means that they have the same symmetry (e.g., scalar, vector, or tensor). For a scalar field strength H and scalar OP, which is the magnetization here, the field contribution is equal to (−Hη). Then, the free energy of the system is expressed as follows:

G = G₀ + (1/2)Aη² + (1/4)Qη⁴ − Hη  (8.46)

Compared to (8.4), we scaled the OP to represent the magnetization, η = M. This resulted in the appearance of the coefficient Q in front of the fourth-order term. The free energy (8.46) contains three parameters (A, Q, H), which represent temperature, pressure, and external field. Instead of studying the effects of the three parameters separately, we may try to find a way to combine them into one and analyze the effect of this parameter on the system. However, if we want to compare our theoretical findings with the experimental results, we must return to the physical quantities. To combine the three parameters into one, we need to scale the OP and free energy. To find the right scaling, first, we introduce the scaling factor α:

η = αη̃  (8.47a)

Then we substitute this expression into (8.46) and transform it as follows:

G = G0 + αH [(αA/2H) η̃² + (α³Q/4H) η̃⁴ − η̃]     (8.47b)

Now we can see that by selecting the proper scaling:

α = (H/Q)^(1/3);  g = (Q^(1/3)/H^(4/3)) (G − G0);  δ = A/(Q^(1/3) H^(2/3))     (8.48)

the scaled free energy of the system will be expressed by the potential with only one dimensionless parameter δ, which incorporates the effects of the pressure, temperature, and external field:

g = ½δη̃² + ¼η̃⁴ − η̃     (8.49)
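The content of (8.49) is easy to explore numerically. The following Python sketch (ours, not from the text; function names are illustrative) samples the scaled potential on a grid and counts its local minima, reproducing the change of the landscape visible in Fig. 8.6a:

```python
import numpy as np

def g_scaled(eta, delta):
    # scaled Landau-Gibbs free energy (8.49)
    return 0.5 * delta * eta**2 + 0.25 * eta**4 - eta

def count_minima(delta, span=3.0, n=20001):
    # count interior local minima of the sampled landscape
    eta = np.linspace(-span, span, n)
    g = g_scaled(eta, delta)
    idx = np.where((g[1:-1] < g[:-2]) & (g[1:-1] < g[2:]))[0] + 1
    return len(idx), eta[idx]

n_weak, _ = count_minima(-3.0)   # delta below delta_m ~ -1.89: double well
n_strong, _ = count_minima(1.0)  # delta above delta_m: single well
print(n_weak, n_strong)
```

For δ above the spinodal value δm ≈ −1.89 the landscape has a single well; below it, a second (metastable) well appears at negative η̃.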

8.7 Effect of External Field on Phase Transition

Fig. 8.6 (a) Landau-Gibbs free energy g (8.49) of a system capable of undergoing a second-order transition and exposed to the external field δ. The coordinates are scaled; the arrows show the equilibrium states. (b) Equilibrium state diagram of the system in (a). Dashed lines are the equilibrium states of the system without the external field


This free energy is shown in Fig. 8.6a; it is instructive to compare it with Fig. 8.2a. Reduction of the number of parameters is not the only benefit of scaling. There is a large "amount" of physics to be learned from the scaling even without the use of any mathematics. As the scaled free energy contains only one parameter, the effects of different physical factors, such as temperature, pressure, and external field, may be easily assessed against each other. For instance, the influence of temperature on the transition enters the Landau expression (8.46) through the coefficient A, which increases linearly with the temperature; see (8.44). Then, we can see from the


expression for δ in (8.48) that the external field has an effect on the transition opposite to that of the temperature. Hence, the field plays the role of an ordering agent, opposite to temperature, a disordering agent. Moreover, this expression allows us to quantitatively compare the effects of the temperature and field: the influence of temperature is stronger than that of the field because δ ∝ A ∝ T while δ ∝ H^(−2/3). Also notice from (8.47)–(8.49) that the sign change of the field does not affect the scaled free energy but changes the sign of the OP, which is the magnetization. However, as we mentioned above, the scaling has its disadvantages too. For instance, the combined parameter δ does not have a simple physical interpretation, so that if we want to compare our results to the experimental ones we must return to the dimensional variables. The equilibrium states of the system are described by the solutions of the following equation:

dg/dη̃ = δη̃ + η̃³ − 1 = 0     (8.50)

This equation can be analyzed easily if we consider δ as a function of η̃:

δ = (1 − η̃³)/η̃     (8.51)

This function has a maximum:

δm ≡ max δ(η̃) = −3/2^(2/3) ≈ −1.89     (8.52a)

at

η̃m = −1/2^(1/3) ≈ −0.794     (8.52b)

with only one solution of (8.50) existing for δ > δm, two at δ = δm, and three for δ < δm. In the latter case, they are ordered as follows:

η̃3 < η̃m < η̃2 < 0 < η̃1     (8.52c)
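The solution structure just described can be verified with a standard polynomial solver; a minimal sketch (helper names are ours):

```python
import numpy as np

def real_roots(delta, tol=1e-9):
    # real solutions of the equilibrium condition (8.50): eta^3 + delta*eta - 1 = 0
    r = np.roots([1.0, 0.0, delta, -1.0])
    return np.sort(r[np.abs(r.imag) < tol].real)

delta_m = -3.0 / 2.0 ** (2.0 / 3.0)   # spinodal value (8.52a), ~ -1.89
eta_m = -2.0 ** (-1.0 / 3.0)          # (8.52b), ~ -0.794

print(len(real_roots(delta_m + 0.5)))   # one solution above delta_m
roots = real_roots(delta_m - 0.5)       # three solutions below delta_m
print(roots)                            # ordered as in (8.52c)
```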

In Fig. 8.6b, the function (8.51) is depicted in the (η̃, δ) plane; it is instructive to compare it with Fig. 8.3a. The local stability of the equilibrium states is determined by the sign of the second derivative:

d²g/dη̃² = δ + 3η̃² = 2(η̃³ − η̃m³)/η̃     (8.53)


The last expression shows that the equilibrium states η̃1 and η̃3 are always locally stable, while the state η̃2, when it exists, is always unstable; this helps us identify δ = δm as the spinodal point. The global stability of the states η̃1 and η̃3 can be assessed by comparing the free energies of the locally stable states, which can be done using the following expression for the free energy, obtained by substituting (8.51) into (8.49):

g(η̃) = ¼(δη̃² − 3η̃)     (8.54)

The sum of the roots of the cubic equation (8.50) equals minus the coefficient of the quadratic term, which is zero here. Hence, η̃1 + η̃3 = −η̃2 > 0. Then, taking into account that η̃1 − η̃3 > 0 for δ < δm < 0, we obtain:

g(η̃1) − g(η̃3) = ¼(η̃1 − η̃3)[δ(η̃1 + η̃3) − 3] < 0     (8.55)
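With the on-shell expression (8.54), the inequality (8.55) can be confirmed numerically for any sample δ < δm; a sketch:

```python
import numpy as np

def g_on_shell(eta, delta):
    # free energy of an equilibrium state, (8.54)
    return 0.25 * (delta * eta**2 - 3.0 * eta)

delta = -3.0                               # below the spinodal delta_m ~ -1.89
r = np.sort(np.roots([1.0, 0.0, delta, -1.0]).real)
eta3, eta2, eta1 = r                       # ordered as in (8.52c)
print(g_on_shell(eta1, delta), g_on_shell(eta3, delta))
```

The positive root η̃1 indeed has the lower free energy of the two locally stable states.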

Thus, the state η̃1 is always globally stable (see Fig. 8.6a), and contrary to the case of the first-order transition, there is no exchange of stability between the states. However, one can imagine a situation when a system prepared in the state η̃3 transforms to the state η̃1. According to the Ehrenfest classification, this is a zeroth-order transition. The dependence of the OP of the globally stable state on the parameter δ can be found from (8.51):

η̃1 ≈ 1/δ − 1/δ⁴,  for δ → +∞
η̃1 ≈ 1 − δ/3,  for |δ| → 0     (8.56)
η̃1 ≈ |δ|^(1/2) + 1/(2|δ|),  for δ → −∞
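The three branches of (8.56) can be checked against the exact largest root of the cubic (8.50); a sketch with illustrative values of δ:

```python
import numpy as np

def eta1_exact(delta):
    # largest real root of eta^3 + delta*eta - 1 = 0, the globally stable state
    r = np.roots([1.0, 0.0, delta, -1.0])
    return max(r[np.abs(r.imag) < 1e-9].real)

big = 50.0
print(eta1_exact(big), 1 / big - 1 / big**4)           # delta -> +infinity
print(eta1_exact(0.01), 1 - 0.01 / 3)                  # |delta| -> 0
print(eta1_exact(-big), np.sqrt(big) + 1 / (2 * big))  # delta -> -infinity
```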

Comparing our results to the experiments described in Sect. 3.4, we equate the critical temperature TC to the Curie temperature, which immediately allows us to identify A > 0 as the domain of paramagnetic properties and A < 0 as the domain of ferromagnetic properties. However, as we pointed out above, for a detailed comparison, we must return to the dimensional variables. The dimensional version of (8.50) takes the following form:

H = AM + QM³     (8.57)

where η = M is the magnetization. For instance, noticing that for H → 0+ at A = const we have |δ| → ∞ with sign(δ) = sign(A), one can find from (8.56) the behavior of the globally stable state in the weak field:

M1 ≡ αη̃1 ≈ H/A − QH³/A⁴,  for H → 0 and A > 0
M1 ≡ αη̃1 ≈ (|A|/Q)^(1/2) + H/(2|A|),  for H → 0 and A < 0     (8.58)

In Fig. 8.7a, Eq. (8.57) is presented in the field—magnetization plane where the spinodal points are connected to the stable branches by the vertical arrows. The magnetization/demagnetization curve of the system described by the free energy (8.46) is similar to the hysteresis of a real magnet schematically expressed in

Fig. 8.7 Equilibrium behavior of a magnetic system. (a) Equilibrium states in the plane (H, M); (b) dependence on temperature T of the specific heat CP (black), inverse susceptibility χ1⁻¹ (blue), zero-field magnetization M0 (red), and spinodal field Hm (brown). The latter separates the ferromagnetic and paramagnetic domains


Fig. 3.2. The permanent magnetization of the magnet corresponds to the zero-field magnetization of our system:

M0 = (−A/Q)^(1/2) = (a/Q)^(1/2) (−τ)^(1/2)     (8.59)

The coercivity field of the magnet corresponds to the spinodal field Hm, which can be found by resolving (8.48) and (8.52a) simultaneously:

Hm = (A/δm)^(3/2)/Q^(1/2) = (a/|δm|)^(3/2) (−τ)^(3/2)/Q^(1/2)     (8.60)
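Both (8.59) and (8.60) are straightforward to confirm: below Hm the cubic (8.57) has three real solutions, above Hm only one. A sketch with the illustrative values A = −1, Q = 1:

```python
import numpy as np

def n_real_roots(H, A, Q, tol=1e-9):
    # number of real solutions of the equilibrium condition (8.57)
    r = np.roots([Q, 0.0, A, -H])
    return int(np.sum(np.abs(r.imag) < tol))

A, Q = -1.0, 1.0                        # ferromagnetic side, A < 0
M0 = np.sqrt(-A / Q)                    # zero-field magnetization (8.59)
delta_m = -3.0 / 2.0 ** (2.0 / 3.0)
Hm = (A / delta_m) ** 1.5 / np.sqrt(Q)  # spinodal (coercivity) field (8.60)

print(M0, Hm)
print(n_real_roots(0.9 * Hm, A, Q))     # weak field: three equilibria
print(n_real_roots(1.1 * Hm, A, Q))     # strong field: single equilibrium
```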

Notice from Fig. 8.7a that in the region of the weak field (|H| < Hm, δ < δm < 0) there are three solutions (as in the case of H = 0) and only one solution in the region of the strong field (|H| > Hm, 0 > δ > δm). Also, in the presence of the field, due to its ordering effect, the transition takes place at A = Am < 0 (δ = δm < 0), that is, at a lower value than that without the field, A = 0, meaning that the applied field effectively destroys the phase transition in the system. Physically, it means that the applied field breaks the symmetry between the two locally stable equilibrium states η̃1 and η̃3 in favor of η̃1, which is oriented in the direction of the field and, hence, has lower free energy. The strength of the reaction of a system to the applied external field is characterized by the isothermal susceptibility:

χ ≡ lim(H→0) (∂M/∂H)_A     (8.61)

where compared to (3.2, 3.4) the fixed temperature is replaced with fixed A because, as we discussed above, the temperature dependence of the free energy enters through the coefficient A. Then, from (8.58), we obtain the susceptibility of the globally stable state:

χ1 = 1/A,  for A > 0
χ1 = 1/(2|A|),  for A < 0     (8.62)
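The two limits of (8.62) can be recovered by finite differencing the globally stable root of (8.57) at small H; a sketch (helper names are ours):

```python
import numpy as np

def stable_M(H, A, Q):
    # globally stable magnetization: root of H = A*M + Q*M^3 with lowest free energy
    r = np.roots([Q, 0.0, A, -H])
    real = r[np.abs(r.imag) < 1e-9].real
    g = 0.5 * A * real**2 + 0.25 * Q * real**4 - H * real
    return real[np.argmin(g)]

def chi_numeric(A, Q, h=1e-6):
    # finite-difference estimate of chi = dM/dH at H -> 0+, fixed A, cf. (8.61)
    return (stable_M(2 * h, A, Q) - stable_M(h, A, Q)) / h

Q = 1.0
print(chi_numeric(3.0, Q), 1 / 3.0)           # A > 0: chi = 1/A
print(chi_numeric(-3.0, Q), 1 / (2 * 3.0))    # A < 0: chi = 1/(2|A|)
```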

Another important characteristic of a phase transition is the specific heat difference between the ferro- and paramagnetic phases. Using (8.24b) and (8.58), we obtain that:

[CP] = TC a² M1 (∂M1/∂A)_H = −TC a² H²/A³,  for H → 0 and A > 0
[CP] = TC a² M1 (∂M1/∂A)_H = −TC a²/(2Q),  for H → 0 and A < 0     (8.63)
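The combination M1(∂M1/∂A)_H entering (8.63) can likewise be evaluated by finite differences; consistent with (8.58), it tends to −1/(2Q) on the ferromagnetic side and vanishes with H on the paramagnetic side. A sketch:

```python
import numpy as np

def stable_M(H, A, Q):
    # globally stable root of H = A*M + Q*M^3
    r = np.roots([Q, 0.0, A, -H])
    real = r[np.abs(r.imag) < 1e-9].real
    g = 0.5 * A * real**2 + 0.25 * Q * real**4 - H * real
    return real[np.argmin(g)]

def M_dMdA(H, A, Q, dA=1e-6):
    # M1 * (dM1/dA) at fixed H, the combination entering (8.63)
    dM = (stable_M(H, A + dA, Q) - stable_M(H, A - dA, Q)) / (2 * dA)
    return stable_M(H, A, Q) * dM

Q = 1.0
print(M_dMdA(1e-8, -2.0, Q))   # ferromagnetic side: tends to -1/(2Q)
print(M_dMdA(1e-3, 2.0, Q))    # paramagnetic side: tends to 0 with H
```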

The specific heat, susceptibility, zero-field magnetization, and spinodal field derived by the Landau theory are plotted in Fig. 8.7b as functions of the temperature. They show the features observed experimentally; see Fig. 3.5. The jumps of the specific heat and of the slope of the inverse susceptibility at A = 0 reaffirm our conclusion in Sect. 3.4 that, in the absence of the applied field, A = 0 is the locus of the second-order phase transition; see Fig. 3.6. The spinodal-field-versus-temperature curve may be perceived as a boundary on the phase diagram of the system, meaning that the system is ferromagnetic inside the curve and paramagnetic outside of it; see Fig. 3.9. However, our model lacks some of the key features of the behavior of real magnets. For instance, in the Landau theory, the magnetization does not saturate upon application of a strong external field but continues to grow at a slow rate of H^(1/3); cf. Fig. 3.3. Another discrepancy between the theory and experiment can be found in the shape of the magnetic hysteresis, which is slanted for real magnets; see Fig. 3.4. While the former may be explained by the limited number of terms in the expansion (8.46), the latter is due to the heterogeneous domain structure of real magnets.

Exercises

1. Prove that with the help of a linear substitution x = z + f one can always remove the linear term from the polynomial y = a + bx + cx² + dx³ + ex⁴.
2. What is the unit measure of the order parameter in the Landau potential (8.4) in terms of the unit measure of the free energy, which is joule?
3. Consider the following transitions and suggest the order parameters for them:
(a) Condensation
(b) Crystallization
(c) Magnetization
(d) Ordering in beta-brass alloy
4. Using your favorite software, plot the Landau potential (8.4) at different values of A and B. How many phases does it have at a fixed value of the parameter B and varying values of the parameter A?
5. Verify that (8.10) is a condition of instability of the state η− of the Landau potential. Hint: substitute (8.6) into (8.8).
6. Verify that (8.12c) is the only solution of (8.12b).
7. Verify that (8.37) are the spinodal conditions for the low- and high-symmetry phases of the tangential potential.


8. Verify that (8E.1b) is the biasing potential for the generating function (8E.1a), and find the physical properties of the harmonic potential.
9. Verify that (8E.3)–(8E.6) satisfy the Clapeyron-Clausius equation.
10. Calculate parameters of the tangential potentials for crystallization of water.
11. Find all states of equilibrium of the following Landau-Gibbs free energy potential and describe their stability properties:
G = [η(1 − η)]² − 0.05η³(10 − 15η + 6η²)
12. Consider the following Landau-Gibbs free energy potential:
G = ½a2(T − TC)η² + ¼a4η⁴ + ⅙a6η⁶;  a4 < 0
(a) Plot the free energy as a function of temperature, and show that it describes a first-order phase transition.
(b) Determine the phase transition temperature.
(c) Determine the domain of phase coexistence.
(d) Find the difference in the properties of the systems described by this free energy and the Landau free energy (8.4).
13. A martensitic transformation is described as a transition from the austenitic phase, which has cubic crystalline structure, into the martensitic phase, which has three variants stretched in one of the three perpendicular directions; see Fig. 3.1.
(a) Suggest an order parameter of the martensitic transformation.
(b) Introduce a simple Landau free energy potential, which can describe this type of a transition.

References

1. L.D. Landau, Phys. Zs. Sowjet. 11, 26, 545 (1937); in Collected Papers of L. D. Landau, ed. by D. Ter-Haar (Gordon and Breach, London, 1967), pp. 193–209
2. T. Poston, I. Stewart, Catastrophe Theory and Its Application (Pitman, London, 1978)
3. V.I. Arnol'd, Dynamical Systems V. Bifurcation Theory and Catastrophe Theory (Springer-Verlag, 1994)
4. A. Umantsev, Thermodynamic stability of phases and transition kinetics under adiabatic conditions. J. Chem. Phys. 96(1), 605–617 (1992)
5. J.-C. Toledano, P. Toledano, The Landau Theory of Phase Transitions (World Scientific, 1987)

Chapter 9

Heterogeneous Equilibrium Systems

So far, in Part II we have looked at the homogeneous (one-phase) systems, which can be described by uniform spatial distributions of the OP. Let us now look at a heterogeneous equilibrium system composed of two or more phases. In Sect. 2.4 we looked at the heterogeneous equilibrium states using the classical Gibbsian approach of the theory of capillarity. Unlike the classical one, the field-theoretic approach to the problem of phase equilibrium considers an interface between the coexisting phases as a transition zone of certain thickness with spatial distributions of OPs and possibly other parameters. Hence, for the continuous description of such systems, one must know not only the average values of P, T, and OPs but also the spatial distributions of these parameters. In this chapter we discuss only the equilibrium properties of the system; nonequilibrium systems will be considered in Chaps. 10 and 11. In this chapter we generalize the free energy to a functional of the spatial distributions of the order parameters and introduce a gradient energy contribution into the free energy density. We analyze various forms of the gradient energy and find the square-OP-gradient form to be preferable. Equilibrium conditions in the heterogeneous systems yield the Euler-Lagrange equation, solutions of which are called extremals. We study properties of the extremals in systems of various physical origins and various sizes and find a bifurcation at the critical size. The results are presented in the form of free energy landscapes. Analysis of the one-dimensional systems is particularly illuminating; it shows that, using qualitative methods of differential equations, many features of the extremals can be revealed without actually calculating them, based only on the general properties of the free energy. We find the field-theoretic expression for the interfacial energy and study its properties using different Landau potentials as examples. We introduce the concept of an instanton as a critical nucleus and study its properties in systems of different dimensionality. Multidimensional states are analyzed using the sharp-interface/drumhead approximation and the Fourier method. To analyze stability of the heterogeneous states, we introduce the Hamiltonian operator and find its eigenvalues for the extremals. Importance of the Goldstone modes and capillary waves for the stability

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_9


analysis of the extremals is revealed. In this chapter we finish column 2 and start erecting column 3 of the beams-and-columns FTM configuration in Fig. P.1.

9.1 The Free Energy

In a heterogeneous equilibrium system, some of the thermodynamic parameters turn into functions of space, and the states of the system should be mathematically described by a class of functions instead of numbers. The free energy of the whole system becomes a functional of the state variables and, possibly, their spatial derivatives:

G = ∫V g{P(x), T(x), H(x), ηi(x)} d³x     (9.1)

Here V is the total volume occupied by the system (V = Vα + Vβ + Vint), and the integrand g{P, T, H, η} may be called the Gibbs free energy density. Although in the previous section we used the specific free energy (per unit mass), most of the time in this book (unless it is specifically pointed out) for the sake of simplicity, we will be using the free energy density (per unit volume). As a consequence of the inclusion of heterogeneities, we have to change the mathematical tool used to find the equilibrium states of the system: instead of the calculus of functions of several variables, we have to use the calculus of variations (Appendix B). The equilibrium states of (9.1) can be found as the functions that minimize this functional. The cruxes of the problem of description of heterogeneous equilibrium systems by the functional (9.1) are the following questions: Spatial derivatives of which variables must be included in the free energy density of the system? Do we need to include the gradients of the thermodynamic variables, like P and T? Let us look at the temperature gradient (∇T) first. The Curie principle says that a scalar function cannot be affected by a vector field. Further, a linear term (a·∇T) may not be included because an isotropic system cannot support a vector a. A quadratic term, b(∇T)², withstands the tests of isotropy and homogeneity if b > 0. However, it contradicts the zeroth law of thermodynamics. Indeed, according to this law, any thermally isolated system must eventually come to the state of equilibrium characterized by a uniform temperature distribution. But if the quadratic term is included in the free energy functional (9.1), then the state of equilibrium will allow for a heterogeneous-in-temperature equilibrium state in a thermally isolated system. This contradiction proves that the free energy functional G is independent of the temperature gradient. A similar argument may be laid out with respect to the gradients of pressure.

9.1.1 Gradients of Order Parameter

Let us first assume that the OP of the system has only one scalar component, which is a function of x. In principle, the density g may be a function of the spatial derivatives of the OP of all orders:

g = g(P, T, H, η(x), ∂η/∂xi, ∂²η/∂xi∂xj, …)     (9.2a)

where i, j are the Cartesian indexes. The density g may be expanded in a Taylor series:

g = g(P, T, η) − Hη + ai ∂η/∂xi + ½bij (∂η/∂xi)(∂η/∂xj) + ½cij ∂²η/∂xi∂xj + (1/6)dijk (∂η/∂xi)(∂η/∂xj)(∂η/∂xk) + ⋯     (9.2b)

Functional dependence of the expansion coefficients ai, bij, cij, dijk, etc. is subject to the following constraints: first, the free energy density must be invariant with respect to the transformation of the coordinates, that is, a scalar; second, the continuum description is valid only if the spatial derivatives of the OP are not very large. Otherwise, the inhomogeneous part of the free energy density is much greater than the homogeneous one, and the details of the OP variations, discussed in Chap. 8, do not matter. These constraints show that the expansion coefficients may depend on the scalar quantities (P, T, H, η) and on the Euler angles θij defined as follows:

tan θij = (∂η/∂xi)/(∂η/∂xj)     (9.3)

The latter causes anisotropy—dependence on orientation in space—of the free energy of the system. In this book we will not be considering anisotropy in greater depth and will assume that the expansion coefficients are independent of the Euler angles. Let us for a moment disregard the linear term in expansion (9.2b) and consider the two second-order-in-spatial-derivatives terms. One of them can be reduced to the other. Indeed, let's calculate the contribution of the second second-order term into the total free energy (9.1):

∫V cij ∂²η/(∂xi∂xj) d³x = ∫V ∂/∂xi (cij ∂η/∂xj) d³x − ∫V (∂cij/∂η)(∂η/∂xi)(∂η/∂xj) d³x
= ∮Ω cij (∂η/∂xj) dsi − ∫V (∂cij/∂η)(∂η/∂xi)(∂η/∂xj) d³x     (9.4)


The Gauss theorem was used here to transform from the second expression to the third, and ∮Ω dsi denotes integration over the surface Ω enveloping the volume of the system V. The surface term in (9.4) is of the order of the surface free energy of the system and may be disregarded compared to the volumetric term for sufficiently large systems. For small systems, which are of great interest for nanoscience, the surface and volume terms may be of the same order. In this book, we will be considering only systems that are large enough for the surface contribution to be disregarded. If cij = const(η), the last term in (9.4) vanishes, and the entire contribution of the second second-order term in (9.2b) is zero. If cij = funct(η), that term is of the same form as the first second-order term in (9.2b). Thus, the contribution of the second-order spatial derivatives into the free energy density may be expressed by the term ½bij(∂η/∂xi)(∂η/∂xj) with a properly defined second-rank tensor bij. The free energy density (9.2b) will be a scalar if each of the remaining irreducible terms is a scalar. For the second-order term to be a scalar, the second-rank tensor bij must be proportional to the Kronecker tensor:

bij = κ(P, T)δij,  δij = 0 for i ≠ j,  δij = 1 for i = j     (9.5)

Then the total contribution of the second-order-in-spatial-derivatives terms will be represented by

½κ|∇η|²     (9.6)

where the coefficient κ is called the gradient energy coefficient. The physical meaning of this coefficient depends on the physical interpretation of the OP. For instance, if the OP represents the magnetic moment of the system, the gradient term is the exchange energy, and the gradient energy coefficient is its strength. The following must be said regarding the sign of this coefficient: If κ > 0, then the global equilibrium in the system described by the free energy (9.1, 9.2a, 9.2b, 9.3, 9.4, 9.5, and 9.6) is achieved at a homogeneous state, η = const(x), because any OP inhomogeneity increases the free energy of the system, which is in accord with the Le Chatelier-Braun principle. If κ < 0, then the OP inhomogeneities decrease the free energy, which leads to an unphysical state of "infinite mixing" where the coexisting phases create an infinite amount of infinitely sharp interfaces. To prevent our system from "going unphysical," we must include in (9.2b) the higher-order-in-spatial-derivatives terms. In this case, one can estimate a typical scale of the coefficient of the third-order term. Indeed, the order of magnitude of the derivative is ∂ⁿη/∂xi…∂xk ~ Δη/lⁿ, where l is the scale of the OP inhomogeneity. Then, as the second- and third-order derivative terms must balance each other, we obtain |κ|l ~ |Δη dijk|. Mostly, in this book, we concentrate on the systems where the gradient energy coefficient is positive:


κ(P, T) > 0     (9.7)

and the terms of order higher than the second may be neglected, that is, dijk = 0. In such systems the global equilibrium is achieved at the uniform distribution of the OP. However, in addition to the global equilibrium, such systems allow locally stable—metastable—heterogeneous equilibrium states.

9.1.2 Gradients of Conjugate Fields*

Now let us come back to the linear term in expansion (9.2b). It is a scalar if the ai's are components of a vector a, so that ai ∂η/∂xi = a·∇η.

Then, we must ask ourselves a question: How can the free energy density of our system depend on a vector a? If the system is isotropic, then by definition its free energy cannot depend on an external vector a. If the system's isotropy is broken by an applied external vector field, the Curie principle discussed above renders the term a·∇η unphysical again. However, there may be a case when the term a·∇η makes physical sense, namely, when a is proportional to the gradient of a nonuniform, scalar, external field conjugate to the OP: a = ν∇H. Then, the free energy density expansion (9.2b) contains an invariant linear term in the form

ai ∂η/∂xi = ν∇H·∇η     (9.8)

where the coefficient ν has the units of length squared. This term has proper symmetry; it accounts for the heterogeneous variations of the conjugate applied field H(x). Hence, in addition to the OP-field coupling terms, we must include the "vacuum" contribution of the field—the term ½μH², where μ is, for the time being, an undetermined field-scaling coefficient. Then

G = ∫V [g(P, T, η) + ½κ(∇η)² − Hη + ν∇H·∇η + ½μH²] d³x     (9.9)

Now let us apply the Le Chatelier-Braun principle to a stable homogeneous equilibrium state η̄, which sets in in a field-free system. The principle says that the reaction of the system to small disturbances must be such as to minimize the effect of the disturbances. Variation of the functional (9.9) yields that the equilibrium


distribution of the OP, which appears as a result of the applied field, is described by the ELE:

∂g/∂η − κ∇²η = H + ν∇²H     (9.10)

Reaction of the system to the applied field can be analyzed with the help of a superposition of plane waves with different wavevectors k (see Appendix F):

H(x) = Σk hk e^(ikx);  η(x) = η̄ + Σk Δηk e^(ikx)     (9.11)

where the amplitudes obey the relations h−k = hk* and Δη−k = Δηk* because the waves represent real fields. Substituting (9.11) into (9.10) and linearizing it (the disturbances are small), we obtain the expression for the generalized susceptibility:

χ(k) ≡ Δηk/hk = (1 − νk²)/(∂²g(η̄)/∂η² + κk²),  k = |k|     (9.12)

Now we can find the reaction of the system to the disturbance by calculating the free energy change resulting from the applied field. Substituting (9.11) and (9.12) into (9.9), we obtain

G{P, T, H, η̄ + Δη} ≅ V g(P, T, η̄) + Σk |hk|² R(k)     (9.13a)

where

R(k) = (μ/2) (l² + 2ν − ν²k²) k²/(1 + l²k²)     (9.13b)

is the field amplification factor,

l ≡ √(κμ)     (9.13c)

is the length scale of the OP inhomogeneity, and

μ ≡ (∂²g(η̄)/∂η²)⁻¹     (9.13d)

is the field-scaling coefficient because the homogeneous field H contribution is balanced by the OP changes only if R(0) = 0. Furthermore, the state η̄ will remain stable, and the influence of the applied field will be minimized only if R(k) > 0 for all



Fig. 9.1 The scaled field amplification factor R as a function of the scaled wavenumber k for different values of the coefficient ν

values of k > 0. Indeed, if R(k) is negative, at least for some values of k, then another state, possibly a heterogeneous ηH(x), is more stable than η̄, which means that a transition into that state is probable. Analysis of (9.13b) in Fig. 9.1 shows that R(k) > 0 for all values of k > 0 only if

ν = 0     (9.14)
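This sign analysis is easy to confirm numerically; a sketch of (9.13b) in scaled units (μ = l = 1, chosen for illustration):

```python
import numpy as np

def R(k, nu, mu=1.0, l=1.0):
    # field amplification factor (9.13b)
    return 0.5 * mu * (l**2 + 2 * nu - nu**2 * k**2) * k**2 / (1 + l**2 * k**2)

k = np.linspace(0.0, 10.0, 2001)
print(np.all(R(k[1:], 0.0) > 0))   # nu = 0: R(k) > 0 for every k > 0
print(R(k, 0.3).min() < 0)         # nu != 0 turns R negative at large k
print(R(k, -0.3).min() < 0)
```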

9.1.3 Field-Theoretic Analogy

Thus, in the absence of the external field, the total free energy of a heterogeneous system with a one-component scalar OP may be expressed as follows:

G = ∫V g d³x     (9.15a)

g ≡ g(P, T, H; η) + ½κ(∇η)²     (9.15b)

A profound analogy may be established between our system and the Lagrangian system considered by the field theory; see Appendix D. In the framework of this analogy, the total Gibbs free energy (9.15a) is analogous to the total energy E of the Lagrangian system (D.18), and the Gibbs free energy density (9.15b) to the T00 component of the stress-energy tensor (D.16). This analogy hints at another quantity which may be significant for our system:

ḡ ≡ g(P, T, H; η) − ½κ(∇η)²     (9.15c)

Significance of this quantity comes from its analogy to the T11 component of the stress-energy tensor, which is a conserved property, that is, it has a first integral; see (D.19). Existence of the first integral is one of the consequences of the Euclidean invariance of the free energy density (9.15b)—its independence of the spatial coordinates. Another consequence of the Euclidean invariance is the existence of the Goldstone modes; see Sect. 9.6.

9.2 Equilibrium States

As we discussed in Chap. 8, a thermodynamically stable equilibrium state of an open system with specified temperature and pressure must deliver a minimum to its Gibbs free energy. Because the free energy of the heterogeneous system is a functional, (9.15), the equation for the state ηE, either differential or integral, should be obtained by minimization of the functional, that is, by the variational procedure. As explained in Appendix B, the states ηE can be found among the solutions of the Euler-Lagrange equation (ELE):

δG/δη ≡ (∂g/∂η)P,T,H − κ∇²η = 0     (9.16)
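A solution of ELE (9.16) can be found in practice by relaxation: repeatedly stepping η against the variational derivative until δG/δη vanishes. The minimal 1d sketch below (model double-well g = ¼(1 − η²)², illustrative grid parameters; this steepest-descent iteration is a numerical device, not the physical dynamics of Chaps. 10 and 11) converges under a zero-flux boundary condition to a uniform state, in accord with the κ > 0 discussion of Sect. 9.1.1:

```python
import numpy as np

kappa, dt, dx, nx, steps = 1.0, 0.002, 0.1, 200, 20000
rng = np.random.default_rng(0)
eta = 1.0 + 0.1 * rng.standard_normal(nx)   # noisy start inside the eta = +1 basin

for _ in range(steps):
    lap = np.empty_like(eta)
    lap[1:-1] = (eta[2:] - 2.0 * eta[1:-1] + eta[:-2]) / dx**2
    lap[0] = 2.0 * (eta[1] - eta[0]) / dx**2       # zero-flux ends (mirror node)
    lap[-1] = 2.0 * (eta[-2] - eta[-1]) / dx**2
    eta = eta - dt * ((eta**3 - eta) - kappa * lap)  # descend along -dG/d(eta)

print(np.abs(eta - 1.0).max())   # converges to the uniform state eta = 1
```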

Solutions of ELE (9.16) are called extremals of the functional (9.15a): ηE = func(x; P, T, H). There is one more parameter that is important for the extremals; this is the dimensionality of the geometrical space d—the number of space variables that essentially influence them. A physical state is not uniquely specified if we simply give the differential equations which the state must satisfy. For the extremals to describe a physical state of a system uniquely, it is necessary to set the boundary conditions (BCs) on the surface Ω of the volume V of the system even in the thermodynamic limit: V → ∞. BCs (and initial conditions in the case of time-dependent problems) are sets of elements of the physical behavior of the system which are not regulated by the physical laws that entailed the equations for the system. The kind of BC that shall be used depends on the physical problem; changing the BC may have a dramatic effect on the properties of the state, that is, on the solutions of ELE (9.16). Notice in Appendix B that the variational procedure itself yields different kinds of BCs depending on the physical


properties of the boundary: Dirichlet's BC on the boundary where the OP is fixed, Eq. (B.3):

η = F(s) on Ω     (9.17a)

or the Neumann-type BC on the boundary Ω where the OP may vary, Eq. (B.11):

jΩ·∇η = 0 on Ω     (9.17b)

Here jΩ is the unit outer normal vector of the boundary Ω. Solutions of ELE (9.16) that satisfy BC (9.17a or 9.17b) represent the equilibrium states of the system:

ηE ∈ {f(x); P, T, H, V, d, BC}     (9.18)

A more complicated case of a completely free boundary (see Eq. (B.17)) is also possible. In this book we will be using the Neumann-type BC more often than other types. In Chap. 8 we analyzed the homogeneous (0d) equilibrium states η̄; the 1d heterogeneous states will be considered in Sect. 9.3; 3d states will be considered in Sect. 9.5; the 2d states have properties intermediate between 1d and 3d. A set of functions {η(x)}, which can generate a functional, is called a functional space. For our purposes, the most convenient functional space is the Hilbert space, where each element is defined and continuous together with its gradients in the domain V and is characterized by its "size". How can we "measure" an infinite-dimensional element of the functional space? Here is one way to do that. For each element η(x) of the functional space, we define four numbers: the average value

⟨η⟩ ≡ (1/V) ∫V η(x) d³x     (9.19a)

the range (the largest difference between the values of the element)

Π ≡ max(x,x′∈V) |η(x) − η(x′)|     (9.19b)

the amplitude

Ν ≡ max(V) |η(x) − ⟨η⟩|     (9.19c)

and the slope

Λ ≡ max(V) |∇η(x)|     (9.19d)
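For a concrete element of the functional space, say the smoothed step η(x) = tanh(x/w) on a finite interval (an illustrative choice of ours), the measures (9.19a)–(9.19d) are easy to evaluate on a grid; a sketch:

```python
import numpy as np

# sample element of the functional space: a smoothed step eta(x) = tanh(x/w)
w, X = 1.0, 20.0
x = np.linspace(-X / 2, X / 2, 40001)
eta = np.tanh(x / w)

avg = eta.mean()                          # <eta>, (9.19a), on a uniform grid
Pi = eta.max() - eta.min()                # range Pi, (9.19b)
N = np.abs(eta - avg).max()               # amplitude N, (9.19c)
Lam = np.abs(np.gradient(eta, x)).max()   # slope Lambda, (9.19d)
print(avg, Pi, N, Lam)
```

Here ⟨η⟩ ≈ 0, Π ≈ 2, Ν ≈ 1, and Λ ≈ 1/w.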

Then each element of the Hilbert space {η(x)} will be characterized by its skew (asymmetry)

Σ = 2Ν − Π     (9.19e)

the length scale

L = Π/Λ     (9.19f)

and the norm

‖η(x)‖ = ⟨η⟩ + Π + Λ     (9.19g)

which "measures" the "size" of an element in the Hilbert space. For the dimensions of the terms in (9.19g), see Eq. (B.8) and the comment after that. To elucidate the properties of the equilibrium states ηE(x), we will derive a few other forms of the equilibrium equation useful for the analysis below. First, by partially differentiating the left-hand side of ELE (9.16) and using the fact that it does not depend explicitly on the coordinates, we find an equation for the gradient of the extremal ηE(x). In the most convenient form, this equation can be written down as

H(ηE(x)) ∇ηE(x) = 0     (9.20)

using a linear operator

H(ηE(x)) ≡ (∂²g/∂η²)P,T (ηE) − κ∇²     (9.21)

called the Hamiltonian, which plays an important role in the analysis of the properties of the system. An advantage of the form (9.20, 9.21) is that the properties of the Hamiltonian operator are well known from quantum mechanics (see Appendix E), where it plays the role of the operator of the total energy of a particle, with the term ∂²g(ηE)/∂η² being analogous to the potential energy of a particle and (−κ∇²) to the kinetic one. Another advantage of (9.20, 9.21) is that in the case of radially symmetric extremals, it is convenient to consider the Hamiltonian in the spherical polar coordinate system (see Appendix C). Its disadvantage is that the operator depends on the extremal ηE(x) itself, which is unknown. Second, integrating ELE (9.16) and applying the Gauss divergence theorem with the Neumann-type BC (9.17b), we obtain an integral property of the equilibrium states:

∫V (∂g/∂η) d³x = 0     (9.22)

This formula tells us that although ∂g/∂η{η_E(x)} does not vanish everywhere, as is the case in a homogeneous system (see Chap. 8), on average it still does.


Third, let’s derive another integral form of the equilibrium equation. For this, we apply the vector-calculus formula ∇·(u∇v) = ∇u·∇v + u∇²v to the functions u = v = η(x) and obtain

∇·(η∇η) = (∇η)² + η∇²η   (9.23)

Then, multiplying all the terms of ELE (9.16) by η, using the formula (9.23), integrating over the entire volume V occupied by the system, and applying the Gauss divergence theorem, we obtain the integral form of the equilibrium equation:

∫_V [ κ(∇η_E)² + η_E (∂g/∂η) ] d³x = κ ∮_Ω η_E ∇η_E · ds   (9.24a)

For the Neumann-type BC (9.17b), the integral equilibrium equation (9.24a) takes on a very appealing mathematical form:

∫_V [ κ(∇η_E)² + η_E (∂g/∂η) ] d³x = 0   (9.24b)

This relation allows us to present the total free energy of the equilibrium system in another form, which involves the auxiliary function Φ(η) from (8.20) and does not involve the OP gradients:

G{η_E(x)} = ∫_V [ g − ½ η_E (∂g/∂η) ] d³x   (9.25a)

Subtracting g(η)V from this expression and using the integral equilibrium equation (9.22), we can define the free energy excess:

ΔG_E = ∫_V [ g(η_E) − g(η) − ½ (η_E − η)(∂g/∂η)(η_E) ] d³x   (9.25b)

which is a measure of the free energy of the state ηE(x) relative to that of the homogeneous state η. Thermodynamically, ΔGE is equal to the minimal work which must be done on the system in order to transform it from the homogeneous state η to the equilibrium state ηE(x).

9.3 One-Dimensional Equilibrium States

9.3.1 General Properties

The 1d equilibrium states are the cornerstones of the FTM; that is why we need to study their properties in detail. Let’s first consider equilibrium states in a one-dimensional (1d), open system, which is a box with a base S, thickness X, and volume V = XS, where the OP depends on one space coordinate only, η = η_E(x). Then, ELE (9.16) takes the form

κ d²η/dx² − ∂g/∂η = 0,  0 ≤ x ≤ X   (9.26)

This is a second-order ODE, which needs two boundary conditions to identify its solution η_E(x) uniquely. Although in some physical situations Dirichlet conditions may be justified, here we will be using the Neumann ones:

dη/dx = 0 at x = 0, x = X   (9.27)

Instead of (9.26), one can use the 1d version of (9.20, 9.21), which takes the form

−H(η(x)) dη/dx ≡ κ d³η/dx³ − (∂²g/∂η²)(η(x)) dη/dx = 0   (9.28)

As we pointed out in Sect. 9.1 (see also Appendix D), the 1d-ELE (9.26) has the first integral in the form of the conserved quantity (9.15c). Because this is such an important property of the 1d equilibrium states, we will demonstrate it again. We multiply both terms of Eq. (9.26) by (dη/dx) and transform them as follows:

(κ/2) d/dx (dη/dx)² − dg/dx = 0   (9.29)

and integrate using that κ(P, T) = const(x). The first integral takes the form

(κ/2)(dη/dx)² − g(P, T; η) = const(x) ≡ −μ   (9.30a)

where

μ ≡ g(P, T, η_l) = g(P, T, η_r)   (9.30b)


η_l ≡ η_E(0),  η_r ≡ η_E(X)   (9.30c)

Due to (9.7), (9.30a) yields

μ ≤ g(P, T, η_E(x))   (9.30d)

According to Gibbs, the constant μ is the chemical potential of the system; its relation to the pressure, temperature, and size of the system will be discussed below. The existence of the first integral of ELE (9.16) in the form (9.30a) may be interpreted as a conservation law for the quantity g. As with any conservation law, it is related to a certain symmetry of the system, which will be analyzed in Sect. 9.3.3. The conservation law (9.30a) is very helpful as it allows us to analyze the equilibrium states for a general form of the potential g(P, T, η). Unfortunately, the first integral and the conservation law exist for 1d extremals only. Equation (9.30a) plus BC (9.27) represents the boundary-value problem for the 1d equilibrium states (9.18). Let us resolve it as follows:

dη/dx = ± √( (2/κ)[g(P, T, η) − μ] )   (9.31)

This equation shows that there are two categories of 1d states: monotonic and non-monotonic, that is, with alternating sign of dη/dx. Equation (9.31) can be integrated in the domain of monotonicity of the OP. Selecting the positive branch and separating variables, we obtain a general solution in quadratures:

x = √(κ/2) ∫ dη / √(g(P, T, η) − μ)   (9.32a)

Notice that, because of the conservative property (9.30a), a particular solution (9.32a) of the boundary-value problem (9.27, 9.30a) requires only one arbitrary constant, which is specified by (9.30c). Taking into account that η_l < η_r for the positive monotonic branch of (9.32a), we arrive at the relationship

X = √(κ/2) ∫_{η_l}^{η_r} dη / √(g(P, T, η) − μ)   (9.32b)

between the size of the 1d system X, its thermodynamic parameters P and T, and the parameters of the equilibrium state η_E(x): the chemical potential μ and the range Π = |η_r − η_l|. The non-monotonic states are periodic with a monotonic half-period D_1d and the BC, (9.27) and (9.30c), which apply on the boundaries of the domain of monotonicity x_l < x < x_r. In a finite-size system, the non-monotonic states are characterized by the index, the number of half-periods:

n_1d ≡ X/D_1d,  D_1d ≡ x_r − x_l   (9.33)

For the monotonic states, obviously, n_1d = 1. To find the free energy of the 1d state, we substitute (9.30a) into (9.15a) and obtain

G_1d = Vμ + S n_1d ∫_{x_l}^{x_r} κ (dη_1d/dx)² dx   (9.34a)

Changing the variables of integration on the monotonic branch of (9.31), we obtain

G_1d = Vμ + S n_1d ∫_{η_l}^{η_r} √(2κ[g(P, T; η_1d) − μ]) dη   (9.34b)

There is a lot that we can learn about the extremals η_E(x) without calculating the integrals in (9.32b) and (9.34b). A monotonic heterogeneous state must have an inflection point (x_i), and (9.28) and (9.29) with BC (9.27) help us find it. Indeed, the necessary and sufficient conditions for an inflection point of a smooth function are

d²η/dx²(x_i) = 0 and d³η/dx³(x_i) ≠ 0   (9.35a)

Then we conclude that

∂g/∂η(η_i) = 0 and ∂²g/∂η²(η_i) ≠ 0   (9.35b)

Moreover, expanding the derivative near the inflection point

dη/dx(x) = dη/dx(x_i) + ½ d³η/dx³(x_i)(x − x_i)² + ⋯   (9.35c)

and applying the BC (9.27) to the expansion, we obtain that d³η(x_i)/dx³ ≠ 0 and η_i = η_t. Therefore, it is advantageous to expand the function g(P, T, η) that satisfies condition (8.3) about the homogeneous transition state and truncate after the fourth-power term; see (8.38). Then, from (8.38c) we find that g_β < μ

Fig. 9.4 Tipped-off potential function U corresponds to thermodynamic preference of a stable phase, β, compared to a metastable phase, α. The filled circles indicate the initial positions, and the open circles indicate the final positions of the point mass. (a) critically damped oscillator; (b) “marginally” damped oscillator; (c) “marginally” damped oscillator from the lower hump. Damped oscillator is the mechanical analog of a traveling wave in the thermodynamic system

⟨η⟩ = η_t,  Σ_e = 0,  Ν_e = Π_e/2   (9.43)

Then, using the fact that in the phase plane the trajectory η_e(x) is symmetric with respect to the transition point η_t (see Fig. 9.2b(ii)), we can find the solutions of the boundary-value problem (9.26, 9.27) by substituting (8.38) into (9.32):

x = √κ ∫ dΔη / √[ g″_t (Δη² − Ν²_e) + (1/12) g⁗_t (Δη⁴ − Ν⁴_e) ]   (9.44)

where Δη ≡ η_e − η_t. The end points of the solution (9.44) are Δη_r = −Δη_l = Ν_e; they are the turning points of the phase plane. (Why?) The solution (9.44) yields the expressions for the amplitude (9.19c) and length (9.19f) (see Exercise 9.2):

Ν²_e = 2mΘ²;  m = ½ [ 1 − √(1 − 4(g_t − μ)/Ξ) ];  Θ ≡ √(6|g″_t| / g⁗_t)   (9.45a)

L_e = 2l / √(1 − m);  l ≡ √(κ / |g″_t|)   (9.45b)

Ξ ≡ |g″_t| Θ² = 6 (g″_t)² / g⁗_t   (9.45c)
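The scale relations in (9.45) are easy to sanity-check numerically. Below is a minimal sketch (the function names and the illustrative values g″_t = −1, g⁗_t = 6, κ = ½ are assumptions for demonstration, not taken from the text); it also verifies the inversion μ = g_t − Ξm(1 − m) implied by (9.45a):

```python
import math

def fundamental_scales(g2t, g4t, kappa):
    """Fundamental amplitude, length, and energy-density scales of Eq. (9.45).

    g2t -- second derivative of g at the transition state (negative for type-e states)
    g4t -- fourth derivative of g at the transition state (positive)
    """
    theta = math.sqrt(6.0 * abs(g2t) / g4t)   # Theta, Eq. (9.45a)
    ell = math.sqrt(kappa / abs(g2t))         # l, Eq. (9.45b)
    xi = abs(g2t) * theta**2                  # Xi = 6 (g2t)^2 / g4t, Eq. (9.45c)
    return theta, ell, xi

def normalized_potential(gt_minus_mu, xi):
    """Normalized chemical potential m of Eq. (9.45a)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * gt_minus_mu / xi))

# Illustrative values: g''_t = -1, g''''_t = 6, kappa = 1/2
theta, ell, xi = fundamental_scales(-1.0, 6.0, 0.5)
m = normalized_potential(0.20 * xi, xi)
```

Inverting m(μ) reproduces g_t − μ = Ξm(1 − m), and the phase-equilibrium value g_t − μ = Ξ/4 gives m = ½.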

where m is a normalized chemical potential and Θ, l, and Ξ are, respectively, the fundamental amplitude, length, and energy density scales of the OP distribution.


Notice that if the system approaches its spinodal point, that is, g″_t → 0, then Θ → 0, Ξ → 0, and l → ∞. Equation (9.45) allows us to establish other explicit relations between the characteristics of the state η_e(x); see (9.19). For instance, by excluding the parameter m from (9.45a, 9.45b), we can obtain the slope as a function of the amplitude:

Λ_e = (Ν_e / l) √(1 − Ν²_e / (2Θ²))   (9.45d)

Another length scale, similar to the fundamental length l, can be defined for a stable equilibrium state η. It is called the correlation radius r_C, see (12.15), because, as we will learn in Chap. 12, the OP fluctuations of the state η are correlated on the scale of r_C. Now, using the substitution u = Δη/Ν_e for (9.45), we can clarify the relationship (9.32b) for the type-e states:

X = 2√κ ∫₀^{Ν_e} dΔη / √[ g″_t (Δη² − Ν²_e) + (1/12) g⁗_t (Δη⁴ − Ν⁴_e) ] = (2l / √(1 − m)) K(√(m/(1 − m))) = L_e K(√(m/(1 − m)))   (9.46a)

where K(k) is the elliptical integral of the first kind [2]:

K(k) ≡ ∫₀¹ du / √[(1 − u²)(1 − k²u²)] → (π/2)(1 + k²/4) for k → 0 + 0;  → ln(4/√(1 − k²)) for k → 1 − 0   (9.46b)
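The limiting forms quoted in (9.46b), and the size relation (9.46a), can be confirmed numerically. A sketch, assuming the standard arithmetic-geometric mean (AGM) evaluation of K(k), which is an implementation choice and not part of the text:

```python
import math

def ellip_K(k):
    """Complete elliptic integral of the first kind via the arithmetic-geometric mean."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def system_size(m, ell):
    """X(m) of Eq. (9.46a): X = (2 l / sqrt(1-m)) K(sqrt(m/(1-m)))."""
    return 2.0 * ell / math.sqrt(1.0 - m) * ellip_K(math.sqrt(m / (1.0 - m)))
```

As m → 0, the system size approaches πl, which is the bifurcation length of (9.52) below.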

Notice from (9.45) and (9.46a) that for the type-e states, L_e is always smaller than X and the difference between the two grows as |g_t − μ| grows. To find the free energy of a type-e state (not necessarily monotonic), we again use the substitution u = Δη/Ν_e and (9.45) for (9.34b) with the potential (8.38) and obtain

G_e = Vμ + S n_e Ν³_e √(κ g⁗_t / 12) ∫_{−1}^{+1} √[ (1 − u²)((1 − m)/m − u²) ] du = μV + 8 ( √(κ|g″_t|³) / g⁗_t ) Φ(m) n_e S
   = [ g_t − Ξm(1 − m) ] V + (4/3) Ξ Φ(m) n_e S l = [ g_α(β) + Ξ(m − ½)² ] V + (4/3) Ξ Φ(m) n_e S l   (9.47)

where we defined the function

Φ(m) = √(1 − m) [ E(√(m/(1 − m))) − (1 − 2m) K(√(m/(1 − m))) ],   (9.48a)

and E(k) is the elliptical integral of the second kind [2]:

E(k) ≡ ∫₀¹ √(1 − k²u²) / √(1 − u²) du → (π/2)(1 − k²/4) for k → 0 + 0;  → 1 + ½(1 − k²)[ln(4/√(1 − k²)) − ½] for k → 1 − 0   (9.48b)

There are two limiting cases of the function in (9.48a) which are of interest:

Φ(m) → (3π/4) m, for m → 0 + 0;  Φ(m) → 1/√2, for m → ½ − 0   (9.48c)

The slope, amplitude, and free energy as implicit functions of the system’s size are best represented in the scaled form as follows:

λ = lΛ_e/Θ,  a = Ν_e/Θ,  j = (G_e − g_α(β)V)/(ΞV),  s = X/l.   (9.49)

Notice that j represents the dimensionless free energy excess quantity. The functions a(s) and j(s) for the monotonic states (ne = 1) are depicted in Fig. 9.5. For the non-monotonic states (ne > 1), the size X is replaced in (9.49) by the length of monotonicity De; see (9.33).

9.3.5

Type-e1 Solution: Bifurcation From the Transition State

Although the implicit functions j(s) and a(s) solve the problem of the 1d extremals, it is instructive to analyze them further. For a type-e1 state (9.38a), (9.39a), (9.45) yields

m_e1 ≈ (g_t − μ)/Ξ → 0;  Ν_e1 ≈ √(2(g_t − μ)/|g″_t|) → 0;  L_e1 → 2l   (9.50)

Fig. 9.5 Size-amplitude bifurcation diagram for type-e heterogeneous monotonic states. Red lines, scaled amplitude; blue lines, scaled energy; solid lines, 1d systems; dashed lines, 3d systems. X̃/l, the 1d bifurcation point; numbers, the classification types

Notice that, while the amplitude Ν_e1 of the type-e1 heterogeneous state depends strongly on the chemical potential value through (g_t − μ), the characteristic length L_e1 is practically independent of it. Excluding the parameter m from (9.45a) and (9.46a), we obtain for type-e1 solutions

Ν_e1 ≈ 2Θ √( 2(X − X̃) / (3X̃) )   (9.51)

This equation shows that the heterogeneous solutions η_e1(x) branch away, or bifurcate, from the homogeneous transition state η = η_t at the point X = X̃, where

X̃ ≡ πl.   (9.52)

This situation may be called size-amplitude bifurcation (SABi). Notice that the bifurcation length X̃ is of the same order of magnitude as the fundamental length l. Equation (9.51) depicted in Fig. 9.5 is the SABi diagram, and (X̃, η_t) is the SABi point, near which (X → X̃ + 0) the solution η_e1(x) is a harmonic function with the period 2X̃. Now, let us look at the SABi diagram from the standpoint of the free energy. To do this, we exclude the parameter m from (9.46a) and (9.47) and analyze the limiting cases (μ → g_t, m → 0, X → X̃ + 0) of the relationship between the free energy and the system’s size V = XS for the type-e1 monotonic state:
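A few points on the SABi branch can be generated directly from (9.45a) and (9.46a). The following sketch (AGM evaluation of K(k) assumed; an implementation choice, not from the text) illustrates that all heterogeneous states lie at scaled sizes s = X/l ≥ π and that their amplitude grows with the system size:

```python
import math

def ellip_K(k):
    """Complete elliptic integral of the first kind via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def scaled_size_and_amplitude(m):
    """Parametric point (s, a) of the SABi diagram: s = X/l from (9.46a), a = N_e/Theta from (9.45a)."""
    s = 2.0 / math.sqrt(1.0 - m) * ellip_K(math.sqrt(m / (1.0 - m)))
    a = math.sqrt(2.0 * m)
    return s, a

# Sample the branch of heterogeneous monotonic states, 0 < m < 1/2
branch = [scaled_size_and_amplitude(m) for m in (1e-6, 0.1, 0.3, 0.49)]
```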


G_e1 = Vg_t + ΞmS(X̃ − X) = V [ g_t − (4Ξ/3) (X − X̃)² / (X̃X) ]   (9.53)

This expression, scaled as in (9.49), is also depicted in the SABi diagram, Fig. 9.5.

9.3.6

Type-e3 Solution: Approach to Thermodynamic Limit

For the type-e3 state, μ → g_α(β) and (9.38a, 9.39c, 9.45) yield

m_e3 → m_e4 = ½;  Ν_e3 → Ν_e4 = Θ;  L_e3 → L_e4 = 2√2 l   (9.54)

Then, using properties of the elliptical integral (9.46b), we obtain the following expression for the amplitude:

Ν²_e3 = Θ² ( 1 − 8e^{−2X/L_e4} )   (9.55)

Equations (9.54, 9.55) yield that the type-e3 states appear when X → ∞. For the free energy of the monotonic state, (9.47) yields

G_e3 ≈ Vg_α(β) + SlΞ (2√2/3)(1 + 4e^{−2X/L_e4}) = V [ g_α(β) + Ξ (l/X)(2√2/3)(1 + 4e^{−2X/L_e4}) ]   (9.56)

The free energy densities and amplitudes of the type-e1–e3 states scaled as in (9.49) are depicted in Fig. 9.5 as functions of the system size.

9.3.7

Type-e4 Solution: Plane Interface

As μ → g_α, the elliptical integral in (9.46a) grows without bound. This means that the boundary-value problem (9.26, 9.27) has no type-e4 solutions in a finite domain (X < ∞) but may have solutions in the infinite domain (X → ∞), that is, the thermodynamic limit. To find the type-e4 solutions, the phase-plane method must be supplemented with the exact calculation of the solutions η_E(x) using quadratures of (9.44). Substituting (9.54) we obtain


x = x_i + (L_e4/4) ln[ (Θ + Δη_e4) / (Θ − Δη_e4) ]   (9.57a)

where the constant of integration, x_i, may be interpreted as the x-coordinate of the inflection point of the solution:

Δη_e4(x_i) ≡ 0.   (9.57b)

For a bounded solution, |Δη_e4| < Θ, (9.57) can be resolved as follows:

η_e4 = η_t + Θ tanh[ 2(x − x_i)/L_e4 ]   (9.58)

In materials physics, this type of solution is called a plane interface; in the theory of micromagnetism, a domain-boundary wall; and in the mathematical theory of waves, a kink. Although the elementary-functions solution (9.58) resolves the boundary-value problem (9.26, 9.27) completely (see Figs. 9.2b(iii) and 9.3), it is useful for the following to analyze the properties of the solution even further:

1. Equations (9.44, 9.54) show that for the type-e4 state, x_l → −∞ and x_r → +∞.

2. Solution (9.58) is odd with respect to the point (x_i, η_t), so that Δη(−x) = −Δη(x).

3. Because X → ∞, BC (9.27) yields d²η/dx² = 0, which, due to ELE (9.26), results in

∂g/∂η(η_l) = ∂g/∂η(η_r) = 0   (9.59)

This relation shows that the type-e4 solution connects two equilibrium phases with η_l = η_α and η_r = η_β.

4. The heterogeneity of the solution (9.58) is localized in an interval of the x-axis of a characteristic length practically the same as that of the type-e1 state, L_e4 = √2 L_e1; see (9.50, 9.54). But solution (9.58) has infinitely long “tails” in both directions, x → ±∞.

5. Selecting the positive branch of (9.31), we may express the slope as a function of η:

dη_e4/dx = 2(Θ² − Δη²_e4) / (L_e4 Θ) > 0   (9.60)

The slope is a pulse-type function, which is essentially nonzero only on the interval of localization of the heterogeneities.
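Properties (9.58)-(9.60) can be verified numerically. The sketch below uses the illustrative symmetric quartic g = (A/2)η² + ¼η⁴ with A = −1 and κ = ½ (the same values as in Example 9.2); the tanh profile should satisfy ELE (9.26) to discretization accuracy, and its slope should match (9.60):

```python
import math

# Illustrative symmetric quartic: g(eta) = (A/2) eta^2 + (1/4) eta^4, so eta_t = 0, Theta = sqrt(|A|)
A, kappa = -1.0, 0.5
theta = math.sqrt(abs(A))
L_e4 = 2.0 * math.sqrt(2.0) * math.sqrt(kappa / abs(A))   # Eq. (9.54)

def eta_kink(x):
    """Plane-interface solution, Eq. (9.58), with x_i = 0."""
    return theta * math.tanh(2.0 * x / L_e4)

def ele_residual(x, h=1e-4):
    """Residual of ELE (9.26): kappa d2eta/dx2 - dg/deta, by central finite differences."""
    d2 = (eta_kink(x + h) - 2.0 * eta_kink(x) + eta_kink(x - h)) / h**2
    eta = eta_kink(x)
    return kappa * d2 - (A * eta + eta**3)

def slope_formula(x):
    """Slope as a function of the profile, Eq. (9.60)."""
    return 2.0 * (theta**2 - eta_kink(x) ** 2) / (L_e4 * theta)
```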


6. Because the slope (dη_e4/dx) does not depend on x explicitly (only through η), the solution must be translationally invariant (with respect to the Euclidean group of spatial transformations): η_e4 = func(x − x₀) for arbitrary x₀. The translational invariance of the solution (9.58) was removed by setting x₀ = x_i.

7. One can see that

d²η_e4/dx² = −(4Δη_e4 / (L_e4 Θ)) dη_e4/dx   (9.61)

8. Using the mathematical formulae

tanh(x) − tanh(y) = sinh(x − y) / (cosh(x) cosh(y)),  d tanh(x)/dx = 1/cosh²(x),

for solution (9.58), it is easy to show that

η_e4(x − δx) − η_e4(x) ≈ −δx dη_e4/dx   (9.62)

This relation shows that the difference between two slightly displaced solutions is equal to minus the shift times the slope of the unperturbed solution. Furthermore, for the free energy of the type-e4 state, (9.34a) and Property 1 yield

G_e4 = Vg_α(β) + S ∫_{−∞}^{+∞} κ (dη_e4/dx)² dx   (9.63)

Compare this expression with (9.47), and notice that G_e4 separates into a term proportional to the volume V and a term proportional to the area S of the box, with the coefficients of proportionality independent of m and, hence, of X. This fact allows us to compare (9.63) with the free energy of the two-phase equilibrium system in the theory of capillarity (see Sect. 4.2.1) and identify the quantity

σ ≡ ∫_{−∞}^{+∞} κ (dη_e4/dx)² dx   (9.64)

as the interfacial energy of the system in the FTM. This expression shows that σ > 0, provided condition (9.7) is true. Notice that (9.64) allows us to interpret the Neumann-type BC (9.27) as corresponding to the case of absence of additional energy on the surface of the system.


The interfacial energy is an important quantity because it is measurable. This justifies a more thorough analysis of expression (9.64). Compare (9.15a, 9.30a) with (9.64) and see that other expressions for the interfacial energy are possible:

σ = ∫_{−∞}^{+∞} (g − μ) dx   (9.65a)

σ = 2 ∫_{−∞}^{+∞} ( g(P, T_E; η_e4) − μ ) dx   (9.65b)

Taking (9.24b) into account, we also find that

σ = − ∫_{−∞}^{+∞} η_e4 (∂g/∂η)(P, T_E; η_e4) dx   (9.65c)

Using the positive branch of (9.31), we obtain another expression:

σ = √(2κ) ∫_{η_α}^{η_β} √( g(P, T_E; η_e4) − μ ) dη   (9.65d)

In Sect. 9.3.3, similarity between (9.65d) and (D.27) allowed us to establish an analogy between the interfacial energy and the abbreviated action in the Lagrangian field theory.

Example 9.1 Interfacial Energy for Landau and Tangential Potentials Find relationships between the interfacial energy and the thermodynamic parameters for the systems with the general, Landau, and tangential potentials.

The best way to find such a relationship for the general potential (8.38) is to use formula (9.65d). Then, we obtain the following relation:

σ = √(κ g⁗_t / 12) ∫_{−Θ}^{+Θ} (Θ² − Δη²) dΔη = (2√2/3) Ξ l   (9E.1)

Using the Landau potential (8.4) for (9.65d), we obtain the following relation:

σ = √(2κ) ∫₀^{4B/3} η (2B/3 − η/2) dη = (2/3)⁴ √(2κ) B³   (9E.2)
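Since the integrand in (9E.2) is polynomial, a simple quadrature reproduces the closed form essentially exactly. A minimal sketch with illustrative values B = 1, κ = 1 (assumptions for demonstration):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# Landau-potential interface energy, Eq. (9E.2)
B, kappa = 1.0, 1.0
sigma_quad = math.sqrt(2.0 * kappa) * simpson(
    lambda eta: eta * (2.0 * B / 3.0 - eta / 2.0), 0.0, 4.0 * B / 3.0)
sigma_exact = (2.0 / 3.0) ** 4 * math.sqrt(2.0 * kappa) * B**3
```

Simpson quadrature is exact for cubic integrands, so the two numbers agree to rounding error.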

For the tangential potential (8.30), first notice that, because the OP is rescaled compared to the OP of the Landau potential, the gradient energy coefficient of the tangential potential must be rescaled too. From (9.15b), (8.29), we find that

κ̃ = κC²   (9E.3)

Then, using (8.30) for the tangential potential, we find that at equilibrium g″_t = W/2. Substituting this into (9.45b) and using (9.54) leads to

l = √(2κ̃/W);  L_e4 = 4√(κ̃/W)   (9E.4)

Although the expression for the interfacial energy can be obtained from (9E.1), it is advantageous to derive it directly from (9.65d) using (8.30). Notice that the expression in (9.65d) is invariant to the scaling (Why?). Then, taking into account that η_α = 0, η_β = 1, we obtain

σ = √(κ̃W) ∫₀¹ ω²(η) dη = (1/6) √(κ̃W)   (9E.5)

9.3.8 Interfacial Properties: Gibbs Adsorption Equation*

The interfacial energy is an important property of a system, which deserves a closer look. At this juncture we may ask a question: how many thermodynamic variables does the interfacial energy depend on? First, notice that σ can be defined properly only in the thermodynamic limit of V → ∞ or at least X → ∞ (dependence of the interfacial energy on the system’s size is an important subject in its own right, but we will not be looking at that in this book). At first glance, it seems that σ is a function of three variables: σ = func(P, T, μ). However, considering equilibrium conditions (2.5, 9.30a, 9.59), we find that not all of them are independent. If η_α and η_β represent different phases, then five variables (η_α, η_β, P, T, μ) are constrained by four conditions, which leave only one independent; if η_α and η_β represent the same phase, then four variables are constrained by two conditions, which leave two variables independent. Thus, using (9.30a, 9.65a) we obtain

dσ = D_Tσ dT + D_Pσ dP   (9.66a)

where the partials of σ are

D_Tσ = − ∫_{−∞}^{+∞} (s − s_α) dx;  s(P, T, η) = −∂g/∂T,  s_α = −∂μ/∂T = −(∂g/∂T)(P, T, η_α)   (9.66b)

D_Pσ = ρ ∫_{−∞}^{+∞} (v − v_α) dx;  v(P, T, η) = (1/ρ)(∂g/∂P),  v_α = (1/ρ)(∂μ/∂P) = (1/ρ)(∂g/∂P)(P, T, η_α)   (9.66c)

where s is the entropy density, v is the specific volume, and ρ = M/V is the average mass density. If η_α and η_β represent the same phase, then the partials represent the proper (i.e., non-diverging) quantities, the interfacial entropy and volume:

χ ≡ −D_Tσ,  ν ≡ D_Pσ/ρ.   (9.67)

If η_α and η_β represent different phases, then according to the Clapeyron-Clausius equation (2.8), see also (2.4):

[s]^β_α dT = ρ [v]^β_α dP,   (9.68)

and

dσ = −Γ^(sv) dT   (9.69a)

where the proper, non-diverging, quantity is the relative interfacial entropy:

Γ^(sv) ≡ ∫_{−∞}^{+∞} [ s − s_α − (v − v_α) [s]^β_α / [v]^β_α ] dx.   (9.69b)

If the density of the system does not change from one phase to another or is not essential for the problem, the relative interfacial entropy may be defined with respect to the OP:

Γ^(sη) ≡ ∫_{−∞}^{+∞} [ s − s₋ − (η − η₋) [s]^+₋ / [η]^+₋ ] dx.   (9.69c)

Example 9.2 Interfacial Energy of Antiphase Domain Boundary Find the fundamental length scale, interfacial energy, and entropy of the antiphase domain boundary. Antiphase domain boundary (APB) is the type-e4 state that appears in the system after the second-order transition. Hence, we can use the Landau potential (8.4, 8.44)

Fig. 9.6 Antiphase domain boundary of the system with A = −1 and κ = ½. The equilibrium state η_e4, (9E.9), red line; “potential energy” ∂²g(η_e4)/∂η² for this state, black line; and two bound eigenfunctions, blue and green lines. The two bound-eigenfunction energy levels are identified by the respective colors

with B = 0 and A < 0; see Sect. 8.6.2. Then, η_t = 0, g⁗_t = 6, g″_t = A = aτ, which yields

l = √(κ/|A|),  Θ = √|A|,  Ξ = |A|²   (9E.6)

Using (9E.1) we obtain for the interfacial energy and entropy, respectively,

σ = (2/3) √(2κ|A|³)   (9E.7)

χ = (a/T_c) √(2κ|A|)   (9E.8)

Notice that close to the LCP (8.16), that is, for |τ| → 0: χ ≫ σ/T_C. In Fig. 9.6 is depicted

η_e4(x) = √|A| tanh( x √(|A|/2κ) )   (9E.9)
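The APB result (9E.7) can be recovered by direct quadrature of (9.65d), assuming the quartic form g = (A/2)η² + ¼η⁴ for the B = 0 Landau potential (consistent with g⁗_t = 6 above) and the Fig. 9.6 values A = −1, κ = ½:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

A, kappa = -1.0, 0.5

def g(eta):
    # B = 0 Landau potential with g''_t = A, g''''_t = 6
    return 0.5 * A * eta**2 + 0.25 * eta**4

mu = g(1.0)   # chemical potential = free energy of the ordered phases eta = +-1
# Eq. (9.65d) integrated between the two degenerate phases eta_alpha = -1, eta_beta = +1
sigma_quad = simpson(lambda e: math.sqrt(2.0 * kappa * (g(e) - mu)), -1.0, 1.0)
sigma_exact = (2.0 / 3.0) * math.sqrt(2.0 * kappa * abs(A) ** 3)   # Eq. (9E.7)
```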

9.3.9 Type-n4 Solution: Critical Plate—Instanton

Type-n states are more complicated; they appear when the condition (9.42) does not hold and g‴_t becomes a measure of deviation from the phase equilibrium. The phase-plane method and the thermomechanical analogy, however, may elucidate some of the properties of the type-n states in sufficient detail. Let us consider the type-n4 state. Two comments are in order here. First, to preserve consistency with the analysis above, we will be analyzing the monotonic branch of the state. Second, it is more convenient here to use the Landau potential (8.4) instead of the general one (8.38). The relation between the two is established by (8.39), e.g.,

g‴_t = 2B − 6√(B² − A) < 0   (9.70)

The phase-plane method (see Figs. 9.2a(ii) and 9.4) suggests that η_n4(x) varies between η_α = 0 and the turning point η_1c, where

η_1c = (4B/3) [ 1 − √(1 − 9A/(8B²)) ]   (9.71)

Taking the integral in (9.32a), we obtain the full solution (see Fig. 9.2a(iii)):

x = √(κ/A) ln[ (η_n4/η_1c) ( 1 − (2B/(3A)) η_1c ) / ( 1 − (2B/(3A)) η_n4 + √(1 + (3η²_n4 − 8Bη_n4)/(6A)) ) ]   (9.72)

It has an inflection point at (cf. (8.6))

η_t = η₋ = B − √(B² − A)   (9.73a)

The characteristic length (9.36) of the solution η_n4(x) is represented as follows:

L_n4 = 2η_1c √κ / ( η₋ √(2A − (4B/3)η₋) )   (9.73b)
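The closed form (9.73b) is just the range-to-slope ratio Π/Λ of (9.19f), with the maximal slope taken from the first integral (9.30a) at the inflection point η₋. A numerical sketch, assuming the quartic Landau form g = (A/2)η² − (2B/3)η³ + ¼η⁴ consistent with (9.70)-(9.73); the values A = 0.5, B = 1, κ = 1 are illustrative:

```python
import math

A, B, kappa = 0.5, 1.0, 1.0

def g(h):
    """Assumed Landau potential: g = (A/2)h^2 - (2B/3)h^3 + h^4/4, with mu = g(0) = 0."""
    return 0.5 * A * h**2 - (2.0 * B / 3.0) * h**3 + 0.25 * h**4

eta_minus = B - math.sqrt(B**2 - A)                                          # inflection, Eq. (9.73a)
eta_1c = (4.0 * B / 3.0) * (1.0 - math.sqrt(1.0 - 9.0 * A / (8.0 * B**2)))   # turning point, Eq. (9.71)

# Maximal slope Lambda from the first integral (9.30a), reached at eta_minus
slope_max = math.sqrt(2.0 * g(eta_minus) / kappa)

# Closed form of Eq. (9.73b)
L_n4 = 2.0 * eta_1c * math.sqrt(kappa) / (
    eta_minus * math.sqrt(2.0 * A - 4.0 * B * eta_minus / 3.0))
```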

The “tail” of the solution in (9.72), that is, the part 0 < η_n4 ≪ η₋, may be represented as η_n4 ≈ const × exp(x/L_n4). Notice that the characteristic length of the tail of η_n4(x) is different from that of η_e4(x) in (9.58). Another important length scale of this solution is the distance between the turning (η_1c) and inflection (η₋) points:

D_n4 = √κ ∫_{η₋}^{η_1c} dη / [ η √(A − (4B/3)η + ½η²) ]   (9.74)

To calculate the free energy of the type-n4 state, we use the Landau potential (8.4) for (9.34b) with n_n4 = 1 (Why?):

G_n4 = Vμ + S √(2κ) ∫₀^{η_1c} √(g(η) − μ) dη = Vμ + S √κ ∫₀^{η_1c} η √(A − (4B/3)η + ½η²) dη
   = Vμ + Sσ [ 3R − 2R³ + (3/2)(1 − R²) ln((1 − R)/(1 + R)) ]   (9.75a)

where

R = (3/(2√2)) √A / B   (9.75b)
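The closed form (9.75a) can be compared with direct quadrature of the free energy integral. A sketch, assuming the quartic Landau form g = (A/2)η² − (2B/3)η³ + ¼η⁴ consistent with (9.70)-(9.75); the off-equilibrium values A = 0.5, B = 1, κ = 1 (so that R < 1) are illustrative:

```python
import math

def simpson(f, a, b, n=200000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

A, B, kappa = 0.5, 1.0, 1.0
R = 3.0 * math.sqrt(A) / (2.0 * math.sqrt(2.0) * B)                          # Eq. (9.75b)
eta_1c = (4.0 * B / 3.0) * (1.0 - math.sqrt(1.0 - 9.0 * A / (8.0 * B**2)))   # Eq. (9.71)
sigma = (2.0 / 3.0) ** 4 * math.sqrt(2.0 * kappa) * B**3                     # Eq. (9E.2)

# Surface part of G_n4, Eq. (9.75a): sqrt(2*kappa*(g - mu)) = sqrt(kappa)*eta*sqrt(A - 4B*eta/3 + eta^2/2)
quad = math.sqrt(kappa) * simpson(
    lambda e: e * math.sqrt(max(A - 4.0 * B * e / 3.0 + 0.5 * e * e, 0.0)),
    0.0, eta_1c)
closed = sigma * (3.0 * R - 2.0 * R**3
                  + 1.5 * (1.0 - R**2) * math.log((1.0 - R) / (1.0 + R)))
```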

Notice that as g‴_t → 0: R → 1 and G_n4 → Vμ + Sσ. Type-n4 represents a 1d pulse of ordering of one kind in the “sea” of ordering of another kind, e.g., β-solid in α-liquid. Notice from (9.30) and Fig. 9.2a(i) that the ordering of the “sea” is that of the metastable phase, e.g., liquid, while the ordering of the pulse comes close (although not quite there yet) to that of the stable phase, e.g., solid. For type-n4 the chemical potential, μ = g_α, corresponds to the critical value between the cases of a general periodic distribution of ordering in the system and the absence of equilibrium states altogether; that is why this case is called the instanton: a localized, critical, equilibrium excitation of the old phase with a finite amount of free energy excess. The instanton is described by a homoclinic orbit, while the kink is described by a heteroclinic orbit (cf. Fig. 9.2a(ii) and b(ii)). In that regard, (9.72) actually describes half the instanton; the whole instanton has n_n4 = 2. Hence, as R → 1 the total energy of the whole instanton approaches Vμ + 2Sσ.

9.4

Free Energy Landscape*

Various types of solutions of the 1d equilibrium boundary-value problem (9.26, 9.27) obtained above are in need of physical interpretation. The finite-domain solutions can be interpreted as the equilibrium states of a slab of thickness X, while the infinite-domain solutions (−∞ < x < +∞) are the equilibrium states in the thermodynamic limit, X → ∞. In the infinite domain, there are periodic type-1, 2, 3, pulse type-n4, and kink type-e4 solutions. The periodic and pulse solutions are non-monotonic. For the periodic solution η_e2(x), (9.33, 9.46a) describe the half-period D_e2 and the index n_e2 of the state. Type-e states correspond to the conditions


of phase equilibrium, T = T_E(P), when the free energies of the phases α and β are equal (see Figs. 9.2b(i), 9.3); hence, the type-e4 state can be interpreted as a two-phase equilibrium state where the phases are separated by an interface. For type-n states, the free energies of the homogeneous phases differ; hence, T ≠ T_E(P), and the type-n4 state can be interpreted as a 1d pulse of critical ordering (instanton).

At this stage we have learned enough about our system to be able to understand the relationships between the physical parameters that control it. First, let’s ask the question: What are the factors that may influence a state of the system η_E(x), (9.18)? Obviously, pressure P, temperature T, and the external field H do. In the previous section, we saw that the linear size X of the system’s domain Ω makes a difference: the states may be finite, infinite, or semi-infinite. A less obvious factor that may change the equilibrium state is the type of the boundary condition on the surface of the system. The latter becomes less important in the thermodynamic limit, X → ∞. In a laboratory experiment, all the abovementioned parameters can be altered more or less independently, including the dimensionality and boundary conditions. The former can be changed by preparing samples of different shapes; the latter by changing the interaction of the system with the environment, e.g., lubricating the surface. However, the pressure P, temperature T, and system’s size X are the most flexible control parameters of the system; they constitute the thermodynamic degrees of freedom of the system. As we saw in the previous sections, there is one more parameter that influences the equilibrium states of the system: the chemical potential μ. At first glance, the chemical potential is independent of the other parameters and, hence, constitutes a thermodynamic degree of freedom. However, this perception is false.
The crux of the matter is that an important factor, stability of the solution, has not been taken into account yet. In the oncoming sections, we will see that the stability considerations present a selection criterion for the equilibrium states. That is, when all the thermodynamic degrees of freedom are specified, a unique value of the chemical potential μ will be selected based on the principle of the free energy minimum. A geometrical concept of the free energy landscape may help us visualize the distribution of the equilibrium states η_E(x) in the Hilbert space {η(x)}. The free energy landscape is the “elevation” of the functional G{P, T, H, V, d, BC; η(x)}, (9.15a), with respect to an arbitrary “sea” level. The extremals are the stationary points of the landscape: minima, maxima, and saddle points. How can we visualize the “real estate” of the infinite-dimensional functional space {η(x)} over which the landscape will be constructed? This space consists of functions, which are continuous enough (usually the first two partial derivatives are continuous) and satisfy the BC, e.g., (9.17b). Then the free energy functional G{η(x)}, (9.15a), may be viewed as a hypersurface in the functional space {η(x)}, each element of which is characterized by five numbers; see (9.19, 9.33). Then, each element of {η(x)} may be represented by a point in the 4-space (⟨η⟩ + Π, Λ, Σ, n), and the landscape will be the hypersurface over that space:


G = G{⟨η⟩ + Π, Λ, Σ, n}   (9.76)

Notice that there is no explicit dependence on the system size; it enters implicitly through the relationship between the range, slope, and index. Using this geometrical image, one can form a better idea about the equilibrium states and their stability. The equilibrium states are represented by the extremals η_E(x), (9.18); they correspond to minima, maxima, or saddle points of the hypersurface (9.76). A minimum means that any small deviation from the extremal η_E(x) makes the functional increase; a maximum, decrease; and a saddle means that the functional increases in some directions and decreases in others. Here we show how to construct the landscape (9.76) for the 1d type-e monotonic states with varying value of the chemical potential μ. As the landscape-dependent coordinate, we use j of (9.49). As Σ_e = 0 and n_1d = 1 (see (9.33, 9.43)), we may reduce the number of the real-estate independent coordinates to two: the dimensionless slope λ and amplitude (⟨η⟩ + Ν). For convenience, the latter is shifted by η_t and scaled with Θ:

y = (1/Θ)(⟨η⟩ + Ν − η_t)   (9.77)

For the homogeneous elements of {η(x)}, Ν = 0 = Λ. The homogeneous equilibrium states have ⟨η⟩ = η_α or η_t or η_β; hence, respectively, y = −0.5 or 0 or +0.5. For the heterogeneous equilibrium states of {η(x)}: ⟨η⟩ = η_t, Ν ≠ 0 ≠ Λ, and y = a; see (9.43, 9.49). Then, (9.45d) is the projection of the line of the extremals on the plane (y, λ) and (9.45a, 9.45c, 9.46a, 9.47, 9.48a) on the plane (y, j). The dimensionless forms of these relations are

λ = y √(1 − ½y²)   (9.78a)

j = ¼(1 − y²)² + (1/3)(2 − y²) [ E(y/√(2 − y²)) / K(y/√(2 − y²)) − 1 + y² ]   (9.78b)
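Equation (9.78b) can be evaluated numerically: in particular, j → ¼ as y → 0, which is the scaled barrier height of the homogeneous transition state, and j decays along the heterogeneous branch. A sketch assuming the standard AGM evaluation of K and E (an implementation choice, not from the text):

```python
import math

def ellip_KE(k):
    """K(k) and E(k) via the arithmetic-geometric mean (Abramowitz & Stegun 17.6)."""
    a, b, c = 1.0, math.sqrt(1.0 - k * k), k
    csum, pow2 = 0.5 * c * c, 0.5
    while abs(c) > 1e-15:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        pow2 *= 2.0
        csum += pow2 * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)

def j_landscape(y):
    """Scaled free energy along the heterogeneous equilibrium line, Eq. (9.78b)."""
    k = y / math.sqrt(2.0 - y * y)
    K, E = ellip_KE(k)
    return 0.25 * (1.0 - y * y) ** 2 + (2.0 - y * y) / 3.0 * (E / K - 1.0 + y * y)
```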

To visualize the landscape, we are depicting it in Fig. 9.7 and projecting its characteristic lines on the planes (y, j) and (y, λ) in Fig. 9.8.

9.5

Multidimensional Equilibrium States

In the previous sections, we analyzed properties of the 1d equilibrium states. We may wonder now if the free energy landscape of a thermodynamic system G{η(x)}, (9.76), contains multidimensional equilibrium states, that is, those where essential



Fig. 9.7 Free energy landscape of the 1d type-e system, j = func(y, λ). Purple line, the heterogeneous equilibrium states; pink line, projection of the heterogeneous equilibrium state line; yellow line, the homogeneous states

variations take place in more than one spatial direction. In fact, it does, and it contains infinitely many of them. The mathematical complexity of the multidimensional states comes from the fact that ELE (9.16) cannot be integrated in d > 1, that is, there are no first integrals that describe such equilibria. In Sect. 9.5.1 we demonstrate how one can find multidimensional equilibrium states which can be thought of as small deviations, ripples, on top of one of the homogeneous equilibrium states η of Chap. 8. Some of the multidimensional states (Sect. 9.5.2) have quasi-1d structure similar to those of the plane interfaces studied in Sect. 9.3.7; others have spherical symmetry, which makes a rigorous treatment possible (Sect. 9.5.3).

9.5.1

Ripples: The Fourier Method*

Let’s look for an equilibrium state ηE(x), which on average is equal to η, so that

206

9

Heterogeneous Equilibrium Systems

Fig. 9.8 Projections of the free energy landscape in Fig. 9.7 on the planes (y, j) and (y, λ). Red lines, the homogeneous states; purple lines, the heterogeneous equilibrium states. Pink circle, the barrier state; purple circle, type-e4 state, plane interface. Pink rounded box, SABi; purple rounded box, approach to the thermodynamic limit

Δη(x) ≡ ηE(x) − η̄ and ⟨ηE⟩ = η̄   (9.79)

Applying the equilibrium integral equations (9.22, 9.24b), we obtain

∫_V [ κ(∇Δη)² + (∂g/∂η)(ηE) Δη ] dx = 0   (9.80a)

Expanding ∂g/∂η about η̄ and taking into account (8.5), we obtain


∫_V [ κ(∇Δη)² + (∂²g/∂η²)(η̄)(Δη)² + (1/2)(∂³g/∂η³)(η̄)(Δη)³ + (1/6)(∂⁴g/∂η⁴)(η̄)(Δη)⁴ + ⋯ ] dx = 0   (9.80b)

This equation may be analyzed using the method of Fourier transform (Appendix F). To see this, let's present Δη(x) in the form of a Fourier series:

Δη(x) = Σ_{k} Δηₖ e^{ikx}   (9.81a)

where k = (k_x, k_y, k_z) is a separation parameter called the wavevector. In a parallelepiped with sides (X, Y, Z) and volume V = XYZ, k has the following components:

k = (πn_x/X, πn_y/Y, πn_z/Z),  n_j integers, j = x, y, z   (9.81b)

The summation in (9.81a) is over all integer combinations (n_x, n_y, n_z). The Δηₖ are called the Fourier coefficients of Δη(x) and can be found as

Δηₖ = (1/V) ∫_V Δη(x) e^{−ikx} dx   (9.82)

Due to (9.79), the k = 0 Fourier mode of Δη(x) vanishes:

Δη₀ = 0   (9.83)

If the OP field η(x) is represented by real numbers (fields of different mathematical origin will be considered in Chap. 16), then, as you can see from (9.82), the Fourier coefficients of opposite wavevectors k are complex conjugates:

Δη₋ₖ = Δηₖ*   (9.84)
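Properties (9.83) and (9.84) are easy to check numerically. The sketch below is illustrative, not from the book; for simplicity it uses numpy's periodic exponential modes rather than the half-wavevectors of (9.81b):

```python
import numpy as np

# Sample a real, zero-mean order-parameter perturbation on a periodic grid
# (an arbitrary illustrative field; any real Delta-eta with zero mean works).
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
deta = 0.3 * np.cos(3 * x) - 0.1 * np.sin(5 * x)

# Fourier coefficients, normalized as in (9.82): (1/V) times the integral
coeff = np.fft.fft(deta) / N

# (9.83): the k = 0 mode vanishes for a zero-mean field
assert abs(coeff[0]) < 1e-12

# (9.84): coefficients of opposite wavevectors are complex conjugates
for n in range(1, N):
    assert np.allclose(coeff[-n], np.conj(coeff[n]))
print("(9.83) and (9.84) hold on the sample field")
```

The conjugate symmetry is exactly what allows one to count only independent real amplitudes when analyzing (9.86) below.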

Using the Fourier representation, we can express the gradient of Δη(x) as

∇Δη(x) = i Σ_{k} k Δηₖ e^{ikx}   (9.85)

Substituting (9.81) and (9.85) into (9.80b), we obtain the Fourier representation of the equilibrium equation:


0 = V Σ_{k} [ κ|k|² + (∂²g/∂η²)(η̄) ] |Δηₖ|² + (V/2)(∂³g/∂η³)(η̄) Σ_{k₁+k₂+k₃=0} Δη_{k₁}Δη_{k₂}Δη_{k₃} + (V/6)(∂⁴g/∂η⁴)(η̄) Σ_{k₁+k₂+k₃+k₄=0} Δη_{k₁}Δη_{k₂}Δη_{k₃}Δη_{k₄} + ⋯   (9.86)

In the summations of the Fourier coefficients of the third, fourth, etc. orders, the sums of the wavevectors add up to zero because

∫_V e^{ikx} dx → (2π)³ δ(k) for V → ∞   (F.10)

On the boundaries of the system, x = (0 or X, y, z), jΩ = (±1, 0, 0); or x = (x, 0 or Y, z), jΩ = (0, ±1, 0); or x = (x, y, 0 or Z), jΩ = (0, 0, ±1). Then, to satisfy the boundary conditions (9.17b), the Fourier coefficients must obey the following relationships (Why?):

Σ_{n_j=−∞}^{+∞} (−1)^{n_j} n_j Δη_{n_x,n_y,n_z} = 0.   (9.87a)

If the OP field can be represented as Δη(x) = f(x)g(y)h(z), the Fourier coefficients break down into products of the separate components: Δηₖ = Δη_{n_x} Δη_{n_y} Δη_{n_z}. In this case, condition (9.87a) may be simplified:

Σ_{n_j=1}^{+∞} (−1)^{n_j} n_j Im Δη_{n_j} = 0.   (9.87b)

This condition shows that BC (9.17b) will be satisfied if all the Fourier coefficients are real. The advantage of (9.86, 9.87a) over (9.16) or (9.24b) is that these are algebraic equations; the disadvantage is that we must now deal with many components Δηₖ instead of one function Δη(x). For the state ηE(x) to be close to η̄, the Fourier coefficients Δηₖ must be small. Then, (9.86) yields that the necessary condition for this is that the coefficient in front of the quadratic term vanishes:

κ|k|² + (∂²g/∂η²)(η̄) = 0   (9.88)

This is possible only at a point of unstable equilibrium, ∂²g(η̄)/∂η² < 0, that is, at the transition state: η̄ = ηₜ. In a cube with side X, this condition may be expressed as follows:


X²/X̃²(P, T) = n_x² + n_y² + n_z²   (9.89)

where X̃ comes from (9.52). As the n_j's are integers, this equation defines the SABi point X_d of the 2d and 3d states, which is an analog of the 1d SABi point in (9.51). Notice that for the 2d and 3d states, the bifurcation is deferred until the greater lengths √2·X̃ and √3·X̃, respectively, instead of X̃ for the 1d states; see Fig. 9.5.

Equation (9.86) can be used to find the coefficients Δηₖ beyond the SABi point. Indeed, let us resolve (9.86) for a Δη(x) that has k₁ = k₂ = −k₃ = −k₄ in a system where (9.42) applies, that is, T = T_E(P). Then the third-order summation term vanishes, and the fourth-order term turns into

(V/6) g⁗ₜ Σ_{k} |Δηₖ|⁴

Then (9.86) can be resolved as follows:

|Δηₖ|² = −6 (g″ₜ/g⁗ₜ)(1 − X_d²/X²) ≈ −12 (g″ₜ/g⁗ₜ)(X − X_d)/X_d   (9.90)

Expanding g(ηE) in (9.15a) and substituting (9.90) into the expansion, we obtain

G_d = V gₜ + V Σ_{k} [ (1/2)(κ|k|² + g″ₜ)|Δηₖ|² + (1/24) g⁗ₜ |Δηₖ|⁴ ] = V [ gₜ − 6 (g″ₜ²/g⁗ₜ)((X − X_d)/X_d)² ]   (9.91)

Compare this expression with (9.53) and notice that for the extremals with d > 1,

G_{d>1}(V) > G₁(V)   (9.92)

for two reasons: (1) the deviation from Vgₜ starts at a linear size X_d greater than that for d = 1, that is, greater than X̃; (2) the curvature of the deviation is d times smaller than for d = 1. In Fig. 9.5 the Fourier coefficients (9.90) and the free energy (9.91) are expressed through a and j of (9.49) and presented in comparison with the 1d type-e extremals.
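The deferral of the bifurcation for the 2d and 3d states follows from the smallest admissible integer combinations in (9.89): n_x² + n_y² + n_z² = 1, 2, 3 for states varying in one, two, or three directions. A small illustrative script (with an arbitrary value of X̃) makes the point:

```python
import itertools

X_tilde = 1.0  # arbitrary units; X_tilde(P, T) comes from (9.52)

def smallest_bifurcation_size(dim):
    """Smallest X satisfying (9.89), X^2 = (nx^2 + ny^2 + nz^2) * X_tilde^2,
    when exactly `dim` of the integers nx, ny, nz are nonzero."""
    best = None
    for n in itertools.product(range(0, 4), repeat=3):
        if sum(1 for v in n if v != 0) != dim:
            continue
        s = sum(v * v for v in n)
        if best is None or s < best:
            best = s
    return (best ** 0.5) * X_tilde

# 1d states bifurcate at X_tilde; 2d and 3d ones only sqrt(2), sqrt(3) times later
print(smallest_bifurcation_size(1))  # 1.0
print(smallest_bifurcation_size(2))  # 1.4142...
print(smallest_bifurcation_size(3))  # 1.7320...
```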

9.5.2

Sharp-Interface (Drumhead) Approximation

As we saw in Sect. 9.3.7, a heterogeneous equilibrium state is possible where the OP changes quickly from one bulk-phase value to another in a plane transition zone, called the interface, while remaining practically constant or changing very slowly outside this zone. This result prompts us to search for geometrically more complicated equilibrium states that are represented by thin transition layers where most of the OP change occurs. On the "outside," these transition layers may have a rather complicated, multidimensional geometrical shape; see Fig. 9.9. To find such states, instead of the Cartesian coordinates x = (x₁, x₂, x₃), we shall introduce new curvilinear coordinates {u = U(x), v = V(x), w = W(x)} (see Appendix C). The main requirements on the new coordinates are that they be orthogonal and that the OP be a function of one coordinate only: η = η(u). In order to eliminate the remaining arbitrariness of the new curvilinear coordinates, we assume that the coordinate u = U(x) obeys the eikonal equation everywhere:

(∇U)² = 1,   (C.16)

and the level surface U = 0 is specified as follows:

(d²η/du²)(0) = 0.   (9.93)

Fig. 9.9 Curvilinear coordinate system (u, v, w) associated with a curved interface. Vn—velocity of the interface motion



Then, according to (9.26), on the level surface U(x) = 0, the OP takes on one of the equilibrium values, η̄. The level surfaces U(x) = const are characterized by a unit normal vector:

u = ∇η/|∇η| = (1/|∇η|)(∂η/∂x₁ j₁ + ∂η/∂x₂ j₂ + ∂η/∂x₃ j₃),   (C.11)

which is invariant with respect to rotation of the reference frame (see Appendix C). Then the curved interface may be represented by the direction cosines of u or by the Euler angles of inclination θij with respect to the coordinate axes; see (C.12) and cf. (9.3):

tan θij = (∂η/∂xi)/(∂η/∂xj) = (∂u/∂xi)/(∂u/∂xj)   (9.94)

The unit normal u has many practical applications; for instance, it may be used to express the anisotropic properties of the interface, e.g., the anisotropy of the interfacial energy. Another way to characterize a curved surface is to describe its curvature as a function of the curvilinear coordinates. Using the formula

∇²u = 2K(u, v, w) = k₁ + k₂,   (C.21)

where K is the mean curvature and k₁, k₂ are the principal curvatures of the level surface U = const, and expressing the Laplacian operator in the new coordinates:

∇² = ∂²/∂u² + 2K(u, v, w) ∂/∂u,   (C.23)

ELE (9.16) transforms as follows:

κ d²η/du² + 2κK(u, v, w) dη/du − (∂g/∂η)(T, η) = 0   (9.95)

To solve (9.95) a number of different ideas may be used. For instance, one may imagine a 3d equilibrium state ηE, which has the same internal structure as the 1d one, see (9.26), that is, the first and third terms in (9.95) add up to zero. Then, the mean curvature K (u,v,w) must be zero everywhere, which brings us back to the Cartesian coordinate system. However, there is a more general technique, which can be used to derive the equilibrium and nonequilibrium equations for a piece of an interface. This is the method of averaging. This is how it works. First, we introduce the averaging operator:

A[f] ≡ ∫_{uβ}^{uα} du (dη/du) f(η(u), u) = ∫_{ηβ}^{ηα} dη f(η, u(η))   (9.96)

where the integration is over the thickness of the interface, that is, over the interval (uβ, uα) whose end points are located outside the interface zone in the regions occupied by the respective phases, η(uβ) = ηβ and η(uα) = ηα; see Fig. 9.9.

A comment is in order here. The straight averaging of (9.95) without a weight factor is not effective because it eliminates the free energy contribution (the driving force), ∫_{−∞}^{+∞} [∂g(T_E, ηe4)/∂η] du = 0 (see (9.22)), but does not eliminate the OP variation: ∫_{uβ}^{uα} (dη/du) du = ηα − ηβ. Hence, a proper averaging of (9.95) should include a weight factor. Using dη/du as the weight factor (for the choice of the weight factor, see Appendix D) and taking into account that it vanishes at uα and uβ, we obtain that

A[d²η/du²] = 0   (9.97a)

Then, using the relation dg = (∂g/∂η)dη, we obtain that

A[∂g/∂η] = g(uα) − g(uβ) ≡ [g]αβ   (9.97b)

where [g]αβ is the jump of the free energy density from one phase to the other; see (2.4). The difference between this quantity and the one defined in (2.4a) is that now we are dealing with phases that may not have the same free energy, that is, [g]αβ ≠ 0. Furthermore, the level surfaces U = const are equidistant; hence, the radius of curvature R of these surfaces is R = R₀(v, w) + u, and for the mean curvature, we obtain

K ≡ 1/R(u, v, w) = K₀(1 − uK₀ + u²K₀² + O(u³K₀³))   (9.98)

where R₀(v, w) and K₀(v, w) are the radius and the curvature of the level surface U(u, v, w) = 0. In Sect. 9.3.7 we found that the equilibrium state ηe4, (9.58), is an odd function of u with the characteristic length of variation equal to Le4 and that (dηe4/du)² is a bell-like, even function of u. Then, if

Ge ≡ 2|K₀|Le4 ≪ 1,   (9.99)

where Ge is called the geometric number of the layer, taking into account (9.64), we obtain that

A[2κK dη/du] = 2K₀ A[κ dη/du] + O(L³ₑ₄K₀³) ≈ 2σK₀(v, w)   (9.97c)

Thus, the coordinates u and (v, w) in (9.95) separate, and it can be rewritten in the form

2σK₀ = [g]αβ   (9.100)

This is the sharp-interface or drumhead approximation of the ELE (9.16), and the smallness of the geometric number Ge is the criterion of its validity. Geometrically, the latter means that the level surface U = 0 does not have sharp folds with radii of curvature on the scale of, or smaller than, the interfacial thickness. A helpful perspective on the transformation of (9.95) into (9.100) is to think about digging a tunnel through a mountain ridge. Then, u is a tunnel variable, which gives us all the information about the structure of the mountain, and (v, w) are the ridge variables, which tell us more about the shape of the ridgeline.

If [g]αβ ≠ 0, then (9.100) is another form of Laplace's pressure equation (2.11). If [g]αβ = 0, e.g., the APB of Example 9.2, then, in two dimensions, this is possible only for a plane interface where K₀(v, w) = 0. In three dimensions, the mean curvature of a curved surface can be zero if the two principal curvatures at that point of the surface are equal in magnitude but of opposite sign:

k₁(v, w) = −k₂(v, w)   (9.101)

It is possible to construct a surface that has zero mean curvature everywhere by translating a specially designed unit cell. Such surfaces are termed “periodic minimal surfaces.”
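For the simplest curved level surface, a sphere of radius R (where k₁ = k₂ = 1/r), the eikonal condition (C.16) and the curvature formula (C.21) can be checked symbolically. The following sympy sketch (an illustration, not part of the book's derivation) does so for the signed-distance coordinate u = |x| − R:

```python
import sympy as sp

x, y, z, R = sp.symbols('x y z R', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = r - R  # signed distance from the sphere of radius R: the level set u = 0

grad_u = [sp.diff(u, v) for v in (x, y, z)]

# Eikonal equation (C.16): (grad u)^2 = 1 everywhere
assert sp.simplify(sum(g**2 for g in grad_u) - 1) == 0

# (C.21): the Laplacian of u equals twice the mean curvature, k1 + k2 = 2/r
lap_u = sum(sp.diff(g, v) for g, v in zip(grad_u, (x, y, z)))
assert sp.simplify(lap_u - 2/r) == 0
print("eikonal and curvature identities verified for the sphere")
```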

9.5.3

The Critical Droplet: 3d Spherically Symmetric Instanton

The case of [g]αβ ≠ 0 is particularly important and needs to be analyzed in detail. As we stated above, ELE (9.16) cannot be integrated in d > 1. However, for an isotropic system, some of the extremals have very simple symmetry: cylindrical for d = 2 and spherical for d = 3. In a cylindrically/spherically symmetric system, the divergence operator takes the form (Appendix C)

∇² = d²/dr² + ((d − 1)/r) d/dr   (C.26)

where r is the spatial coordinate. The singular point r = 0 is called a center and

η_c^d ≡ ηE(r = 0);   (9.102)

see Fig. 9.2a(i). Then ELE (9.16) takes the form

κ[ d²η/dr² + ((d − 1)/r) dη/dr ] − ∂g/∂η = 0   (9.103)

Notice that this form also applies to a d = 1 system with translation invariance. Formally, it can also be applied to d ≥ 4 systems and even to systems with fractional dimensionalities. A physically acceptable solution must be continuous everywhere. Thus, in order to avoid a discontinuity at the center, the cylindrically/spherically symmetric solutions have to satisfy the following boundary condition:

dη/dr = 0 at r = 0   (9.104)

If the boundary Ω of the system is also cylindrically/spherically symmetric, then (see Appendix C)

jΩ = j_r = r/|r|;  j_r·∇ = d/dr on Ω

In the thermodynamic limit, the boundary condition (9.17b) takes the form

dη/dr → 0 at r → ∞   (9.105)

One of the goals of this section is to look at the process of the α → β transformation from the metastable into the stable phase of the system and compare the solutions of (9.103, 9.104, 9.105, 9.106) with those of the Gibbsian theory of capillarity of Sect. 4.2.1. The solution that we are looking for, ηE(r; d), represents the critical nucleus (instanton), that is, a localized heterogeneity in the "sea" of a homogeneous metastable phase. Thus, a third boundary condition appears:

ηE → ηα at r → ∞   (9.106)

where ηα corresponds to the metastable phase (cf. Fig. 9.2a(i)). For d = 1 the nucleus has the shape of a plate; for d = 2, a cylinder; and for d = 3, a sphere, with r representing the distance from the center.

Equation (9.103) is known in the literature as a generalized Emden equation [3]. If g(η) is a single-well parabolic function, its solution is a Bessel function of zeroth order. If g(η) is a double-well function, like in (8.4), (8.18), or (8.30), (9.103) is not known to have a general solution expressible through elementary or special functions in closed form. Solutions of the special cases that possess the


Painlevé property (not to have movable singularities other than poles) can be expressed in quadratures.

The boundary-value problem (9.103, 9.104, 9.105, 9.106) can be qualitatively analyzed using the phase plane (η, dη/dr). The critical-nucleus solution is not a regular (coarse) trajectory in the plane but a separatrix because there are three boundary conditions for a second-order ODE. Some of the properties of the critical nucleus can be deduced from the general properties of the ELE (9.103) and the function g(η) [4]. Indeed, if we multiply all terms in (9.103) by (dη/dr) and integrate from r to ∞, cf. the averaging in (9.96), we obtain

κ ∫_r^∞ (d²η/dr²)(dη/dr) dr + κ(d − 1) ∫_r^∞ (1/r)(dη/dr)² dr − ∫_r^∞ (∂g/∂η)(dη/dr) dr = 0   (9.107)

Taking the BC (9.105, 9.106) into account, we obtain (cf. (9.30a))

−g(P, T; ηα) = −g(P, T; η) + (κ/2)(dη/dr)² − κ(d − 1) ∫_r^∞ (dr′/r′)(dη/dr′)²   (9.108)

This relation shows that g(P, T; ηα) plays the role of the chemical potential for this type of state. Applying this relation at the center of the critical nucleus and using the BC (9.104), we obtain

g(P, T, η_c^d) − g(P, T, ηα) = −κ(d − 1) ∫₀^∞ (1/r)(dη/dr)² dr = κ(d − 1) ∫_{ηα}^{η_c^d} (1/r)(dη/dr) dη   (9.109)

Below are listed some of the most important properties of the instanton solution representing a critical nucleus:
1. From (9.109) we can see that g(η_c^1) = g(ηα) and g(η_c^{d>1}) < g(ηα), which means that

ηα < ηt < η_c^1 < η_c^{d>1} < ηβ   (9.110)

See Fig. 9.2a(i). From this inequality it follows that antiphase domains cannot have critical nuclei because for that transition g(ηα) = g(ηβ) (see Example 9.2).
2. Equation (9.109) also indicates that for d → ∞: η_c^d → ηβ because g(η_c^d) − g(ηα) becomes more negative as d → ∞. Indeed, the integral in (9.109) decreases with d → ∞ but not as "fast" as (d − 1) → ∞.
3. If T → T_E(P), then g(ηβ) → g(ηα) and η_c^d → ηβ. This follows from the inequality (9.110) and the fact that η_c^1 → ηβ as g(ηβ) → g(ηα); see Fig. 9.2b(i).


4. If T → T_E(P), the spatial distribution of the OP described by (9.103) becomes similar to that described by the 1d equation (9.26). Indeed, as the critical nucleus represents the separatrix in the plane (η, dη/dr), the derivative (dη/dr) does not change sign in the entire domain 0 < r < ∞ (in the above-considered case of the α → β transformation, dη/dr < 0); cf. the monotonic branch of (9.31). Then the "multidimensional" term in (9.103) is small and can be neglected because it cannot balance the two other terms, which alternate. Overall, as T → T_E(P), the critical nucleus of the field theory tends to look more like that in the Gibbsian theory of capillarity (see Sect. 4.2.1), that is, a sphere (cylinder) of the phase β in the "sea" of phase α separated by a sheath of interface. The only difference from the plane interface of Sect. 9.3.7 is its curvature.

Now let us use the fact that (dη/dr)² is a bell-like, sharply peaked function of r that reaches its maximum in 0 < r < ∞ (cf. Fig. 9.6) and define R_d as the locus of the inflection point:

(d²η/dr²)(R_d) = 0   (9.111)

The distance R_d so defined may be called the d-dimensional radius of the critical nucleus. Applying the Laplace method of asymptotic expansion [5] to slowly varying functions of r, F(r) and H(r), we obtain the following relations:

∫₀^∞ dr F(r)(dη/dr)² ≈ F(R_d) ∫₀^∞ dr (dη/dr)²   (9.112a)

∫₀^∞ dr H(r) ∫_r^∞ dr′ F(r′)(dη/dr′)² ≈ F(R_d) ∫₀^∞ dr (dη/dr)² ∫₀^{R_d} dr H(r)   (9.112b)

Then, using these relations and the definition of the surface energy (9.64) for (9.109), we obtain

R_d = (d − 1)σ / [ g(ηα) − g(η_c^d) ]   (9.113)

In d = 3 and the limit T → T_E(P), R₃ is equal to the critical nucleus radius R* in the Gibbsian theory of capillarity (see (4.27a)), because g(η_c^d) → g(ηβ), but R₃ > R* if the difference (T_E − T) grows. For a system with L = const(T) (see Example 2.2 and (2E.3)), we obtain

R_d = (d − 1)σT_E / [(T_E − T)L]   (9.114)

which shows that R_d → ∞ not only if T → T_E(P) but also if d → ∞ for T = const.
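As a quick numerical illustration of (9.114), with made-up parameters chosen only for scale (the values of σ, L, and T_E below are not tabulated data):

```python
# Critical radius from (9.114): R_d = (d - 1) * sigma * T_E / ((T_E - T) * L),
# with illustrative (hypothetical) material parameters.
sigma = 0.1    # interfacial energy, J/m^2
L = 1.0e9      # latent heat per unit volume, J/m^3
T_E = 1000.0   # equilibrium temperature, K

def critical_radius(d, T):
    return (d - 1) * sigma * T_E / ((T_E - T) * L)

for dT in (50.0, 10.0, 1.0):
    print(dT, critical_radius(3, T_E - dT))
# The radius grows as the undercooling T_E - T shrinks, diverging at T -> T_E.
```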


Let us now introduce the total free energy excess due to the presence of the critical nucleus in a previously homogeneous phase α:

ΔG_cn ≡ G{T, P, ηE(r)} − g(T, P, ηα)V   (9.115a)
= ∫_V [ g(T, P, η) + (1/2)κ(∇η)² − g(T, P, ηα) ] dᵈx   (9.115b)

Substituting (9.108) into (9.115a) and taking into account that

dᵈx = dV_d = S_d(r)dr;  S_d(r) = 2π(d − 1)r^{d−1};  V_d(r) = (2π(d − 1)/d) r^d   (9.116)

where S_d and V_d are the surface area and volume of the d-dimensional sphere, we obtain an expression for the free energy excess of the d-dimensional, spherically symmetric critical nucleus:

ΔG_cn = ∫₀^∞ dr S_d(r) [ κ(dη/dr)² − (d − 1) ∫_r^∞ (dr′/r′) κ(dη/dr′)² ]   (9.117a)

Again, using (9.112a, 9.112b) with the lower limit of integration replaced by (−∞) and the definition of the surface energy, (9.64), we obtain

ΔG_cn = σ[ S_d(R_d) − (d − 1)V_d(R_d)/R_d ] = (1/d) σ S_d(R_d)   (9.117b)

This expression shows that the solution ηE(r; d) indeed represents a d-dimensional instanton because it corresponds to a finite amount of the free energy excess. In the Gibbsian theory of capillarity, it corresponds to the free energy excess ΔG* due to the 3d critical nucleus; see (4.30). When the approximation of a sharply peaked function (dη/dr)² is not valid, the free energy excess of the critical nucleus, (9.115), may be efficiently calculated using expression (9.25b) for the free energy:

ΔG_cn = 2(d − 1)π ∫₀^∞ [ g(T, P, η) − g(T, P, ηα) − (1/2)η ∂g/∂η ] r^{d−1} dr   (9.117c)

Notice that the Neumann-type BC (9.105) is already accounted for in the expression (9.25b). For the Landau potential, (8.4), we obtain the following expression:

ΔG_cn = π(d − 1)[ (2/3)B I₃^d − (1/2) I₄^d ]   (9.117d)

where we introduce the d-dimensional n-th order moments of the OP distribution:

I_n^d = ∫₀^∞ ηⁿ r^{d−1} dr   (9.118)

These moments may help us calculate many different quantities related to the critical nucleus. For instance, as the radius of the nucleus is not well defined now and (dη/dr)² is not sharply peaked, the volume of the nucleus may be defined as follows:

V_cn = (1/(ηβ − ηα)) ∫_V (η − ηα) dᵈx = (1/(ηβ − ηα)) ∫₀^∞ S_d(r) η dr = (2π(d − 1)/(ηβ − ηα)) I₁^d   (9.119)

The moments I_n^d may be estimated analytically or calculated numerically using a numeric solution of the boundary-value problem (9.103, 9.104, 9.105, 9.106).

Example 9.3 3d Instanton Find the instantons of the tangential potential.
For the tangential potential, (8.30), the space coordinate r can be scaled as follows:

r̃ = r√(W/κ)   (9E.10)

and the boundary-value problem, (9.103, 9.104, 9.105, 9.106), takes the form

d²η/dr̃² + ((d − 1)/r̃) dη/dr̃ + 2η(η − ηt)(1 − η) = 0,   (9E.11)
dη/dr̃ = 0 at r̃ = 0,   (9E.12)
dη/dr̃ → 0, η → 0 at r̃ → ∞.   (9E.13)

Notice that, in the scaled form, the system has only one external parameter, the transition-state OP:

ηt = (1/2)(D/W + 3)   (8.34)

The 1d instanton solution can be obtained analytically similar to (9.72):

√(2ηt)·r̃ = ln[ (ηt − (1/3)(1 + ηt)η + √(ηt(ηt − (2/3)(1 + ηt)η + (1/2)η²)))/η ] − ln[ (1/3)√((1/2 − ηt)(2 − ηt)) ]   (9E.14)

For d > 1 and ηt ≪ 0.5, the OP in (9E.11, 9E.12, 9E.13) can be scaled as η̃ = η/ηt, and the problem can be reduced to the universal (parameterless) form (Why?). For ηt not small (~0.5), the instanton solution can be obtained numerically using the shooting method. According to this method, we pick a test value of η_c^d and present the solution as

η = η_c^d − η_c^d(η_c^d − ηt)(1 − η_c^d) r̃²/d;  0 < r̃ ≤ δ ≪ 1,   (9E.15)

(Why?), and solve (9E.11) numerically for r̃ > δ. If the solution satisfies the BC (9E.13) to the desired accuracy, it is declared a success; if not, the test value is adjusted, and the numerical run is repeated until the BC (9E.13) is satisfied. In Fig. 9.10 the solutions for d = 1, 2, 3 are presented as functions of r̃.
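The shooting procedure just described can be sketched in a few dozen lines. The implementation below is illustrative, not the book's code: RK4 integration started from the series (9E.15), with bisection on the test value of η_c^d between the undershooting and overshooting trajectories:

```python
import numpy as np

def instanton(d, eta_t, r_max=30.0, h=0.01):
    """Solve (9E.11)-(9E.13) by shooting on the center value eta_c, (9.102)."""
    def force(r, eta, p):   # eta'' = -(d-1)/r * eta' - 2*eta*(eta-eta_t)*(1-eta)
        return -(d - 1) / r * p - 2.0 * eta * (eta - eta_t) * (1.0 - eta)

    def shoot(eta_c):
        delta = 1e-3        # series start (9E.15) near the center
        eta = eta_c - eta_c * (eta_c - eta_t) * (1.0 - eta_c) * delta**2 / d
        p = -2.0 * eta_c * (eta_c - eta_t) * (1.0 - eta_c) * delta / d
        r = delta
        while r < r_max:    # one RK4 step for the system (eta' = p, p' = force)
            k1e, k1p = p, force(r, eta, p)
            k2e, k2p = p + 0.5*h*k1p, force(r + 0.5*h, eta + 0.5*h*k1e, p + 0.5*h*k1p)
            k3e, k3p = p + 0.5*h*k2p, force(r + 0.5*h, eta + 0.5*h*k2e, p + 0.5*h*k2p)
            k4e, k4p = p + h*k3p, force(r + h, eta + h*k3e, p + h*k3p)
            eta += h/6 * (k1e + 2*k2e + 2*k3e + k4e)
            p += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
            r += h
            if eta < 0.0:
                return +1   # overshoot: the test value was too large
            if p > 0.0 and eta > 1e-6:
                return -1   # undershoot: trajectory turns back up
        return 0            # decayed to the metastable phase within r_max

    lo, hi = eta_t + 1e-6, 1.0 - 1e-9   # bracket the separatrix
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eta_c3 = instanton(d=3, eta_t=0.353)
print(eta_c3)   # center value of the d = 3 instanton; eta_t < eta_c < 1
```

Per the inequality (9.110), the returned center value must lie between η_c^1 and ηβ = 1.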


Fig. 9.10 Instantons of the tangential potential with ηt = 0.353 and different dimensionalities of the space d. For d = 1, the solution is (9E.14); for d = 2, 3, the solutions are numerical


9.6

Thermodynamic Stability of States: Local Versus Global*

According to the thermodynamic stability principle discussed in Sect. 1.2, only states that correspond to the free energy minima are thermodynamically stable. A theorem from the calculus of variations (see Appendix B) says that the necessary condition for an extremal ηE(x) to deliver a minimum to the functional G{η(x)} is that the second variation be nonnegative:

δ²G ≥ 0;  T, P = const(x).   (9.120)

For the functional (9.15), the second variation takes the form of a quadratic functional:

δ²G = (1/2) ∫_V δη Ĥ(ηE) δη dv   (9.121)

where Ĥ is the Hamiltonian operator from (9.20, 9.21) and the variations δη obey the Neumann-type BC (9.17b). The linear operator Ĥ has a complete set of eigenfunctions {Ψₙ(x)}:

Ĥ(ηE)Ψₙ(x) ≡ [ (∂²g/∂η²)(ηE)|_{P,T} − κ∇² ] Ψₙ(x) = Λₙ Ψₙ(x)   (9.122)

and the variations δη can be expanded in {Ψₙ(x)}:

δη = Σₙ αₙ Ψₙ(x)   (9.123)

Substituting (9.123) into (9.121), we obtain

δ²G = (1/2) Σₙ αₙ² Λₙ   (9.124)

Thus, the problem of the thermodynamic stability of a state described by the extremal (9.18) is reduced to the problem of the spectrum of the eigenvalues of Ĥ. Indeed, if all the eigenvalues Λn are positive, the second variation, (9.124), is positive definite, that is, ηE(x) is a minimum. If all the eigenvalues Λn are negative, the second variation is negative definite, that is, ηE(x) is a maximum. Finally, if some of the eigenvalues Λn of Ĥ are positive and some are negative, the second variation takes on a positive or negative value depending on the variation of the state δη. That is, the extremal ηE(x) is a saddle point. If ηE = const(x), that is, is uniform in space, then (9.122) yields that for the stability of this state, we need Λ = ∂2g(ηE)/∂η2 > 0, which is precisely the criterion we found in Chap. 8.


For the nonuniform equilibrium states ηE(x), (9.122) is analogous to the Schrödinger equation from quantum mechanics, which describes stationary motion of a particle of mass (ħ2/2κ) in the potential field ∂2g(ηE)/∂η2. Then, Ĥ is analogous to the energy operator, Ψn(x) is the wave function, and (Λn) is the energy of the particle. Review (9.20, 9.21), and notice that we already know d eigenfunctions of Ĥ that correspond to the zero eigenvalue Λn* = 0 —the d-components of the gradient of the extremal ηE(x): Ψi,n ðxÞ =

∂ηE ðxÞ; ∂xi

Λn = 0;

i = 1, . . . , d

ð9:125Þ

These eigenfunctions are called the Goldstone modes. Although very important for the stability of the state ηE(x), they do not solve the problem completely because the zero eigenvalue may not be the smallest one. To resolve the problem of stability of the 1d equilibrium states ηE(x), motivated by the fact that they vary in one direction (x) only, we will seek the eigenfunctions in the form of capillary waves, the Fourier modes:

Ψₙ(x) = ψₙ(x) e^{ik₂·x₂}   (9.126a)

where k₂ = (k_y, k_z), x₂ = (y, z), and the amplitudes satisfy the following eigenvalue problem:

−κ d²ψₙ(x)/dx² + (∂²g/∂η²)(ηE(x)) ψₙ(x) = λₙ ψₙ(x);   (9.126b)

dψₙ(x)/dx = 0 at x = 0 and x = X,   (9.126c)

where the λₙ's are the 1d eigenvalues of Ĥ(ηE(x)). Substituting (9.126a) into (9.122), we obtain the relation

Λₙ = λₙ + κ|k₂|² ≥ λₙ   (9.127)

It shows that, for the stability of 1d states, the most important contribution is the eigenvalue of the capillary-wave amplitude because the 3d eigenvalues Λₙ are not less than the 1d eigenvalues λₙ.

9.6.1

Type-e4 State: Plane Interface

Let us start with the plane-interface state ηe4(x), Sect. 9.3.7. As the Goldstone mode does not solve the stability problem of this state, we must find other eigenfunctions and eigenvalues of Hamilton’s operator (9.122) where the state ηe4(x) is used for the


"quantum mechanical potential energy" ∂²g(ηe4)/∂η². Here we have to be more specific in the choice of the function g(η). Let us look at Case 1 of the Landau potential, that is, (8.4) with B = 0 and A < 0, which has the type-e4 APB equilibrium state:

ηe4(x) = √(−A) tanh(x√(−A/2κ))   (9E.9)

The spectrum of Ĥ(ηE(x)) is discrete for λₙ ≤ −2A. The 1d Goldstone mode

ψ₀ ∝ dηe4/dx ∝ 1/cosh²(x√(−A/2κ));  λ₀ = 0   (9.128a)

has the smallest eigenvalue, zero. There is one more bound, real eigenfunction (see (9.61) or Appendix E):

ψ₁ ∝ ηe4(x)√(dηe4/dx) ∝ tanh(x√(−A/2κ))/cosh(x√(−A/2κ));  λ₁ = −(3/2)A   (9.128b)

Normalization of the eigenfunctions does not matter because they obey a linear equation. The rest of the eigenfunctions (n = 2, 3, ...) are "scattering states" (unbound, complex valued) with λₙ > −2A. Thus, the APB is a neutrally stable inhomogeneous state in the sense that it can be relocated in the x-direction to any part of the system without any additional free energy cost. The same is true of any type-e4 state (9.58), which represents the equilibrium interface. The equilibrium state ηe4, the "potential energy" ∂²g(ηe4)/∂η² for this state, and the bound eigenfunctions are shown in Fig. 9.6.
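The bound part of this spectrum can be reproduced by direct diagonalization of Ĥ on a grid. The sketch below is illustrative (finite differences with the arbitrary choice A = −1, κ = 1, so that λ₁ = −(3/2)A = 1.5 and the continuum edge is −2A = 2); it recovers the Goldstone mode and the bound state (9.128b):

```python
import numpy as np

A, kappa = -1.0, 1.0
N, L = 1200, 15.0                      # grid points, half-width of the box
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# "Quantum mechanical potential energy" for the APB profile (9E.9):
# d2g/deta2 = A + 3*eta_e4^2 = A*(1 - 3*tanh^2(x*sqrt(-A/(2*kappa))))
pot = A * (1.0 - 3.0 * np.tanh(x * np.sqrt(-A / (2 * kappa)))**2)

# Finite-difference Hamiltonian -kappa d2/dx2 + pot (Dirichlet box ends)
H = (np.diag(pot + 2.0 * kappa / h**2)
     - np.diag(np.full(N - 1, kappa / h**2), 1)
     - np.diag(np.full(N - 1, kappa / h**2), -1))
evals = np.linalg.eigvalsh(H)

print(evals[:2])   # approx [0.0, 1.5]: Goldstone mode and the bound state
```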

9.6.2

General Type-e and Type-n States

For other type-e and type-n states, exact solutions of the respective Schrödinger equations are not known, and we must look for other means to determine the type of stability of these states. That is where Sturm's comparison theorem (Appendix E) is of significant help. According to that theorem, the discrete band of the spectrum of the eigenvalues {λₙ} can be ordered such that a greater eigenvalue corresponds to an eigenfunction ψₙ with a greater number of zeros:


n = 0, 1, ..., n* − 1, n*, ...
λ₀ < λ₁ < ... < λ_{n*−1} < λ_{n*} = 0 < ...   (9.129)
ψ₀, ψ₁, ..., ψ_{n*−1}, ψ_{n*}, ...

Then, (9.129) shows that ψ_{n*}(x) = dη1d/dx is the eigenfunction with the eigenvalue λ_{n*} = 0, and all we need to know is how many zeros this function has. Indeed, if ψ_{n*}(x) has at least one zero in the domain (0, X), then there exists another eigenfunction ψ_{n*−1}(x) with fewer zeros that corresponds to a smaller eigenvalue, λ_{n*−1} < λ_{n*} = 0, and the solution η1d(x) is unstable. Otherwise, it is stable. Equations (9.31, 9.38, 9.39) show that dηe1/dx, dηe2/dx, dηe3/dx, and dηn/dx have zeros in the domain 0 < x < X. Hence, on the hypersurface G{η(x)}, the functions ηe1–3 and ηn represent saddle points. Notice that because dηe4/dx does not have zeros in the domain −∞ < x < +∞, Sturm's comparison theorem proves our previous point that ηe4 is a state of neutral stability.

Example 9.4 1d Instanton and Its Stability Find the 1d instanton (critical plate) and determine its stability in the system described by the following free energy density:

g = g₀ + (a/2)η² − (1/4)η⁴,  a > 0   (9E.16)

This potential may be used to model a metastable state (ηm = 0) in a system where the stable states (ηs → ±∞) have a much lower level of the free energy. ELE (9.16) for this system is resolved in the following form (cf. (9.31)):

dη/dx = ±η√((a/κ)(1 − η²/(2a)))   (9E.17)

Its instanton solution centered at x₀ = 0 is

ηn4(x) = ±√(2a) sech(x√(a/κ))   (9E.18)

The eigenvalue equation is

Ĥψₙ(x) ≡ [ ∂²g(ηn4)/∂η² − κ d²/dx² ] ψₙ(x) = λₙ ψₙ(x)   (9E.19)

The Goldstone mode of this equation is

ψ₀ ∝ tanh(x√(a/κ)) sech(x√(a/κ));  λ₀ = 0   (9E.20)

The eigenvalue equation has one more bound eigenfunction:

ψ₋₁ ∝ sech²(x√(a/κ));  λ₋₁ = −3a   (9E.21)

The 1d instanton is unstable because the eigenvalue λ₋₁ < 0. Compare this example to the domain wall and notice that the Goldstone mode and the second bound state switch their places. An illuminating analogy can be established between the instanton described in this example and a homoclinic orbit describing the motion of a particle in one well of a double-well even potential. The potential (9E.16), the ELE (9E.17), the solution (9E.18), and the bound eigenfunctions (9E.20) and (9E.21) are shown in Fig. 9.11.
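The statements of this example, namely that (9E.18) solves the ELE and that (9E.20) and (9E.21) are eigenfunctions with λ₀ = 0 and λ₋₁ = −3a, can be verified symbolically (a sympy sketch, with symbols a, κ > 0):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a, kappa = sp.symbols('a kappa', positive=True)
b = sp.sqrt(a / kappa)

eta = sp.sqrt(2 * a) * sp.sech(b * x)        # instanton (9E.18)
g_eta = a * eta - eta**3                     # dg/deta for the potential (9E.16)

# (9E.18) solves the ELE: kappa * eta'' = dg/deta
assert sp.simplify((kappa * sp.diff(eta, x, 2) - g_eta).rewrite(sp.exp)) == 0

V = a - 3 * eta**2                           # d2g/deta2 evaluated on the instanton
def H(psi):                                  # Hamiltonian operator of (9E.19)
    return -kappa * sp.diff(psi, x, 2) + V * psi

psi0 = sp.tanh(b * x) * sp.sech(b * x)       # Goldstone mode (9E.20)
psi1 = sp.sech(b * x)**2                     # bound state (9E.21)
assert sp.simplify(H(psi0).rewrite(sp.exp)) == 0              # lambda_0 = 0
assert sp.simplify((H(psi1) + 3 * a * psi1).rewrite(sp.exp)) == 0  # lambda_-1 = -3a
print("instanton and bound-state identities verified")
```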

9.6.3

3d Spherically Symmetric Instanton

Now let's consider the thermodynamic stability of the critical droplet, the 3d radially symmetric instanton; see Sect. 9.5.3. Based on the success of using the Goldstone mode and Sturm's comparison theorem for the analysis of stability of the 1d states, it is tempting to try the same combination for the 3d case. We will do that by presenting here the stability analysis on the "physical level of rigor," which is very illuminating and sufficiently rigorous for our purposes. First, let's present the 3d Goldstone modes in vector form:

Ψₙ(x) = ∇η3d(r) = (dη3d(r)/dr) j_r = (dη3d(r)/dr) f_i(ϑ, φ) j_i,  i = 1, 2, 3   (9.130)

where ji and fi are the Cartesian unit vectors and projection coefficients, respectively (see Appendix C). Then, substituting (9.130) into (9.122) and using (C.27), we obtain an equation:

0 = Ĥ(η3d(r)) Ψn(x) = {∂²g/∂η²(η3d) − κ[(1/r²)(∂/∂r)(r² ∂/∂r) − 2/r²]} (dη3d(r)/dr) fi(ϑ, ϕ) ji    (9.131)

which shows that the eigenfunction

9.6 Thermodynamic Stability of States: Local Versus Global*

Fig. 9.11 A 1d system described by the free energy (9E.16). The potential (i), the phase plane (ii), and the spatial distribution of the modes (iii): (1) instanton (9E.18); (2) the Goldstone mode (9E.20); and (3) the bound eigenfunction (9E.21)

ψn(r) ≡ dη3d(r)/dr    (9.132)

is the zero-eigenvalue solution of the following Sturm–Liouville problem:

−(d/dr)(r² dψn/dr) + [2 + (r²/κ) ∂²g(η3d(r))/∂η²] ψn = (λn/κ) r² ψn    (9.133)

in the domain 0 ≤ r < ∞. Taking into account that ψn(0) = 0 (see (9.104)) and applying Sturm's comparison theorem (see Appendix E), we deduce that there exists an eigenfunction ψ0(r) such that it does not have zeros in 0 ≤ r < ∞ [. . .] L > 0, and if its total derivative due to the system D(x) is nonpositive: (dL/dt)D = (∂L/∂x)·D(x) ≤ 0. Existence of a Lyapunov function allows us to analyze stability of the equilibrium states of a dynamical system and distinguish between the locally stable and unstable equilibrium states.

232  10  Dynamics of Homogeneous Systems

It also proves that the states of the evolutionary trajectory are time ordered and the reverse evolution is impossible, that is, the evolution is irreversible [1].

5. One may argue that the system may have evolutionary paths which do not lead to the equilibrium state η̄. However, they must lead to one of the other equilibrium states, e.g., a local equilibrium state instead of the global one, because otherwise the zeroth law is violated. Hence, (10.8) is true on this path also. This analysis helps us recognize that the domain −∞ < η < +∞ may be broken into "basins of attraction" of different equilibrium states η̄1, η̄2, . . .; see Chap. 8.

6. A word of caution is necessary here regarding the concept of nonequilibrium free energy of the system. Rigorously speaking, statistical mechanics provides a recipe for how to calculate the free energy at an equilibrium state only. Then, how can we define the free energy at a nonequilibrium state? The concept of an external field developed in Chap. 8 may help us clarify the concept of nonequilibrium free energy. Indeed, if an external field H conjugate to the OP is applied to the system, then dG = −SdT + VdP − Hdη. Comparing it to (10.1) written for a nonequilibrium state, we see that such a state can be reproduced by application to the equilibrium state of an external field of certain strength H. Then, using (8.46), we may define the free energy at a nonequilibrium state of the OP η as equal to the free energy at the same value of the OP, which has attained equilibrium in the presence of the conjugate field H, plus H times the OP; see Appendix I:

G(P, T, 0, η) = G(P, T, H, η) + Hη,  if  η = η̄(P, T, H)    (10.9)

10.2 Dynamics of Small Disturbances in the Linear-Ansatz Equation

Now we can analyze the general dynamic features of the linear ansatz. Let's start with the situation when, in the beginning, the system was slightly pulled away from the equilibrium state η̄ and left to evolve by Eq. (10.3). Then we can expand the free energy near this value:

∂G/∂η = ∂G(η̄)/∂η + [∂²G(η̄)/∂η²]Δη + (1/2)[∂³G(η̄)/∂η³]Δη² + (1/6)[∂⁴G(η̄)/∂η⁴]Δη³ + ⋯    (10.10a)

Δη = η − η̄    (10.10b)

Taking into account that ∂G(η̄)/∂η = 0, we obtain an equation for the small disturbances Δη:


dΔη/dt ≅ −γ [∂²G(η̄)/∂η²] Δη    (10.11a)

This is a linear differential equation of the first order, which has a general solution:

Δη = Δη(0) e^(β1 t);  β1 = −γ ∂²G(η̄)/∂η²    (10.11b)

Here β1 is called the amplification factor. As you can see from (10.11a), the sign of the amplification factor determines the asymptotic behavior of the small disturbances: if β1 < 0, then |Δη(t)| decreases with time (decays), relaxes; if β1 > 0, then |Δη(t)| increases with time (grows) and soon leaves the domain of validity of (10.10a). If β1 = 0, then, according to (10.11b), |Δη(t)| neither grows nor decays, which means that (10.11a) is not adequate and we have to include the higher-order terms of the expansion (10.10a). Given the condition (10.8), the sign of the amplification factor β1 is fully determined by the sign of ∂²G(η̄)/∂η². Hence, ∂²G(η̄)/∂η² is a very good indicator of stability of a homogeneous equilibrium state: ∂²G(η̄)/∂η² > 0 at the stable equilibrium, ∂²G(η̄)/∂η² < 0 at the unstable equilibrium, and ∂²G(η̄)/∂η² = 0 at the neutral equilibrium, that is, the spinodal point; see Sects. 1.2 and 8.2. The characteristic time scale of the relaxation

τ = 1/|β1| = (1/γ) [∂²G(η̄)/∂η²]⁻¹    (10.12)

determines the rate of the evolution. If the system has more than one stable equilibrium, the time scales of equilibration around them may be different. For instance, for the system described by the free energy (8.4), the relaxation time constant of the globally stable state is smaller than that of the locally stable one. If |∂²G(η̄)/∂η²| → 0, that is, near the spinodal points, then τ → ∞. The type of stability associated with β1 is called dynamic because it originates from the dynamic equation; see Sect. 1.3. It is very closely related to the thermodynamic stability studied in Sect. 1.2 and Chap. 8. Indeed, β1 < 0 for all locally stable homogeneous states and β1 > 0 for the unstable ones. However, β1 is not a perfect indicator of the global stability because one can envision a potential G (not considered in the previous chapters) where the curvature of the deeper well is less than that of the shallower one. The situation is even more complicated for the heterogeneous states; it will be considered in the next chapter.
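As a quick numerical illustration (a sketch, not from the book; the values of γ and of the curvature ∂²G(η̄)/∂η² are arbitrary), the exponential relaxation law (10.11b) and the time scale (10.12) can be checked by forward-Euler integration of the linearized equation:

```python
import math

# Sketch (not from the book): forward-Euler check of the relaxation law
# (10.11b) and the time scale (10.12) near a stable equilibrium.
# gamma and Gpp (the curvature of G at the equilibrium) are illustrative.
gamma, Gpp = 1.0, 2.0
beta1 = -gamma * Gpp          # amplification factor, Eq. (10.11b)
tau = 1.0 / abs(beta1)        # relaxation time, Eq. (10.12)

deta0, dt = 0.1, 1e-4         # initial disturbance and time step
deta, t = deta0, 0.0
while t < tau:                # integrate d(deta)/dt = beta1*deta up to t = tau
    deta += dt * beta1 * deta
    t += dt

exact = deta0 * math.exp(beta1 * tau)   # analytic: the disturbance drops by 1/e
```

Over one relaxation time the disturbance decays by the factor 1/e, in agreement with (10.11b).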

10.2.1 Spinodal Instability and Critical Slowing Down

As we can see from (10.10a), our analysis of the dynamic stability fails at the spinodal point, that is, if G′(η̄) = 0 and G″(η̄) = 0. The general effect that takes place at this point is the slowing down of the transition because, as one can see from (10.12), τ → ∞. However, the details of the effect depend on the higher-order terms of the expansion (10.10a). If G‴(η̄) ≠ 0, then

dΔη/dt ≅ β2 Δη²;  β2 = −(γ/2) ∂³G(η̄)/∂η³    (10.13a)

and

Δη = Δη(0)/[1 − Δη(0)β2 t]    (10.13b)

An important feature of (10.13a) is that the temporal behavior of Δη depends not only on the sign of G‴(η̄) but also on the sign of the initial condition Δη(0). Indeed, if Δη(0)G‴(η̄) > 0, then Δη(t) is a monotonically decaying function with the asymptotic behavior t⁻¹ instead of the exponential exp(β1t). But if Δη(0)G‴(η̄) < 0, then Δη(t) increases without bound—finite-time blowup—and quickly leaves the domain of small disturbances. The rate of this process is characterized by the blowup time t2 = (Δη(0)β2)⁻¹. Thus, if G‴(η̄) ≠ 0, the system is dynamically unstable because the runaway scenario is always possible: all one needs is to find an initial disturbance such that Δη(0)G‴(η̄) < 0.

If, in addition to G″(η̄) = 0, also G‴(η̄) = 0 but G⁗(η̄) ≠ 0, then

dΔη/dt ≅ β3 Δη³;  β3 = −(γ/6) ∂⁴G(η̄)/∂η⁴    (10.14a)

and

Δη = Δη(0)/√(1 − 2Δη²(0)β3 t)    (10.14b)

As we can see, in this case the asymptotic behavior of the OP disturbance depends on the sign of G⁗(η̄) but is independent of the sign of Δη(0). If G⁗(η̄) < 0, the system is unstable with the blowup time t3 = (2Δη²(0)β3)⁻¹. If G⁗(η̄) > 0, the system is stable, but its asymptotic approach to the equilibrium is Δη ~ t^(−1/2) instead of the exponential one, that is, much slower than the relaxational or spinodal. The phenomenon of unusually long equilibration is known as critical slowing down.
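The two algebraic decay laws can be made concrete with a minimal sketch (not from the book; β2, β3, and Δη(0) are illustrative values chosen on the decaying branches) that extracts the power-law exponents of (10.13b) and (10.14b) from a log-log slope:

```python
import math

# Sketch (not from the book): long-time decay at a spinodal point.
# Eq. (10.13b): dh(t) = dh0/(1 - dh0*beta2*t)          -> dh ~ t**(-1)
# Eq. (10.14b): dh(t) = dh0/sqrt(1 - 2*dh0**2*beta3*t) -> dh ~ t**(-1/2)
# beta2, beta3 < 0 with dh0 > 0 selects the decaying (not blowup) branch.
beta2, beta3, dh0 = -1.0, -1.0, 0.5

def spinodal(t):               # solution of d(dh)/dt = beta2*dh**2
    return dh0 / (1.0 - dh0 * beta2 * t)

def critical(t):               # solution of d(dh)/dt = beta3*dh**3
    return dh0 / math.sqrt(1.0 - 2.0 * dh0**2 * beta3 * t)

# power-law exponents estimated from the slope of log(dh) vs log(t)
t1, t2 = 1.0e4, 1.0e6
p_spin = math.log(spinodal(t2) / spinodal(t1)) / math.log(t2 / t1)
p_crit = math.log(critical(t2) / critical(t1)) / math.log(t2 / t1)
```

With the opposite sign of Δη(0)β2 the denominator of (10.13b) instead vanishes at the blowup time t2, reproducing the runaway scenario described above.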


Example 10.1 Spinodal Instability and Critical Slowing Down for the Landau Potential Find the conditions, relaxation time, and amplification factors of the slowing down for a system described by the Landau potential. To find the temporal properties of these regimes, we use the following equations derived in Sect. 8.2:

∂²G/∂η²|A,B = A for η = η̄0;  = 2[B² − A ± B√(B² − A)] for η = η̄±    (10E.1)

∂³G/∂η³|A,B = −4B for η = η̄0;  = 2[B ± 3√(B² − A)] for η = η̄±    (10E.2)

∂⁴G/∂η⁴|A,B = 6 for η = η̄0, η̄±    (10E.3)

For the state η̄0, the condition G″(η̄) = 0 gives A = 0, and G‴(η̄) = 0 gives B = 0. For the state η̄±, the condition G″(η̄) = 0 gives A = B², and G‴(η̄) = 0 gives B = 0. Hence, the spinodal slowing down is, obviously, on the spinodal lines 1 and 2 for η̄± and η̄0, respectively, and the critical slowing down occurs at the Landau critical point; see Fig. 8.1. Then, for the relaxation time, we obtain

τ = (γA)⁻¹ for η = η̄0;  = [2γ(B² − A + B√(B² − A))]⁻¹ for η = η̄+    (10E.4)

Notice that τ0 ≷ τ+ below/above the equilibrium line; see Fig. 8.1. For the spinodal slowing-down amplification factors, we obtain

β2 = 2γB for η = η̄0;  = −γB for η = η̄+    (10E.5)

For the critical slowing down, we obtain the amplification factors:

β3 = −γ for η = η̄0, η̄±    (10E.6)

These regimes of the linear-ansatz evolution of small disturbances in a system described by the Landau potential are depicted in Fig. 10.1.
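The derivative formulas (10E.1)–(10E.3) can be checked numerically. The sketch below (not from the book) assumes the Landau potential in the form G = (A/2)η² − (2B/3)η³ + (1/4)η⁴—the form consistent with (10E.1)–(10E.3) and with the extrema η̄0 = 0, η̄± = B ± √(B² − A)—and compares finite-difference derivatives against the predictions; A and B are illustrative values:

```python
import math

# Sketch (not from the book): numerical check of (10E.1)-(10E.2) for the
# Landau potential, assumed here as G = (A/2)x^2 - (2B/3)x^3 + (1/4)x^4,
# whose extrema are eta0 = 0 and eta_pm = B +/- sqrt(B^2 - A).
A, B = 0.5, 1.0                # illustrative values with B**2 > A

def G(x):
    return 0.5 * A * x**2 - (2.0 * B / 3.0) * x**3 + 0.25 * x**4

def d2G(x, h=1e-4):            # central second difference
    return (G(x + h) - 2.0 * G(x) + G(x - h)) / h**2

def d3G(x, h=1e-3):            # central third difference (exact for a quartic)
    return (G(x + 2*h) - 2.0*G(x + h) + 2.0*G(x - h) - G(x - 2*h)) / (2.0 * h**3)

s = math.sqrt(B**2 - A)
eta_p = B + s                  # the eta_plus state

# predictions of (10E.1) and (10E.2)
d2_pred_0, d2_pred_p = A, 2.0 * (B**2 - A + B * s)
d3_pred_0, d3_pred_p = -4.0 * B, 2.0 * (B + 3.0 * s)
```

The fourth derivative of this quartic is identically 6, reproducing (10E.3) and hence β3 = −γ of (10E.6).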

Fig. 10.1 The linear-ansatz dynamics of small disturbances in a system described by the Landau potential. The respective regimes are described by Eqs. (10.11b), (10.13b), and (10.14b) with the parameters from Eqs. (10E.4), (10E.5), and (10E.6). (Curves: finite-time blowup from the unstable (transition) state; spinodal slowing down; critical slowing down; relaxation to the locally and globally stable states. Axes: scaled time t vs. order parameter disturbance Δη.)

10.2.2 More Complicated Types of OPs

If the OP of the system is a complex variable, the driving force in (10.3) should be understood as a derivative of the analytic function G(η). In this case Re[∂²G(η̄)/∂η²] serves as an indicator of stability, and Im[∂²G(η̄)/∂η²] is proportional to the frequency of oscillations. If the OP is a multicomponent variable, the linear-ansatz dynamic equation should be written for each component separately (see Appendix I, §3). The dynamic equations for the components will contain the cross-terms, which obey the Onsager symmetry relations. The cross-coefficients affect stability of the equilibrium states.

10.3 Evolution of Finite Disturbances by the Linear-Ansatz Equation

Although many features of the evolution of homogeneous systems may be understood from the analysis of the linearized Eqs. (10.10–10.14), the full evolutionary scenario can be revealed only by the solution of the full nonlinear Eq. (10.3). We will demonstrate that by solving (10.3) using the tangential potential (8.30). In this case,

Fig. 10.2 Solutions (10.16a, b) of the nonlinear homogeneous linear-ansatz dynamic Eq. (10.15) for various values of ηt (ηt = 0.25, 0.5). The arrow of time is from left to right. Purple curve—∂G/∂η for ηt = 0.25. Blue circles, equilibrium states; closed, stable; open, unstable. Black arrows—direction of evolution of the OP. Notice the basins of attraction of two stable states, η = 0, 1, and the repulsive "watershed" by the unstable state η = ηt

dη/dt = 2γWη(1 − η)(η − ηt),    (10.15)

where ηt is the OP value of the transition state, (8.34). Taking into account that

1/[x(1 − x)(x − a)] = 1/[a(1 − a)(x − a)] + 1/[(1 − a)(1 − x)] − 1/(ax)

we obtain a general solution of (10.15) in the form

2γWt = −(ln|η|)/ηt − (ln|1 − η|)/(1 − ηt) + (ln|η − ηt|)/[ηt(1 − ηt)] − const    (10.16a)

where

const = −(ln|η(0)|)/ηt − (ln|1 − η(0)|)/(1 − ηt) + (ln|η(0) − ηt|)/[ηt(1 − ηt)]    (10.16b)

and η(0) is the OP value at t = 0. In Fig. 10.2 are depicted the solutions (10.16a, b) for various values of ηt. As one can see, the equilibrium value η = ηt divides the OP domain into two basins of attraction with two "watersheds" each: ηt, ±∞. Each basin of attraction is controlled by one manifold. Approaching the stable equilibrium state η = 0 or 1, or departing from the unstable one η = ηt, the OP exhibits exponential tails, which have been revealed by the linearized analysis of (10.11), (10.12). Compare solution (10.16a, b) with the Section "Decomposition of Unstable States" of Appendix G and notice the similarities.
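The basins of attraction are easy to see in a direct integration of (10.15). The sketch below (not from the book; γ, W, and ηt are illustrative values) starts two trajectories just below and just above the watershed ηt and confirms that they relax to the two different stable states:

```python
# Sketch (not from the book): basins of attraction of the nonlinear
# linear-ansatz equation (10.15), deta/dt = 2*gamma*W*eta*(1-eta)*(eta-eta_t).
# Initial conditions on either side of the transition state eta_t relax to
# the stable states eta = 0 and eta = 1, respectively.
gamma, W, eta_t = 1.0, 1.0, 0.25   # illustrative parameters

def evolve(eta, t_end=40.0, dt=1e-3):
    """Forward-Euler integration of (10.15) up to t_end."""
    t = 0.0
    while t < t_end:
        eta += dt * 2.0 * gamma * W * eta * (1.0 - eta) * (eta - eta_t)
        t += dt
    return eta

below = evolve(eta_t - 0.01)   # starts just below the "watershed" eta_t
above = evolve(eta_t + 0.01)   # starts just above it
```

The slow escape from the neighborhood of ηt and the exponential approach to 0 or 1 reproduce the exponential tails of the linearized analysis (10.11), (10.12).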

10.4 Beyond the Linear Ansatz

Now let's analyze a "nonlinear ansatz," that is, a possibility that the OP evolution equation is a nonlinear function of the driving force:

dη/dt = −Φ(∂G/∂η|P,T)    (10.17)

The function Φ(x) is subject to the following constraints:

1. Φ(0) = 0    (10.18a)
2. Φ(x) is analytic near x = 0    (10.18b)

Substituting (10.17) into (10.6) and using (10.5), we find another constraint on the function Φ(x) in the domain of its definition:

3. xΦ(x) > 0 near x = 0    (10.18c)

As an example of a nonlinear ansatz, a polynomial Φ(x) = γx + αx² + βx³ satisfies the conditions of (10.18) if γ > 0, α = 0, and β > 0. Thus, the linear-ansatz Eq. (10.3) is a valid, but not the only, choice of the OP evolution equation. The nonlinear ansatz may be thought of as an equation with a relaxation coefficient that depends on the OP itself—γ(η). Then the conditions of (10.18) limit the allowed functional dependencies of γ(η).

10.5 Relaxation with Memory

The linear and nonlinear ansatz provide instantaneous connections between the driving force and the reaction of the system. Some systems, so to speak, may have memory, that is, the current reaction of the system is affected by the values of the driving force from the past. This property may be expressed by the "memory integral" over the driving force:

dη/dt = −∫₋∞^t Γ(t − t′) (∂G/∂η)(η(t′)) dt′    (10.19)

where the integral kernel Γ(u) is called the memory function. The memory integral summarizes contributions of the driving force from all the moments in time up to the current moment (t) but does not spread into the future (t′ > t) in order not to violate causality. These requirements justify the following properties of the memory function:

Γ(u < 0) = 0;  Γ(0) > 0;  Γ(∞) = 0    (10.20a)

M ≡ ∫₀^∞ Γ(u) du    (10.20b)

where M is the total memory of the system. Different functional forms of the memory function determine different evolutionary paths of the system:

1. If the system has "no memory,"

Γ(u) = γδ(u)    (10.21)

then M = γ¹ and (10.19) reduces to the linear-ansatz Eq. (10.3).

2. If the system has "full memory,"

Γ(u) = const(u) = 1/m    (10.22)

then M = ∞ and (10.19) can be differentiated with respect to t once to yield

m d²η/dt² = −∂G/∂η(η)    (10.23)

This equation is analogous to Newton's equation of motion of a particle of mass m in the potential force field G(η), where η(t) is the position coordinate of the particle. Solutions of this equation are known to have completely different dynamics than those of the linear-ansatz Eq. (10.3). For instance, the small deviations Δη from a stable equilibrium, instead of relaxing as in (10.11), represent motion of a frictionless harmonic oscillator:

Δη(t) = A cos(ωt + φ₀);  ω² = (1/m) ∂²G(η̄)/∂η²    (10.23a)

3. If the memory function is

Γ(u) = Γ₀ e^(−u/τm)    (10.24)

then

¹ Notice that here we take ∫₀^∞ δ(u) du = 1.

M = Γ₀τm    (10.24a)

and differentiating (10.19) with respect to t, we obtain

d²η/dt² + (1/τm) dη/dt = −Γ₀ ∂G/∂η(η)    (10.25)

This equation has features of both equations, no memory and full memory; hence, the integral kernel (10.24) can be called the partial-memory function. Eq. (10.25) describes motion of a damped oscillator. Properties of its solutions are also well known [2]. Compare (10.21, 10.23, 10.25), and notice that for the full-memory and no-memory functions to be the limiting cases of the partial-memory function, we need

Γ₀ = 1/m,  τm = mγ.    (10.26)

Considering a functional form of the memory function, one has to take into account the second law, which says that the dissipation during the process of OP evolution must not be negative. Hence, the following statement must be true:

dG/dt = (∂G/∂η)(dη/dt) = −(∂G/∂η)(η(t)) ∫₋∞^t Γ(t − t′) (∂G/∂η)(η(t′)) dt′ < 0    (10.27)

Here as before we assume that ∂G/∂t = 0. The analysis of this condition can be conducted using (10.24) as the most general form of the memory function. Then, we are looking for

(∂G/∂η)(η(t)) ∫₀^∞ e^(−u/τm) (∂G/∂η)(η(t − u)) du > 0    (10.27a)

This expression is sign-positive if (i) τm → 0 or (ii) if ∂G/∂η is sign-definite. The former means that the system does not have memory. The latter means that the system during its evolution stays within the same basin of attraction, never crossing the barrier between them.
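The equivalence of the exponential-kernel integral (10.19), (10.24) and the damped-oscillator ODE (10.25) can be demonstrated with an auxiliary variable z(t) = ∫₀^∞ Γ₀e^(−u/τm)(∂G/∂η)(η(t−u)) du, which obeys dz/dt = Γ₀ ∂G/∂η − z/τm while dη/dt = −z. The sketch below (not from the book) integrates this pair for a quadratic potential G = kη²/2 (an illustrative choice, with z(0) = 0 assumed, i.e., no net past forcing) and compares against the analytic underdamped solution of (10.25):

```python
import math

# Sketch (not from the book): exponential ("partial") memory kernel (10.24)
# recast as the damped oscillator (10.25) via the auxiliary variable z.
# Quadratic potential G = k*eta^2/2; all parameter values are illustrative.
tau_m, Gamma0, k = 1.0, 1.25, 1.0
eta, z, dt, t_end = 1.0, 0.0, 1e-4, 5.0   # z(0) = 0: assumed initial history

t = 0.0
while t < t_end:                 # explicit Euler for the (eta, z) pair
    deta = -z                    # Eq. (10.19) with z = memory integral
    dz = Gamma0 * k * eta - z / tau_m
    eta += dt * deta
    z += dt * dz
    t += dt

# analytic solution of eta'' + eta'/tau_m + Gamma0*k*eta = 0 with
# eta(0) = 1, eta'(0) = 0 (underdamped case)
omega = math.sqrt(Gamma0 * k - 1.0 / (4.0 * tau_m**2))
exact = math.exp(-t_end / (2.0 * tau_m)) * (
    math.cos(omega * t_end) + math.sin(omega * t_end) / (2.0 * tau_m * omega))
```

The disturbance oscillates while decaying—behavior impossible for the memoryless gradient flow (10.3), for which the approach to equilibrium is monotone.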

10.6 Other Forces

Summarizing our analysis in the previous sections, one can say that starting with any initial condition η(0), the system stays in the same basin of attraction and never crosses the divider; inspect Fig. 10.2. This means that the gradient-flow equation does not actually describe a phase transition in its entirety, only the relaxation-to-equilibrium part of the process. For instance, this equation does not have means to


describe the process of nucleation; see Sect. 4.2. Neither memory nor nonlinearity can fix this problem. One can say that it is the second law that prohibits the gradient-flow equation from describing nucleation. What changes do we need to make in (10.3) in order for it to be able to describe the nucleation process? In Chap. 8 we reached the conclusion that a phase transition implies overcoming the potential barrier, that is, moving "uphill" in the representative space of the system. This cannot be accommodated by the driving force (∂G/∂η) because this force drives the system "downhill." Thus, we must include into the dynamic equation "other forces" that can push the system uphill:

dη/dt = −γ ∂G/∂η|P,T + F    (10.28)

One way to overcome the barrier is to include the spatially diffusive force, κ∇²η, which, as we discussed in Chap. 9, appears because of the gradient energy contribution. According to (9.16), the diffusive force can balance the driving force, hence push the system uphill. This will be discussed in Chap. 11. Another possibility is to take into account thermal fluctuations, which arise due to the atomic/molecular nature of the physical systems, that is, microscopic degrees of freedom. In the framework of the dynamic equation, the effect of the fluctuations can be expressed by adding the so-called Langevin force. This will be discussed in Chap. 12.

Exercises

1. Numerical exercise. Using your favorite programming language, solve the equation dx/dt = −2m(a − x)x² for different initial conditions. What is the role of the coefficients m, a?
2. Verify that the case of "very short memory," (10.21), can be recovered from the general case of (10.24) if τm → 0, Γ₀ → ∞, and τmΓ₀ = γ, and the "full memory" if τm → ∞. (Hint: Integrate (10.21, 10.24) from t to ∞!)

References

1. H. Metiu, K. Kitahara, J. Ross, J. Chem. Phys. 64, 292 (1976)
2. L.D. Landau, E.M. Lifshitz, Mechanics (Pergamon, Oxford, 1960), p. 61

Chapter 11

Evolution of Heterogeneous Systems

In this chapter we apply the ideas and methods developed in previous chapters to the dynamics of heterogeneous systems and obtain the celebrated time-dependent Ginzburg-Landau equation of the order parameter evolution. This equation will be applied to various heterogeneous states of the system. Its application to an equilibrium state shows that the criteria of the dynamic and local thermodynamic stability coincide. In the case of a plane interface, this equation yields a traveling wave solution with a finite thickness and a specific speed proportional to the deviation of the system from equilibrium. To better understand the properties of such a solution, we use the thermomechanical analogy, which was found useful for the equilibrium states. The drumhead approximation of the Ginzburg-Landau equation reveals various driving forces exerted on a moving interface and allows us to study the evolution of droplets of a new phase. We extend the definition of the interfacial energy into the nonequilibrium domain of the thermodynamic parameters (phase diagram) and analyze the growth dynamics of the antiphase domains using the Allen-Cahn self-similarity hypothesis. The analysis reveals the coarsening regime of evolution, which is observed experimentally. In this chapter we finish up column 4 of the beams and columns FTM configuration in Fig. P.1.

11.1 Time-Dependent Ginzburg-Landau Evolution Equation

Let us now look at the evolution of a spatially nonuniform system. In the spirit of the discussion of the nonequilibrium systems in Chap. 10, we assume that the evolution equation should guide the system toward one of the equilibrium states, and the rate of change of OP should be proportional to the driving force. The only difference between the homogeneous and heterogeneous systems is that in the latter the driving force includes spatially diffuse forces. Based on this argument and comparing the equilibrium equation (8.5) with (9.16), instead of equation (10.3), the following evolution equation may be proposed:

dη/dt = −γ (δG/δη)|P,T    (11.1)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_11

Comparing this equation with the linear-ansatz equation (10.3), we see that the partial derivative of the latter is replaced by the variational derivative in the former. This equation also can be recognized as a gradient flow. The difference, of course, is that now the Lyapunov function(al) is the total free energy functional, (9.15), not the function G(η). This is important for our understanding of the general properties of the evolution according to this equation. Indeed, using (11.1) we find that the total free energy of the system decreases (dissipates):

dG/dt = ∫ (δG/δη)(dη/dt) d³x = −γ ∫ (δG/δη)² d³x < 0,    (11.2)

if (10.8) is valid. That is why the systems described by this equation are called dissipative. Notice, however, that evolution of the total ordering in the system is not determined:

d/dt ∫ η d³x = ∫ (dη/dt) d³x = −γ ∫ (δG/δη) d³x ≷ 0    (11.3)

The conclusions that we came to in Chap. 10 regarding the properties of the nonlinear rate/driving-force relation (Sect. 10.4), memory function (Sect. 10.5), and inability of the gradient-flow equation (11.1) to describe a phase transition (Sect. 10.6) are also valid here if the variational derivative δG/δη replaces the partial derivative ∂G/∂η. The latter conclusion comes out of (11.2) (Why?). Using the expression of the variational derivative of the total free energy functional G{η} with respect to the OP (see Eq. (9.16) and Appendix B), (11.1) can be written as follows:

dη/dt = γ(κ∇²η − ∂g/∂η)|P,T    (11.4)

In the literature this equation is known as the time-dependent Ginzburg-Landau equation (TDGLE). Mathematically, TDGLE is an inhomogeneous parabolic-type partial differential equation of the same type as the heat or diffusion equations. There are general methods of solving this type of equation, for instance, the method of Green's function with the kernel

(4πmt)^(−3/2) e^(−x²/(4mt))

With its help, the general solution of TDGLE can be written in the following form:

η(x, t) = η(x, t₀) − γ ∫_{t₀}^t dt′ ∫_V dx′ [∂g/∂η(x′, t′)] [4πm(t − t′)]^(−3/2) e^(−(x − x′)²/[4m(t − t′)])    (11.5)

where m ≡ γκ is the mobility of the system and η(x, t₀) is the initial condition. Unfortunately, this form is difficult for theoretical analysis and numerical computations.

11.2 Dynamic Stability of Equilibrium States

Using TDGLE, dynamic stability of equilibrium states can be studied in a systematic way by analyzing the evolution of small perturbations near these states. Evolution of small perturbations in a heterogeneous system may be very different from that in a homogeneous one due to two major aspects. Firstly, in a heterogeneous system, there are small perturbations of the homogeneous equilibrium state which are spatially nonuniform; this changes the rate of their evolution. Secondly, and more significantly, in a heterogeneous system, there are essentially heterogeneous equilibrium states, near which the evolution of small perturbations is significantly different from that near the homogeneous ones. The small perturbations may be expressed as a superposition of some suitable and complete set of normal modes, which can be examined separately. In Sect. 1.5 we laid out the general principles of the method of normal modes. Here we will demonstrate it in action. To obtain the equation of motion of the normal modes from the relevant equation of motion of the entire system, we retain only terms which are linear in the perturbations and neglect all terms of higher order. Hence, inserting

η(x, t) = ηE(x) + Δη(x, t)    (11.6a)

into TDGLE (11.4) and presenting ∂g/∂η by its Taylor series near the equilibrium state ηE, we obtain

(1/γ) ∂Δη/∂t = κ∇²Δη − [∂²g/∂η²(ηE)]Δη − (1/2)[∂³g/∂η³(ηE)](Δη)² − ⋯    (11.6b)

(Why is ∂g(ηE)/∂η missing?) Neglecting all terms beyond the first order in Δη, we get the linearized problem:

(1/γ) ∂Δη/∂t = −Ĥ(ηE)Δη,  Ĥ(ηE) = ∂²g/∂η²(ηE) − κ∇²    (11.7)

Not surprisingly, the problem of dynamic stability reduces to the problem of the eigenvalues of the Hamiltonian operator of the equilibrium state in question. We can benefit greatly from the analysis of the properties of this operator that was done in Chap. 8. Here we will briefly repeat some of the critical steps of that analysis.

11.2.1 Homogeneous Equilibrium States

Let's first consider evolution of the small perturbations Δη near the homogeneous equilibrium states ηE = η̄. In this case (11.7) turns into a linear PDE with constant coefficients, and, as we did it in Sect. 9.5.1, we will be using the method of Fourier transform (see Appendix F). The only difference is that now we will be using the integral transform instead of the discrete one. Replacing Δη(x, t) in (11.6a) with its Fourier transform,

Δη(x, t) = (1/(2π)³) ∫ Δη̃(k, t) e^(ik·x) dk,  Δη̃(k, t) = ∫ Δη(x, t) e^(−ik·x) dx    (F.13)

we obtain the ODE for the Fourier transforms Δη̃(k, t):

dΔη̃/dt = β(k) Δη̃    (11.8a)

β(k) = −γ[κ|k|² + ∂²g(η̄)/∂η²]    (11.8b)

This equation can be solved as an initial value problem:

Δη̃(k, t) = Δη̃₀(k) e^(βt),    (11.9a)

Δη̃₀(k) = (1/(2π)³) ∫ Δη(x, 0) e^(−ik·x) dx    (11.9b)

where the coefficients Δη̃₀(k) are the Fourier transforms of the initial condition.

Fig. 11.1 Amplification factors for the homogeneous equilibrium states η̄ in a system described by TDGLE (11.4). The state is (1) stable, (2) neutral, and (3) unstable

In the same way as for the homogeneous perturbations in (10.11), the amplification factor β of the small heterogeneous perturbations determines the direction and rate of the evolution—recall that the time scale of evolution is τ = |β|⁻¹; see (10.12). Similar to the homogeneous case, β is a real number for all parameters of the system and perturbations; different from the homogeneous case is that now β depends on the wavenumber k = |k| of the wavevector k = (kx, ky, kz) of the Fourier transform Δη̃(k, t). If ∂²g(η̄)/∂η² > 0 (the state η̄ is stable, i.e., a phase), then β < 0 for all wavevectors; if ∂²g(η̄)/∂η² < 0 (the state η̄ is unstable), then β < 0 for the wavevectors with the wavenumbers k > kn, β > 0 for 0 < k < kn, and β = 0 for the wavevectors with the neutral wavenumber k = kn:

kn² = −(1/κ) ∂²g(η̄)/∂η²    (11.10)

These cases are depicted in Fig. 11.1. For instance, for the transition state ηt of the tangential potential, (8.30),

ktn² = [W² − (6D)²]/(2κW)    (11.11)

which shows that the transition state is dynamically unstable if |D| < W/6, that is, between the spinodal lines of the phase diagram. It is convenient to think about individual harmonic wavemodes defined by the components of the wavevector k = (kx, ky, kz). For instance, the "most dangerous" mode is the one with the greatest value of the amplification factor β. For TDGLE (11.4), it is the mode with km = 0; see Fig. 11.1. Then you can see that for TDGLE there is only one most dangerous mode with km = (0, 0, 0), that is, the uniform disturbance, but there are many neutral modes kn = (kn,x, kn,y, kn,z) defined by the equation:

kn,x² + kn,y² + kn,z² = kn²    (11.12)

Thus, the neutral modes belong to the sphere of radius kn centered at the origin of the Fourier space k. Comparing the dynamical stability analysis of this section and Sect. 10.2 to the thermodynamic stability analysis of Sect. 8.2, we see that they lead to the same conclusion: the stability of the state η̄ is determined by the sign of ∂²g(η̄)/∂η². The advantage of the dynamical analysis is that we can learn more about the rate of attainment of the state in the system. A word of caution is needed here. The normal modes must be not only small in amplitude, but they also cannot have very high wavenumbers. Namely, |k| ≪ (interatomic distance)⁻¹; see (12.25) and the discussion around it.
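The structure of the dispersion relation is simple enough to verify directly. The sketch below (not from the book; γ, κ, and the curvature g″ are illustrative values) evaluates β(k) of (11.8b) for an unstable state and locates the neutral wavenumber (11.10):

```python
import math

# Sketch (not from the book): amplification factor (11.8b) for an unstable
# homogeneous state, beta(k) = -gamma*(kappa*k**2 + gpp) with gpp < 0,
# and the neutral wavenumber kn of (11.10).
gamma, kappa, gpp = 1.0, 2.0, -0.5    # gpp = d2g/deta2 < 0: unstable state

def beta(kmod):
    return -gamma * (kappa * kmod**2 + gpp)

kn = math.sqrt(-gpp / kappa)          # neutral wavenumber, Eq. (11.10)
```

Long-wavelength modes (k < kn) grow, short-wavelength ones (k > kn) decay because of the stabilizing gradient energy, and the uniform disturbance k = 0 is the most dangerous mode, exactly as curve (3) of Fig. 11.1 shows.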

11.2.2 Heterogeneous Equilibrium States

In Sect. 9.6.1 we found that the plane, stationary interface ηE = ηe4(x) separating two phases at equilibrium is thermodynamically stable, that is, stable with respect to static variations. Now let us analyze the dynamical stability of this state. Building on the success of the analysis of the thermodynamic stability, we shall represent the small perturbations Δη(x, y, z, t) in the form of capillary waves, i.e., normal modes on the interface:

Δη ∝ exp[βt + i(kz z + ky y)] ψn(x)    (11.13a)

where ψn(x) are the eigenfunctions of Hamilton's operator Ĥ(ηe4(x)) with the eigenvalues λn. Substituting this expression into (11.7), we obtain the following dispersion relation:

β = −γ[λn + κ(kz² + ky²)]    (11.13b)

The solution (11.13) represents motion of transverse (y, z)-plane waves superimposed on the normal x-deformations of the interfacial structure ηE = ηe4(x). Their evolution depends on the amplification factor β; see (11.13a). As β in (11.13b) is a real number (Why?), the separated solutions will diminish if β < 0, grow if β > 0, and remain stationary if β = 0. Notice from the dispersion relation (11.13b) and condition (9.7) that the second term is always negative due to the stabilizing influence of the gradient energy. This effect may be compared to the effect of the surface tension on the capillary waves on the surface of water. The first term in (11.13b) is almost always negative (review Appendix E). The only other option for this term is to be equal to zero for the bound state with n* = 0. Hence, our dynamic stability analysis of the domain wall (interface) shows that it is neutrally stable, with the Goldstone mode (n* = 0, ky = 0, kz = 0) being the "most unstable." This mode represents a parallel shift of the interface, which does not change the thermodynamic balance in the system because the phases on both sides of it are at equilibrium. Notice that the conclusion of the dynamic stability analysis is equivalent to that of the thermodynamic one. The significant advantage of the former is that it can reveal the rate of growth or decay of different modes.

11.3 Motion of Plane Interface

In addition to the small perturbations of equilibrium states, there exists another important group of solutions of TDGLE (11.4)—group-invariant (similarity) solutions. The idea behind these solutions is to reduce a partial differential equation to an ordinary differential equation by using a particular symmetry of the system. Important examples of such solutions are called traveling waves; see (5.1b). Translation invariance of TDGLE (11.4) is a consequence of the existence of the Goldstone modes; see Sect. 9.6.1. Specifically, we seek a solution in the form

\eta(\mathbf{x}, t) = \eta(u)    (11.14a)

u = \mathbf{n}\cdot\mathbf{x} - vt    (11.14b)

Here n = (j1, j2, j3) is a unit vector and v is a constant, the wave speed, which needs to be determined. A traveling wave solution (11.14) may be visualized as a plane structure with the profile given by η(u) moving through the 3d space with speed v in the direction n. This wave has the Goldstone mode in the direction n and translational symmetry in any direction perpendicular to n. Using the chain rule of differentiation, we can see that

\frac{\partial\eta}{\partial t} = -v\frac{d\eta}{du}    (11.15a)

\nabla\eta = \mathbf{n}\frac{d\eta}{du}    (11.15b)

\nabla^2\eta = \frac{d\eta}{du}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right) + \frac{d^2\eta}{du^2}\left[\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial z}\right)^2\right] = \frac{d^2\eta}{du^2}    (11.15c)

In (11.15c) the term in parentheses (the second derivatives of u) vanishes due to (11.14b), and the term in brackets equals one because |n| = 1. Finally, using (11.15) in (11.4), we arrive at the ordinary differential equation:

\kappa\frac{d^2\eta}{du^2} + \frac{v}{\gamma}\frac{d\eta}{du} - \left(\frac{\partial g}{\partial\eta}\right)_{P,T} = 0    (11.16)

A physically important case described by this ODE is a translation-invariant wave of OP, η = η(x - vt), representing an interface between the stable (call it β) and metastable (call it α) states, which travels with constant speed v.

11.3.1 Thermomechanical Analogy

The thermomechanical analogy established in Sect. 9.3.3 will also help us better understand the phase-transition dynamics by developing an intuitive approach to the problem. According to that analogy, (11.16) corresponds to the Lagrange equation of motion of a particle in the force field of the potential U = (-g) and rate-proportional dissipative forces with the damping coefficient (v/γ); see Table 9.1. For the mechanical problem to be relevant to the thermodynamic problem of phase transitions, the potential U must have at least two humps (α and β) separated by a trough (γ), and the particle must move between the humps; see Fig. 9.4. The exact solution of the Lagrange equation [1, 2] tells us that the particle will execute damped oscillations about the stable rest point γ with different values of the angular frequency depending on the value of the damping coefficient; see Fig. 9.4. (Recall that the thermomechanical analogy switches stabilities between the equilibrium states of the thermodynamic system and the mechanical rest points.) However, the most interesting case, that of an interface between the stable and metastable phases traveling with constant speed v, corresponds to the particle moving from one hump to the other. Mechanically, this is possible only if the particle moves from the higher hump (β) to the lower one (α) and if the damping coefficient has a particular value depending on the difference of heights of the humps (Why?); see Fig. 9.4a. For the problem of phase transitions, this means that the interface may propagate with only one acceptable velocity v.

11.3.2 Polynomial Solution

Although the thermomechanical analogy provides a deep insight into the properties of moving interfaces, given the importance of the problem, its exact solution is desired. Equation (11.16) is autonomous, that is, an ordinary differential equation in which the independent variable does not appear explicitly. As is known, an autonomous equation is invariant under the Euclidean transformation u → u + a. Since we know that the solution must be translation invariant, we can reduce the order of the differential equation. First, we change the dependent variable from η(u) to

q(\eta) = \frac{d\eta}{du}    (11.17a)

Second, by chain differentiation of (11.17a), we find that

q\frac{dq}{d\eta} = \frac{d^2\eta}{du^2}    (11.17b)

Third, we substitute (11.17) into (11.16) and obtain the first-order Abel-type ODE:

\kappa q\frac{dq}{d\eta} + \frac{v}{\gamma}q(\eta) = \frac{dg}{d\eta}    (11.18)

In Chap. 9 we analyzed the equilibrium properties of (11.18), that is, when v = 0, and obtained an exact solution (9.31) for a general function g(P, T, η) in the form of q(\eta) = \pm\sqrt{2(g(\eta) - \mu)/\kappa}. In the infinite domain, the solution takes the form of (9.60) or (9.61). Unfortunately, for a moving interface (v ≠ 0), such a general, exact solution does not exist. However, in the case of a polynomial function g(η), one may try to guess a solution. For instance, when g(η) = P4(η), one may try to find the solution in the form q(η) = P2(η). Indeed, let us try to solve (11.18) for the Landau free energy potential (8.4). Using expression (8.6) for the stationary points (the equilibrium set), we can write

\left(\frac{\partial g}{\partial\eta}\right)_{P,T} = \eta(\eta - \eta_-)(\eta - \eta_+)    (11.19)

Then the dynamic equation (11.18) takes the form

\kappa q\frac{dq}{d\eta} + \frac{v}{\gamma}q = \eta(\eta - \eta_-)(\eta - \eta_+)    (11.20a)

Physically, a first-order transition means that a solution of its evolution equation approaches the equilibrium states η = 0 and η = η+ far away from the wave. This motivates the following BC:

q(0) = q(\eta_+) = 0    (11.20b)

The problem (11.20) has a solution of the form

q = Q\,\eta(\eta - \eta_+)    (11.21)


Substituting (11.21) into (11.20a) and balancing the terms of the same power in η, we find that indeed this is the case if

Q = \pm\frac{1}{\sqrt{2\kappa}}    (11.22a)

v = \pm\gamma\sqrt{\frac{\kappa}{2}}\left(\eta_+ - 2\eta_-\right)    (11.22b)

Now η(u) can be found by integrating (11.17a). It has the form

\eta_v(u) = \frac{\eta_+}{1 + e^{\delta(u+a)}}    (11.23a)

\delta = \eta_+ Q = \pm\frac{\eta_+}{\sqrt{2\kappa}}    (11.23b)

where the constant of integration a is specified by the boundary or initial conditions. Equation (11.22b) identifies the quantity (η+ - 2η-) as the "driving force" for the interface and establishes the relationship between the speed v and the parameters of the system. It shows that in a system with B ≠ 0 at equilibrium (T = TE), the interface is not moving (see Eq. (8.12d)) and that for T > TE and T < TE, the interface moves in opposite directions because the quantity (η+ - 2η-) changes sign. However, the sign of v can be reversed not only by changing the thermodynamic conditions but also by the reflection (Q → -Q). The two possible signs of Q manifest the mirror symmetry of the system: for Q > 0 the wave travels in the direction of increasing u; for Q < 0 the opposite is true; see Fig. 11.2. Comparison of (11.21) with (9.60) and of (11.23a) with (9.58) shows that the moving interface has practically the same structure as the stationary one; see Fig. 9.2biii. The following questions are in order here: What structural changes occur when the wave-interface solution "starts moving"? Does its thickness change? The thickness of the moving wave-interface can be estimated by using the definition (9.19f). For the wave (11.23), we have

\frac{d\eta_v}{du} = -\frac{\delta\,\eta_+ e^{\delta u}}{\left(1 + e^{\delta u}\right)^2}    (11.24a)

This is a bell-like function with \max_u|d\eta_v/du| = |\delta|\eta_+/4. Hence,

L_v = \frac{4}{|\delta|} = \frac{4\sqrt{2\kappa}}{\eta_+}    (11.24b)
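The algebra behind (11.21, 11.22) can be checked symbolically. The following sketch (using sympy; `eta_m` and `eta_p` stand for η- and η+ of the equilibrium set (11.19), and only the upper signs of (11.22) are taken) verifies that the polynomial ansatz turns the Abel-type ODE (11.20a) into an identity:

```python
import sympy as sp

eta, kappa, gamma = sp.symbols('eta kappa gamma', positive=True)
eta_m, eta_p = sp.symbols('eta_m eta_p', positive=True)      # eta_-, eta_+

Q = 1 / sp.sqrt(2 * kappa)                                   # (11.22a), upper sign
v = gamma * sp.sqrt(kappa / 2) * (eta_p - 2 * eta_m)         # (11.22b), upper sign
q = Q * eta * (eta - eta_p)                                  # ansatz (11.21)

# residual of (11.20a): kappa*q*dq/deta + (v/gamma)*q - eta*(eta - eta_-)*(eta - eta_+)
residual = kappa * q * sp.diff(q, eta) + (v / gamma) * q - eta * (eta - eta_m) * (eta - eta_p)
print(sp.simplify(residual))   # 0
```

The residual vanishes identically in η, which is exactly the "balancing of the terms of the same power" performed in the text.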

Fig. 11.2 Wave/interfaces traveling in opposite directions (order parameter profiles ηv(u) for v > 0 and v < 0)

F = \frac{1}{2}\sum_{j,k}\tau_{jk}\frac{\partial\eta_j}{\partial t}\frac{\partial\eta_k}{\partial t} > 0    (11.49a)

\tau_{jk} = \tau_{kj}; \quad \tau_{jj} = \gamma_j^{-1} > 0    (11.49b)

Using (11.48) and the Euler relation for homogeneous functions of the second order,

\text{if } Y(\alpha x_1, \alpha x_2, \alpha x_3, \ldots, \alpha x_n) = \alpha^2 Y(x_1, x_2, x_3, \ldots, x_n), \text{ then } \sum_i x_i\frac{\partial Y}{\partial x_i} = 2Y,

it is easy to see that the rate of the free energy change in the system is

\frac{dG}{dt} = \int\sum_j\frac{\delta G}{\delta\eta_j}\frac{\partial\eta_j}{\partial t}\,d^3x = -\int 2F\,d^3x < 0    (11.50)

where the last inequality follows from (11.49a). For a traveling wave {ηj = ηj(x - vt)}, (11.50) can be represented as follows:

\frac{dg(\eta_i)}{dx} = 2vF > 0    (11.51)

For a thermodynamic system, (11.50) and (11.51) mean that 2F is the local rate of dissipation, which is analogous to the dissipation of the mechanical energy due to friction in a mechanical system. Thermodynamically, (11.51) can also be interpreted as meaning that the wave speed v is proportional to the chemical potential gradient, with (2F)^-1 as the mobility. The thermomechanical analogy may also be generalized to the case of a system that is not kept at constant temperature; see Chap. 13. The analogy between the equilibrium thermodynamics of continuous systems and the conservative mechanics of rigid bodies, which we used above, deserves a deeper analysis. In other words, we can ask a question: "What is the reason for the thermomechanical analogy to exist?" The root cause of the thermomechanical analogy is that both problems allow a variational (energy) formulation. There is a deep connection between Clausius' principle of thermodynamics and Hamilton's principle of mechanics. Beyond that, the nonequilibrium extension of the phase-transition problem matches the dissipative dynamics of particles because both are linear extensions of the equilibrium problems. However, we can make one step further and pose another question: "What property of a system entails applicability of the variational principles?" The answer to this question may be found in the connection of both theories to the Lagrangian field theory; see Appendix D. However, a more complete discussion of this problem is beyond the scope of this book [8].

Exercises

1. What does boundary condition (11.20b) physically mean?
2. Show that solution (11.22) "moves in opposite directions" for T > TE and T < TE.
3. Show that q = Q2 η(η - η-) is a solution of (11.20a). Find an application for it.
4. Verify the statement in Example 11.2 that the mobility m in (11E.6c) is OP-scale invariant.
5. Verify the statement in Example 11.2 that (11.21, 11.22, 11E.6) is also a 1d solution of TDGLE (11.4) for the uniform, time-dependent temperature field T(t).
6. (a) Solve the 1d TDGLE for small heterogeneous disturbances near the equilibrium states of the Landau potential. (b) Calculate the amplification factor of the evolution of the small disturbances. (c) Based on the calculations, explain which state is stable and which state is unstable.
7. Verify the statement in Sect. 11.4 (4) that the internal structure of a moving curved APB is the same as that of the equilibrium one.
8. Consider the emergent equation of interfacial dynamics (11.30): (a) From this equation, extract the kinetic coefficient. (b) From this equation, find the radius of the critical nucleus and prove that it is an unstable structure. (c) Prove that the critical nuclei of APBs are not possible.
9. Colloidal systems are of great interest for understanding the dynamics of crystal growth. The radius of a nearly spherical colloidal crystal, which grew from its liquid solution, is presented in Table 11.1 as a function of time. Using the approach of Sect. 11.5: (a) Find the critical radius of this colloidal system. (b) Find the mobility of its interfacial motion. (c) What physical properties of the system do you need to know if you need to find its dimensionless supercooling (Stefan number)?

Table 11.1 Radius of a colloidal crystal as a function of time

Time (s)    Radius (mm)
0           1.1
1.14        1.15
1.98        1.2
8.01        2
12.2        3
19          5
27.9        8
33.5        10
47.1        15
60.4        20
73.5        25
99.3        35
138         50
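A minimal numerical sketch in the spirit of Sect. 11.5, applied to the data of Table 11.1. The growth law dR/dt = m(1 - R*/R) assumed here, and the finite-difference fitting procedure, are illustrative assumptions rather than the book's worked solution; the linear fit of dR/dt against 1/R gives the mobility m as the intercept and the critical radius R* from the slope:

```python
import numpy as np

t = np.array([0, 1.14, 1.98, 8.01, 12.2, 19, 27.9, 33.5, 47.1, 60.4, 73.5, 99.3, 138.0])
R = np.array([1.1, 1.15, 1.2, 2, 3, 5, 8, 10, 15, 20, 25, 35, 50.0])   # Table 11.1

# growth rate between consecutive observations, attributed to the midpoint radius
dRdt = np.diff(R) / np.diff(t)
Rmid = 0.5 * (R[1:] + R[:-1])

# assumed law dR/dt = m*(1 - Rc/R) = m - (m*Rc)*(1/R): linear in 1/R
slope, intercept = np.polyfit(1.0 / Rmid, dRdt, 1)
m = intercept          # interfacial mobility, mm/s
Rc = -slope / m        # critical radius, mm
print(m, Rc)           # roughly m ~ 0.4 mm/s, Rc ~ 1 mm for this data set
```

The late-time growth rate saturates near m, while the slow growth of the smallest crystals reflects proximity to the critical radius.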

References 1. L.D. Landau, E.M. Lifshitz, Mechanics (Pergamon Press, Oxford, 1960), p. 131 2. H. Goldstein, Classical Mechanics (Addison-Wesley, Reading, 1980), p. 341 3. A. Kolmogoroff, I. Petrovsky, N. Piscounoff, Bull. Univ. Moscow, Ser. Int., Sec. A 1, 1 (1937) 4. D.G. Aronson, H.F. Weinberger, Partial Differential Equations and Related Topics, Vol. 446 of Lecture Notes in Mathematics (Springer, New York, 1975), p. 5 5. D.G. Aronson, H.F. Weinberger, Adv. Math. 30, 33 (1978) 6. S.-K. Chan, J. Chem. Phys. 67, 5755 (1977) 7. S.M. Allen, J.W. Cahn, Acta Metall. 27, 1085 (1979) 8. A. Umantsev, Phys. Rev. E 69, 016111 (2004)

Chapter 12

Thermodynamic Fluctuations

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_12

As we noticed in Chap. 11, TDGLE is not able to describe a phase transition itself, only the evolution of the system after the transition. Obviously, something very important for the complete description of the system is missing from that scheme. What is it? These are fluctuations of the thermodynamic parameters of the system, including the order parameters. Fluctuations are defined as deviations from the average value. Several different sources of fluctuations may be pointed out; usually they are categorized as internal or external noises. Examples of internal noise are thermal fluctuations and quantum-mechanical fluctuations. The former are due to the microscopic structure of matter and the unpredictability of atomic hits. The latter are due to the fundamental unpredictability of nature. Examples of external fluctuations are noise in electrical circuits containing ohmic resistances or electromagnets that create the magnetic fields. Surely, depending on their type, the internal and external fluctuations enter differently into the theoretical description, with the former usually representing an additional "driving force" in the system, while the latter appear as a random parameter that is coupled to the state of the system. In this book we consider only the influence of the thermal (internal) fluctuations on phase transformations. As we pointed out, this type of fluctuations originates from the atomistic structure of matter and comes into the field-theoretic description through the coarse-graining procedure (see Appendix A). From the statistical mechanics point of view, without account of the fluctuations, the system is confined to a set of parameters that corresponds to one phase. When a liquid is cooled down below its freezing point, the conditions for the emergence of the solid phase appear, but the transition may not happen.

The thermodynamic fluctuations "move" the system in the phase space from one region to another, "exploring" different options and "finding" the most favorable one for it. Although this is the most significant role of fluctuations for us, it does not exhaust the relevance of fluctuations to physical systems. Firstly, the fluctuations provide a natural framework for understanding the class of physical phenomena that relates the dissipative properties of a system, such as diffusion of particles in a fluid or the

electrical resistance of a conductor, to the microscopic properties of the system in a state of equilibrium. This relationship was first discovered by Einstein and is often formulated in the form of the so-called fluctuation-dissipation theorem. Secondly, fluctuations can assume considerable significance in the neighborhood of the critical points on the phase diagrams of systems, where they acquire a high degree of spatial correlation. This gives rise to phenomena such as critical opalescence and the unlimited increase of the specific heat (the λ-point transition). Although these phenomena fall into the general category of phase transitions, they are not considered in this book in any significant depth because they require a different set of methods. From the point of view of statistical mechanics, different states of a system have different probabilities of being observed, e.g., experimentally. In this chapter we look at the equilibrium distribution of fluctuations and their evolution under the influence of the stochastic environment. First, we calculate the average values of the fluctuations of the OP and free energy of the system. These calculations reveal an important characteristic length scale of the fluctuations, the correlation radius, which defines the length scale of the two-point space correlation function. Then we derive the Levanyuk-Ginzburg criterion, which expresses the validity of FTM in a system with fluctuations, and apply this criterion to a system undergoing a second-order transformation. To describe the dynamics of fluctuating systems, we introduce the irregular Langevin force (the noise), which acts along with the thermodynamic driving force and whose correlation properties obey the fluctuation-dissipation theorem. On average, the evolution of the fluctuations can be described by the structure factor, which asymptotically approaches its equilibrium value. However, there are several different ways in which the averaging can be executed.

A fluctuating quantity may be averaged with respect to the probabilities of different states in the ensemble, which in FTM is described by the OP values. The fluctuating quantity may be averaged over a long period of time, which is related to the former by the ergodicity principle. A field quantity may be averaged over the volume of the system. Also, the averaging may be performed over the values of the noise function and even the initial conditions of the system. The equivalence of these approaches is the subject of the theory of probabilities and stochastic processes, which is discussed in Appendix G. Using the drumhead approximation, we derive the fluctuational version of the emergent equation of interfacial dynamics (EEID) and apply it to analyze the dynamics of the interfacial structure factor and the nucleation problem. The former reveals the length scale of the fluctuations of the interfaces. The latter allows us to find the escape time of the system and compare it with the nucleation rate of the classical nucleation theory; see Sect. 4.2. Finally, we analyze the memory effects in non-Markovian systems and find a new fluctuation-dissipation relationship. In this chapter we are building column 5 of the beams-and-columns FTM configuration in Fig. P.1.

12.1 Free Energy of Equilibrium System with Fluctuations

Let us calculate the free energy of a system at one of the homogeneous equilibrium states, η̄, taking the fluctuations into account. The OP fluctuations Δη are introduced as follows:

\Delta\eta(\mathbf{r}) = \eta(\mathbf{r}) - \bar\eta    (12.1)

In Sect. 12.3 we will consider the case where η(r) also depends on time. According to Einstein's formula, which is based on Boltzmann's principle, the probabilities of the fluctuating states are proportional to the factor exp(-G/kBT), where G is the free energy of the state, T is the temperature of the system, and kB is Boltzmann's constant. Different states of our system are described by different values of the OP, η. Thus, the distribution of probabilities of the states is

P(\eta) = Z e^{-G(\eta)/k_B T}    (12.2)

where Z is the normalization constant, called the partition function. In order to evaluate the probabilities, we expand the free energy density of a fluctuating system about the equilibrium state:

g(\eta) = g(\bar\eta) + \frac{1}{2}\frac{\partial^2 g}{\partial\eta^2}(\bar\eta)(\Delta\eta)^2 + \cdots    (12.3)

substitute (12.3) into (9.15), and obtain an expression

G(\eta) \approx G(\bar\eta) + \frac{1}{2}\int_V\left[\frac{\partial^2 g}{\partial\eta^2}(\bar\eta)(\Delta\eta)^2 + \kappa(\nabla\Delta\eta)^2\right] d^3x    (12.4a)

G(\bar\eta) \equiv g(\bar\eta)V    (12.4b)

To represent the increment Δη in this expression, we will use the 3d discrete Fourier transform; see Appendix F:

\Delta\eta(\mathbf{r}) = \sum_{\{\mathbf{k}\}}\Delta\eta_V(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}    (12.5a)

\Delta\eta_V(\mathbf{k}) = \frac{1}{V}\int_V\Delta\eta(\mathbf{r})\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^3r    (12.5b)

where {k} is a discrete set of wavevectors (see Eq. (F.6)) and Δη_V(k) is a fluctuating variable. Notice from (12.5b) that Δη_V(0) is the volume average of the OP

fluctuations; it does not need to be zero. In a homogeneous equilibrium system, however, the ensemble averages

\langle\Delta\eta(\mathbf{r})\rangle = \langle\Delta\eta_V(\mathbf{k})\rangle = 0    (12.6)
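A quick numerical illustration of definition (12.5b), using numpy's discrete FFT as a stand-in for the transform of Appendix F: the k = 0 Fourier mode of a sampled fluctuation field is exactly its volume average (the grid size and random field are, of course, arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d_eta = rng.normal(size=(8, 8, 8))          # sampled OP fluctuation field on a grid

# discrete analogue of (12.5b): (1/V) * sum over the volume with exp(-i k.r)
modes = np.fft.fftn(d_eta) / d_eta.size

print(np.allclose(modes[0, 0, 0].real, d_eta.mean()))   # True: the k = 0 mode is the volume average
```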

Here the probabilities of different configurations, (12.2), are used for the statistical averaging.¹ Using for (12.4) Properties #3 and #5 of the 3d discrete Fourier transform (see Appendix F), we obtain an expression for the free energy fluctuations in the Fourier space:

\Delta G \equiv G(\eta) - G(\bar\eta) \approx \frac{V}{2}\sum_{\{\mathbf{k}\}}\left[\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) + \kappa|\mathbf{k}|^2\right]\left|\Delta\eta_V(\mathbf{k})\right|^2    (12.7)

Equation (12.7) shows that the "fluctuating part" of the free energy, ΔG, can be diagonalized in the Fourier space, that is, represented as a sum of terms each of which depends on only one wavevector k. Notice that the coefficients of the terms are proportional to the amplification factor β(|k|), (11.8b), which we found when we studied the evolution of small perturbations of the equilibrium states. Applying (12.2, 12.7) to the fluctuations near the homogeneous state η̄, we find that the joint probability distribution of different fluctuations

P(\eta) = Z\exp\left\{-\frac{V}{2k_B T}\sum_{\{\mathbf{k}\}}\left[\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) + \kappa|\mathbf{k}|^2\right]\left|\Delta\eta_V(\mathbf{k})\right|^2\right\}    (12.8)

can be expressed as a product of Gaussian distributions, each of which depends on a pair of opposite wavevectors only. Hence, the individual Fourier modes Δη_V(k) are statistically independent:

\langle\Delta\eta_V(\mathbf{k})\,\Delta\eta_V(\mathbf{k}')\rangle = 0, \quad \text{if } \mathbf{k} + \mathbf{k}' \neq 0    (12.9)

Evolution of the Fourier modes with {k} and {k′ ≠ -k} may be considered independently. Applying (12.8), Property #2 of the Fourier transform (Appendix F), and the mathematical formula

¹ See Appendix G for a discussion of different ways to average a quantity.

\frac{\int_0^\infty x^2 e^{-\beta x^2}\,dx}{\int_0^\infty e^{-\beta x^2}\,dx} = \frac{1}{2\beta}

to the statistically dependent modes with k + k′ = 0, we obtain an expression for the average square of the Fourier mode:

\left\langle\left|\Delta\eta_V(\mathbf{k})\right|^2\right\rangle = \frac{\int_0^\infty\left|\Delta\eta_V(\mathbf{k})\right|^2 P(\eta)\,d\left|\Delta\eta_V(\mathbf{k})\right|}{\int_0^\infty P(\eta)\,d\left|\Delta\eta_V(\mathbf{k})\right|} = \frac{k_B T}{V\left[\dfrac{\partial^2 g}{\partial\eta^2}(\bar\eta) + \kappa|\mathbf{k}|^2\right]}    (12.10)

This equation can be used for the average square of the fluctuations only if

\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) > 0    (12.11)

because otherwise we obtain a negative expression for the mean square fluctuations of the long-wavelength modes (|k| → 0). This means that (12.10) is applicable at the stable equilibrium states only. Formal application of formula (12.10) at an unstable state yields an imaginary value, which may be interpreted as a finite lifetime of the state. This interpretation, however, will not be used in this book. With the help of (12.10), we can calculate the average fluctuation of the free energy, (12.7):

\langle\Delta G\rangle = \frac{1}{2}k_B T\,N_{\{\mathbf{k}\}}    (12.12)

where N{k} is the number of the k-modes in the system. Comparison of this result with the theorems of the canonical ensembles in statistical mechanics allows us to interpret the k-modes as noninteracting independent degrees of freedom, ΔG[Δη_V(k)] as the effective Hamiltonian, and (12.12) as the equipartitioning of the thermal energy among the various degrees of freedom of the system. The statistical independence of the Fourier modes with {k′ + k ≠ 0} is a result of the truncation of the expression (12.3) at the second order, which eliminates the mode interactions. Because of that, a system described by the free energy expressions (12.4) and (12.7) is called a free field. The higher-order terms in (12.7) can be calculated with the help of the methods of statistical mechanics [1, 2]. Using the fact that Δη_V(0) is the volume average of the fluctuations, we obtain from (12.10) an expression for the mean square of the fluctuations averaged over the volume of the system:

\left\langle(\Delta\eta)_V^2\right\rangle = \left\langle\left|\Delta\eta_V(0)\right|^2\right\rangle = \frac{k_B T}{V\dfrac{\partial^2 g}{\partial\eta^2}(\bar\eta)}    (12.13)

This expression shows that the intensity of OP fluctuations is inversely proportional to the volume occupied by the system. The microscopic scale of the fluctuations is expressed by the factor (kBT/VΔg). Although under "usual" conditions the averaged fluctuations in (12.10, 12.13) are small, there exist "unusual" conditions when the fluctuations become large. The "unusual" conditions are achieved when V → 0 or simultaneously:

\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) \to 0    (12.14a)

and

|\mathbf{k}| \to 0    (12.14b)

Condition (12.14a), as shown in Sect. 8.2, means that the system approaches the spinodal point in the parameter space; condition (12.14b) means that the fluctuations increase for the longest wavemodes. The latter justifies the introduction of a characteristic length scale, called the correlation radius of fluctuations:

r_C = \sqrt{\frac{\kappa}{\dfrac{\partial^2 g}{\partial\eta^2}(\bar\eta)}}    (12.15)

which plays an important role in the analysis of the fluctuations. Compare (12.15) with (9.45b) and see that the correlation radius and the fundamental length of the OP distribution are equal. The difference is that the former is defined at the stable state (see Eq. (12.11)), while the latter at the unstable one (see (9.35e) and the comment after the equation).

Example 12.1 Correlation Radius for Tangential Potential Find the correlation radius of fluctuations near the stable states of the tangential potential. Using

\left.\frac{\partial^2 g}{\partial\eta^2}\right|_{W,D} = \begin{cases} W + 6D & \text{for } \eta = 0 \\ W - 6D & \text{for } \eta = 1 \end{cases}    (8.36)

we obtain


r_C = \begin{cases} \sqrt{\dfrac{\kappa}{W + 6D}} & \text{for } \eta = 0 \\ \sqrt{\dfrac{\kappa}{W - 6D}} & \text{for } \eta = 1 \end{cases}    (12E.1)

As expected, the correlation radius diverges at the spinodal points of the respective phases (D = ±W/6). Also notice that the correlation radii of the phases are equal at the point of their equilibrium (D = 0). This is due to the symmetry of the tangential potential g(η).
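The behavior noted in Example 12.1 is easy to see numerically. In the following sketch the values κ = W = 1 are illustrative; the correlation radii (12E.1) of the two phases coincide at two-phase equilibrium (D = 0), and the radius of the η = 1 phase grows without bound as D approaches its spinodal value W/6:

```python
import numpy as np

kappa, W = 1.0, 1.0                        # illustrative parameters of the tangential potential
D = np.array([0.0, 0.10, 0.15, 0.165])     # approaching the spinodal D = W/6

rc0 = np.sqrt(kappa / (W + 6*D))           # correlation radius of the eta = 0 phase, (12E.1)
rc1 = np.sqrt(kappa / (W - 6*D))           # correlation radius of the eta = 1 phase, (12E.1)

print(rc0[0] == rc1[0])                    # True: equal radii at D = 0
print(rc1)                                 # grows monotonically toward the spinodal
```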

12.2 Correlation Functions

Expressions (12.7) and (12.8) may be used to calculate the two-point space correlation function:

K_\eta(\mathbf{r}_1, \mathbf{r}_2) \equiv \langle\Delta\eta(\mathbf{r}_1)\Delta\eta(\mathbf{r}_2)\rangle = \langle\eta(\mathbf{r}_1)\eta(\mathbf{r}_2)\rangle - \langle\eta(\mathbf{r}_1)\rangle\langle\eta(\mathbf{r}_2)\rangle = \langle\eta(\mathbf{r}_1)\eta(\mathbf{r}_2)\rangle - \langle\eta(\mathbf{r})\rangle^2    (12.16)

where the averaging is over the fluctuations around a thermodynamically equilibrium state of the system, expressed by the joint probability P{Δη(r1), Δη(r2)}; see Appendix G. Taking into account that ⟨η(r)⟩ = η̄, some of the properties of the correlator (12.16) may be pointed out right away:

1. For large distances (|r1 - r2| → ∞), the correlator breaks down into the product of the averages:

K_\eta(\mathbf{r}_1, \mathbf{r}_2)\big|_{|\mathbf{r}_1 - \mathbf{r}_2|\to\infty} \to \langle\Delta\eta(\mathbf{r}_1)\rangle\langle\Delta\eta(\mathbf{r}_2)\rangle = 0    (12.17)

2. At the homogeneous equilibrium state, the correlator is a function of the distance between the points only:

K_\eta(\mathbf{r}_1, \mathbf{r}_2) = K_\eta(r), \quad r = |\mathbf{r}_1 - \mathbf{r}_2|    (12.18)

3. Using the Fourier representation of fluctuations, the correlator takes the form

K_\eta(r) = \sum_{\{\mathbf{k}\}}\sum_{\{\mathbf{k}'\}}\langle\Delta\eta_V(\mathbf{k})\Delta\eta_V(\mathbf{k}')\rangle\,e^{i(\mathbf{k}\cdot\mathbf{r}_1 + \mathbf{k}'\cdot\mathbf{r}_2)}    (12.19)

4. As we established earlier, the {k} and {k′ ≠ -k} Fourier modes are statistically independent. Then, using (12.9) we obtain

K_\eta(r) = \sum_{\{\mathbf{k}\}}\left\langle\left|\Delta\eta_V(\mathbf{k})\right|^2\right\rangle e^{i\mathbf{k}\cdot\mathbf{r}}    (12.20)

This expression shows that the Fourier modes of the correlator are the mean squares of the Fourier modes of the fluctuations (12.1). In the probability theory, this statement is called the Wiener-Khinchin theorem. Substituting (12.10) into (12.20) and taking (12.15) into account, we obtain

K_\eta(r) = \sum_{\{\mathbf{k}\}}\frac{k_B T\,e^{i\mathbf{k}\cdot\mathbf{r}}}{V\kappa\left(r_C^{-2} + |\mathbf{k}|^2\right)}    (12.21)

Transforming the summation in (12.21) into integration (\sum_{\mathbf{k}} \to \frac{V}{(2\pi)^3}\int d\mathbf{k}), we obtain

K_\eta(r) = \frac{k_B T}{(2\pi)^3\kappa}\int\frac{e^{i\mathbf{k}\cdot\mathbf{r}}\,d\mathbf{k}}{r_C^{-2} + |\mathbf{k}|^2}    (12.22)

To evaluate this integral, we use the Fourier transform formula (F.14) and obtain [8]

K_\eta(r) = \frac{k_B T}{4\pi\kappa}\,\frac{e^{-r/r_C}}{r}    (12.23)

Several comments about (12.23) are in order:

1. rC determines the length scale of the decrease of the fluctuations.
2. If rC → ∞, which is the case if the spinodal point is approached, then Kη ∝ 1/r, that is, the fluctuations in the system become correlated over very large distances.
3. Formula (12.23) can be used to obtain (12.13). Indeed,

\left\langle(\Delta\eta)_V^2\right\rangle = \frac{1}{V^2}\iint K_\eta(\mathbf{r}_1, \mathbf{r}_2)\,d\mathbf{r}_1 d\mathbf{r}_2 = \frac{1}{V}\int K_\eta(r)\,d\mathbf{r}    (12.24)

4. At r → 0 we have Kη → ∞. This result is a consequence of the fact that formula (12.10) is not applicable for very large values of the wavevector |k|, which correspond to very small distances. The upper-|k| limit is due to the atomic nature of matter. Hence, formula (12.10) is applicable to the wavevectors with absolute values

|\mathbf{k}| \le \frac{1}{a}    (12.25)

where a is a typical interatomic distance. Formulae (12.15) and (12.25) reveal the constraint rC ≫ a, which presents one of the limits of applicability of the method (see Sect. 14.3).
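The Ornstein-Zernike form (12.23) can be verified symbolically: away from r = 0, the integrand relation (12.22) implies that Kη satisfies the screened equation ∇²Kη = Kη/rC², whose radial Laplacian is (1/r) d²(rK)/dr². A short sympy sketch of this check (the constant prefactor of (12.23) is immaterial here and set to 1):

```python
import sympy as sp

r, rC = sp.symbols('r r_C', positive=True)
K = sp.exp(-r/rC) / r                     # radial shape of the correlator (12.23)

# radial part of the 3d Laplacian: (1/r) * d^2(r*K)/dr^2
laplacian_K = sp.diff(r*K, r, 2) / r

print(sp.simplify(laplacian_K - K/rC**2))   # 0: (12.23) solves the screened equation for r > 0
```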

12.3 Levanyuk-Ginzburg Criterion

In addition to the constraint (12.25), FTM has other limitations associated with the presence of fluctuations in the system. The principal argument may be laid out as follows: For the method to be valid, the scale of the OP fluctuations must be smaller than the characteristic scale of the OP change. The natural measure of the OP change is its jump on both sides of the transition point, [η] = |η+ - η0|; the natural measure of the level of the OP fluctuations is the square root of its volume average, as in (12.13). However, such a criterion has a caveat. First, it cannot be applied to the free field because we use the OP jump as the scale. Second, formula (12.13) shows that the scale of the OP fluctuations is inversely proportional to the volume occupied by the system. Hence, we should identify the characteristic volume for the fluctuations. Based on the analysis in the previous section, we select a cube with the correlation radius rC on the side. Thus, the criterion formulated above can be presented as follows:

\frac{k_B T}{r_C^3(\eta_+)\,\dfrac{\partial^2 g}{\partial\eta^2}(\eta_+)} \ll (\eta_+ - \eta_0)^2    (12.26a)

This is called the Levanyuk-Ginzburg criterion of validity of the method. Using (12.15) it may be rewritten as follows:

\frac{(k_B T)^2}{\kappa^3}\,\frac{\partial^2 g}{\partial\eta^2}(\eta_+) \ll (\eta_+ - \eta_0)^4    (12.26b)

To analyze the role of the mode interactions, let us apply criterion (12.26b) to the second-order transition in a system described by the free energy density

g = g_0 + \frac{1}{2}A\eta^2 + \frac{1}{4}Q\eta^4 - H\eta    (8.46)

with A < 0, Q > 0, and H = 0. Then

\eta_0 = 0; \quad \eta_\pm = \pm\sqrt{\frac{|A|}{Q}}; \quad \frac{\partial^2 g}{\partial\eta^2}(\eta_\pm) = 2|A|    (12.27)

Substitution of (12.27) into (12.26a) yields the criterion

|A| \gg Gi \equiv \frac{(Q k_B T_C)^2}{\kappa^3}    (12.28)

This criterion shows that the method based on the Landau theory of phase transitions is valid outside the region of size Gi (Ginzburg number) around the critical point TC. Notice that Gi is proportional to the square of the mode-interaction coefficient Q.
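As a small numerical illustration of (12.28), the sketch below uses hypothetical parameter values (not taken from the book or from any real material). Writing |A| = a|T - TC| near the critical point, the criterion |A| ≫ Gi translates into a fluctuation-dominated temperature window of half-width Gi/a around TC, outside of which the Landau description holds:

```python
# hypothetical, illustrative parameter values
Q, kB_TC, kappa = 2.0, 0.014, 0.5

Gi = (Q * kB_TC)**2 / kappa**3    # Ginzburg number, (12.28)

a = 0.1                           # assumed coefficient in |A| = a*|T - TC|
dT = Gi / a                       # half-width of the fluctuation region around TC

print(Gi, dT)
```

Because Gi scales as Q², weak mode interactions (small Q) shrink the fluctuation-dominated region, in line with the remark above.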

12.4 Dynamics of Fluctuating Systems: The Langevin Force

At this juncture we may pose the following questions: How do the fluctuations enter our scheme? In other words, where did we miss or omit the fluctuations in the development of FTM? How can we bring the fluctuations back into our description? In general, thermal fluctuations appear because a real system consists of discrete atoms and molecules and not of a continuous medium. As we explained above, the OP represents the most probable evolution of the system when it finds itself away from equilibrium. To bring the fluctuations back into the field theory, we can use the method suggested by Langevin in solving the problem of Brownian motion. To describe the incessant and random bombardment of the tiny grains of a plant pollen by the molecules of the surrounding fluid, Langevin suggested using a force that consists of two parts: a "rapidly fluctuating" part, which averages out to zero over an interval of time long compared to the time of molecular impact, and an "averaged-out" part, which represents the viscous drag. In our case the latter is represented by the thermodynamic force (-γδG/δη). Thus, the dynamic equation of the OP evolution can be written as follows:

\frac{d\eta}{dt} = -\gamma\left(\frac{\delta G}{\delta\eta}\right)_{P,T} + \xi(\mathbf{r}, t)    (12.29a)

where ξ(r, t) represents the rapidly fluctuating part, called the Langevin force. Using the square-gradient approximation of the free energy functional (see Eq. (9.16)), this equation takes the form

\frac{d\eta}{dt} = m\nabla^2\eta - \gamma\left(\frac{\partial g}{\partial\eta}\right)_{P,T} + \xi(\mathbf{r}, t)    (12.29b)

Fig. 12.1 The Langevin force over a period of time at a point in space

which is called the Langevin TDGLE (TDLGLE). This equation plays a very important role in the analysis of transformation processes, including nucleation. The quantity g(η), however, does not represent the free energy density now. The problem is that in a fluctuating system the free energy and free energy density are not fluctuating quantities, while g(η) depends on a fluctuating OP. That is why the quantity g(η) is called the fluctuating free energy density or nonequilibrium Hamiltonian of the system. To distinguish fluctuations of the Langevin force from the fluctuations of the OP, the former is called noise. It averages out to zero over a long period of time Ω at every point r of the system; see Fig. 12.1:

$$\xi_\Omega \equiv \frac{1}{\Omega}\int_0^\Omega \xi(\mathbf{r},t)\,dt \to 0 \quad \text{for } \Omega\to\infty \text{ and all } \mathbf{r} \qquad (12.30)$$
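The time-averaging in (12.30) is easy to illustrate numerically. The sketch below is a discrete-time stand-in for the Langevin force at a single point r (cf. Fig. 12.1): independent Gaussian kicks of variance ~1/dt mimic a delta-correlated process, and the running time average shrinks as Ω grows. All values are illustrative, dimensionless assumptions.

```python
import numpy as np

# Discrete white-noise stand-in for the Langevin force at a fixed point r.
rng = np.random.default_rng(3)
dt, n = 0.01, 100_000
xi = rng.standard_normal(n) / np.sqrt(dt)   # variance ~ 1/dt mimics delta-correlation

# Running time average (1/Omega) * integral_0^Omega xi(t) dt, cf. Eq. (12.30)
omega = np.arange(1, n + 1) * dt
xi_avg = np.cumsum(xi * dt) / omega

# Individual kicks are large, but the long-time average is small
print(np.abs(xi).max(), abs(xi_avg[-1]))
```

Plotting `xi` against time reproduces the qualitative picture of Fig. 12.1.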

To find other properties of the Langevin force, we will consider evolution of the OP near the homogeneous stable equilibrium state η̄. Assuming that our system is ergodic (see Appendix G), we obtain the first condition on the Langevin force:

$$\langle\xi(\mathbf{r},t)\rangle = 0 \quad \text{for all } \mathbf{r} \text{ and } t \qquad (12.31a)$$

where ⟨·⟩ means averaging over the same distribution function as in (12.2, 12.8); however, now the OP deviations are functions of time. Due to irregularities of the Langevin force, we may assume that it is completely uncorrelated with the thermodynamic force; that is, the second condition is

$$\left\langle\frac{\delta G}{\delta\eta}(\mathbf{r}_1,t_1)\,\xi(\mathbf{r}_2,t_2)\right\rangle = 0 \quad \text{for all } \mathbf{r}_i \text{ and } t_i \qquad (12.31b)$$

Notice that the Langevin force does not change the dissipative property of the OP evolution. Indeed, using (12.31b) for (12.29), we obtain (cf. Eq. (11.2))

$$\frac{d\langle G\rangle}{dt} = \int\left\langle\frac{\delta G}{\delta\eta}\,\frac{d\eta}{dt}\right\rangle d^3x = -\gamma\int\left\langle\left(\frac{\delta G}{\delta\eta}\right)^2\right\rangle d^3x + \int\left\langle\frac{\delta G}{\delta\eta}\,\xi\right\rangle d^3x = -\gamma\int\left\langle\left(\frac{\delta G}{\delta\eta}\right)^2\right\rangle d^3x < 0 \qquad (12.31c)$$

Now let us find the third condition on the Langevin force. In order to do that, we consider evolution of the small deviations Δη(r, t) using two different approaches. On the one hand, statistical mechanics provided us with the expressions for the mean squares of the fluctuations; see (12.10), (12.13). On the other hand, we can use the approach of stochastic dynamics (see Appendix G), which leads to the same quantities. Equating the results should yield the sought condition. Linearizing (12.29b), we obtain the equation for the small deviations:

$$\frac{\partial\Delta\eta}{\partial t} = -\gamma\left[\frac{\partial^2 g}{\partial\eta^2}(\bar\eta)\,\Delta\eta - \kappa\nabla^2\Delta\eta\right] + \xi(\mathbf{r},t) \qquad (12.32)$$

12.4.1 Homogeneous Langevin Force

Let us start with the analysis of the homogeneous deviations Δη(t), which appear in the system if the noise ξ(t) is homogeneous. Then (12.32) turns into an ODE whose solution satisfying the initial condition Δη(t = 0) = Δη(0) takes the form

$$\Delta\eta(t) = \Delta\eta(0)\,e^{-t/\tau_0} + e^{-t/\tau_0}\int_0^t e^{u/\tau_0}\,\xi(u)\,du \qquad (12.33a)$$

where τ0 is the characteristic time of relaxation of the homogeneous deviation from the equilibrium state η̄; see Sect. 10.2:

$$\tau_0 = \left[\gamma\,\frac{\partial^2 g}{\partial\eta^2}(\bar\eta)\right]^{-1} \qquad (10.12)$$


Now the deviation of the OP away from this state is a random (stochastic) process because it is driven by the Langevin force. Since ⟨ξ(u)⟩ = 0 for all u, the mean increment of the OP vanishes:

$$\langle\Delta\eta(t)\rangle = \Delta\eta(0)\,e^{-t/\tau_0} \to 0 \quad \text{for } t\to\infty \qquad (12.33b)$$

For the mean square increment, we obtain

$$\langle\Delta\eta^2(t)\rangle = \Delta\eta^2(0)\,e^{-2t/\tau_0} + 2\Delta\eta(0)\,e^{-2t/\tau_0}\int_0^t e^{u/\tau_0}\langle\xi(u)\rangle\,du + e^{-2t/\tau_0}\int_0^t\!\!\int_0^t e^{(u_1+u_2)/\tau_0}\,\langle\xi(u_1)\,\xi(u_2)\rangle\,du_1\,du_2 \qquad (12.34)$$

The second term on the right-hand side of this equation is identically zero due to (12.31a). The first term vanishes over time. In the third term, we have a quantity

$$K_\xi(s) \equiv \langle\xi(u)\,\xi(u+s)\rangle \qquad (12.35)$$

which is called the autocorrelation function of the noise ξ and is a measure of the stochastic correlation between the value of the fluctuating variable ξ at time u1 and its value at time u2 = u1 + s (see Appendix G). Using the expressions (G.54) and (12.31a) for the integral in the third term, we arrive at the expression

$$\langle\Delta\eta^2(t)\rangle = \Delta\eta^2(0)\,e^{-2t/\tau_0} + \frac{\tau_0}{2}\left(1-e^{-2t/\tau_0}\right)\int_{-\infty}^{+\infty}K_\xi(s)\,ds \;\xrightarrow[\;t\to\infty\;]{}\; \frac{\tau_0}{2}\int_{-\infty}^{+\infty}K_\xi(s)\,ds \qquad (12.36)$$

Notice that if Δη²(0) were itself equal to the limiting expression of ⟨Δη²(t)⟩, then the latter would be constant throughout the process, which proves the principle that statistical equilibrium, once attained, has a natural tendency to persist. On the other hand, from the statistical mechanics analysis in Sect. 12.2, (12.13), we know that the average square of the homogeneous deviation of the OP is equal to k_BT/[V ∂²g(η̄)/∂η²]. Equating it to the limiting expression in (12.36), we obtain an integral condition on the autocorrelation function of the Langevin force:

$$\int_{-\infty}^{+\infty}K_\xi(s)\,ds = 2\,\frac{\gamma k_B T}{V} \qquad (12.37)$$
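The content of (12.37) can be checked in a small stochastic simulation: drive the linearized homogeneous equation with white noise whose autocorrelation integrates to 2γk_BT/V and compare the stationary variance of Δη with the equipartition value k_BT/(V ∂²g/∂η²) of (12.13). This is only a sketch; all parameter values below are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama integration of d(deta)/dt = -deta/tau0 + xi, with the noise
# strength fixed by the integral condition (12.37).
gamma, kB_T, V = 1.0, 0.5, 50.0
g2 = 2.0                                 # curvature d2g/deta2 at the stable state
tau0 = 1.0 / (gamma * g2)                # relaxation time, Eq. (10.12)
D = 2.0 * gamma * kB_T / V               # integrated autocorrelation of the noise

rng = np.random.default_rng(0)
dt, n_steps, n_walkers = 1e-3, 20_000, 2000
deta = np.zeros(n_walkers)               # independent realizations, all starting at 0

for _ in range(n_steps):
    deta += -deta / tau0 * dt + np.sqrt(D * dt) * rng.standard_normal(n_walkers)

print(deta.var(), kB_T / (V * g2))       # simulated vs equipartition variance
```

The two printed numbers should agree to within the statistical error of the ensemble (a few percent here), which is precisely the fluctuation-dissipation balance expressed by (12.37).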

Now think about how the properties of the Langevin force determine its autocorrelation function. If the process ξ(t) has some sort of regularity, then the correlator K_ξ(s) extends over a range of the time interval τ_cor, defined in (G.52). On the contrary, if we assume that ξ(t) is extremely irregular, then τ_cor is zero and we must choose K_ξ(s) ∝ δ(s). Thus, we obtain the third condition on the spatially homogeneous Langevin stochastic force:

$$\langle\xi(t_1)\,\xi(t_2)\rangle = 2\,\frac{\gamma k_B T}{V}\,\delta(t_2-t_1) \qquad (12.38)$$

12.4.2 Inhomogeneous Langevin Force

Let us now analyze the autocorrelation function of the inhomogeneous Langevin force ξ(r, t). For a stationary process in a statistically homogeneous system, the autocorrelation function depends only on the distance in space |r| and the time lag s between the points (r1, t1) and (r2, t2):

$$K_\xi(|\mathbf{r}|,s) = \langle\xi(\mathbf{r}_1,t_1)\,\xi(\mathbf{r}_2,t_2)\rangle; \qquad \mathbf{r}=\mathbf{r}_2-\mathbf{r}_1,\quad s=t_2-t_1 \qquad (12.39)$$

where the averaging is meant over time or the equilibrium ensemble (12.8); see (G.5) and the discussion of ergodicity in Appendix G. In (12.39) we also took into account condition (12.31a). Introducing the Fourier transform of the Langevin force,

$$\xi(\mathbf{r},t) = \sum_{\{\mathbf{k}\}}\xi_V(\mathbf{k},t)\,e^{i\mathbf{k}\mathbf{r}} \qquad (12.40a)$$

$$\xi_V(\mathbf{k},t) = \frac{1}{V}\int_V \xi(\mathbf{r},t)\,e^{-i\mathbf{k}\mathbf{r}}\,d\mathbf{r} \qquad (12.40b)$$

rearranging the product of the Fourier modes of the Langevin force as follows:

$$\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2) = \frac{1}{V^2}\int_V\!\!\int_V \xi(\mathbf{r}_1,t_1)\,\xi(\mathbf{r}_2,t_2)\,e^{-i\mathbf{k}(\mathbf{r}_2-\mathbf{r}_1)}\,d\mathbf{r}_1\,d\mathbf{r}_2$$

and averaging both sides of the equation, we obtain the relation

$$\langle\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2)\rangle = \frac{1}{V^2}\int_V\!\!\int_V K_\xi(\mathbf{r},s)\,e^{-i\mathbf{k}\mathbf{r}}\,d\mathbf{R}\,d\mathbf{r} \qquad (12.41a)$$

where R = ½(r1 + r2) and dr1 dr2 = dR dr. Integration over R in (12.41a) can be completed because the integrand is independent of this variable. The resulting relation

$$\langle\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2)\rangle = \frac{1}{V}\int_V K_\xi(\mathbf{r},s)\,e^{-i\mathbf{k}\mathbf{r}}\,d\mathbf{r} \qquad (12.41b)$$

may be rewritten in the form

$$K_{\xi,V}(\mathbf{k},s) = \langle\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2)\rangle \qquad (12.41c)$$

which shows that the Fourier transform of the two-time correlator of the Langevin force equals the averaged two-time product of the Fourier transforms of the same process. This result may be considered a space analog of the Wiener-Khinchin theorem (see Appendix G). The properties of the Langevin force correlator can be found from the analysis of evolution of the OP fluctuations in (12.32). In Fourier space, it turns into the following ODE:

$$\frac{d\Delta\eta_V(\mathbf{k},t)}{dt} = -\gamma\left[\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) + \kappa|\mathbf{k}|^2\right]\Delta\eta_V(\mathbf{k},t) + \xi_V(\mathbf{k},t) \qquad (12.42)$$

This equation shows that the Fourier modes Δη_V(k, t) are statistically independent. It is similar to the ODE for the homogeneous deviations if the homogeneous Langevin force ξ(t) is replaced by the Fourier component of the inhomogeneous one and the relaxation time constant τ0 in (10.12) is replaced by (cf. Eq. (11.8b))

$$\tau_{|\mathbf{k}|} = \left[\gamma\left(\frac{\partial^2 g}{\partial\eta^2}(\bar\eta) + \kappa|\mathbf{k}|^2\right)\right]^{-1} = -\frac{1}{\beta(|\mathbf{k}|)} \qquad (12.43)$$

Hence, following the logic of the previous section, we may write down a solution of (12.42) in a form like that of (12.33a) and find the relations for the Fourier modes of the Langevin force similar to those of (12.37) and (12.38):

$$\int_{-\infty}^{+\infty}\langle\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2)\rangle\,ds = 2\,\frac{\gamma k_B T}{V} \qquad (12.44)$$

$$\langle\xi_V^*(\mathbf{k},t_1)\,\xi_V(\mathbf{k},t_2)\rangle = 2\,\frac{\gamma k_B T}{V}\,\delta(t_2-t_1) \qquad (12.45)$$

Notice that due to the equipartitioning of energy (see Sect. 12.1), the right-hand sides of (12.44), (12.45) are independent of the wavevector k. Let us sum (12.44) over all wavevectors {k} with the weight factor exp(ikρ). Then, using (12.41) we obtain

$$\sum_{\{\mathbf{k}\}}\int_{-\infty}^{+\infty}K_{\xi,V}(\mathbf{k},s)\,e^{i\mathbf{k}\boldsymbol{\rho}}\,ds = 2\,\frac{\gamma k_B T}{V}\sum_{\{\mathbf{k}\}}e^{i\mathbf{k}\boldsymbol{\rho}} \qquad (12.46)$$

Using the inverse Fourier transform formula (F.5a) and the formula

$$\sum_{\{\mathbf{k}\}}e^{i\mathbf{k}\mathbf{r}} \approx V\,\delta(\mathbf{r})$$

we obtain that

$$\int_{-\infty}^{+\infty}K_\xi(\mathbf{r},s)\,ds = 2\gamma k_B T\,\delta(\mathbf{r}) \qquad (12.47)$$

In Sect. 12.4.1 we assumed total irregularity of the homogeneous process ξ(t), which led to the condition (12.38). If we assume the same irregularity for the process ξ(r, t) at each point r, then we obtain the third condition on the inhomogeneous Langevin stochastic force in the form

$$K_\xi(\mathbf{r},s) = \langle\xi(\mathbf{r}_1,t_1)\,\xi(\mathbf{r}_2,t_2)\rangle = 2\gamma k_B T\,\delta(\mathbf{r}_2-\mathbf{r}_1)\,\delta(t_2-t_1) \qquad (12.48)$$

Notice that δ(r) appears in (12.48) because the k-modes are statistically independent, and δ(t) appears because they are irregular (no memory).

12.5 Evolution of the Structure Factor

In Sect. 12.1 we came close to introducing a very important quantity, the structure factor K_{η,V}(k, t), which is the Fourier transform of the one-time two-point correlation function K_η(r1, r2, t):

$$K_{\eta,V}(\mathbf{k},t) = \frac{1}{V}\int_V K_\eta(\mathbf{r}_1,\mathbf{r}_2,t)\,e^{-i\mathbf{k}\mathbf{r}}\,d\mathbf{r} \qquad (12.49a)$$

$$K_\eta(\mathbf{r}_1,\mathbf{r}_2,t) = \sum_{\{\mathbf{k}\}}K_{\eta,V}(\mathbf{k},t)\,e^{i\mathbf{k}\mathbf{r}} \qquad (12.49b)$$

The physical significance of this quantity is that it is measurable in experiments on diffuse X-ray and neutron scattering during phase transformations. We already know something about this quantity. Indeed, compare (12.49b) with (12.10, 12.19) and notice that the equilibrium value of the structure factor at the stable homogeneous state η̄ is

$$K_{\eta,V}(\mathbf{k},t)\big|_{t\to\infty} = \left\langle\left|\Delta\eta_V(\mathbf{k})\right|^2\right\rangle = \frac{k_B T}{V\left[\dfrac{\partial^2 g}{\partial\eta^2}(\bar\eta)+\kappa|\mathbf{k}|^2\right]} \qquad (12.50)$$

Let us find the expression for the structure factor away from equilibrium. To do that, we substitute (12.19) into (12.49a) and obtain

$$K_{\eta,V}(\mathbf{k},t) = \frac{1}{V}\sum_{\{\mathbf{k}'\}}\sum_{\{\mathbf{k}''\}}\left\langle\Delta\eta_V(\mathbf{k}',t)\,\Delta\eta_V(\mathbf{k}'',t)\right\rangle\int_V e^{i(\mathbf{k}'\mathbf{r}_1+\mathbf{k}''\mathbf{r}_2-\mathbf{k}\mathbf{r})}\,d\mathbf{r} \qquad (12.51a)$$

As the averaging in this expression is over the distribution function (12.8), the modes with k′ + k″ ≠ 0 are independent; hence,

$$K_{\eta,V}(\mathbf{k},t) = \frac{1}{V}\sum_{\{\mathbf{k}'\}}\left\langle\left|\Delta\eta_V(\mathbf{k}',t)\right|^2\right\rangle\int_V e^{i(\mathbf{k}'-\mathbf{k})\mathbf{r}}\,d\mathbf{r} \qquad (12.51b)$$

Taking into account the mathematical formulae

$$\int_V e^{i\mathbf{k}\mathbf{r}}\,d\mathbf{r}\;\xrightarrow[\;V\to\infty\;]{}\;(2\pi)^3\,\delta(\mathbf{k}); \qquad \sum_{\{\mathbf{k}\}}\;\xrightarrow[\;V\to\infty\;]{}\;\frac{V}{(2\pi)^3}\int d\mathbf{k}$$

we obtain that in the thermodynamic limit of V → ∞

$$K_{\eta,V}(\mathbf{k},t) = \left\langle\left|\Delta\eta_V(\mathbf{k},t)\right|^2\right\rangle \qquad (12.51c)$$

the structure factor is the averaged square of the Fourier modes of the OP fluctuations even if the system is not at equilibrium. To derive an evolution equation for the structure factor, we differentiate (12.51c) with respect to time and use (12.42). Then,

$$\frac{dK_{\eta,V}(\mathbf{k},t)}{dt} = 2\beta(\mathbf{k})\,K_{\eta,V}(\mathbf{k},t) + \left\langle\Delta\eta_V^*(\mathbf{k},t)\,\xi_V(\mathbf{k},t)\right\rangle + \left\langle\xi_V^*(\mathbf{k},t)\,\Delta\eta_V(\mathbf{k},t)\right\rangle \qquad (12.52)$$

To estimate the second and third terms on the right-hand side, we use the solution of (12.42) with the initial condition Δη_V(k, 0):

$$\Delta\eta_V(\mathbf{k},t) = \Delta\eta_V(\mathbf{k},0)\,e^{-t/\tau_k} + e^{-t/\tau_k}\int_0^t e^{u/\tau_k}\,\xi_V(\mathbf{k},u)\,du \qquad (12.53)$$

Substitution of (12.53) into (12.52) yields two types of averages. For the first one, we obtain

$$\left\langle\Delta\eta_V^*(\mathbf{k},0)\,\xi_V(\mathbf{k},t)\right\rangle = \left\langle\xi_V^*(\mathbf{k},t)\,\Delta\eta_V(\mathbf{k},0)\right\rangle = 0 \qquad (12.54)$$

because the initial conditions and the Langevin force are completely uncorrelated. The second type of averages can easily be calculated with the help of (12.44), (12.45). Indeed, substituting (12.45), (12.54) into (12.52), we obtain the sought evolution equation for the structure factor:

$$\frac{dK_{\eta,V}}{dt}(\mathbf{k},t) = 2\beta(\mathbf{k})\,K_{\eta,V}(\mathbf{k},t) + 4\,\frac{\gamma k_B T}{V} \qquad (12.55)$$

A general solution of this equation takes the form

$$K_{\eta,V}(\mathbf{k},t) = K_{\eta,V}(\mathbf{k},0)\,e^{2\beta(\mathbf{k})t} - \frac{2\gamma k_B T}{V\beta(\mathbf{k})}\left(1-e^{2\beta(\mathbf{k})t}\right)\;\xrightarrow[\;t\to\infty\;]{}\;\frac{2k_B T}{V\left[\dfrac{\partial^2 g}{\partial\eta^2}(\bar\eta)+\kappa|\mathbf{k}|^2\right]} \qquad (12.56)$$
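As a quick consistency check, the closed form (12.56) can be compared against a direct numerical integration of the linear ODE (12.55) for one mode. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Check that the closed form (12.56) solves the ODE (12.55):
#   dK/dt = 2*beta(k)*K + 4*gamma*kB_T/V,  with beta(k) = -gamma*(g2 + kappa*k^2) < 0
gamma, kB_T, V, kappa, g2, k = 1.0, 0.3, 100.0, 1.0, 2.0, 0.7
beta = -gamma * (g2 + kappa * k**2)
K0 = 0.01                               # arbitrary initial structure factor

def K_exact(t):
    # Eq. (12.56)
    return K0 * np.exp(2*beta*t) - (2*gamma*kB_T/(V*beta)) * (1 - np.exp(2*beta*t))

K, dt, T = K0, 1e-4, 3.0                # forward-Euler integration of (12.55)
for _ in range(int(round(T / dt))):
    K += dt * (2*beta*K + 4*gamma*kB_T/V)

K_inf = 2*kB_T / (V*(g2 + kappa*k**2))  # asymptotic value in (12.56)
print(K, K_exact(T), K_inf)
```

For this stable mode the three printed numbers nearly coincide, since 2|β|T ≫ 1 puts the solution essentially at its equilibrium value.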

Strictly speaking, the initial value Δη_V(k, 0) is not a stochastic quantity and is completely independent of the Langevin force; it is presented in (12.56) as K_{η,V}(k, 0) for the sake of similarity.² It is important to understand the properties of the structure factor because many of its features, for instance, the dependence on the initial conditions and wavenumber, can be verified experimentally. For instance, it is instructive to compare (12.55) with (12.42) and (12.56) with (12.44). As t → ∞, the average value ⟨Δη_V(k, t)⟩ approaches zero because ⟨ξ_V(k, t)⟩ = 0, e.g., see (12.33a), but ⟨|Δη_V(k, t)|²⟩ approaches a finite value. This happens because in averaging a quantity, its positive and negative fluctuations cancel out, while in averaging its square, they add up. Keep in mind that we have considered here a linear system where the Fourier modes do not interact; see (12.8, 12.42). In a nonlinear system (see Eq. (12.29)), the Fourier modes interact. As a result, even ⟨Δη_V(k, t)⟩ does not approach zero as t → ∞. Although (12.55) was derived for a stable homogeneous state η̄, it can be used for an unstable one, but for a limited time of evolution only. The limitation comes from the magnitude of the Fourier modes Δη_V(k, t). Indeed, (12.42) can be used for as long as the modes are small. Imagine a system which was initially equilibrated at a state where (12.11) was true. The structure factor of the system is described by the asymptotic value of (12.56). After that, suddenly, we change the conditions, e.g., by lowering the temperature, such that condition (12.11) is no longer true. Then solution (12.56), where β(k) is positive or negative depending on |k|, can be used to describe the initial stages of growth of the modes. When the modes are no longer small, the linearized equation (12.42) should be replaced by the full evolution equation (12.29) in Fourier space. In this case the evolution equation for the structure factor will depend on the Fourier transforms of the higher-order two-point correlation functions [3, 4].

² This quantity, however, may be random but for completely different reasons.

12.6 *Drumhead Approximation of the Evolution Equation

In Chap. 11 we found that theoretical analysis of the OP evolution may be significantly advanced in situations when a thin transition layer develops in the system. In Sect. 11.4 we derived the drumhead (sharp-interface) approximation of TDGLE (see Eq. (11.30)), which allowed us to identify the "driving forces" of the interfacial motion. In this section we will apply the drumhead approximation to a fluctuating system. The starting point is the Langevin TDGLE (12.29) for the OP field with the correlation requirements (12.31, 12.48) on the stochastic Langevin force. Let us repeat here some of the key steps of the derivation in Sect. 11.4, now with the Langevin force included. Assuming that a thin layer develops in the system, inside which the OP field changes sharply while in the rest of the system it practically does not change at all, we obtain an equation (cf. Eq. (11.28) and Fig. 9.9):

$$\gamma\kappa\,\frac{d^2\eta}{du^2} + (v_n + 2\gamma\kappa K)\,\frac{d\eta}{du} - \gamma\,\frac{\partial g}{\partial\eta} + \xi = 0 \qquad (12.57)$$

where η = η(u), v_n and K depend on (v, w, t), and ξ depends on (u, v, w, t). Then, applying the averaging operator (9.96) to the left-hand side, that is, multiplying by dη/du and integrating over the thickness of the layer, cf. (11.29a), we obtain

$$\gamma\left(\frac{v_n}{m} + 2K\right)A\circ\left(\kappa\,\frac{d\eta}{du}\right) - \gamma\,[g]_{\alpha\beta} + \zeta = 0 \qquad (12.58)$$

where we introduced a new stochastic force

$$\zeta(v,w,t) \equiv A\circ\xi = \int_{u_\beta}^{u_\alpha}\xi(u,v,w,t)\,\frac{d\eta}{du}\,du \qquad (12.59)$$

The correlation properties of the force ζ need to be analyzed. Obviously, cf. (12.31a),

$$\langle\zeta(v,w,t)\rangle = 0 \qquad (12.60)$$

For the autocorrelation function of ζ, we may write

$$\langle\zeta(v,w,t)\,\zeta(v',w',t')\rangle = \int_{u_\beta}^{u_\alpha}du\,\frac{d\eta}{du}\int_{u_\beta}^{u_\alpha}du'\,\frac{d\eta}{du'}\,\langle\xi(u,v,w,t)\,\xi(u',v',w',t')\rangle$$

Then, using (12.48) we obtain

$$\begin{aligned}\langle\zeta(v,w,t)\,\zeta(v',w',t')\rangle &= 2\gamma k_B T\,\delta(v-v')\,\delta(w-w')\,\delta(t-t')\int_{u_\beta}^{u_\alpha}\!\!\int_{u_\beta}^{u_\alpha}\frac{d\eta}{du}\,\frac{d\eta}{du'}\,\delta(u-u')\,du\,du'\\[2pt] &= 2\gamma k_B T\left[A\circ\frac{d\eta}{du}\right]\delta(v-v')\,\delta(w-w')\,\delta(t-t')\end{aligned} \qquad (12.61)$$

In the second line of (12.61), one integration was removed by the δ-function. Using the definitions of the nonequilibrium interfacial energy σ (11.32) and mobility m (11E.6), we obtain from (12.57)-(12.61) the drumhead approximation of the Langevin TDGLE, which is a stochastic extension of EEID (11.30):

$$v_n = m\left(\frac{[g]_{\alpha\beta}}{\sigma} - 2K\right) - \frac{\kappa}{\sigma}\,\zeta \qquad (12.62a)$$

$$\langle\zeta(v,w,t)\rangle = 0 \qquad (12.62b)$$

$$\langle\zeta(v,w,t)\,\zeta(v',w',t')\rangle = 2\gamma k_B T\,\frac{\sigma}{\kappa}\,\delta(v-v')\,\delta(w-w')\,\delta(t-t') \qquad (12.62c)$$

In the following sections, we will use this equation for the purposes of the interfacial stability analysis and the nucleation problem.

12.6.1 Evolution of the Interfacial Structure Factor

In Sect. 11.3.4 we examined stability of a moving plane interface by analyzing evolution of the capillary waves on the plane interface; see Sect. 9.6 for the definition. Let us now use the drumhead approximation (12.62) for the same purpose. To do that, we resolve the u-coordinate equation U(x, t) = 0 as follows:

$$z = z(\mathbf{x}_2,t) = \sum_{\{\mathbf{k}_2\}}\tilde z(\mathbf{k}_2,t)\,e^{i\mathbf{k}_2\mathbf{x}_2} \qquad (12.63a)$$

$$\tilde z(\mathbf{k}_2,t) = \frac{1}{S}\int_S z(\mathbf{x}_2,t)\,e^{-i\mathbf{k}_2\mathbf{x}_2}\,d\mathbf{x}_2 \qquad (12.63b)$$

where z is the deflection of the drumhead interface from the plane; z̃(k2, t) are the Fourier components of the deflection; x2 = (x, y) and k2 = (kx, ky) are the two-dimensional vectors in the geometric and Fourier spaces; and S is the area of the interface. In general (see Appendix C)

$$v_n = \frac{\partial z}{\partial t}\,(\mathbf{u}\cdot\mathbf{j}_z) \qquad (12.64a)$$

$$K = -\frac{1}{2}\,\nabla_2\cdot\frac{\nabla_2 z(\mathbf{x}_2,t)}{\sqrt{1+\left|\nabla_2 z(\mathbf{x}_2,t)\right|^2}}, \qquad \nabla_2 = \mathbf{j}_x\frac{\partial}{\partial x} + \mathbf{j}_y\frac{\partial}{\partial y} \qquad (12.64b)$$

For the capillary waves (v, w) ≈ (x, y) and

$$v_n \approx \frac{\partial z}{\partial t} = \sum_{\{\mathbf{k}_2\}}\frac{\partial\tilde z}{\partial t}(\mathbf{k}_2,t)\,e^{i\mathbf{k}_2\mathbf{x}_2} \qquad (12.65a)$$

$$K \approx -\frac{1}{2}\left(\frac{\partial^2 z}{\partial x^2}+\frac{\partial^2 z}{\partial y^2}\right) = \frac{1}{2}\sum_{\{\mathbf{k}_2\}}\tilde z(\mathbf{k}_2,t)\,|\mathbf{k}_2|^2\,e^{i\mathbf{k}_2\mathbf{x}_2} \qquad (12.65b)$$

Then, substituting (12.65) into (12.62), we obtain an equation:

$$\sum_{\{\mathbf{k}_2\}}\left[\frac{\partial\tilde z}{\partial t} + \tilde z\,m|\mathbf{k}_2|^2 + \frac{\kappa}{\sigma}\,\tilde\zeta\right]e^{i\mathbf{k}_2\mathbf{x}_2} = 0 \qquad (12.66a)$$

where the Fourier components of the Langevin force ζ have the following properties:

$$\langle\tilde\zeta(\mathbf{k}_2,t)\rangle = 0 \qquad (12.66b)$$

$$\int_{-\infty}^{+\infty}\langle\tilde\zeta^*(\mathbf{k}_2,t)\,\tilde\zeta(\mathbf{k}_2,t+s)\rangle\,ds = 2\,\frac{\gamma\sigma k_B T}{\kappa S} \qquad (12.66c)$$

Because each term in the sum in (12.66a) depends on the value of only one wavenumber, they must vanish separately. Then, using the same method as the one we used for (12.32), we obtain the solution:

$$\tilde z(\mathbf{k}_2,t) = \tilde z(\mathbf{k}_2,0)\,e^{-t/\tau_z} - \frac{\kappa}{\sigma}\,e^{-t/\tau_z}\int_0^t e^{s/\tau_z}\,\tilde\zeta(\mathbf{k}_2,s)\,ds \qquad (12.67a)$$

$$\tau_z = \frac{1}{m|\mathbf{k}_2|^2} = \frac{1}{\beta_z(|\mathbf{k}|)} \qquad (12.67b)$$

where z̃(k2, 0) is the initial condition for the capillary wave and τz is the time scale of the evolution of the waves. Notice that the latter depends strongly on the wavenumber of the wave, diverging for the very long ones. Taking (12.62) into account, we obtain expressions for the averaged Fourier components of the deflections:

$$\langle\tilde z(\mathbf{k}_2,t)\rangle = \tilde z(\mathbf{k}_2,0)\,e^{-t/\tau_z}\;\xrightarrow[\;t\to\infty\;]{}\;0 \quad \text{for all } |\mathbf{k}_2| \qquad (12.68a)$$

and the two-time autocorrelation function:

$$\langle\tilde z(\mathbf{k}_2,t)\,\tilde z^*(\mathbf{k}_2,0)\rangle = \left|\tilde z(\mathbf{k}_2,0)\right|^2 e^{-t/\tau_z}\;\xrightarrow[\;t\to\infty\;]{}\;0 \quad \text{for all } |\mathbf{k}_2| \qquad (12.68b)$$

Then, introducing the interfacial structure factor

$$K_{z,S}(\mathbf{k}_2,t) \equiv \left\langle\left|\tilde z(\mathbf{k}_2,t)\right|^2\right\rangle \qquad (12.69)$$

we obtain an expression for its evolution:

$$\begin{aligned}K_{z,S}(\mathbf{k}_2,t) &= K_{z,S}(\mathbf{k}_2,0)\,e^{-2t/\tau_z} + \frac{\tau_z}{2}\,\frac{\kappa^2}{\sigma^2}\left(1-e^{-2t/\tau_z}\right)\int_{-\infty}^{+\infty}\langle\tilde\zeta^*(\mathbf{k}_2,t)\,\tilde\zeta(\mathbf{k}_2,t+s)\rangle\,ds\\[2pt] &= K_{z,S}(\mathbf{k}_2,0)\,e^{-2t/\tau_z} + \frac{k_B T}{\sigma|\mathbf{k}_2|^2 S}\left(1-e^{-2t/\tau_z}\right)\;\xrightarrow[\;t\to\infty\;]{}\;\frac{k_B T}{\sigma|\mathbf{k}_2|^2 S}\end{aligned} \qquad (12.70)$$

Equation (12.70) shows that the destabilizing effect of fluctuations and the stabilizing effect of the surface tension bring up one more length scale, the fluctuation length:

$$l_F \equiv \frac{k_B T}{\sigma} \qquad (12.71)$$

which sets the scale for the structure factor. Compare (12.70) with the expression for the bulk structure factor (12.56) and notice that K_{z,S}(k2, t) diverges for |k2| → 0. Using the inverse Fourier transform (12.63b), we can interpret this result to mean that the surface-averaged square of the long waves of the interfacial deflection grows without bound because the stabilizing influence of the surface tension vanishes for these waves.
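The long-wave divergence of the asymptotic value in (12.70) can be tabulated directly; the numbers below are illustrative assumptions.

```python
import numpy as np

# Long-time limit of the interfacial structure factor, Eq. (12.70):
#   K_z -> kB_T / (sigma * |k2|^2 * S), diverging as |k2| -> 0.
kB_T, sigma, S = 0.1, 1.0, 400.0
k2 = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
K_z = kB_T / (sigma * k2**2 * S)
print(K_z)   # each halving of |k2| quadruples the asymptotic amplitude
```

The 1/|k2|² growth toward small wavenumbers is the quantitative form of the statement that surface tension barely restores the longest capillary waves.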

12.6.2 Nucleation in the Drumhead Approximation

In this subsection we apply the stochastic EEID (12.62a) to the problem of nucleation of a new phase and compare the results to those of CNT; see Sect. 4.2. One of the challenges that we face is to define the main quantity of CNT, the rate of nucleation, using the proper quantities of FTM. This can be done using the concept of escape time, which is inversely proportional to the nucleation rate; see Sect. 4.2.3. For a bistable potential, the escape time is defined as the time needed for a system, which was initially in the vicinity of the minimum with the higher value of the potential, to reach for the first time a vicinity of the minimum with the lower value of the potential. In Appendix G we calculated the escape time for a "particle in a bistable potential" if the Langevin equation for the particle is known. In what follows we will derive the Langevin equation for the nucleus in the drumhead approximation, calculate the escape time using the "particle in a bistable potential" formula, and compare the escape time to the stationary nucleation rate of (4.51), (4.52). We assume that the transition layer has the shape of a sphere Ω_R, which surrounds a particle of the new phase. In this case

$$K = \frac{1}{R(t)} \qquad (11.34a)$$

$$v_n = \frac{dR(t)}{dt} \quad \text{on } \Omega_R \qquad (11.34b)$$

where R(t) is now a random variable that represents the radius of the particle. The stochastic force depends not only on time but also on the coordinates (v, w) of the surface of the sphere. To eliminate this superfluous dependence, we average (12.62a) over the surface of the sphere, taking into account (11.32) and that

$$\int_{\Omega_R}dv\,dw = 4\pi R^2$$

Then

$$\frac{dR}{dt} = 2m\left(\frac{1}{R^*}-\frac{1}{R}\right) - \frac{\kappa}{4\pi R^2\sigma}\,\psi(t) \qquad (12.72)$$

where R* is the radius of the critical nucleus (cf. Eq. (11.34c)). The averaged force is

$$\psi(t) \equiv \int_{\Omega_R}\zeta(v,w,t)\,dv\,dw \qquad (12.73a)$$

$$\langle\psi(t)\rangle = 0 \qquad (12.73b)$$

$$\langle\psi(t)\,\psi(t')\rangle = 8\pi\gamma k_B T\,\frac{\sigma}{\kappa}\,R^2\,\delta(t-t') \qquad (12.73c)$$

Notice in (12.72) that the smaller the particle, the greater the effect of the stochastic force on it. In principle, the particle equations (12.72), (12.73) could be used in (G.40c) or (G.41c) to derive an expression for the escape time. The problem is that those formulae apply to a stochastic force that does not depend on the random variable itself, in this case the radius R(t) — the additive noise. To eliminate the R-dependence from the intensity and autocorrelation function of the random force (see Eq. (12.73c)), we multiply all terms of (12.72) by 8πR and introduce a new random variable, the area of the particle's surface Ω_R,

$$S = 4\pi R^2 \qquad (12.74)$$

and the new force

$$\varphi(t) \equiv -\frac{2\kappa}{R\sigma}\,\psi(t) \qquad (12.75a)$$

$$\langle\varphi(t)\rangle = 0 \qquad (12.75b)$$

$$\langle\varphi(t)\,\varphi(t')\rangle = \Gamma\,\delta(t-t') \qquad (12.75c)$$

$$\Gamma = 32\pi\,\frac{m k_B T}{\sigma} \qquad (12.75d)$$

Contrary to the previous subsection, S is now a random variable. The Langevin equation for S takes the form

$$\frac{dS}{dt} = 16\pi m\left(\sqrt{\frac{S}{S^*}} - 1\right) + \varphi(t) \qquad (12.76)$$

where S* is the surface area of the critical nucleus; see (12.74) and (11.33). This equation describes a "random walk of a particle" in the potential field

Fig. 12.2 Bistable potential Û(S), (12.77), with the two walls at S = 0 and S = S_f

$$U(S) = 16\pi m\left(S - \frac{2}{3}\,\frac{S^{3/2}}{\sqrt{S^*}}\right) \qquad (12.77)$$

which is normalized such that U(0) = 0; see Fig. 12.2 and cf. Fig. 4.11. In the spirit of CNT, particles cannot grow to infinite sizes. To terminate the particle growth past a certain size, we erect an infinitely high "wall" at S = S_f, although the precise value of S_f does not matter as long as S_f > S* (cf. the discussion preceding Eq. (4.41)). Also, an infinite wall is erected at S = 0. The potential (12.77) with the two walls, designated as Û(S) (see Fig. 12.2), has two minima, at 0 and S_f, and a maximum at S*. Hence, this is a bistable potential. If the potential barrier is high (cf. Eq. (G.39)),

$$\hat U(S^*) \gg \Gamma \qquad (12.78)$$

that is, the fluctuations are weak, then for the escape time we can use the expression (G.41c). Establishing the relations

$$U(\omega_b) \to \hat U(S^*) = \frac{16}{3}\,\pi m S^* \qquad (12.79a)$$

$$U'(\omega_a) \to \hat U'(0) = 16\pi m \qquad (12.79b)$$

$$U''(\omega_b) \to \hat U''(S^*) = -\frac{8\pi m}{S^*} \qquad (12.79c)$$

$$a_2 \to \Gamma \qquad (12.79d)$$

and substituting them into (G.41c), we obtain

$$\tau_{of} = \frac{1}{8m}\sqrt{\frac{k_B T\,S^*}{\pi\sigma}}\;\exp\!\left(\frac{\sigma S^*}{3k_B T}\right) = \frac{\sqrt{k_B T\,\sigma}}{2m\,[g]_{\alpha\beta}}\;\exp\!\left(\frac{16\pi\sigma^3}{3\left([g]_{\alpha\beta}\right)^2 k_B T}\right) \qquad (12.80)$$

The last expression in (12.80) was obtained by expressing S* through the interfacial energy and driving force, (12.74), (11.33). Notice that this expression is independent of the cutoff radius R_f. Comparison of (12.80) for the escape time with (4.51), (4.52) for the stationary nucleation rate J_S shows that the FTM result has the correct exponential and the Zeldovich factor (the square root) but fails to reproduce the dependence on the volume of the system. The reason is that the presented method takes into account only spherically symmetric fluctuations of the shape, that is, growth or shrinkage, and omits the shifts and distortions of the nucleus.
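The geometry of the potential (12.77) underlying this escape-time calculation is easy to verify numerically: the only interior stationary point is a maximum at S = S*, with the barrier height (16/3)πmS* quoted in (12.79a). The values of m and S* below are illustrative assumptions.

```python
import numpy as np

# Bistable potential (12.77): U(S) = 16*pi*m*(S - (2/3)*S**1.5/sqrt(S_star))
m, S_star = 0.5, 4.0

def U(S):
    return 16 * np.pi * m * (S - (2.0/3.0) * S**1.5 / np.sqrt(S_star))

S = np.linspace(0.0, 3 * S_star, 600_001)
S_top = S[np.argmax(U(S))]               # numerical location of the barrier top
print(S_top, U(S_star), 16 * np.pi * m * S_star / 3)
```

The barrier top sits at S* and its height matches (12.79a); dU/dS = 16πm(1 − √(S/S*)) also confirms the slope Û′(0) = 16πm of (12.79b).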

12.7 Homogeneous Nucleation in the Ginzburg-Landau System

In this section we demonstrate how TDLGLE (12.29b) can be used to study the nucleation process, which we analyzed in Sect. 4.2 using the methods of CNT. This is done by numerically calculating the escape time of a system described by the tangential potential (8.30) with a specified level of the driving force D [5]. The system was prepared initially in the state of the homogeneous phase α, and Neumann-type boundary conditions (9.17b) were maintained throughout the calculations. In addition to the driving force D, one needs to know the ambient temperature T and the system's volume V (the control parameters) and the material parameters W, κ, γ. Numerical parameters, like the distance between the grid points, are also important. The escape time was measured as the time for a localized formation similar to phase β to appear in the system. Figure 12.3 depicts the equilibrium fluctuations of the metastable phase α (a) and the emergence of a nucleus of the stable phase β (b). For a long time, the OP field of the system consisted of the heterophase fluctuations in the phase α; see Fig. 12.3a. The simulations confirmed the Gibbsian scenario of nucleation, when a localized nucleus of phase β suddenly emerged as a result of the fluctuations in the system; see Fig. 12.3b. However, the FTM nucleus differs from the CNT one because its center differs from the bulk phase β and the interface is diffuse; see Sect. 4.3.

Fig. 12.3 The OP field simulated by TDLGLE (12.29b). (a) Equilibrium fluctuations of the metastable state. (b) Nucleus of the stable phase. Notice the size of the numerical grid and the difference of the OP fluctuations between (a) and (b) reflected in the color bars

12.8 *Memory Effects: Non-Markovian Systems

In Sect. 12.4 we proposed the dynamic equation for the OP evolution (12.29a), which takes into account the effects of the driving force (first term on the rhs) and fluctuations (second term on the rhs). Let us look closer at the following property of this equation: the response of the system to the driving force is simultaneous with the application of the force. Generally, such simultaneity in a macroscopic theory is associated with ignoring certain molecular variables; it turns out to be an approximation to causal behavior, where the response to a force comes after the application of the force. Indeed, although the time dependence of the system's behavior is governed by equations (Hamilton's or Schrödinger's) that show an instantaneous response, complete specification of the microstate of a macroscopic system requires knowledge of a very large number of molecular variables. In the mesoscopic description, the microscopic variables are coarse-grained, and the evolution of the system is deduced from the equations for the OPs, pretending that the other variables do not change (see Appendix A). That is where causality is violated. In Sect. 10.5 we analyzed the effect of causality on the linear-ansatz dynamic equation for the OP and introduced the memory integral (10.19). The same ideas apply to TDGLE (11.1). The reason why we need to reexamine the dynamic Langevin TDGLE (12.29a) is that we would like to assess and compare the effects of causality and fluctuations at the same time. To do that, we need to derive (or at least substantiate) a causal evolution equation, which will be called the generalized Langevin TDGLE, and then investigate the validity of simultaneity as a limiting case. First, let us ask ourselves a question: How did simultaneity "enter" our theory? In Appendix G we showed that the Langevin equation is equivalent to the Fokker-Planck equation (G.10), which can be derived from the master equation (G.6). In the

master equation, the evolution of the system depends on the transition probabilities W(ω|ω′), which depend only on the states between which the transition occurs and not on the previous states of the system. This Markovian property is the source of simultaneity in the system. If the Markovian condition is relaxed, the evolution of the system will depend on its "history." In the Fokker-Planck equation, this will result in the jump moments a1, a2 being dependent on the memory effects. The Langevin TDGLE with the memory effects takes the form

$$\frac{d\eta}{dt} = -\int_{-\infty}^{t}\Gamma(t-t')\,\frac{\delta G}{\delta\eta}(\mathbf{r},t')\,dt' + \xi(\mathbf{r},t) \qquad (12.81)$$

where only the thermodynamic driving force is subjected to the memory integral; the memory effect of the Langevin force will be assessed differently. The memory function Γ(u) depends on the time difference between the present moment t and the past moment t′ and assigns certain weights to the driving force applied at different moments of the past. The instantaneous Langevin TDGLE (12.29a) can be recovered from (12.81) if the memory function has "very short memory," Γ(u) = γδ(u); see (10.21). On the other hand, if the system has "full memory," then Γ(u) = const(u) = Γ0; see (10.22). In the general case, the memory function can be written as follows:

$$\Gamma(u) = \Gamma_0\,e^{-u/\tau_m} \qquad (10.24)$$

where τm is the finite "memory time" of the system. Notice that the "very short memory" case of (10.21) can be recovered from the general case if τm → 0, Γ0 → ∞, and τmΓ0 = γ (see Eq. (10.26)), and the "full memory" case if τm → ∞; see Exercise #2 in Chapter 10. Differentiating (12.81) with respect to time (recall the rules of differentiation of integrals with respect to a parameter), we obtain³

$$\frac{d^2\eta}{dt^2} = -\Gamma_0\,\frac{\delta G}{\delta\eta}(t) + \frac{1}{\tau_m}\int_{-\infty}^{t}\Gamma(t-t')\,\frac{\delta G}{\delta\eta}(t')\,dt' + \frac{d\xi}{dt}(\mathbf{r},t) \qquad (12.82)$$
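The short-memory limit quoted above can be sanity-checked numerically: with Γ0 = γ/τm, the exponential kernel (10.24) integrates to γ, and the memory integral of a slowly varying force approaches the instantaneous response γf(t). The driving force f and all values below are illustrative assumptions.

```python
import numpy as np

# Memory integral int_0^t Gamma(t - t') f(t') dt' with Gamma(u) = (gamma/tau_m)*exp(-u/tau_m).
# As tau_m -> 0 (with Gamma0*tau_m = gamma fixed), it approaches gamma * f(t).
gamma, t = 2.0, 50.0
f = lambda s: np.sin(0.1 * s)          # slowly varying "driving force" (illustrative)

def memory_response(tau_m, n=2_000_000):
    s = np.linspace(0.0, t, n)
    ds = s[1] - s[0]
    kernel = (gamma / tau_m) * np.exp(-(t - s) / tau_m)
    return np.sum(kernel * f(s)) * ds  # simple Riemann sum over the past

for tau_m in (5.0, 0.5, 0.01):
    print(tau_m, memory_response(tau_m), gamma * f(t))
```

For τm = 0.01 the memory response nearly coincides with γf(t), while for τm comparable to the time scale of f the system visibly "lags" the force.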

Excluding the integrals from (12.81) and (12.82), we obtain a differential equation for the OP evolution in a system with memory described by the memory function (10.24):

³ See Appendix G regarding dξ/dt.

$$\frac{d^2\eta}{dt^2} + \frac{1}{\tau_m}\,\frac{d\eta}{dt} = -\Gamma_0\,\frac{\delta G}{\delta\eta}(t) + \frac{1}{\tau_m}\,\xi(\mathbf{r},t) + \frac{d\xi}{dt}(\mathbf{r},t) \qquad (12.83)$$

Notice that in the limit τm → 0, we recover the instantaneous case of (12.29a). As we can see from (12.83), memory gives rise to two different effects in the system's evolution. First, the driving force (-δG/δη) "excites" not only the first derivative of the OP (speed) but also the second one (acceleration), which causes a more long-term effect than in a system without memory. Second, the memory enhances the effect of fluctuations on the system by "engaging" the first derivative of the Langevin force ξ(r, t). Let us study these effects separately. First, let us look at the effects of memory on small deviations from equilibrium states. For the present purposes, it will suffice to study homogeneous deviations Δη = η(t) − η̄ only (see Sect. 10.5) because heterogeneities do not introduce any new features into the system. Expanding the driving force about the stable equilibrium state η̄, the evolution equation without the effect of the fluctuations takes the form

$$\Delta\ddot\eta + \frac{1}{\tau_m}\,\Delta\dot\eta + \frac{1}{\tau_r^2}\,\Delta\eta = 0, \qquad \tau_r = \left[\Gamma_0\,\frac{\partial^2 g}{\partial\eta^2}(\bar\eta)\right]^{-1/2} \qquad (12.84)$$

This equation describes the motion of a damped linear oscillator [6]; adding the memory term is equivalent to adding mass to a dissipative system. Equation (12.84) is a linear homogeneous ODE with constant coefficients, the properties of which are very well known [7]. The nature of its solution depends on whether the roots of the characteristic equation

$$q^2 + \frac{1}{\tau_m}\,q + \frac{1}{\tau_r^2} = 0 \qquad (12.85)$$

$$q_\pm = -\frac{1}{2\tau_m} \pm \frac{\sqrt{\tau_r^2 - 4\tau_m^2}}{2\tau_m\tau_r} \qquad (12.86)$$

are real and distinct, coincident, or complex.

Short Memory: τm < τr/2. Both roots are real and negative, q− < q+ < 0, and the solutions of (12.84) are given by

$$\Delta\eta(t) = A\,e^{q_+t} + B\,e^{q_-t} \qquad (12.87)$$

where A and B are constants. The most important difference of these solutions from those of the instantaneous case (see Sect. 10.2) is the dependence of the relaxation rate on the memory constant τm. Even in the case of a very short memory time, τm ≪ τr, the largest relaxation time in the system, which determines its long-time properties, is τr²/τm. Mathematically, as one can see from (12.84), this happens because the memory constant multiplies the highest derivative of the evolution equation. Physically this means that the memory effects are always significant in the long run.

Long Memory: τm > τr/2. The roots (12.86) are complex with negative real parts, and the solutions are

Δηðt Þ = Ae - 2τm cos t

4τ2m - τ2r tþα 2τm τr

ð12:88Þ

where A, α are constants. A typical solution of this type represents an oscillation about the equilibrium state Δη = 0 with the exponentially decreasing amplitude, decaying slower for large memory constant τm. Notice that the rate of approach to equilibrium is determined by the memory property instead of the relaxation one. In the case of very long memory, τm ≫ τr, the system approaches equilibrium at a very slow pace, τm- 1 , during which many oscillations about it will be made with high frequency of τr- 1 . Critical Memory: τm = τr/2. In this case q- = q+ = -1/2τm. The solution Δηðt Þ = ðA þ Bt Þe - 2τm t

ð12:89Þ

represents an aperiodic approach to the equilibrium with the rate determined by the memory constant τ_m.

Now let us pose another question: What changes in the properties of the Langevin force ξ(r, t) are caused by the presence of memory in the system? To answer this question, let's try to do what we have done in the case of the instantaneous response, that is, calculate the long-time limit of the equilibrium-averaged square of the deviation of the OP from the equilibrium value. The linear generalized Langevin equation (12.83) takes the form

$$\Delta\ddot\eta + \frac{1}{\tau_m}\,\Delta\dot\eta + \frac{1}{\tau_r^2}\,\Delta\eta = \frac{1}{\tau_m}\,\xi(t) + \dot\xi(t) \tag{12.90}$$

with the initial conditions

$$\Delta\eta(0) = \Delta\dot\eta(0) = 0 \tag{12.90a}$$
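The three relaxation regimes (12.87)–(12.89) follow directly from the root structure (12.85)–(12.86) and can be checked numerically; a minimal sketch (the parameter values are illustrative only):

```python
import cmath

def roots(tau_m, tau_r):
    """Roots q± of the characteristic equation q² + q/τm + 1/τr² = 0, Eq. (12.85)."""
    disc = cmath.sqrt(tau_r**2 - 4*tau_m**2)
    q_plus = -1/(2*tau_m) + disc/(2*tau_m*tau_r)
    q_minus = -1/(2*tau_m) - disc/(2*tau_m*tau_r)
    return q_plus, q_minus

# short memory (τm ≪ τr): both roots real and negative; the slow one
# gives the largest relaxation time -1/q+ ≈ τr²/τm, as stated in the text
tau_m, tau_r = 0.01, 1.0
qp, qm = roots(tau_m, tau_r)
print(-1/qp.real)            # ≈ 100 = τr²/τm

# long memory (τm > τr/2): complex roots; damping rate 1/(2τm),
# oscillation frequency approaching 1/τr for τm ≫ τr
qp_long, _ = roots(10.0, 1.0)
print(qp_long.real, abs(qp_long.imag))

# critical memory (τm = τr/2): coincident real roots q = -1/(2τm)
qp_c, qm_c = roots(0.5, 1.0)
print(qp_c, qm_c)
```

The three branches of (12.86) thus reproduce the exponential, oscillatory, and aperiodic relaxation discussed above.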

The solution that satisfies the initial conditions may be written in the Cauchy form:

$$\Delta\eta(t) = \int_0^t M(t,s)\left[\frac{1}{\tau_m}\,\xi(s) + \dot\xi(s)\right] ds \tag{12.91}$$

where the kernel

$$M(t,s) = M(t-s) = \frac{e^{q_+(t-s)} - e^{q_-(t-s)}}{q_+ - q_-} \tag{12.91a}$$

is a one-parameter solution of the homogeneous equation (12.84), which satisfies the following initial conditions:

$$M(s,s) = 0,\qquad \dot M(s,s) = 1 \tag{12.91b}$$

The kernel M depends on the difference (t - s) because the coefficients of its differential equation are const(t). Integrating (12.91) by parts and taking (12.91b) into account, we obtain

$$\Delta\eta(t) = -M(t)\,\xi(0) + \int_0^t L(t-s)\,\xi(s)\,ds \tag{12.92}$$

where

$$L(t) = \dot M(t) + \frac{1}{\tau_m}\,M(t) = \frac{q_+ e^{q_- t} - q_- e^{q_+ t}}{q_+ - q_-} \tag{12.92a}$$

To satisfy the initial condition (12.90a), we must choose ξ(0) = 0. Then the equilibrium-averaged square of the OP deviation (12.92) takes the form

$$\left\langle\Delta\eta^2(t)\right\rangle = \int_0^t\!\!\int_0^t L(t-s_1)\,L(t-s_2)\,\langle\xi(s_1)\,\xi(s_2)\rangle\; ds_1\, ds_2 \tag{12.93}$$

In the long-time limit, the autocorrelation function K_ξ(s₂ - s₁) = ⟨ξ(s₁)ξ(s₂)⟩ depends only on the difference (s₂ - s₁), and the kernel of the double integral depends only on the sum (s₂ + s₁). To verify the last statement, we may consider the function

$$R(t_1, t_2) \equiv (q_+ - q_-)^2\, L(t_1)\,L(t_2) - \left[q_+ e^{\frac{q_-}{2}(t_1+t_2)} - q_- e^{\frac{q_+}{2}(t_1+t_2)}\right]^2 = q_+ q_-\left[2\, e^{\frac{q_- + q_+}{2}(t_1+t_2)} - e^{q_+ t_1 + q_- t_2} - e^{q_- t_1 + q_+ t_2}\right]$$

and notice that

$$\frac{dR}{dt}(t,t) = R(t,t) = 0$$

Then, changing the variables in (12.93) to u = (s₂ - s₁), U = t - ½(s₂ + s₁) and using the same method as in the integration of (12.34), we obtain

$$\left\langle\Delta\eta^2(\infty)\right\rangle = \int_0^\infty \left[\frac{q_+ e^{q_- U} - q_- e^{q_+ U}}{q_+ - q_-}\right]^2 dU \int_{-\infty}^{+\infty} K_\xi(u)\, du \tag{12.93a}$$

$$= \frac{\tau_r}{2}\left(\frac{\tau_r}{\tau_m} + \frac{\tau_m}{\tau_r}\right) \int_{-\infty}^{+\infty} K_\xi(u)\, du \tag{12.93b}$$
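The step from (12.93a) to (12.93b) amounts to the identity ∫₀^∞ L(U)² dU = (τ_r/2)(τ_r/τ_m + τ_m/τ_r), which can be checked by direct quadrature; a minimal sketch with the illustrative values τ_m = 0.3, τ_r = 1:

```python
import math

tau_m, tau_r = 0.3, 1.0                        # illustrative values, τm < τr/2
d = math.sqrt(tau_r**2 - 4*tau_m**2)
q_plus = (-tau_r + d)/(2*tau_m*tau_r)          # roots (12.86)
q_minus = (-tau_r - d)/(2*tau_m*tau_r)

def L(U):
    """Kernel L(U) of Eq. (12.92a); note L(0) = 1."""
    return (q_plus*math.exp(q_minus*U) - q_minus*math.exp(q_plus*U))/(q_plus - q_minus)

# trapezoidal quadrature of the U-integral in (12.93a)
h, U_max = 1e-3, 100.0
n = int(U_max/h)
integral = h*(0.5*(L(0.0)**2 + L(U_max)**2) + sum(L(i*h)**2 for i in range(1, n)))

closed_form = 0.5*tau_r*(tau_r/tau_m + tau_m/tau_r)   # prefactor of (12.93b)
print(integral, closed_form)
```

The two numbers agree to the accuracy of the quadrature, confirming the prefactor of (12.93b).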

Regardless of the memory, in the long-time limit, the system must approach the equilibrium state where ⟨Δη²⟩ = k_BT/[V ∂²g(η̄)/∂η²]; see (12.13). Hence,

$$\int_{-\infty}^{+\infty} K_\xi(u)\, du = \frac{2\, k_B T}{V \tau_r \left(\dfrac{\tau_r}{\tau_m} + \dfrac{\tau_m}{\tau_r}\right) \dfrac{\partial^2 g(\bar\eta)}{\partial\eta^2}} \tag{12.94}$$

This relationship is sometimes called the second fluctuation-dissipation theorem. It provides the foundation for a discussion of the properties of the Langevin force in a system with memory. Notice that in the limit of a very short memory time, τ_m ≪ τ_r, we recover (12.37). However, if the characteristic memory time is very long, τ_m ≫ τ_r, then the correlation function of fluctuations depends on the memory and is independent of the relaxation properties of the system.

If the Langevin force is irregular, the correlator K_ξ is proportional to the delta-function. However, it is reasonable to assume that in a system with memory, the fluctuation process is correlated over a time period equal to the time constant τ_m. Then, taking into account the symmetry property of the correlator (see Eq. (G.45)) and the second fluctuation-dissipation theorem, (12.94), we obtain

$$\langle\xi(t_1)\,\xi(t_2)\rangle = \frac{k_B T\,\Gamma_0}{V\left(1 + \dfrac{\tau_m^2}{\tau_r^2}\right)}\; e^{-\frac{|t_2 - t_1|}{\tau_m}} \tag{12.95}$$
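A noise with an exponential correlator of the form (12.95) can be sampled with the standard Ornstein–Uhlenbeck update; the following sketch (the variance σ², time step, and seed are illustrative assumptions, not the book's notation) checks that the measured autocorrelation decays on the time scale τ_m:

```python
import math, random

random.seed(2)
tau_m, sigma2 = 0.5, 1.0           # correlation time and variance (illustrative)
dt, nsteps = 0.01, 400_000
a = math.exp(-dt/tau_m)            # exact one-step decay factor
xi, traj = 0.0, []
for _ in range(nsteps):
    xi = a*xi + math.sqrt(sigma2*(1.0 - a*a))*random.gauss(0.0, 1.0)
    traj.append(xi)

def autocorr(lag):
    m = len(traj) - lag
    return sum(traj[i]*traj[i + lag] for i in range(m))/m

# K(τm)/K(0) should be close to e⁻¹ ≈ 0.368
ratio = autocorr(int(tau_m/dt))/autocorr(0)
print(ratio)
```

In the τ_m → 0 limit the same construction degenerates into the white-noise (delta-correlated) case discussed in the text.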

Exercises

1. Find the correlation radius of fluctuations near the stable states of the Landau potential.
2. Substitute (12.53) into (12.52) and verify that two types of averages appear in the final expression.

3. Verify that (12.91) is a solution of (12.90).
4. Verify that expression (12.93b) follows from expression (12.93a).

References

1. L.D. Landau, E.M. Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1958), §146
2. A.Z. Patashinski, V.L. Pokrovski, Fluctuation Theory of Phase Transitions (Nauka, Moscow, 1982) (in Russian), p. 39
3. J.S. Langer, M. Bar-on, H.D. Miller, Phys. Rev. A 11, 1417 (1975)
4. J.D. Gunton, M. Droz, Introduction to the Theory of Metastable States (Springer-Verlag, Heidelberg, 1983), p. 95
5. A. Umantsev, Phys. Rev. E 93, 042806 (2016)
6. L.D. Landau, E.M. Lifshitz, Mechanics (Pergamon, Oxford, 1960), p. 74
7. D.W. Jordan, P. Smith, Nonlinear Ordinary Differential Equations (Clarendon Press, Oxford, 1990)
8. L.S. Ornstein, F. Zernike, Proc. Acad. Sci. (Amsterdam) 17, 793 (1914)

Chapter 13

Multi-Physics Coupling: Thermal Effects of Phase Transformations

A simple argument should convince the reader that for the theory of phase transformations to be realistic, one must consider the OP evolution together with other processes that take place simultaneously with the phase transformations. For instance, in gravitational fields, transformations in systems of varying density, e.g., mixtures, cause a flow of matter, which has a feedback effect on the transformation. Another example of a transformation taking place simultaneously with a process affected by the transformation is a ferromagnetic transition (see Sect. 3.3), which proceeds along with the changes of the magnetic field, which in turn affects the transformation itself. A ferroelectric transformation (Sect. 3.4) is accompanied by the changes of the electric field, and a martensitic one (Sect. 3.2) by the changes of the stress field. The accompanying processes usually have characteristic length and time scales longer than those of the OP variations; that is why they are sometimes called "hydrodynamic" modes. The questions that we must answer are: How do we couple the OP evolution to these processes? What physical principles are important here? How do we maintain the thermodynamic (physical) consistency between the descriptions of all processes in the system? Regarding the beams and columns configuration of FTM in Fig. P.1, we are building column 6 now.

As we found in Chap. 8, a phase transition of the first kind is accompanied by the release of the latent heat, which amounts to the difference of the internal energies (or enthalpies) of the phases on both sides of the transition. The heat does not remain localized at the sites where it was released, usually the locations of the interface. Due to the mechanism of heat conduction, it diffuses to the places with lower temperatures, causing the temperature field to vary.
The redistribution of heat and equilibration of temperature cause a feedback effect on the phase transition in the form of a changing rate of the transformation; see Chap. 7. The main questions discussed in this chapter are: How can we incorporate the mechanisms of heat release and redistribution into FTM in a physically rigorous and consistent way? What are the new effects or features that we may expect from the transformations that are accompanied by the latent heat release? Another problem that we must grapple with is that now we are dealing with the local temperature defined by the concept of local equilibrium, that is, equilibrium in a small volume of the system considered independently of the rest of the system. In this chapter we review practically all aspects of Part II of the book, taking into account the energy conservation constraint now, and find interesting, counterintuitive effects. One of the conclusions of our analysis is that the heat may be trapped in the advancing interface, causing a significant recalescence effect and, in some cases, even a change of the direction of the transformation. Another conclusion is that thermal effects appear even in transformations that proceed without any latent heat release, resulting in a drag on the interface.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_13

13.1 Equilibrium States of a Closed (Adiabatic) System

Let us first consider equilibrium states in a closed (adiabatic) system, that is, a system that does not have heat exchange with the environment. As we know from thermodynamics (see Chap. 1), for such a system the variational principle must be changed: instead of minimizing the total free energy at constant temperature, we must maximize the total entropy of the system while keeping the total energy (enthalpy) of the system constant. Then,

$$S\{\eta\} \equiv \int_V s(\eta, \nabla\eta)\; d^3x \to \max \tag{13.1}$$

for

$$E\{\eta\} \equiv \int_V e(\eta, \nabla\eta)\; d^3x = \mathrm{const} \tag{13.2}$$

Here s and e are the entropy and energy (enthalpy) densities of the system. Thus, from the viewpoint of the calculus of variations (see Appendix B), the equilibrium states in the closed system obey the conditions of the isoperimetric problem. There exist two types of solution for this problem (see Appendix B): type-E1, where the equilibrium state is not an extremal of the energy functional (13.2), and type-E2, where the equilibrium state is an extremal of the energy functional (13.2). Although in each case one has to consider both homogeneous and inhomogeneous equilibrium states, we will find that type-E2 applies to the inhomogeneous states only. Let us consider these cases separately.

13.1.1 Type-E1 States

If the sought state {η_E1} is not an extremal of the energy functional (13.2), then there exists a constant λ such that the state is an unconditional extremal of the functional ∫(s + λe) d³x, that is, the following relation is satisfied:

$$\delta S + \lambda\,\delta E = 0 \tag{13.3}$$

Since (13.3) is true for an arbitrary variation δη, λ = -1/T, where T is the absolute temperature, and the state {η_E1} is an extremal of the free energy functional G{η, T} = E - TS, that is, it satisfies the same necessary conditions as in the isothermal system (see Chap. 9). This hints that the Legendre transformation (see Appendix F)

$$\{E, S\} \to \left\{T = \frac{dE}{dS},\quad G = E - S\,\frac{dE}{dS}\right\} \tag{13.4}$$

may be useful here because it allows us to express the equilibrium conditions using a more convenient function G(T) instead of S(E). The entropy and energy (enthalpy) densities of the system must also be Legendre transformed:

$$s = -\left(\frac{\partial g}{\partial T}\right)_{\!\eta} \tag{13.5a}$$

$$e = g - T\left(\frac{\partial g}{\partial T}\right)_{\!\eta} \tag{13.5b}$$

Rigorously speaking, in (13.4, 13.5) we also have to consider the Legendre transformation from volume V to pressure P, but the compression effects are not of interest here. For that matter, you may think of G as the Helmholtz function instead of the Gibbs one. It is important to note that the OP is a part of the description in both sets {η, E, S} and {η, T, G} but remains unaffected by the Legendre transformation. The result obtained above, that the sets of equilibrium states of the open and closed systems are the same, is not surprising because it is a consequence of a more general fact that in the thermodynamic limit of V → ∞, the canonical and microcanonical ensembles have the same equilibrium states [1]. However, the thermodynamic stability of the equilibrium states in the microcanonical ensemble may be different from that in the canonical one. Let us consider the stability of homogeneous {η̄_E1} and inhomogeneous {η_E1(x)} states separately.

Case A For a homogeneous system

$$ds = \frac{1}{T}\,de + \left(\frac{\partial s}{\partial\eta}\right)_{\!e} d\eta \tag{13.6}$$

and the equilibrium state in a closed system is characterized by the condition

$$\left(\frac{\partial s}{\partial\eta}\right)_{\!e} = 0 \tag{13.7}$$

Differentiating the Legendre transformation (13.4) with respect to the OP, we obtain

$$\left(\frac{\partial g}{\partial\eta}\right)_{\!T} = \left(\frac{\partial e}{\partial\eta}\right)_{\!T} - T\left(\frac{\partial s}{\partial\eta}\right)_{\!T} \tag{13.8}$$

Then, applying a mathematical formula for a partial directional derivative of a function f(x, y) in the direction h(x, y) = const,

$$\left(\frac{\partial f}{\partial x}\right)_{\!h} \equiv \left.\frac{\partial f}{\partial x}\right|_{h=\mathrm{const}} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\left.\frac{dy}{dx}\right|_{h=\mathrm{const}} \tag{13.9}$$

to the energy and entropy functions in (13.8), we obtain the condition of equilibrium:

$$\left(\frac{\partial g}{\partial\eta}\right)_{\!T} = -T\left(\frac{\partial s}{\partial\eta}\right)_{\!e} = 0 \tag{13.10}$$

which shows that, indeed, the closed and open systems of the same substance have the same homogeneous equilibrium states. The condition of local thermodynamic stability in the closed system, that is, the condition that the equilibrium state corresponds to a maximum of the entropy at constant energy, is

$$\left(\frac{\partial^2 s}{\partial\eta^2}\right)_{\!e} \le 0,\qquad \text{i.e.,}\qquad \frac{\partial^2 g}{\partial\eta^2} - \left(\frac{\partial^2 g}{\partial\eta\,\partial T}\right)^{\!2}\left(\frac{\partial^2 g}{\partial T^2}\right)_{\!\eta}^{-1} \ge 0 \tag{13.15}$$

Since the specific heat of a stable state is positive,

$$C_\eta \equiv \left(\frac{\partial e}{\partial T}\right)_{\!\eta} = -T\left(\frac{\partial^2 g}{\partial T^2}\right)_{\!\eta} > 0 \tag{1.4a}$$

the condition of local thermodynamic stability in a closed system (13.15) is less restrictive than that in the open one (cf. Eq. (1.5b)):

$$\left(\frac{\partial^2 g}{\partial\eta^2}\right)_{\!T} > 0 \tag{8.8}$$

In other words, a homogeneous equilibrium state may be adiabatically stable, that is, stable in a closed system where (13.15) is satisfied, but unstable in an open system where (8.8) is not satisfied. A necessary condition for that is ∂²g/∂T∂η ≠ 0; a sufficient condition is

$$M_\eta(T) \equiv \left.\frac{\left(\dfrac{\partial^2 g}{\partial\eta\,\partial T}\right)^{\!2}}{\left(\dfrac{\partial^2 g}{\partial T^2}\right)_{\!\eta}\left(\dfrac{\partial^2 g}{\partial\eta^2}\right)_{\!T}}\right|_{\eta = \bar\eta(T)} > 1 \tag{13.16}$$

where M_η is a parameter called the interaction module. It determines the strength of the interactions between the thermal and ordering modes of the transition. The partials that make up M_η in (13.16) should be taken at the equilibrium state in question. A stimulating interpretation of the condition of adiabatic stability may be revealed if we find the slope of the equilibrium temperature-versus-OP line for this state and compare it with that of the condition of constant energy for the same state (13.13). The former can be found from the relation

$$d\left(\frac{\partial g}{\partial\eta}\right)_{\!T} = \left(\frac{\partial^2 g}{\partial\eta^2}\right)_{\!T} d\eta + \frac{\partial^2 g}{\partial\eta\,\partial T}\,dT = 0 \tag{13.17a}$$

Then, criterion (13.16) takes the form

$$M_\eta(T) = \frac{\left(dT/d\eta\right)_e}{\left(dT/d\eta\right)_{\bar\eta}} > 1 \tag{13.17b}$$

This means that the slope of the constant-energy line is greater than that of the equilibrium-state line. Another interpretation of the criterion (13.16) may be found by calculating the specific heat of the adiabatically stable state,

$$C_{\bar\eta(T)} \equiv \frac{de(\bar\eta)}{dT} = \left(\frac{\partial e}{\partial T}\right)_{\!\eta} + \left(\frac{\partial e}{\partial\eta}\right)_{\!T}\frac{d\bar\eta}{dT} = C_\eta\left(1 - M_\eta\right) \tag{13.18}$$

and applying (1.4a), (13.16). Then we find that when an isothermally unstable state gains adiabatic stability, its specific heat turns negative, which means that such a state is unstable in the bulk. Two comments are in order here:

1. Equation (13.16) is only a condition of homogeneous (local) stability of the state, while it may be unstable globally, that is, with respect to an inhomogeneous state, e.g., a heterogeneous mixture of coexisting phases at the same temperature. This possibility is alluded to by (13.18) and the analysis that follows from it. The global-stability analysis is more complicated. Its general conclusion is that the homogeneous state may be stable against the inhomogeneous states in a certain range of energies, but only if the size of the system is below the bifurcation length X̃; see (9.52) and [2]. In systems of size less than the bifurcation length, creation of a phase-separating interface is not favorable, and the homogeneous state turns into the global optimizer.

2. In Chap. 8 we represented the free energy of the system as a sum of the regular g₀ and singular (transitional) (g - g₀) parts. In an open system, they do not "interact" with each other in the sense that g₀ does not affect the transition equations; see Sect. 8.2. This is not the case in a closed system where, as we can see from (13.15, 13.16), the regular and singular parts are convoluted in the same equations, which determine the stability of the outcome of the transition. Physically, this is a result of the absence of the thermal bath in the closed system. In the open system, the regular and singular parts of the free energy, so to speak, thermally interact with the bath to achieve equilibrium. In the closed system, the interaction with the bath is prohibited, and the regular and singular parts are "forced" to interact with each other. Because of that, we have to look at the thermal properties of the terminal phases in greater detail.
Theoretically, it is convenient to deal with a thermodynamic system with equal and temperature-independent specific heats in both the high-symmetry (α) and low-symmetry (β) phases, that is, C_α = C_β = C = const(T), and to reckon the energy and entropy from the α-phase at the temperature T_E of its coexistence with the β-phase. Then the entropy, energy, and free energy densities of the α- and β-phases are

$$e_\alpha(T) = C(T - T_E),\qquad e_\beta(T) = C(T - T_E) - L \tag{13.19a}$$

$$s_\alpha(T) = C\ln\frac{T}{T_E},\qquad s_\beta(T) = C\ln\frac{T}{T_E} - \frac{L}{T_E} \tag{13.19b}$$

$$g_\alpha(T) = C(T - T_E) - CT\ln\frac{T}{T_E},\qquad [g] = L\,\frac{T - T_E}{T_E} \tag{13.19c}$$
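As a quick consistency check (not from the book), the β-phase densities of (13.19a, b) follow from g_β = g_α + [g] via the Legendre relations (13.5a, b); a sketch with arbitrary parameter values, using central finite differences for ∂g/∂T:

```python
import math

C, L, TE = 2.0, 5.0, 1.0          # arbitrary material parameters

def g_beta(T):
    """g_β = g_α + [g], with g_α and [g] from (13.19c)."""
    return C*(T - TE) - C*T*math.log(T/TE) + L*(T - TE)/TE

def dg_dT(T, h=1e-6):
    return (g_beta(T + h) - g_beta(T - h))/(2*h)

T = 1.3
s_beta = -dg_dT(T)                          # Eq. (13.5a)
e_beta = g_beta(T) - T*dg_dT(T)             # Eq. (13.5b)
print(s_beta - (C*math.log(T/TE) - L/TE))   # ≈ 0, cf. (13.19b)
print(e_beta - (C*(T - TE) - L))            # ≈ 0, cf. (13.19a)
```

Both differences vanish to the accuracy of the finite-difference derivative, confirming that (13.19a–c) form a thermodynamically consistent set.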

The approximation of the temperature-independent specific heat simplifies our calculations in the vicinity of T_E but, apparently, cannot be extended to very low temperatures. (Why?)

Example 13.1 Adiabatic Stability of a Transition State Find the conditions of adiabatic stability for the transition state in the system with the tangential potential and C = const(T). The free energy density of the system is (see Eqs. (13.19c, 8.30))

$$g = C(T - T_E) - CT\ln\frac{T}{T_E} + \frac{1}{2}W\omega^2(\eta) + L\,\nu(\eta)\,\frac{T - T_E}{T_E} \tag{13E.1}$$

Its partials are

$$\frac{\partial^2 g}{\partial T^2} = -\frac{C}{T};\qquad \frac{\partial^2 g}{\partial T\,\partial\eta} = 6\,\frac{L}{T_E}\,\omega;\qquad \frac{\partial^2 g}{\partial\eta^2} = W\left(\omega'^2 - 2\omega\right) + 6L\,\frac{T - T_E}{T_E}\,\omega' \tag{13E.2}$$

Applying (13E.2) to the transition state (8.34), we find that

$$\frac{\partial^2 g}{\partial\eta^2}(\eta_t) = -2W\omega(\eta_t);\qquad \eta_t = \frac{1}{2}\left(1 + P\,\frac{T_E - T}{T_E}\right) \tag{13E.3}$$

where

$$P \equiv \frac{6L}{W} \tag{13E.3a}$$

is the ratio of the latent heat and the barrier height.¹ To calculate the interaction module of the state, we substitute (13E.2, 13E.3) into (13.16), which yields

¹ Do not confuse with pressure.

$$M_t = 3Q\left(1 + P - 2\eta_t\right)\omega(\eta_t) \tag{13E.4}$$

$$Q \equiv \frac{L}{C\,T_E} \tag{7.17b}$$

According to (13.16), for the adiabatic stability of a state, its interaction module must be greater than one. As one can see from (13E.3, 13E.4), M_t varies depending on the temperature and the material parameters (P, Q). Hence, there may be an interval of temperatures where the condition M_t > 1 is satisfied. Let us find the maximum value of M_t among all possible values of temperature in the system. The maximum is attained at

$$\eta_t^m = \frac{1}{2} + \frac{P - \sqrt{3 + P^2}}{6} \tag{13E.5a}$$

or

$$T_m = \left(\frac{2}{3} + \frac{\sqrt{3 + P^2}}{3P}\right)T_E \to T_E \quad\text{for}\quad P \to \infty \tag{13E.5b}$$

Then (13E.4) and (13E.5a) yield an expression for the largest interaction module of the transition state for given values of the parameters (P, Q):

$$M_t^m = \frac{Q}{18}\left[\left(3 + P^2\right)^{3/2} + P\left(9 - P^2\right)\right] \to
\begin{cases}
\dfrac{Q}{2\sqrt{3}}\left(1 + \sqrt{3}\,P\right), & P \to 0\\[2mm]
\dfrac{3}{4}\,QP, & P \to \infty
\end{cases} \tag{13E.6}$$

Thus, the transition state of the tangential potential may be adiabatically stable (conditions (13.16) and (13.17b) are satisfied) if the material properties of the system obey the condition M_t^m > 1, or:

$$Q >
\begin{cases}
2\sqrt{3} & \text{if } P \to 0\\[1mm]
\dfrac{4}{3P} & \text{if } P \to \infty
\end{cases} \tag{13E.7}$$

Figure 13.1 provides a geometrical means to verify the condition (13.17b) for a system with given parameters (P, Q). However, notice from (13E.6, 13E.7) that the adiabatic stability of systems with large P is determined completely by a single parameter called the thermodynamic number:

$$U \equiv \frac{W\,C\,T_E}{6L^2} = \frac{1}{PQ} \tag{13E.7a}$$
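The closed-form maximum (13E.6) can be cross-checked by maximizing (13E.4) on a grid; a sketch assuming the tangential-potential barrier function ω(η) = η(1 − η), with the illustrative values P = 2, Q = 1 of Fig. 13.1:

```python
import math

P, Q = 2.0, 1.0                    # illustrative material parameters

def Mt(eta):
    """Interaction module of the transition state, Eq. (13E.4), with ω(η) = η(1-η)."""
    return 3.0*Q*(1.0 + P - 2.0*eta)*eta*(1.0 - eta)

# brute-force maximum over 0 < η < 1
n = 100_000
Mtm_numeric = max(Mt(i/n) for i in range(1, n))

Mtm_closed = (Q/18.0)*((3.0 + P**2)**1.5 + P*(9.0 - P**2))   # Eq. (13E.6)
eta_m = 0.5 + (P - math.sqrt(3.0 + P**2))/6.0                # Eq. (13E.5a)
print(Mtm_numeric, Mtm_closed, Mt(eta_m))
```

For this parameter choice M_t^m ≈ 1.58 > 1, consistent with the adiabatically stable transition states shown in Fig. 13.1.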

Fig. 13.1 OP-temperature plane of the equilibrium-state diagram of a system described by the tangential potential with Q = 1 and P = 2. α, β, t: equilibrium states; red lines: constant-energy trajectories (dashed line, for Q = 1.1); circles: equilibrium transition states of the closed (adiabatic) system (filled, stable; open, unstable); crosses: 1d nonequilibrium inhomogeneous states from the numerical simulations by (13.84) (blue, t = 1100; green, t = 4200). Cf. Fig. 8.5b

Also notice from (13E.2) that the interaction modules of the α- and β-phases are equal to zero, which excludes any thermal effects around these states.

Case B A type-E1 inhomogeneous state {η_E1(x)} of a closed system obeys the same ELE as an inhomogeneous state of an open system (Why?):

$$\frac{\delta G}{\delta\eta} \equiv \frac{\partial g}{\partial\eta} - \nabla\cdot\frac{\partial g}{\partial(\nabla\eta)} = 0 \tag{9.16}$$

ð9:16Þ

To analyze the thermodynamic stability of this state, we need to find the sign of the second variation of the entropy functional (13.1):

$$\delta^2 S = \frac{1}{2}\int_V \left[\frac{\partial^2 s}{\partial\eta^2}(\delta\eta)^2 + 2\,\frac{\partial^2 s}{\partial T\,\partial\eta}\,\delta\eta\,dT + \frac{\partial^2 s}{\partial T^2}(dT)^2 + \frac{\partial^2 s}{\partial(\nabla\eta)^2}(\nabla\delta\eta)^2\right] d^3x \tag{13.20}$$

for the variations of the OP and temperature that leave the energy functional (13.2) unchanged:

$$\Delta E \equiv E\{\eta + \delta\eta,\; T + dT\} - E\{\eta,\; T\} = \delta E + \delta^2 E + \cdots = 0 \tag{13.21}$$

To find the sign of δ²S, we choose the variations of the OP and temperature such that the first variation of the energy functional vanishes:

$$\delta E = \int_V \left(\frac{\partial e}{\partial\eta}\,\delta\eta + \frac{\partial e}{\partial T}\,dT\right) d^3x = 0 \tag{13.22a}$$

For such variations, using (13.4, 13.5) we obtain

$$dT = -\left[\frac{\partial^2 g}{\partial T\,\partial\eta} - \nabla\cdot\frac{\partial^2 g}{\partial T\,\partial(\nabla\eta)}\right]\left(\frac{\partial^2 g}{\partial T^2}\right)^{-1}\delta\eta \tag{13.22b}$$

Notice that although the variational derivative of the free energy at the equilibrium state is zero, its partial derivative with respect to temperature is not. Then (13.20–13.22) yield that the second variation of the energy functional also vanishes, and δ²S has the sign opposite to that of the second variation of the free energy functional:

$$\delta^2 G = \delta^2 E - T\,\delta^2 S = \frac{1}{2}\int_V \left[\frac{\partial^2 g}{\partial\eta^2}(\delta\eta)^2 + 2\,\frac{\partial^2 g}{\partial T\,\partial\eta}\,\delta\eta\,dT + \frac{\partial^2 g}{\partial T^2}(dT)^2 + \frac{\partial^2 g}{\partial(\nabla\eta)^2}(\nabla\delta\eta)^2\right] d^3x \tag{13.23}$$

Although the sign of δ²G can be estimated for a general expression of the free energy density, we will proceed with the expression adopted in Chap. 9:

$$g = g(\eta, T) + \frac{1}{2}\kappa(\nabla\eta)^2 \tag{9.15b}$$

with

$$\kappa = \mathrm{const}(T) \tag{13.24}$$

Then, integrating the last term in (13.23) by parts and substituting (13.22b), we obtain

$$\delta^2 G = \frac{1}{2}\int_V \delta\eta\;\mathcal{H}_{E1}\;\delta\eta\; d^3x \tag{13.25a}$$

$$\mathcal{H}_{E1} = \frac{\partial^2 g}{\partial\eta^2} - \left(\frac{\partial^2 g}{\partial\eta\,\partial T}\right)^{\!2}\left(\frac{\partial^2 g}{\partial T^2}\right)^{-1} - \kappa\nabla^2 \tag{13.25b}$$

Notice that the homogeneous (non-gradient) term of the Hamiltonian operator in (13.25b) is greater than that in (9.21). This means that, as in the homogeneous Case A, the closed system is, so to speak, more stable than the open one. To see this more clearly, let us assume that ∂²g/∂T∂η = const(η) and consider the gradient of an interface-type state,

$$\Psi^*(x) = \frac{d\eta_{e4}(x)}{dx} \tag{13.25c}$$

as an eigenfunction of the operator H_E1(x) (cf. Eq. (9.126b)). Differentiating the ELE (9.16), we obtain

$$\mathcal{H}_{E1}\Psi^*(x) = \frac{d}{dx}\frac{\delta G}{\delta\eta}(\eta_{e4}) + \Lambda_{E1}^*\,\Psi^*(x) = \Lambda_{E1}^*\,\Psi^*(x) \tag{13.25d}$$

where

$$\Lambda_{E1}^* = \frac{T}{C_\eta}\left(\frac{\partial^2 g}{\partial T\,\partial\eta}\right)^{\!2} = \mathrm{const} > 0 \tag{13.25e}$$

Because Λ_E1* is the smallest eigenvalue of H_E1(x) (see Appendix E), the interface-type solution η_e4 becomes absolutely stable in the closed system, as opposed to being neutrally stable (the Goldstone mode) in the open one. Physically, the destruction of the Goldstone mode in a closed system is a result of the elimination of the translational invariance, which in turn comes as a result of the energy conservation constraint (13.2). Indeed, any shift of the interface changes the energy balance in the system because the energy densities of the phases on the opposite sides of the interface are not equal, although their free energy densities are. Additional stabilization caused by the energy conservation constraint (13.2) may cause an unstable inhomogeneous equilibrium state of the open system to gain stability in the closed one.
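The destruction of the Goldstone mode can also be seen numerically. The sketch below is an assumption-laden illustration, not the book's model: it uses the quartic potential g = ¼(1 − η²)² with κ = 1 (for which the interface extremal is η = tanh(x/√2)) and an arbitrarily chosen constant Λ*, so that the closed-system operator is simply H_open + Λ*. It discretizes both operators and compares their lowest eigenvalues:

```python
import numpy as np

N, Xmax = 1200, 15.0
x = np.linspace(-Xmax, Xmax, N)
dx = x[1] - x[0]
eta = np.tanh(x/np.sqrt(2.0))        # interface-type extremal of the quartic potential

# open-system operator (9.21): -κ d²/dx² + ∂²g/∂η², with g'' = 3η² - 1, κ = 1
main = 2.0/dx**2 + (3.0*eta**2 - 1.0)
off = -np.ones(N - 1)/dx**2
H_open = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam_star = 0.25                      # an illustrative constant Λ* > 0
w_open = np.linalg.eigvalsh(H_open)
w_closed = np.linalg.eigvalsh(H_open + lam_star*np.eye(N))

print(w_open[0])       # ≈ 0: neutrally stable Goldstone (translation) mode
print(w_closed[0])     # ≈ Λ*: the interface is absolutely stable when closed
```

The lowest eigenvalue shifts from (numerically) zero to Λ*, in line with (13.25d, e).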

13.1.2 *Type-E2 States

If the equilibrium state not only imparts a maximum on the entropy functional (13.1) but also is an extremal of the energy functional (13.2), then, instead of (13.3), this state satisfies the following simultaneous relations:

$$\delta S\{\eta_{E2}, T_{E2};\; \delta\eta, \delta T\} = 0 \tag{13.26a}$$

$$\delta E\{\eta_{E2}, T_{E2};\; \delta\eta, \delta T\} = 0 \tag{13.26b}$$

Importantly, because of the two variational conditions (13.26), it may not be possible to properly characterize the state with just one variable field of the OP, {η_E2(x)}. Then another variable field, e.g., temperature {T_E2(x)}, may be needed to complete the characterization, so that the equilibrium state {η_E2(x), T_E2(x)} becomes inhomogeneous in the OP and temperature [3]. The fields {η_E2(x)} and {T_E2(x)} are not independent because the conditions (13.26a), (13.26b) yield a system of simultaneous equations:

$$\frac{\delta S}{\delta\eta} + \frac{\partial s}{\partial T}\,\frac{dT}{d\eta} = 0 \tag{13.27a}$$

$$\frac{\delta E}{\delta\eta} + \frac{\partial e}{\partial T}\,\frac{dT}{d\eta} = 0 \tag{13.27b}$$

where (δ/δη) is understood now as a partial functional derivative with respect to the OP for a non-varying temperature field. To better understand the properties of the state {η_E2(x), T_E2(x)}, let's calculate the partial variational derivative of its free energy with respect to the OP:

$$\frac{\delta G}{\delta\eta}\{\eta, T\} \equiv \frac{\partial g}{\partial\eta} - \nabla\cdot\frac{\partial g}{\partial\nabla\eta} = \frac{\delta E}{\delta\eta} - T\,\frac{\delta S}{\delta\eta} + \frac{\partial s}{\partial\nabla\eta}\cdot\nabla T \tag{13.28a}$$

Here we used the Legendre transform (13.5) and rearranged the terms. Applying the equilibrium equations (13.27), we obtain

$$\frac{\delta G}{\delta\eta}\{\eta, T\} = \frac{\partial s}{\partial\nabla\eta}\cdot\nabla T = -\frac{d\kappa}{dT}\,\nabla\eta\cdot\nabla T \tag{13.28b}$$

where we used (9.15b) and again the Legendre transform (13.5). As one can see, in general, the partial functional derivative δG/δη in (13.28) is not zero. It is zero, however, if the gradient energy coefficient κ is temperature independent; see (13.24). However, even in this case, the equilibrium state {η_E2(x), T_E2(x)} is different from that of type-E1 because the temperature distribution of this state is inhomogeneous.

In order to analyze the thermodynamic stability of the state {η_E2(x), T_E2(x)}, we need to substitute δT expressed from (13.27a) into the expression for δ²S of (13.20) and perform the integration by parts. This brings the second variation to the form (13.25a) and again reduces the problem of stability to the analysis of the spectrum of the respective operator H_E2(x). In a system where condition (13.24) applies, this operator takes on a particularly simple form:

$$\mathcal{H}_{E2} = \frac{\partial^2 s}{\partial\eta^2} - 2\,\frac{\partial^2 s}{\partial T\,\partial\eta}\,\frac{\partial s/\partial\eta}{\partial s/\partial T} + \frac{\partial^2 s}{\partial T^2}\left(\frac{\partial s/\partial\eta}{\partial s/\partial T}\right)^{\!2} \tag{13.29}$$


and the type-E2 state is stable if the operator is sign-negative. As we will see in Example 13.2, the type-E2 states are not stable. However, they can be achieved in a thermodynamic system with vanishing thermal conductivity (λ → 0), that is, an ideal thermal insulator. Nevertheless, the type-E2 states are important because even a heat-conducting system may spend a great deal of time in the vicinity of such a state during a transformation process.

Example 13.2 Type-E2 State in a System with Tangential Potential Find a 1d type-E2 state of a system described by the tangential potential with C = const(T) and κ = const(T). In the 1d case, the system (13.27) reduces to

$$\frac{ds}{d\eta} = 0 \tag{13E.8a}$$

$$\frac{de}{d\eta} - \kappa\,\frac{d^2\eta}{dx^2} = 0 \tag{13E.8b}$$

Multiplying (13E.8) by dη/dx and integrating them separately, we find that the equilibrium states obey the following simultaneous equations:

$$s(\eta, T) = \mathrm{const} \tag{13E.9a}$$

$$e(\eta, T) - \frac{\kappa}{2}\left(\frac{d\eta}{dx}\right)^{\!2} = \mathrm{const} \tag{13E.9b}$$

The first equation shows that the state {η_E2(x), T_E2(x)} is isentropic, which confirms our earlier conclusion that the temperature distribution in this state must be inhomogeneous if the OP is not uniform. The second equation shows that the regions of the system far away from the transition zone have the same energies, but not the same free energies, because the temperatures may be different. For the tangential potential with (13.19), (13E.1), the entropy and energy densities are

$$s = C\ln\frac{T}{T_E} - \frac{L}{T_E}\,\nu(\eta) \tag{13E.10a}$$

$$e = C(T - T_E) + \frac{1}{2}W\omega^2(\eta) - L\,\nu(\eta) \tag{13E.10b}$$

From (13E.9, 13E.10) we find the distributions of the temperature and OP fields in the type-E2 state:

$$T = T_\alpha \exp\{Q\,\nu(\eta)\},\qquad T_\alpha = T_E\,\frac{Q}{\exp Q - 1} \tag{13E.11a}$$

$$\kappa\left(\frac{d\eta}{dx}\right)^{\!2} = W\omega^2(\eta) + 2L\left[\frac{\exp\{Q\,\nu(\eta)\} - 1}{\exp Q - 1} - \nu(\eta)\right] \tag{13E.11b}$$

where T_α is the temperature of the α-phase. To analyze the thermodynamic stability of the type-E2 state (13E.11), we need to calculate the right-hand side of (13.29):

$$\mathcal{H}_{E2} = -6CQ\left[6Q\,\omega^2(\eta) + \omega'(\eta)\right] \tag{13E.12}$$

This expression is not sign-definite in the domain 0 ≤ η ≤ 1, which means that this state is not stable but of the saddle type. Depicted in Fig. 13.2 are the scaled distributions of the OP and temperature,

$$\tilde T = \frac{C(T - T_\alpha)}{L},\qquad \tilde x = x\sqrt{\frac{W}{\kappa}} \tag{13E.13}$$

Fig. 13.2 Distributions of the scaled temperature (blue) and OP (red) as functions of the scaled coordinate x̃ for the 1d type-E2 equilibrium state after a first-order transition
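The profile in Fig. 13.2 can be reproduced by quadrature of (13E.11b); in the scaled units (13E.13) the equation becomes (dη/dx̃)² = ω² + (P/3)[(e^{Qν} − 1)/(e^Q − 1) − ν]. The sketch below assumes the barrier and driving-force functions ω(η) = η(1 − η), ν(η) = η²(3 − 2η) (so that ν′ = 6ω) and the illustrative values P = Q = 1:

```python
import math

P, Q, TE = 1.0, 1.0, 1.0                     # illustrative parameters

def omega(eta): return eta*(1.0 - eta)
def nu(eta):    return eta*eta*(3.0 - 2.0*eta)

def slope(eta):
    """dη/dx̃ from (13E.11b) in the scaled units (13E.13)."""
    s2 = omega(eta)**2 + (P/3.0)*((math.exp(Q*nu(eta)) - 1.0)/(math.exp(Q) - 1.0) - nu(eta))
    return math.sqrt(s2)

# quadrature x̃(η) = ∫ dη/(dη/dx̃) across the transition zone
n = 2000
etas = [0.01 + i*0.98/n for i in range(n + 1)]
x = [0.0]
for i in range(1, len(etas)):
    h = etas[i] - etas[i-1]
    x.append(x[-1] + 0.5*h*(1.0/slope(etas[i-1]) + 1.0/slope(etas[i])))

T_alpha = TE*Q/(math.exp(Q) - 1.0)           # Eq. (13E.11a)
T_beta = T_alpha*math.exp(Q)                 # temperature of the β-phase (ν = 1)
print(T_beta - T_alpha)                      # = Q·TE = L/C, the adiabatic heating
```

The terminal temperatures differ by exactly L/C, as dictated by energy conservation in the closed system.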

Example 13.3 Type-E2 APB Find a type-E2 APB in a system with C = const(T) and κ = const(T). An APB is a 1d equilibrium state which comes about after a second-order transition; see Example 9.2. The system is described by the free energy density:

$$g = g_0(T) + \frac{1}{2}a\eta^2\left(\frac{T - T_c}{T_c} + \frac{1}{2}\eta^2\right) + \frac{1}{2}\kappa(\nabla\eta)^2 \tag{13E.14}$$

It undergoes a second-order transition at T = T_c; see (8.44) and (8.45). If the temperature is lowered to T_E < T_c, two stable variants

$$\eta_\pm = \pm\sqrt{-\tau},\qquad \tau = \frac{T_E - T_c}{T_c} \tag{13E.15}$$

appear after the transition with equal likelihood. They will be separated by a transition region of the thickness

$$L_{APB} = 2\sqrt{2}\,l,\qquad l = \sqrt{\frac{\kappa}{a(-\tau)}} \tag{13E.16}$$

If after the cooling to T_E < T_c the system is isolated from the environment, a 1d type-E2 state may establish itself in the system. For such a state, (13.27) yields

$$\frac{dT}{d\eta} = p\,T\eta,\qquad p \equiv \frac{a}{C\,T_c} \tag{13E.17a}$$

$$\kappa\,\frac{d^2\eta}{dx^2} = a\eta\left(\eta^2 - 1 + \frac{T}{T_c}\right) \tag{13E.17b}$$

The BC are:

$$\left.\frac{d\eta}{dx}\right|_{\eta_\pm} = 0,\qquad T(\eta_\pm) = T_E \tag{13E.18}$$

Equations (13E.17a, 13E.17b) can be integrated consecutively. Integrating (13E.17a), we obtain

$$T = T_0\, e^{\frac{p}{2}\eta^2} \tag{13E.19a}$$

where T₀ = T(η = 0). Using the BC (13E.18), we obtain

$$T = T_c\left(1 + \tau\right) e^{\frac{p}{2}\left(\tau + \eta^2\right)} \tag{13E.19b}$$

Substituting (13E.19b) into (13E.17b), multiplying by dη/dx, integrating once, and applying the BC (13E.18), we obtain

$$\sqrt{\frac{2\kappa}{a}}\,\frac{d\eta}{dx} = \sqrt{\eta^4 - 2\eta^2 - \left(\tau^2 + 2\tau\right) + \frac{4(1+\tau)}{p}\left(e^{\frac{p}{2}\left(\tau + \eta^2\right)} - 1\right)} \tag{13E.20}$$

Equation (13E.20) can be integrated numerically; see Fig. 13.3.
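A numerical sketch of the APB profile, assuming the integrated form (2κ/a)(dη/dx)² = η⁴ − 2η² − (τ² + 2τ) + (4(1+τ)/p)(e^{(p/2)(τ+η²)} − 1) together with (13E.19b); the values τ = −0.5, p = 0.1, κ = a = T_c = 1 are illustrative only:

```python
import math

tau, p, kappa, a, Tc = -0.5, 0.1, 1.0, 1.0, 1.0    # illustrative values
eta_eq = math.sqrt(-tau)                            # variants η± of (13E.15)

def F(eta):
    """Right-hand side of the first integral of (13E.17b); vanishes at η±."""
    e2 = eta*eta
    return (e2*e2 - 2.0*e2 - (tau*tau + 2.0*tau)
            + (4.0*(1.0 + tau)/p)*(math.exp(0.5*p*(tau + e2)) - 1.0))

def T(eta):
    """Temperature field of the APB, Eq. (13E.19b)."""
    return Tc*(1.0 + tau)*math.exp(0.5*p*(tau + eta*eta))

# march from near the b-variant (-η+) to near the a-variant (+η+)
n = 4000
etas = [-0.99*eta_eq + i*1.98*eta_eq/n for i in range(n + 1)]
x = [0.0]
for i in range(1, len(etas)):
    h = etas[i] - etas[i-1]
    inv = lambda e: 1.0/math.sqrt(a*F(e)/(2.0*kappa))
    x.append(x[-1] + 0.5*h*(inv(etas[i-1]) + inv(etas[i])))

print(T(eta_eq))            # equals TE = Tc(1+τ): the BC (13E.18) is satisfied
print(T(0.0) < T(eta_eq))   # the wall interior is slightly colder than the bulk
```

Note that F has a double root at η = η±, so the profile approaches the variants only asymptotically, as an interface should.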

Fig. 13.3 Distributions of the scaled temperature (blue) and OP (red) as functions of the scaled coordinate x̃ for the 1d type-E2 equilibrium state after a second-order transition

To analyze the thermodynamic stability of the type-E2 state (13E.19b, 13E.20), we calculate the right-hand side of (13.29):

$$\mathcal{H}_{E2} = -\frac{a}{T_c}\left(1 + p\,\eta^2\right) < 0 \tag{13E.21}$$

This expression is sign-definite, which means that this state is stable, at least locally.

13.2 Generalized Heat Equation

As we concluded in the introduction, the evolution of our system is accompanied by energy redistribution and heat propagation. Hence, in addition to the TDGLE,

$$\frac{d\eta}{dt} = -\gamma\,\frac{\delta G}{\delta\eta},\qquad \gamma > 0 \tag{11.1}$$

our system must be described by a coupled heat equation, which takes into account the heat production due to the ongoing phase transition and its redistribution due to conduction. To derive this equation, we apply to a small volume δV of a heterogeneous nonequilibrium medium the first law,


de = đq + đw,   (13.30a)

the second law:

ds ≥ đq/T,   (13.30b)

and the definition of the heat flux J_T:

đq = − div J_T dt.   (13.30c)

Here đw is the work done on and đq is the amount of heat transferred to the volume δV. Let's assume that our medium is incompressible. Then the work term vanishes, and to find the heat equation, we have to find an expression for the internal energy density variation dê, which accounts for nonlocal interactions in the medium. The derivation is based on the calculation of the variation of the internal energy functional E of the whole system, (13.2), which results from a small inhomogeneous variation of the OP δη. Let's assume that the variation δη occurred at constant temperature and in the volume δV independently from neighboring volumes of the system. Such variation δη vanishes everywhere outside of the considered volume. Then, using the definition of the variational derivative (see Appendix B), we obtain

δE = ∫_{δV} (δE/δη) δη d³x   (13.31)

Here, as before, the variational derivative is understood for the invariable temperature field. On the other hand, from (13.2) we have

δE = ∫_V δê d³x   (13.32)

Comparing (13.32) with (13.31) and using continuity of the variations δê(x) and δη(x) as functions of the position, we arrive at the expression for the energy density variation:

δê = (δE/δη) δη   (13.33)

When the temperature varies simultaneously with the OP, the nonlocal energy density variation takes the form

dê = C_η dT + (δE/δη) dη,   (13.34a)


where T(x, t) is the temperature field and the specific heat per unit volume at constant V and η,

C_η ≡ (∂ê/∂T)_η   (13.34b)

may include the gradient energy contribution. Substituting (13.34a) into (13.30a) and using (13.30c) yields the generalized heat equation (GHE) for the incompressible medium (d/dt → ∂/∂t):

C_η ∂T/∂t = − div J_T + Q(x, t)   (13.35)

where Q(x, t) is the density of instantaneous heat sources in the energy representation:

Q(x, t) = −(δE/δη)(∂η/∂t) = −(∂ê/∂η − κ_E ∇²η)(∂η/∂t)   (13.36a)

κ_E = κ − T dκ/dT   (13.36b)

Using the Legendre transformation (see Eqs. (13.4, 13.5) and Appendix F), the heat source Q(x, t) may also be represented in the entropy form:

Q(x, t) = −(T δS/δη + δG/δη)(∂η/∂t)   (13.37)

The same variational procedure, which was used in (13.31–13.34), may be used to find the entropy variation in the volume δV:

ds = (1/T) C_η dT + (δS/δη) dη.   (13.38)

Comparing (13.38) with (13.34) and using the Legendre transformation (13.4), we arrive at the expression of the first law in the form

ds = (1/T) đq − (1/T)(δG/δη) dη   (13.39)

Application of the second law (13.30b) to (13.39) yields a constraint on the rate of OP change:

(δG/δη)(dη/dt) ≤ 0   (13.40)

This constraint is a local expression of constraint (11.2). It proves that the linear-ansatz equation (11.1) is an acceptable, but not unique, choice of the evolution equation for the OP. It also hints at nonlinear extensions of the evolution equation (see Sect. 10.4). The energy, (13.36), and entropy, (13.37), representations of the heat source Q(x, t) reveal many of its important properties:

1. They show that the source does not need to be sign-definite, that is, there may be local sinks of heat inside an overall heat source.
2. The energy representation shows that, in addition to the homogeneous part (∂ê/∂η) responsible for the latent heat L (see below), the source contains the inhomogeneous part (κ_E ∇²η), which affects the overall heat production in the system.
3. The entropy representation shows that the heat source consists of the entropy contribution, which may be either positive or negative depending on the direction of the transition, and the dissipation which, due to the constraint (13.40), is proportional to the rate of the transition squared and, hence, is always positive.

A phase transition is also accompanied by the redistribution of the internal energy; its transfer is described by the energy density flux vector J_E, defined as follows:

∂ê/∂t = − div J_E.   (13.41)

To find a relation between the energy density and heat fluxes, we first find the time derivative of ê taking into account all (not only local) variations of the temperature and OP fields:

∂ê/∂t = C_η ∂T/∂t + (δE/δη)(∂η/∂t) + div(κ_E ∇η ∂η/∂t)   (13.42)

To obtain this relation, we used the Legendre transform (13.5b) and definition (9.15b). Substituting (13.41) and the GHE in the energetic representation (13.35, 13.36) into (13.42), we obtain the expression for J_E in the incompressible motionless medium:

J_E = J_T − κ_E ∇η (∂η/∂t)   (13.43)

This result shows that in addition to the heat flux, the expression for JE contains the work flux associated with the interactions that appear in the system due to inhomogeneities in a nonlocal nonequilibrium medium. The work flux entails the inhomogeneous term in the heat source (13.36) and is responsible for the surface


creation and dissipation effect, analyzed in the next section. The work flux is analogous to the intensity of a sound wave in a fluid with η replacing the displacement of an element of fluid and κ_E replacing the adiabatic bulk modulus. To complete GHE (13.35), we need an expression for the heat flux in the heterogeneous nonequilibrium medium. Notice that the expressions of the heat flux J_T and heat source Q(x, t) are independent of each other. The heat flux J_T depends on local values of temperature, its gradients, and properties of the medium and is known to vanish with ∇T (Fourier's law). In [4] the following expression was used for the flux:

J_T = Λ∇(δS/δê)   (13.44a)

Another possibility would be to consider an integral expression for the heat flux in a medium with "spatial memory"; these effects are not considered in this book. Expanding the flux J_T in ∇T and disregarding terms of order higher than the first, we find the regular expression for the heat flux:

J_T = − λ∇T   (13.44b)

where the thermal conductivity λ may be a function of T and η. Then the GHE takes the form

C_η ∂T/∂t = ∇·(λ∇T) + Q(x, t)   (13.45)

GHE (13.45) with the heat source expressed by (13.36) or (13.37) is thermodynamically rigorous: it is invariant with respect to whether it is derived from the first or the second law of thermodynamics. It differs from the heat equation (7.5), which we used in Chap. 7 to study kinetics of transformations, by the presence of the heat source and a more complicated specific heat capacity function. GHE (13.45) couples to TDGLE (11.4) or TDLGLE (12.29b) and makes up a system of simultaneous equations that describe all stages of evolution of a phase transformation in a medium with the specified thermodynamic and kinetic properties: C, κ, λ, γ, T_E (or T_C), L, W (or a). Both equations are of the diffusion type; GHE (13.45) is characterized by the thermal diffusivity α and TDGLE (11.4) by the ordering diffusivity m:

α = λ/C,   m = γκ.   (13.46a)

The ratio of the diffusivities


Ki ≡ α/m = λ/(Cγκ)   (13.46b)

is called the kinetic number (cf. Eq. (7.66)); it determines different regimes of the evolution. As we found in Part I, a transformation process may be loosely divided into the following stages: nucleation, growth, and coarsening. All these stages will be affected by the processes of heat release and redistribution; they will be considered separately beginning with the next section.

Example 13.4 Heat Source in a System with Tangential Potential Find the spatial distribution of the heat source in the plane interface moving through the isothermal system at T = T₀ described by the tangential potential with κ = const(T).

Using the transformations (11.14, 11.15) for (13.36a), we obtain an expression:

Q(u) = v (dη/du)(∂ê/∂η − κ d²η/du²)   (13E.22)

To use the solution (11.17b, 11.21–11.24), which was obtained for the Landau potential, we need to scale the variable η and the parameter κ. Using the scaling as in (8.29, 9E.3) and the internal energy density expression (13E.10b), we obtain the expression for the source in the scaled variables:

Q(η) = 6vL √(W/κ) ω²(η),   η(u) = (1 + eᵘ)⁻¹   (13E.23a, 13E.23b)

where v is the velocity of the interface,

v = μ(T_E − T₀)   (11E.6a)

Notice that both branches (+ and -) of the solution (11.17b, 11.21–11.24) have the same expression for Q(u). The latter is depicted in Fig. 13.4.
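A useful consistency check on the source (13E.22) is that its integral across the interface equals vL, the latent heat released per unit area per unit time. The sketch below verifies this numerically for the kink profile η(u) = (1 + eᵘ)⁻¹ with an assumed internal-energy interpolation ê(η) = −L η²(3 − 2η); the parameter values are illustrative assumptions, not the book's:

```python
import math

v, L, kappa = 1.0, 1.0, 0.5   # assumed interface speed, latent heat, gradient coeff

def eta(u):
    """Kink profile between eta = 1 (u -> -inf) and eta = 0 (u -> +inf)."""
    return 1.0 / (1.0 + math.exp(u))

def e_eta(u):
    """de/deta for the assumed interpolation e(eta) = -L * eta^2 (3 - 2 eta)."""
    s = eta(u)
    return -6.0 * L * s * (1.0 - s)

def Q(u, h=1e-4):
    """Heat source (13E.22) via finite differences of the profile."""
    d1 = (eta(u + h) - eta(u - h)) / (2.0 * h)
    d2 = (eta(u + h) - 2.0 * eta(u) + eta(u - h)) / (h * h)
    return v * d1 * (e_eta(u) - kappa * d2)

# Midpoint-rule integral of Q over the interface region
n, a, b = 8000, -20.0, 20.0
du = (b - a) / n
total = sum(Q(a + (i + 0.5) * du) for i in range(n)) * du
# The gradient term integrates to zero, so the total heat release is v * L
```

The gradient contribution κ d²η/du² integrates to a vanishing surface term, so only the latent-heat part survives in the total, in line with the bell-shaped Q(u) of Fig. 13.4.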

13.3 Emergence of a New Phase

Heat release and redistribution affect the emergence of a new phase in many different ways. A field theory of thermal effects in nucleation has not yet been completed. It can be constructed based on the coupling of TDLGLE (12.29b) and GHE (13.45). In this section we consider only one particular thermal effect of this type—the so-called thermal decomposition. It manifests itself when a stable phase is cooled down into a

Fig. 13.4 Distributions of the OP η and heat source Q in the plane interface moving through the isothermal system described by the tangential potential with κ = const(T)

vicinity (above or below) of the spinodal point of the first-order (discontinuous) transition or undercooled below (or heated above) the critical temperature in the second-order (continuous) transition. The thermal noise in the system gives rise to small fluctuations of the OP and local temperature. Whether these fluctuations will grow or decay depends on the stability properties of the adjacent homogeneous equilibrium states. Capitalizing on our success in using the method of normal modes in Sects. 1.5, 9.5.1, and 11.2.1, we study evolution of the small disturbances in the form of harmonic waves superimposed on the homogeneous equilibrium state in question (η̄, T̄):

η(x, t) = η̄ + Ν e^{β(k)t + ik·x}   (13.47a)

T(x, t) = T̄ + Θ e^{β(k)t + ik·x}   (13.47b)

Here k is the wavevector of the permitted perturbations, and β(k) is the amplification factor, which determines the “fate” of the disturbances. When these waves are substituted into TDGLE (11.4) and GHE (13.45), they yield two simultaneous equations for the small amplitudes {Ν, Θ}:


(β/γ + ∂²ḡ/∂η² + κ|k|²) Ν = − (∂²ḡ/∂η∂T) Θ   (13.48a)

(λ|k|² − T̄ (∂²ḡ/∂T²) β) Θ = T̄ (∂²ḡ/∂η∂T) β Ν   (13.48b)

where the functions with a bar are taken at the equilibrium point (η̄, T̄). Not surprisingly, the OP/temperature interactions depend on the mixed partial ∂²ḡ/∂T∂η. If this quantity is zero, the simultaneous equations (13.48) break up into two independent equations for the amplitudes {Ν, Θ}, which means that the OP and temperature waves evolve independently. In the limit of λ → ∞, we recover from (13.48) equation (11.8), that is, the isothermal condition for the OP mode. If λ is finite and the quantity ∂²ḡ/∂T∂η is not zero, the OP and temperature waves interact. In this case the system (13.48) has nontrivial solutions only if its determinant vanishes [5]. Then

β² + {γ[∂²ḡ/∂η² − (∂²ḡ/∂η∂T)²/(∂²ḡ/∂T²)] + (α + m)|k|²} β + α(γ ∂²ḡ/∂η² + m|k|²)|k|² = 0   (13.49)

This solvability condition relates the amplification rate β to the wavenumber k of the perturbations, that is, it is the dispersion relation. For the equilibrium state (η̄, T̄) to be linearly dynamically stable, the real parts of all the roots of (13.49) must not be positive. According to the Routh-Hurwitz theorem [5], the free term and the coefficient of the linear term of (13.49) must not be negative:

γ[∂²ḡ/∂η² − (∂²ḡ/∂η∂T)²/(∂²ḡ/∂T²)] + (α + m)|k|² ≥ 0   (13.50a)

α(γ ∂²ḡ/∂η² + m|k|²)|k|² ≥ 0   (13.50b)

Because α > 0 and m > 0, condition (13.50a) is less restrictive than condition (13.50b). Then we obtain that for the linear dynamic stability of the state (η̄, T̄), there must be

∂²ḡ/∂η² ≥ 0   (13.51)
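The Routh-Hurwitz argument used here can be checked directly for a quadratic dispersion relation β² + bβ + c = 0: all roots have non-positive real parts exactly when b ≥ 0 and c ≥ 0. A minimal numerical sketch:

```python
import cmath

def roots(b, c):
    """Both roots of beta^2 + b*beta + c = 0 (real coefficients)."""
    d = cmath.sqrt(b * b - 4.0 * c)
    return (-b + d) / 2.0, (-b - d) / 2.0

# Stability (Re beta <= 0 for both roots) holds iff b >= 0 and c >= 0
for b, c in [(1.0, 0.5), (2.0, 3.0), (-0.5, 0.5), (1.0, -0.2)]:
    stable = all(r.real <= 1e-12 for r in roots(b, c))
    assert stable == (b >= 0 and c >= 0)
```

This is the quadratic special case of the Routh-Hurwitz theorem invoked for (13.49)-(13.51).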

Hence, the criterion of dynamical stability of the system described by TDGLE+GHE coincides with the criterion of thermodynamic stability of an open (isothermal) system, but not the closed (adiabatic) one. This happens because the normal modes (13.47), so to speak, do not conserve the energy of the functional (13.2). Now let us study the dynamical behavior of different modes of an unstable equilibrium state, that is, a state where (13.10) is true but (13.51) is not. It is more convenient to study the dispersion relation in the form where the amplification factor β and the wavenumber k are scaled as follows:

β = −γ (∂²ḡ/∂η²) ω;   |k|² = −(1/κ)(∂²ḡ/∂η²) q;   (13.52a)

Then the dispersion relation takes the form

ω² + (M − 1)ω + (Ki + 1)ωq − Ki q(1 − q) = 0   (13.52b)

Thus, different regimes of the decomposition are determined by the kinetic number Ki and the interaction module M. The left-hand side of the dispersion relation (13.52b) is a second-degree polynomial in two variables; as is known, the solution set of such an equation is a conic with two branches. Because the discriminant of the dispersion relation (13.52b) or (13.49) is not negative, both of its branches are real (at least for real wavenumbers q). Hence, the modes may grow (β > 0) or decay (β < 0) but cannot oscillate (Im β = 0). The lower branch is never positive, that is, it does not have unstable modes and, hence, does not present any interest for us. The upper branch, that is, the one with the greater amplification factor, may have unstable modes and will be analyzed further. The wavenumber of the neutral mode (ω = 0) is q₀ = 0 or 1. However, the two values may belong to two different branches. Let us find the wavenumber q_m of the most dangerous mode, that is, the one with the greatest amplification factor ω. For this we differentiate (13.52b) with respect to q, equate dω/dq to zero, and present the result in the form

M(Ki + 1) = [1 + q_m(Ki − 1)]² / (1 − 2q_m)   (13.53)

Depending on the values of the interaction module M and kinetic number Ki, several cases of instability of the equilibrium state may be found. These cases are presented in Fig. 13.5 in the plane (Ki, M) with the inserts representing the dispersion relations in the dimensional form β(k).
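The branches of the scaled dispersion relation (13.52b), as reconstructed here, can be explored numerically. The sketch below evaluates the upper branch ω₊(q) and locates the most dangerous mode for two illustrative (assumed) parameter pairs:

```python
import math

def omega_plus(q, M, Ki):
    """Upper branch of omega^2 + (M-1)omega + (Ki+1)q*omega - Ki*q*(1-q) = 0."""
    b = (M - 1.0) + (Ki + 1.0) * q
    return 0.5 * (-b + math.sqrt(b * b + 4.0 * Ki * q * (1.0 - q)))

def most_dangerous(M, Ki, n=2000):
    """Grid search for the fastest-growing scaled wavenumber q_m in [0, 1]."""
    return max((i / n for i in range(n + 1)),
               key=lambda q: omega_plus(q, M, Ki))

# Weak interactions (Case a): the uniform mode q = 0 grows fastest, omega = 1 - M
assert most_dangerous(0.2, 1.0) == 0.0
# Strong interactions (Case c): the uniform mode is neutral, q_m is interior
assert omega_plus(0.0, 2.0, 1.0) == 0.0
assert 0.0 < most_dangerous(2.0, 1.0) < 0.5
```

These two runs reproduce the qualitative behavior of Cases a and c described in the text; intermediate M values give Case b, where q = 0 is unstable but q_m > 0.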


Fig. 13.5 Instability diagram in the plane (Ki, M ). Blue line, M(Ki + 1) = 1; purple line, M = 1. Inserts: the amplification factors β for the unstable branches of the dispersion relation (13.49) as functions of the wavenumber |k| for the cases a, b, and c described in the text

Case a: 0 < M < (Ki + 1)⁻¹ < 1, weak interactions. Equation (13.53) does not have solutions. Hence, the uniform mode (q = 0) is the most dangerous one, with ω₀ = 1 − M. This is similar to the evolution in isothermal systems (cf. Eq. (11.8b)). The difference is that the amplification factor of the most dangerous mode is determined by the thermal interactions, and the amplification factors of the other unstable modes depend on the coefficient of thermal conductivity—the semi-isothermal case.

Case b: (Ki + 1)⁻¹ < M < 1, medium interactions. Equation (13.53) has solutions: the uniform mode is unstable, but the most dangerous mode has a finite wavenumber—the intermediate case.


Case c: M > 1, strong interactions. The uniform mode is neutral, ω₀ = 0, and the wavenumber of the most dangerous mode is finite: 0 < q_m < ½—the semi-adiabatic case.

Case d: M ≫ 1, very strong interactions. An example is a system near the spinodal point (∂ḡ/∂η = 0, ∂²ḡ/∂η² = 0), that is, near the absolute stability limit of a

homogeneous equilibrium state. From the standpoint of thermal effects, Case d is the most interesting and deserves a rigorous study beyond the limit of small amplitudes. To examine the nonlinear regime of evolution of the unstable waves, we introduce a small parameter ε, scale the modes in TDGLE+GHE with respect to this parameter, and balance the linear and nonlinear terms of the same order of magnitude in ε. The parameter ε that we introduce here defines the departure from the spinodal point:

∂²ḡ/∂η² = − n₁ ε².   (13.54a)

As the most dangerous mode is long (k → 0) and slow (β → 0) (see Eq. (13.52a)), we scale the spatiotemporal coordinates with

X = εx,   τ = ε³t   (13.54b)

and the disturbances of {η, T} as follows:

η = η̄ + ε^ν ξ(X, τ),   T = T̄ + ε^{ν+1} n₂ θ(X, τ),   ν > 0   (13.54c)

where the positive quantities nᵢ and the amplitudes ξ, θ are of the order of 1 and the exponent ν needs to be determined. For ν > 2 we can balance only linear terms. For ν = 2 we can balance the nonlinear terms at the leading order of ε if the temperature is high and the mixed partial is small:

T̄ = n₃/ε,   ∂²ḡ/∂η∂T = n₂ ε.   (13.54d)

Then TDGLE+GHE becomes

n₂² θ = n₁ ξ − (1/2)(∂³ḡ/∂η³) ξ² + κ∇²ξ,   (13.55a)


n₃ ∂ξ/∂τ = − λ∇²θ.   (13.55b)

This system governs stability of the state (η̄, T̄) and shows that in the early stages of its decomposition, the temperature deviations are determined by the departure from equilibrium and that the heat sources are balanced by heat conduction in the system. Excluding θ from (13.55) and returning to the dimensional variables, we obtain for the OP field the nonlinear evolution equation:

∂η/∂t = [λ / (T̄ (∂²ḡ/∂η∂T)²)] ∇²(∂g/∂η − κ∇²η)   (13.56)

This equation means that in this case the growing waves of the new phase obey the nonlinear Cahn-Hilliard equation of spinodal decomposition (cf. Eq. (15.16)), so that the order parameter manifests a temporary conservation law. Using the scaling (13.54), we can see that the mobility of this regime,

λ / (T̄ (∂²ḡ/∂η∂T)²) ~ 1/ε,   (13.56a)

is large and independent of the relaxation constant γ of (11.1), which means that such decomposition is totally controlled by heat transfer. The scaling (13.54) also shows that

M = T̄ (∂²ḡ/∂η∂T)² / (C_η ∂²ḡ/∂η²) ~ 1/ε,   (13.56b)

which means that, indeed, the interactions between the OP and temperature modes are strong. The scaling (13.54d) also manifests a large entropy contribution to the free energy. The scaling (13.54c) shows that in this regime the fastest modes to develop are "quasi-isothermal" because the temperature deflection is of higher order than that of the OP. This regime is characterized by modulations of the OP field; it is analogous to the spinodal decomposition in a system with a conserved OP. The difference is, first, that in the latter case modulations accompany the process from beginning to end, while in the present case the modulations are temporary and, second, that for a system with the nonconserved OP the modulations are governed by energy conservation instead of mass conservation as in the spinodal decomposition.
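The band of temporarily conserved modulations can be made concrete by linearizing the Cahn-Hilliard-type equation (13.56) about the unstable state, which gives the growth rate β(k) = −Mob·k²(∂²ḡ/∂η² + κk²), with Mob standing for the heat-transfer-controlled mobility λ/(T̄(∂²ḡ/∂η∂T)²). A hedged sketch with assumed magnitudes:

```python
import math

Mob, g2, kappa = 1.0, -1.0, 0.5   # assumed mobility, d2g/deta2 < 0, gradient coeff

def beta(k):
    """Linear growth rate of a modulation with wavenumber k under (13.56)."""
    return -Mob * k * k * (g2 + kappa * k * k)

k_neutral = math.sqrt(-g2 / kappa)          # beta = 0: shortest unstable wave
k_max     = math.sqrt(-g2 / (2.0 * kappa))  # fastest-growing modulation

assert beta(0.5 * k_neutral) > 0 and beta(2.0 * k_neutral) < 0
assert abs(beta(k_max) - Mob * g2 * g2 / (4.0 * kappa)) < 1e-12
```

As in ordinary spinodal decomposition, only wavenumbers below k_neutral grow, and the pattern is dominated by k_max; here, however, the mobility is set by heat conduction rather than by mass diffusion.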

13.4 Non-isothermal Motion of a Curved Interface

In Chap. 7 we discussed several prominent thermal effects of phase transformations; dendritic growth was one of them. In this chapter we intend to see how we can use FTM to illuminate these effects. One way to use FTM is to solve the coupled TDGLE (11.4) and GHE (13.45). Although complicated, this may be done with the help of numerical methods. However, there is plenty of information about the system's evolution that can be gleaned theoretically from the drumhead (sharp-interface) equations. To obtain these equations, we will use the averaging technique introduced in Sect. 9.5.2 and used in Sect. 11.4 where, in order to derive the drumhead equation for a moving interface, we introduced the time-dependent curvilinear coordinates (u, v, w, t) such that in the new coordinates the OP is a function of one coordinate only, η = η(u); see Fig. 13.6 and cf. Fig. 9.9. That allowed us to characterize the interface by the thermodynamic and kinetic properties (σ, L_v, μ) and the drumhead dynamic variables (v_n, K₀). In this section we extend our analysis to systems of varying temperature. We will derive drumhead equations that reflect the fact that the non-isothermal interface has finite thickness and curvature. We expect these equations to reveal the physical effects due to the release and redistribution of the latent heat, deviations from equilibrium, and multidimensionality; see Fig. 13.7.
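A minimal 1d explicit-Euler integration of the coupled TDGLE+GHE system can be sketched as follows, assuming a double-well potential g = Wη²(1 − η)² + L p(η)(T − T_E)/T_E with the interpolant p(η) = η²(3 − 2η) and dê/dη ≈ −L p′(η); all parameter values are illustrative, not the book's:

```python
import numpy as np

# Assumed properties: well height W, gradient coeff kappa, OP mobility gamma,
# conductivity lam, heat capacity C, latent heat L, equilibrium temperature TE
W, kappa, gamma = 1.0, 0.5, 1.0
lam, C, L, TE = 1.0, 1.0, 0.5, 1.0
N, dx = 200, 0.2
dt = 0.2 * dx**2 / max(gamma * kappa, lam / C)   # explicit-stability margin

x = np.arange(N) * dx
eta = 0.5 * (1.0 - np.tanh((x - 10.0) / 1.0))    # solid seed (eta = 1) on the left
T = np.full(N, 0.8)                              # uniform undercooling T0 < TE

def lap(f):
    """1d Laplacian with zero-flux (Neumann) boundaries."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0] = 2.0 * (f[1] - f[0]) / dx**2
    g[-1] = 2.0 * (f[-2] - f[-1]) / dx**2
    return g

for _ in range(3000):
    lap_eta = lap(eta)
    dgdeta = (2.0 * W * eta * (1.0 - eta) * (1.0 - 2.0 * eta)
              + 6.0 * L * (T - TE) / TE * eta * (1.0 - eta))
    deta_dt = -gamma * (dgdeta - kappa * lap_eta)    # TDGLE (11.1)
    de_deta = -6.0 * L * eta * (1.0 - eta)           # assumed de/deta = -L p'(eta)
    Q = -(de_deta - kappa * lap_eta) * deta_dt       # heat source (13.36a)
    T = T + dt * (lam * lap(T) + Q) / C              # GHE (13.45)
    eta = eta + dt * deta_dt
# The front advances while latent heat warms the melt toward TE (recalescence)
```

With these numbers L/C exceeds the initial undercooling, so growth becomes conduction-limited: the interface temperature approaches T_E and further motion is paced by heat removal, a crude illustration of the regimes analyzed below.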

Fig. 13.6 Curvilinear coordinate system (u, v, w) associated with a curved interface. v_n, velocity of the interface motion; F_GD, the Gibbs-Duhem force; APB, antiphase boundary; IPB, interphase boundary


Fig. 13.7 Three characteristic properties of moving interfaces that separate phases in the first-order transformations (diagram labels: diffuseness, non-isothermality; associated effects: surface creation and dissipation, Tolman length, heat trapping, pattern formation, Gibbs-Thomson, Stefan problem, isothermal kinetics)

13.4.1 Generalized Stefan Heat-Balance Equation

Any heat released at the interface should be removed from it by means of a thermal conduction mechanism. To derive the drumhead approximation of the dynamic equations for a piece of an interface moving in a variable temperature field, we transform TDGLE (11.4) and GHE (13.45) to the time-dependent curvilinear coordinates (u, v, w, t) and average them over the thickness of the interface. In the new coordinates, η = η(u) but T = T(u, v, w, t). However, if the characteristic length of the thermal field l_T is much greater than that of the OP field L_v, then the temperature field is "enslaved" by the OP field, and we may expect that T = T(u) also. For an interface moving at the speed v_n (cf. (7.25a)),

l_T = α/v_n   (13.57)

Then the criterion for the separation of the thermal and ordering scales can be expressed by the generalized Peclet number as follows (cf. (7.49)):

Pe ≡ L_v/l_T = L_v v_n/α ≪ 1   (13.58)
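For orientation, the scale separation (13.57, 13.58) can be checked with representative numbers; the values below are assumptions for illustration only:

```python
# Hedged arithmetic sketch of the drumhead (sharp-interface) limit criterion
alpha = 2.0e-5   # thermal diffusivity [m^2/s] (assumed)
v_n   = 1.0e-2   # interface speed [m/s] (assumed)
L_v   = 1.0e-9   # interface thickness [m] (assumed)

l_T = alpha / v_n          # thermal length (13.57)
Pe  = L_v * v_n / alpha    # generalized Peclet number (13.58), equals L_v / l_T

assert abs(Pe - L_v / l_T) < 1e-18
assert Pe < 1e-3           # Pe << 1: the drumhead approximation applies
```

Here l_T is millimeters while L_v is nanometers, so Pe ~ 10⁻⁷ and the averaging over the interface thickness below is well justified.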

Using the separation-of-scales condition, we present GHE (13.45) in the form

λ (d²T/du² + k_T dT/du) + Q(T, η, dη/du) = 0   (13.59a)

where the source is in the energy representation:

Q(T, η, dη/du) = v_n (dη/du) [∂ê/∂η − κ_E(T)(d²η/du² + 2K₀ dη/du)]   (13.59b)

and

k_T = v_n/α + 2K₀   (13.59c)

may be called the thermal curvature of the interface (cf. Eq. (11.29b)). For a plane interface (K₀ = 0), we may integrate the quasi-stationary GHE (13.59) from u_β = −∞ to u_α = +∞ and obtain the condition of conservation of energy (enthalpy) in the form

C(T_β − T_α) = L   (13.60)

where T(−∞) = T_β and T(+∞) = T_α. For a curved interface (K₀ ≠ 0), there is no conservation of energy along the coordinate line (u, v = const, w = const). Absence of the conservation law does not allow us to resolve the large-scale thermal problem for a curved interface as is done for a planar one, e.g., (13.60). Instead, we average GHE (13.59) over the thickness of the interface in the interval (u_β, u_α), the end points of which are in the regions of the respective phases such that η(u_β) = η_β and η(u_α) = η_α; see Fig. 13.6. The difference from the isothermal case of Sect. 11.4 is that the temperature field at these points may not have reached its asymptotic values yet. Also, contrary to the averaging of TDGLE (11.28), averaging of GHE (13.59) does not require a weight factor because ∫_{u_β}^{u_α} du Q(T, η_{e4}) ≠ 0. Thus, we use the averaging operator

T̂{f} ≡ ∫_{u_β}^{u_α} du f(T(u), η(u), u),   (13.61)

which has a simple relation to the averaging operator used for TDGLE, Â{f} = T̂{f dη/du}, and obtain the latent heat and the jumps of temperature and temperature gradient across the interface:

T̂{(∂ê/∂η)(dη/du)} = L(T),   (13.62a)

T̂{dT/du} = [T] = T(u_β) − T(u_α) ≈ L_v dT/du,   (13.62b)

T̂{d²T/du²} = [dT/du] = (dT/du)(u_β) − (dT/du)(u_α).   (13.62c)

Then, application of (9.97), (11.29), and (13.62) to (13.59) yields the drumhead approximation of the heat-balance equation for a curved interface:

λ[dT/du] + k_T[T] + (L − 2εK₀)v_n = 0   (13.63)

which may be called the Generalized Stefan Heat-Balance Equation (GSHBE), cf. (7.6). Here

ε = σ − T dσ/dT   (13.64)

is the interfacial internal energy. Differentiation in (13.64) is understood in the sense of disequilibrium because the expressions for the surface energy at equilibrium, (9.64), and away from it, (11.32), coincide. As one can see from (13.59) to (13.64), an interface moving in a temperature field is characterized by the additional dynamic quantities [T], [dT/du] and the properties L, C, and λ.

13.4.2 Surface Creation and Dissipation Effect

GSHBE (13.63) is a heat-balance interface condition. It differs from the traditional (Stefan) boundary condition, (7.6), in the terms k_T[T] and (−2εK₀v_n). The latter is due to the gradient internal energy contribution (~κ_E) to the heat source (13.36a); it vanishes for a flat or immobile piece of interface, i.e., when the interfacial area does not vary. To clarify this effect, we may analyze the heat balance before and after a curved interface sweeps material during a first-order transition; see Fig. 13.8. The amount of heat released is called the heat of transformation. Traditionally, it is attributed to the product of the latent heat and the transformed volume: L v_n dv dw dt. However, one must realize that if the moving interface is curved, the area of the interface before and after the move differs by the amount of the surface area created or destroyed: 2K₀ v_n dv dw dt. As the interface carries energy, the change of the surface area will result in a positive or negative additional quantity of heat liberated at the interface. A simple way to see this difference is to consider motion of a piece of interface dX_n = v_n dt bound by a fixed solid angle dΩ (Fig. 13.8): the area of the interface is smaller after the move if the velocity is directed toward the center of curvature of the interface and greater otherwise. As the latent heat is the internal energy difference of the α- and β-phases, the additional amount of heat due to surface


Fig. 13.8 Motion of a piece of interface bound by a fixed solid angle dΩ. K, curvature of the piece before the move; v_n, the normal component of the velocity of the interface; ε, the interfacial internal energy; L, the latent heat of transformation

area difference should be proportional to the interfacial internal (not free) energy. Hence, the heat of transformation will differ from the above-described amount by the amount of the surface area created or destroyed times the surface internal energy; in the boundary condition (7.6), the latent heat L must be replaced by (L - εK). This effect, which may be called the surface creation and dissipation effect, vanishes for a flat or immobile interface when the interfacial area does not vary.

13.4.3 Generalized Emergent Equation of Interfacial Dynamics

In Sect. 7.2 we obtained various regimes of plane interface motion without any consideration of the internal structure (diffuseness) of the interface, e.g., for a very thin interface. However, there is a group of thermal effects of motion of phase-separating interfaces which appear as a result of the interface having internal structure and thickness. To reveal these effects, we extend the averaging of the curvilinear TDGLE (11.29) over the thickness of the interface to the case of a varying temperature field. In many ways this procedure is similar to the one that led us in Sect. 11.4 to EEID (11.30). However, a significant difference exists. Because the temperature is not constant anymore, the relation dg = (∂g/∂η)dη must be replaced by

dg = (∂g/∂η)dη + (∂g/∂T)dT   (13.65)


Then, using the Â operator for averaging TDGLE (11.29) and taking into account that dη/du vanishes at u_β and u_α (see Fig. 13.6), we obtain the equation of motion for a non-isothermal phase-separating interface:

Â{v_n/γ + 2κK} = [g] + T̂{s̃ ∂T/∂u}   (13.66a)

where

s̃ = s − (1/2) κ_s (dη/du)²   (13.66b)

and

κ_s = − dκ/dT.   (13.66c)

Using (9.98) and the fact that (dη/du)² is a bell-like, even function of u, the left-hand side of (13.66a) may be represented as follows:

σ k_η + O(L_v³ k_η³)   (13.67)

where σ is the nonequilibrium surface energy (11.32) and k_η was called the dynamic curvature of the interface:

k_η = v_n/m + 2K₀.   (11.29b)

The first term on the right-hand side of (13.66a) is the free energy jump across the interface where the temperature changes together with the OP. The free energy jump in a system where the latent heat is temperature independent [L = const(T)] can be expressed as follows:

[g] = L (T_E − T_I)/T_E − s_α[T] + C{[T] − T_I ln(1 + [T]/T_I)}   (13.68)

where T_I is the average temperature of the interface, more specifically the temperature of the U = 0 level surface. Substituting (13.67) and (13.68) into (13.66a), we obtain the Generalized Emergent Equation of Interfacial Dynamics (GEEID):

σ k_η = L (T_E − T_I)/T_E + F_GD + (C/2)([T]²/T_I) + O([T]³, L_v³ k_η³)   (13.69a)


F_GD ≡ T̂{(s̃ − s_α) ∂T/∂u}   (13.69b)

GEEID reveals the "driving forces" for the interfacial motion: the free energy difference on the two sides of the interface, L(T_E − T_I)/T_E; the Laplacian pressure due to curvature (−2K₀), (2.11); and the force F_GD, (13.69b), which appears due to the temperature gradient inside the transition zone. The latter may be called the Gibbs-Duhem force because it may be derived from the Gibbs-Duhem relation. Notice that the driving forces in (13.69) have units of pressure because they act on a unit area of the interface. GSHBE (13.63) and GEEID (13.69) identify the local interfacial dynamic variables v_n, K₀, T_I, [T], [dT/du] and relate them to the interfacial thermodynamic (C, L, σ, ε, L_v) and kinetic (λ (or α), m) properties of the medium. These equations are independent of the history and may be used as the interface boundary conditions in a long-time, long-range problem of phase transformation where the heat diffusion is essential, e.g., see Sect. 13.6.

13.4.4 Gibbs-Duhem Force

To elucidate the physical meaning of the Gibbs-Duhem force, the energy conservation condition (13.60) is not enough; we need detailed knowledge of the temperature field inside the interface; see (13.69b). Let us solve the quasi-stationary GHE (13.59) for dT/du inside the interface using a method of asymptotic expansion in the case where the temperature gradient in the final phase (u = u_β) is zero. First, we set T_I = T_β. Then, we obtain an integral representation of the gradient:

λ ∂T/∂u = − e^{−k_T u} ∫_{u_β}^{u} dũ Q(ũ) e^{k_T ũ}   (13.70a)



Integrating (13.70a) by parts, we obtain λ

∂T =∂u

u uβ

d~uQð~uÞ þ kT

~u

u

d~u uβ



~ d~~uQ ~ u þ O L3v k3T

ð13:70bÞ

Expansion in (13.70b) in increasing powers of kT may be considered an expansion into “powers of disequilibrium.” If LvkT ≪ 1, that is, if the conditions (13.58), (9.99) are true, expansion (13.70b) can be truncated after the second term, and we can use the energy representation of the heat-source density (13.59b) with the equilibrium solution ηe4(u) from (9.60). Finally, substituting (13.70b) into (13.69b) gives us the expression for the Gibbs-Duhem force:

F_GD = − (v_n/λ) I₁ + (C/λ²) v_n² I₂ + (2K₀/λ) v_n I₃   (13.71)

where the entropy density moments Iᵢ are defined as follows:

I₁ = T̂{(s̃ − s_α) p(u)}   (13.71a)

I₂ = T̂{(s̃ − s_α) ∫_{u_β}^{u} dũ p(ũ)}   (13.71b)

I₃ = T̂{(s̃ − s_α) ∫_{u_β}^{u} dũ [κ_E (dη/dũ)² + p(ũ)]}   (13.71c)

p(u) = ê − ê_β = T_E (s̃ − s_β).   (13.71d)

Substitution of (13.71) into (13.69) yields another form of GEEID:

L (T_E − T_β)/T_E + (C/2)([T]²/T_β) = (σ/m + I₁/λ) v_n + 2σK₀ − (C/λ²) I₂ v_n² − (2/λ) I₃ v_n K₀   (13.72)

Exact expressions for the moments Iᵢ depend on the type of the potential used. However, one can see from (13.71a, 13.71b, 13.71c) that I₃ ≈ I₂ ≈ I₁ L_v. This means that the Gibbs-Duhem force (13.71) is either parallel or antiparallel to the interfacial velocity depending on the value of the moment I₁. It is instructive to express I₁ and F_GD using only measurable quantities that characterize an interface, such as the interface energy σ, entropy χ, and latent heat L. In the medium with κ = const(T), the entropic representation of p(u) in (13.71d) yields

I₁ = T_E T̂{(s̃(u) − s_α)(s̃(u) − s_β)} = T_E T̂{ δs² + ([s]/[η])(η_{e4} − η̄)δs + ([s]²/[η]²)(η_{e4} − η_β)(η_{e4} − η_α) }   (13.73)

where \bar{\eta} = (\eta_\beta + \eta_\alpha)/2. Using (9.69c) and the bell-like shape of δs, we obtain

\hat{T}\left\{ \delta s^2 \right\} \approx \frac{1}{L_v}\left( \Gamma_s^{(\eta)} \right)^2    (13.74a)

\hat{T}\left\{ (\eta_{e4} - \bar{\eta})\,\delta s \right\} \approx 0    (13.74b)

\hat{T}\left\{ \frac{(\eta_{e4} - \eta_\beta)(\eta_{e4} - \eta_\alpha)}{[\eta]^2} \right\} \approx -\frac{1}{6}\, L_v    (13.74c)

13 Multi-Physics Coupling: Thermal Effects of Phase Transformations

Taking into account that [s(T_E)] = -L/T_E (see Eq. (2.8a)) and substituting (13.74) into (13.73), we obtain

I_1 \approx \frac{T_E}{L_v}\left( \Gamma_s^{(\eta)} \right)^2 - \frac{L_v}{6}\,\frac{L^2}{T_E}    (13.75)

Finally, substituting (13.75) into (13.71), we arrive at the linear approximation for the Gibbs-Duhem force:

F_{GD} \approx \frac{v_n C}{\lambda}\left[ \frac{L_v}{6}\,\frac{L^2}{T_E} - \frac{T_E}{L_v}\left( \Gamma_s^{(\eta)} \right)^2 \right]    (13.76)

The significance of this relation is that it is expressed only through measurable quantities and thermodynamic properties of the system, yet it is applicable to many different situations. The type of transition affects the relative magnitudes of Γ_s^(η) and L (e.g., L = 0 for an APB), which in turn dramatically affects the magnitude of I_1: it is negative for a typical first-order transition and positive for a second-order transition. Hence, F_GD propels the motion of an interface that appears after a first-order transition, serving as a driving force, and opposes the motion of an interface after a second-order transition, manifesting itself as a drag force; see (13.69) and Fig. 13.6.
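The sign logic of I_1 can be made concrete with a short numerical check of (13.75); the parameter values below are purely illustrative, not data for any specific material.

```python
# Sign of the moment I1 (Eq. 13.75) decides whether the Gibbs-Duhem force
# drives (I1 < 0) or drags (I1 > 0) the moving interface.
# T_E, Lv, and the inputs below are illustrative assumptions.
def I1(Gamma, L, T_E=1000.0, Lv=2.0e-9):
    return (T_E / Lv) * Gamma**2 - Lv * L**2 / (6.0 * T_E)

first_order = I1(Gamma=0.0, L=2.0e9)     # latent heat dominates: I1 < 0, driving force
second_order = I1(Gamma=1.0e-4, L=0.0)   # L = 0 (e.g., an APB): I1 > 0, drag force
```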

13.4.5 Interphase Boundary Motion: Heat Trapping Effect

Now let us analyze various regimes of interface motion for a first-order phase transformation. To be specific, let's consider growth of the β-phase (e.g., crystal) replacing the α-phase (e.g., liquid). This case corresponds to the u-axis in Fig. 13.6 directed from the β-phase to the α-phase, and growth of the β-phase corresponds to positive v_n. In Fig. 13.9 we summarize what we already know about the motion of a plane phase-separating interface. If the entire system is maintained at constant temperature T_0, then the velocity of the interface is expressed by (11E.6a), and FTM allows us to relate the kinetic coefficient μ to other material properties; see (11E.6b). On the other hand, if T_0 is only the initial temperature of the parent phase (α), then the motion of the interface is more complicated. In the sharp-interface approximation of adiabatic crystallization (see Sect. 7.2.2),

v = v_0 (St - 1), \quad \text{for } St > 1    (7.23)

is the stationary velocity of the interface (the kinetic regime) and

T_\beta = T_E - \frac{L}{C}(St - 1)    (7.24a)

is the temperature of the solid phase. Here

Fig. 13.9 Different regimes of motion of a plane interface separating phases α and β after a first-order phase transition. (a) Peclet number Pe and (b) final temperature of the β-phase T_β versus the initial supercooling (Stefan number) St of the α-phase. Blue lines: solution of the Stefan problem (7.14b, 7.23, 7.28); red lines: solution of GEEID (13.72) for a plane interface (K_0 = 0). [Panel (a) marks the heat-trapping, diffusion, and kinetic regimes; the St axis spans ≈0.9-1.2 with St_tr marked.]

St \equiv \frac{C (T_E - T_0)}{L}    (7.17)

is the Stefan number, or dimensionless supercooling, and

v_0 \equiv \frac{\mu L}{C}    (7.25c)

is the characteristic velocity. For St ≤ 1 (the diffusion regime) there are no stationary solutions, and the temperature of the solid phase asymptotically approaches T_E. These regimes are shown by the blue lines in Fig. 13.9a, b. Analysis of GEEID (13.72) with (13.75) reveals another regime of stationary motion (v_n = const(t)) of a plane (K_0 = 0) thin-but-diffuse interface: a regime of stationary growth of the β-phase (v_n > 0) at low supercooling, St < 1. The beauty of this regime is that the temperature of the β-phase after the transformation is above the equilibrium value of the melting point (T_β > T_E); see the red lines in Fig. 13.9a, b. This effect is called heat trapping. Mathematically, it means that if I_1 in (13.75) is negative and large enough, it is possible to cancel the first term on the right-hand side of (13.72). Physically, it means that if the Gibbs-Duhem force in (13.76) becomes positive and large enough, it propels the interface against the negative bulk driving force of L(T_E - T_β)/T_E. However, in this case, limiting (13.72) to the linear-in-velocity term only would lead to the wrong result that v_n < 0 for T_β < T_E, that is, that the β-phase (crystal) would be shrinking although it is


favored by the phase diagram: its temperature is below the melting point. This means that, although condition (13.58) applies, to balance (13.72) we must take into account the nonlinear-in-velocity terms. As we can see from (13.72), for the heat trapping to be possible, the coefficient in front of the linear-in-v_n term must be negative. To elucidate the heat-trapping effect, let's analyze it in a system described by the tangential potential. Taking into account that for the tangential potential Γ_s^(η) = 0 and using (13.75) and (11E.6b), we obtain the heat-trapping criterion:

Ht \equiv \frac{\lambda}{m L^2 L_v} = \frac{\lambda \sigma T_E}{\mu L L_v} < \frac{1}{6}    (13.77a)

where Ht is called the heat-trapping number. The criterion may be interpreted as an upper limit on the rate of thermal conduction or a lower limit on the interfacial thickness in the system. The latter means that the interface may be thin, but its thickness must remain finite. Furthermore, using (13.46) and

L_v = 4\sqrt{\frac{\kappa}{W}}    (11E.7)

\sigma = \frac{1}{6}\sqrt{\kappa W}    (9E.5)

the heat-trapping criterion (13.77a) can be presented as follows:

Ki < \frac{2}{3U} = \frac{4 L^2}{W C T_E}    (13.77b)
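The equivalence of the two forms of criterion (13.77b) follows from U = 1/PQ with P = 6L/W and Q = L/CT_E, and is easy to verify numerically; the material constants below are illustrative assumptions.

```python
# Heat-trapping criterion (13.77b): Ki < 2/(3U) = 4 L^2 / (W C T_E).
# All property values are illustrative, in SI-like volumetric units.
C, T_E = 2.0e6, 1000.0        # specific heat J/(m^3 K), equilibrium temperature K
L, W = 2.0e9, 5.0e8           # latent heat and free-energy barrier, J/m^3

Q = L / (C * T_E)             # (7.17b)
P = 6.0 * L / W               # (13E.3a)
U = 1.0 / (P * Q)             # thermodynamic number

bound_U = 2.0 / (3.0 * U)            # criterion via U
bound_props = 4.0 * L**2 / (W * C * T_E)  # criterion via material properties

Ki = 1.5                      # kinetic number, assumed
trapping_possible = Ki < bound_U
```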

Moreover, for the tangential potential, the coefficients I_i of the Gibbs-Duhem force are

I_1 = -0.1583\, L_v\, \frac{L^2}{T_E}    (13.78a)

I_2 = -0.0403\, L_v^2\, \frac{L^2}{T_E}    (13.78b)

I_3 = -0.0403\, L_v^2\, \frac{L^2}{T_E} + 0.0066\, L_v^2\, \frac{W L}{T_E}    (13.78c)

Now, substituting (13.78) into (13.72) with K_0 = 0 and taking into account (13.58), (7.17), and (13.60), which relates the final temperature of the β-phase T_β to the initial temperature of the α-phase T_α = T_0, we obtain an equation for the stationary velocity, v_n = const(t), of a plane interface:

St = 1 + (Ht - 0.1583)\, Pe + 0.0403\, Pe^2 - O(Pe^3)    (13.79)

The function Pe(St) is depicted in red in Fig. 13.9a. If the criterion (13.77) is not satisfied, then the stationary regime (Pe > 0) exists only for St > 1, and the difference from the sharp-interface approximation of adiabatic crystallization, (7.23), is insignificant. If the criterion (13.77) is satisfied, then the stationary regime (Pe > 0) also exists for

1 - \frac{(Ht - 0.1583)^2}{4 \times 0.0403} \equiv St_{tr} \le St < 1    (13.79a)
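Equation (13.79) is a quadratic in Pe, so the stationary velocity and the transition supercooling St_tr can be found directly; the heat-trapping number below is an illustrative assumption.

```python
import math

# Stationary Peclet number from Eq. (13.79):
#   St = 1 + (Ht - 0.1583)*Pe + 0.0403*Pe^2
# Physical (growing) solutions require Pe > 0.
def stationary_Pe(St, Ht):
    a, b, c = 0.0403, Ht - 0.1583, 1.0 - St
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                                  # no stationary solution
    roots = [(-b + s * math.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    return [p for p in roots if p > 0.0]

Ht = 0.05                                          # assumed; satisfies criterion (13.77)
St_tr = 1.0 - (Ht - 0.1583) ** 2 / (4.0 * 0.0403)  # Eq. (13.79a)
```

For St_tr ≤ St < 1 the quadratic has two positive roots (the red branch in Fig. 13.9a), while for St < St_tr no stationary solution exists.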

Compare the red curve in Fig. 13.9a with the blue line, which expresses the solution of the same problem without the effects of interfacial diffuseness. The significant difference is that the diffuse-interface stationary solution does not vanish for T_0 > T_E - L/C and remains positive for T_E - L/C < T_0 < T_tr. One may say that in this case the kinetic regime of growth “penetrates” the temperature domain of the diffusion regime. However, the most interesting part of the effect is revealed when, using (13.60) and (13.79), we calculate the final temperature of the β-phase after the transformation. Then we can see (red line in Fig. 13.9b) that this temperature is above the equilibrium point, T_β > T_E. During this process the low-symmetry β-phase grows (v_n > 0) at the expense of the high-symmetry α-phase at a temperature above the equilibrium point (T_β > T_E). In the case of crystallization of water, for instance, that would mean growth of superheated ice from supercooled water. For the crystallization of ice, however, criterion (13.77) is not satisfied, but it is quite feasible for crystallization of other substances. Another possibility of growing the β-phase (v_n > 0) with the temperature after transformation above the equilibrium value (T_β > T_E) is due to the Gibbs-Thomson effect, that is, the change of the equilibrium temperature due to the curvature of the phase-separating interface when the center of curvature is in the α-phase (K_0 < 0); see GEEID (13.72) and (2E.5).

13.4.6 APB Motion: Thermal Drag Effect

In Sect. 11.6 we considered the motion of an isothermal APB driven by its own curvature. Conventional logic dictates that APBs are not accompanied by temperature gradients and do not cause thermal effects because the latent heat that generates these effects vanishes in second-order transitions, L = 0; see (2.8a) and (8.45b). What is overlooked by such logic is the contribution of the surface internal energy associated with the interface. To properly describe this effect, one may apply TDGLE (11.4) and GHE (13.45) to the APB motion; see Fig. 13.10. However, we intend to show here that the non-isothermal drumhead GSHBE (13.63) and GEEID (13.72), applied to the APB motion, describe the temperature

Fig. 13.10 Temperature distribution inside a curved APB moving toward its center of curvature at u = 0, numerically calculated using TDGLE (11.4) and GHE (13.45). [Axes: temperature increment versus distance from center u; the wave amplitude [T], the gradient dT/du, and the width L_v are marked.]

waves of the amplitude [T] and average temperature gradient \overline{dT/du} = [T]/L_v defined in (13.62b, 13.62c) and shown in Fig. 13.10. Indeed, due to the symmetry of the continuous transition, we may assume that [dT/du] = 0 but [T] ≠ 0 (Why?). Hence, the wave represents a temperature double layer; see Fig. 13.10. Then the system of two simultaneous equations (13.63) and (13.72) with (13.75) for the three drumhead variables of the layer, v_n, K_0, [T], can be resolved as follows:

[T] = \frac{2\varepsilon K_0}{C\left[ 1 - Ki - \frac{3}{4}(1+\tau)\, q \right]}    (13.80a)

v_n = -\frac{2\alpha K_0}{Ki + \frac{3}{4}(1+\tau)\, q}    (13.80b)

q \equiv \frac{a}{C T_C}    (13.80c)

Notice that the temperature wave amplitude [T] is proportional to the curvature of the interface. For a spherical particle, (13.80b) can be resolved using the bubble differential condition:

\dot{K}_0 = -v_n K_0^2    (11.34c)

Also notice that the wave amplitude [T] depends critically on the temperature τ. To see that, we need to take into account that \varepsilon = \sigma + T\chi \propto \sqrt{-\tau} and recall that


Fig. 13.11 Borrow-return mechanism. Internal energy of a substance as a function of the order parameter

L_{APB} = 2\sqrt{\frac{2\kappa}{a(-\tau)}}    (9E.6)

\sigma = \frac{2}{3}\sqrt{2 a \kappa (-\tau)^3}    (9E.7)

\chi = \frac{1}{T_c}\sqrt{2 a \kappa (-\tau)}    (9E.8)

Comparison of (13.80b) with (11.36a) reveals the thermal drag effect: a piece of APB with temperature gradients inside the transition region moves slower than an isothermal one. The interfacial dynamics is limited not only by the mobility of the interface m but also by the thermal conductivity λ of the system, with the kinetic number Ki measuring the relative roles of these processes. The thermal drag creates a temperature wave around the moving APB even when isothermal conditions are maintained far away from the interface. The drag effect is explained by the Gibbs-Duhem force F_GD being antiparallel to the boundary's velocity, hence playing the role of a drag force; see Fig. 13.6. “Dissolution” of a small particle of a minority variant is caused by the Laplacian pressure from the curved interface. At the same time, the Gibbs-Duhem force generates a thermal pressure in the particle that opposes the Laplacian pressure. In an ideal insulator, that is, a material with λ = 0, these pressures may neutralize each other. The FTM provides an explanation of the drag effect based on a borrow-return mechanism; see Fig. 13.11. Both variants (α, β) on either side of the interface are characterized by the same amount of internal energy density: e_β = e_α. Transformation inside the interface from one variant to the other, however, requires crossing the


internal energy barrier (maximum), which corresponds to the disordered phase with η_γ = 0. So, a small volume of substance must “borrow” a certain amount of energy, proportional to Δe = e_γ - e_α, from the neighboring volumes while moving uphill on the internal energy diagram (see Fig. 13.11) and “return” it later on the downhill stage of the transformation. The borrow-return mechanism entails the internal energy flux vector (13.43) through the interface. In the non-isothermal drumhead approximation, it is

J_E = -\lambda\,\frac{dT}{du} + \kappa_E v_n \left( \frac{d\eta_{e4}}{du} \right)^{2} = v_n (e - e_\beta) + O\!\left(Pe^2 + Pe\,Ge\right) \approx \frac{a}{4}\left( \eta^2 + \tau \right)^{2} v_n    (13.81)

where we used the equilibrium solution η_e4(u) of Example 9.2, (8.4), and (13.5b) to calculate the last expression. Notice that the energy flux inside the interface does not vanish even when the energy densities outside the interface are equal, e_β = e_α. One can see that J_E is a bell-like function of space, peaked at the point where η_γ = 0. Such an internal energy exchange requires a transport mechanism, which here is provided by heat conduction. Thus, the drag effect is due to the finite rate of heat transfer measured by the conductivity λ. Thermal drag occurs because the conversion of one variant into another is accompanied by the transmission of energy between neighboring pieces of material, which cannot occur infinitely fast. Importantly, the thermal drag exists despite the vanishing latent heat of the transition, which is the origin of the thermal effects in discontinuous transformations.
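The bell-like shape of the flux profile can be checked directly: J_E vanishes in both bulk variants η = ±√(−τ) and peaks at the disordered state η = 0. The model values below are illustrative.

```python
# Energy flux profile in the form (a/4)*(eta^2 + tau)^2 * vn of Eq. (13.81).
# a, tau, vn are illustrative model values (tau < 0 for the ordered variants).
a, tau, vn = 4.0, -1.0, 1.0
J_E = lambda eta: (a / 4.0) * (eta**2 + tau) ** 2 * vn

bulk_alpha = J_E(-1.0)   # eta = -sqrt(-tau): flux vanishes in the bulk
bulk_beta = J_E(1.0)     # eta = +sqrt(-tau): flux vanishes in the bulk
peak = J_E(0.0)          # disordered state: maximal "borrowed" flux
```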

13.5 Variability of the System: Length and Energy Scales

To better understand the various features and effects of phase transformations with varying temperature fields, we need to analyze the different length and energy scales relevant to the process. Let us concentrate on the first-order transformation with the tangential potential. To this end, we have encountered three relevant energy scales: the thermal energy density CT_E, the latent heat L, and the free energy density barrier W. To describe the various thermal properties of the systems, they may be arranged into two dimensionless ratios: Q = L/CT_E, (7.17b), and P = 6L/W, (13E.3a). Example 13.1 shows that a unified parameter, the thermodynamic number U = 1/PQ, (13E.7a), regulates the thermal properties of adiabatic systems with large values of P. Then, together with the kinetic number Ki, (13.46b), U determines all the different regimes of phase transformations in such systems. In other words, all the different cases of the decomposition can be classified in terms of the two numbers, Ki and U; see Fig. 13.5. For instance, the heat-trapping criterion (13.77b) can be expressed as Ki·U < 2/3. This means that all thermodynamic systems may be divided into a few universality classes with similar thermal behavior depending on the magnitudes of U and Ki.


As we have found in Chap. 7, the thermal properties of the system give rise to the kinetic length scale

l_\mu = \frac{\lambda}{\mu L},    (7.25b)

where

\mu = \frac{\gamma \kappa L}{T_E \sigma},    (11E.6b)

and the capillary length scale

l_c = \frac{\sigma C T_E}{L^2},    (7.60)

which, together with the fundamental length scale L_v = 4\sqrt{\kappa/W}, determine the system's behavior. Importantly, the thermodynamic and kinetic numbers may be represented as ratios of the length scales of the system as follows:

U = \frac{l_c}{L_v}    (13.82a)

Ki = \frac{l_\mu}{l_c}    (13.82b)

This representation allows us to interpret the thermal effects as an interplay of different length scales in the system.
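The length-scale representation (13.82) can be evaluated for a hypothetical set of material constants; every number below is an illustrative assumption, not data for a real substance.

```python
import math

# Length scales and dimensionless numbers, Eqs. (7.25b), (7.60), (11E.7), (13.82).
lam = 20.0          # thermal conductivity
mu = 1.0e-3         # interface kinetic coefficient
L = 2.0e9           # latent heat density
sigma = 0.3         # interface energy
C, T_E = 2.0e6, 1000.0
kappa, W = 9.0e-10, 3.6e9

L_v = 4.0 * math.sqrt(kappa / W)   # fundamental length, (11E.7)
l_mu = lam / (mu * L)              # kinetic length, (7.25b)
l_c = sigma * C * T_E / L**2       # capillary length, (7.60)

U = l_c / L_v                      # thermodynamic number, (13.82a)
Ki = l_mu / l_c                    # kinetic number, (13.82b)
```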

13.6 Pattern Formation

In this section, FTM will be applied to a few real-life problems that result in the formation of complex structures: patterns. In some sense, this section is a bridge to Part III because it demonstrates many of the advantages of applying FTM. Up to this point, all the problems that we encountered were tackled with the help of different theoretical methods. These methods can also be applied to the problems of pattern formation in phase transformations; for instance, they tell us that patterns always form as a result of a certain instability in the system. However, in this section we will take advantage of the very effective method of numerical simulations. The purpose of this section is not to present results useful for practical applications but to provide a useful framework for the theoretical and numerical analyses of the system. One of the advantages of numerical simulations is that this approach allows intuitive, graphical analysis of the results. We intend to show that a combination of only two processes, the phase transition described by TDGLE (11.4) and the heat redistribution described by GHE (13.45), is capable of generating very complicated patterns, which are similar to those observed in experiments, specifically in


crystallization. To that end, we did not include the thermal noise studied in Chap. 12 in this system of equations. The equations should be supplemented with a free energy potential that specifies the system and with the boundary and initial conditions that specify the physical situation. To describe the system, we use the tangential potential (13E.1). For the boundary conditions, we choose thermal isolation of the system. Any realistic phase transformation starts with a nucleation stage, when the first traces of the product phase emerge from the bulk of the parent phase. That stage was simulated in Sect. 12.7. In this section we do not intend to reproduce this stage adequately, and the simulations start (the initial conditions) with a very small fraction of the product phase β already present in the almost uniform parent phase α at the same temperature T_0 < T_E as the phase β. Although one can discretize these equations as they are, a more physically sound approach calls for scaling of these equations. The latter has the following advantages. First, the scaling helps find the number of independent variables and reveals the important physical quantities that determine the behavior of the system. Second, computationally, it is easier to deal with dimensionless quantities than with dimensional ones. We scale the space, time, and temperature as follows:

\tilde{x} = \sqrt{\frac{W}{\kappa}}\, x, \quad \tilde{t} = \gamma W t, \quad \tilde{T} = \frac{C}{L}(T - T_0)    (13.83)

where the tilded variables are dimensionless and \tilde{X} is the dimensionless size of the system. From now on we will use only the dimensionless variables; hence, we may drop the tildes without any confusion. Then the dimensionless evolution equations for our system take the form

\frac{\partial \eta}{\partial t} = \nabla^2 \eta - \omega(\eta)\left[ 1 - 2\eta + 4U\,\frac{T - St}{6} \right]    (13.84a)

\frac{\partial T}{\partial t} = Ki\,\nabla^2 T + \frac{1}{P}\left[ \nabla^2 \eta - \omega(\eta)(1 - 2\eta - P) \right] \frac{\partial \eta}{\partial t}    (13.84b)

where U, P, Ki, and St are defined above. Notice that the original, dimensional system of equations (13.45), (11.4), and (13E.1) has seven independent material properties that describe the system, (C, L, T_E, W, λ, κ, γ), while the scaled system (13.84) has an irreducible set of only three material parameters: (U, P, Ki). Together with the initial temperature T_0 (or supercooling St) and the system's size X, they determine the different regimes of evolution of the system.
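A minimal 1d explicit discretization of the scaled system (13.84) can be sketched as follows. The factor ω(η) = η(1 − η) used here is a placeholder that vanishes in both phases; for quantitative work the actual ω(η) of the tangential potential (13E.1) should be substituted, and all run parameters are illustrative.

```python
import numpy as np

# Explicit Euler / central-difference sketch of the scaled system (13.84) in 1d
# with periodic boundaries. omega(eta) is an assumed placeholder, not (13E.1).
def step(eta, T, dx, dt, U, P, Ki, St, omega=lambda e: e * (1.0 - e)):
    lap = lambda f: (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2
    w = omega(eta)
    deta = lap(eta) - w * (1.0 - 2.0 * eta + 4.0 * U * (T - St) / 6.0)    # (13.84a)
    dT = Ki * lap(T) + (lap(eta) - w * (1.0 - 2.0 * eta - P)) * deta / P  # (13.84b)
    return eta + dt * deta, T + dt * dT

# a small seed of the product phase in the uniform parent phase
eta = np.zeros(128); eta[60:68] = 1.0
T = np.zeros(128)
for _ in range(100):
    eta, T = step(eta, T, dx=1.0, dt=0.05, U=0.5, P=2.0, Ki=1.0, St=0.505)
```

The explicit time step must respect the diffusive stability limit dt < dx²/2 for the ∇² terms; the values above satisfy it.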

13.6.1 One-Dimensional Transformation

In Fig. 13.12, the results of numerical calculations of the discontinuous phase transformation in a simple one-component 1d system under conditions of thermal


Fig. 13.12 Numerical simulation results of the 1d closed system (13.84) with U = 0.5, P = 2, Ki = 1, St = 0.505: (a) t = 1100, (b) t = 1500, (c) t = 4500. [Each panel shows the order parameter and temperature versus distance into the sample (0-400), with the liquid, transition-state, and solid levels and the melting temperature marked.]

isolation are presented. The simulations started from a state just below the α-spinodal point with small-amplitude noise added to the initial distribution of the OP. Significant insight may be gained by viewing the simulation results against the backdrop of the equilibrium-state diagram in Fig. 13.1 and the instability diagram in Fig. 13.5. The following features of the inhomogeneous transformation are observed. At the initial stage of decomposition (t < 1100; Figs. 13.1 and 13.12a), we observe formation of the structural period of the OP spacing and the drumhead interface. The oscillatory mechanism emerged from the finite-wavelength instability of the adiabatically stable transition state; see case c in Fig. 13.5 and the filled circle in Fig. 13.1. At the early stage (t ~ 1500, Fig. 13.12b), an almost perfect periodic domain structure with the wavenumber described by (13.53) is created. At the later stage (t ~ 2000, Fig. 13.12b), we observe the coarsening process, which starts practically immediately after the emergence of the almost perfect periodic domain structure. The coarsening takes one of two routes: dissolution of a layer accompanied by a local temperature dip, or coalescence of two neighboring layers accompanied by a temperature spike. Both types eventually lead to a new equilibrium state with a new period. At the late stage (t > 4000; Figs. 13.1 and 13.12c), we observe the end of the first stage of coarsening with the formation of an almost perfect periodic structure of β- and α-phase plates with the doubled period. It is customary to view coarsening as curvature-driven motion. In that case, there would be no coarsening in a 1d system, where all boundaries are flat. In fact, coarsening is driven by the reduction of surface energy, which makes it subject to the thermal effects.
Analysis of the coarsening scenario in the 1d closed system reveals the mechanism of the sequential doubling of the structural period, which is completely different from the traditional Ostwald-Lifshitz-Slezov mechanism of coarsening discussed in Chap. 5. Notice that we have already come across this mechanism of structural coarsening in the study of the dendritic growth in Sect. 7.6.2.F.

13.6.2 Two-Dimensional Transformation

Even more interesting results come about in numerical simulations of two- and three-dimensional systems. In Fig. 13.13a, b, the 2d color maps of the OP and temperature fields after a long-time (t ~ 1000) simulation of the transformations described by (13.84) in a large system of X = 1000 for the values (U = 0.5, P = 20, Ki = 2, St = 0.5) are depicted. The simulations started with a small circular seed of the β-phase (initial radius ~20l) in the sea of the α-phase, all at temperature T_0 < T_E. Very quickly (t_d ~ 10) the system develops the drumhead interface visible in Fig. 13.13a. However, growth of a circular disk is not a stable process, and after a time t_MSV ~ 100, small protrusions develop, which later turn into long needles. The process of breaking the spherical (circular) symmetry of the growing β-phase is called the Mullins-Sekerka-Voronkov instability; see Sect. 7.4. The distributions of temperature and OP along the axis of symmetry of the needle are presented in Fig. 13.14. Notice two important features of the temperature field. First, there is a jump of the temperature gradient across the drumhead interface at the tip of the needle due to the latent heat release. Second, there is visible overheating (T > T_E) at the root of the needle due to the “negative” curvature of the needle's interface close to the center (Why?). In Fig. 13.15, the 2d color maps of the OP and temperature fields from simulations of the “dendritic forest,” where the initial perturbations were placed on a plane crystal, are depicted. To obtain the side branches, one has to “turn on” the Langevin force generator, which we considered in Sect. 12.4. It is of interest here to compare the present results with those of the method of cellular automata presented in Sect. 7.6. CAM is essentially an FTM coarse-grained on a larger length scale, a “poor man's FTM” of sorts. For the method to work, the


Fig. 13.13 Simulations of the growth of a spherical (circular) particle of β-phase in a sea of α-phase in the system with U = 0.5, P = 20, Ki = 2.0, St = 0.5. (a) Order parameter field; (b) temperature field

Fig. 13.14 Distribution of the temperature and OP along the axis of symmetry of the needle in Fig. 13.13 (the factor √2 is due to the diagonal direction of the needle). The equilibrium temperature corresponds to T = 0.5. [Axes: order parameter and scaled temperature versus scaled distance from center x/√2.]

cell size should be much larger than the interfacial thickness: l_CA ≫ L_v. To derive the CAM heat-balance equation, we integrated (7.5) over the volume of the cell to obtain (7.70) and added the amount of latent heat released in the cell, (7.71). The latter contribution can be derived using the expression (13.36a) for the heat source Q and the assumption that the OP field in the cell represents a traveling wave (interface), that is, it moves as a whole with unchanging internal structure. Then, introducing the unit normal n directed toward the parent (liquid) phase (see Fig. 7.9) and using (2.8a) and (9.97a), we obtain

\int_{\Omega_{CA}} Q(\mathbf{x}, t)\, d^3x = v_n \oint_{S_0} ds \int_{-\infty}^{\infty} dn \left( \frac{\partial e}{\partial \eta} - \kappa_E \frac{\partial^2 \eta}{\partial n^2} \right) \frac{\partial \eta}{\partial n} = v_n S_0 L    (13.85)

The CAM kinetic relation (7.73c) is the EEID (11.30) (disregarding the heat-trapping effect). Juxtaposition of the FTM and CAM simulation results allows us to conclude that not all details of the OP field are important for the global structure formation and rate of transformation in a material system. For instance, the fine details of the OP distribution inside the solid-liquid interface are not essential for the dendritic structure that grows from the supercooled liquid.


Fig. 13.15 Numerical simulations of the “dendritic forest.” (a) Order parameter field; (b) temperature field

Exercises

1. Verify expression (13E.6) in Example 13.1.
2. Verify expression (13.66a). Hint: take κ = func(T).
3. Verify the expression for the Gibbs-Duhem force (13.71).
4. Verify that for the tangential potential Γ_s^(η) = 0.
5. Verify expressions (13.78).
6. Verify that (13.80) is a solution of the two simultaneous equations (13.63) and (13.72).
7. Verify expression (13.81).


Chapter 14: Validation of the Method

In Part II we introduced the field-theoretic method for the study of phase transformations and explored many of its properties. We saw how it can help us better understand transformations at equilibrium and in dynamics. As materials science moves into a new era of quantitative modeling and design of real materials, it is important to validate the method and assess its consistency with the broader body of knowledge. In this chapter we discuss the challenges of accomplishing that. Regarding the beams and columns configuration of FTM in Fig. P.1, in this chapter we are laying the crossbeam.

14.1 Physical Consistency

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_14

To a large extent, FTM owes its success to effectively breaking down the intractable physical description of a phase-transforming system into separate mechanisms that have tractable physical descriptions. The separation into mechanisms is done by coarse graining; see Appendix A. For instance, we divided the overall transformation process into the problems of the order parameter change, fluctuations, heat release, and heat distribution, which in reality proceed at once and can be described by unified means, e.g., the equations of molecular dynamics. To maintain the integrity of the method, we must restore the consistency of the resulting parts. For that purpose, in Sect. 12.4, where we introduced the Langevin force, we validated its amplitude, and in Sect. 13.2, where we derived the heat equation, we validated its consistency with the phase-transition description. Often, to validate the method, researchers use the consistency of its results with the results of the sharp-interface approach, which treats a phase interface as a surface of application of ad hoc boundary conditions; e.g., see Sect. 7.1. A word of caution is in order here. The problem is that these conditions, being conceived of rather than derived, may be missing some important physics, which becomes apparent only in the course


of a rigorous derivation. Indeed, compare (7.6) with (13.63) and (7.8b) with (13.72) and notice the significant physical effects missed by the sharp-interface approach: surface creation and dissipation in the first case and heat trapping in the second. To avoid this problem, one should also use logic and experimental results in the process of validation. FTM compliance with its sharp-interface limit is desirable but not necessary; the discrepancy, however, must be physically explained. To validate FTM we rely on so-called benchmark problems, which on the one hand can be solved by the method accurately enough and on the other hand have simple experimental interpretations. We can list a few benchmark problems: (i) motion of a plane interface in isothermal conditions; (ii) martensitic, ferroelectric, and ferromagnetic phase transformations; (iii) dendritic growth in a single-component system; (iv) spinodal decomposition in alloys; (v) precipitation in a linear elastic medium; (vi) Ostwald ripening (coarsening) of the second phase; etc.

14.2 Parameters of FTM

One of the challenges of FTM is obtaining reliable material parameters. For instance, in Chapter 13 we used the following macroscopic thermodynamic and kinetic properties: specific heat C, latent heat L, equilibrium temperature T_E (or critical temperature T_C), and thermal conductivity λ, which can be found in tables of physical and chemical constants. However, for FTM to work, we need to know the mesoscopic parameters, which cannot be found in those tables. One way to obtain these parameters is to derive them from microscopic models through the coarse-graining procedure (Appendix A). This is a treacherous road, not well travelled yet. An alternative way to determine the interfacial parameters is again to use the strategy of benchmark problems, which can be matched to the relevant experiments or atomistic (e.g., molecular dynamics) simulations. In other words, one needs to identify an experimentally measurable or atomistically simulated quantity and compare it to its FTM counterpart parameter, a correspondence principle of sorts. Let us take as an example the Landau-Gibbs free energy expanded in powers of the OP (Chapter 8) with the coefficients of expansion (A, B) or (W, D). These parameters cannot be easily identified in experiments and obtained through direct measurements because they are not measurable quantities, that is, they do not have direct experimental meaning. A first attempt at finding the coefficients of the Landau-Gibbs free energy expansion may consist in guessing their (T, P)-dependence, deriving the (T, P)-phase diagram of the system or its specific heat and compressibility, and comparing the theoretical results with the experimental values. That strategy seldom worked for the Landau potential because of the complicated relation between the free energy and the expansion coefficients; see (8.11). Identification of the material parameters was our primary motivation behind introducing the tangential potential in Sect. 8.4: for this potential, parameter D of (8.32) is equal to the difference of the free energies of the phases and can be found through the


thermodynamic integration of the experimental data. Parameter W of (8.30) cannot be found from thermodynamic measurements because the transition barrier height is not a thermodynamic (macroscopic) quantity. Equation (8.42) for the spinodals could have been used for identification of W. Unfortunately, in most experiments these points are not attainable, which renders this strategy impractical. Another interfacial parameter necessary for FTM to work is the gradient energy (coarse-graining) coefficient κ (Chapter 9). Notice that the barrier-height parameter W and the gradient energy coefficient κ can be found from measurements of the interfacial energy and thickness:

l_E4 = 4√(κ/W)   (9E.4)

σ = (1/6)√(κW)   (9E.5)

These equations can be easily resolved for the parameters W and κ:

κ = (3/2) σ l_E4   (14.1a)

W = 24 σ / l_E4   (14.1b)
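As a hedged numerical illustration (not part of the original derivation), the sketch below evaluates (14.1a, b) in Python for the nickel solid/liquid data quoted in the exercises at the end of this chapter and verifies the round trip through (9E.4) and (9E.5):

```python
from math import sqrt

def ftm_parameters(sigma, l):
    """Invert (9E.4)-(9E.5) per (14.1a, b):
    kappa = (3/2)*sigma*l,  W = 24*sigma/l."""
    kappa = 1.5 * sigma * l
    W = 24.0 * sigma / l
    return kappa, W

# Nickel solid/liquid data quoted in the chapter exercises:
sigma = 0.464   # interfacial energy, J/m^2
l = 1.0e-9      # interfacial thickness, m

kappa, W = ftm_parameters(sigma, l)
print(f"kappa = {kappa:.3e} J/m, W = {W:.3e} J/m^3")

# Round-trip consistency with (9E.4) and (9E.5):
assert abs(4 * sqrt(kappa / W) - l) < 1e-15
assert abs(sqrt(kappa * W) / 6 - sigma) < 1e-12
```

With these inputs the sketch gives κ ≈ 7.0 × 10⁻¹⁰ J/m and W ≈ 1.1 × 10¹⁰ J/m³; the values are illustrative of the procedure only.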

This approach has one drawback: it is difficult to extract the interfacial thickness from the experimental (or atomistic-simulation) measurements. Another benchmark problem useful for us is the problem of the structure factor K_{η,V}(k, t), which is directly proportional to the intensity of the scattered radiation in any experiment on scattering of neutrons, light, or X-rays. The wavevector k is the difference between the wavevectors of the incident and scattered radiation. Its small-k (long-wavelength) limit can be measured by light scattering and its large-k (short-distance) part by neutron scattering. Theoretically (see Sect. 12.5), the equilibrium value of the structure factor can be found as the long-time asymptotic limit of the following expression:

K_{η,V}(k, t) = K_{η,V}(k, 0) e^{2β(k)t} − (γk_BT)/(Vβ(k)) [1 − e^{2β(k)t}] → (t → ∞) (k_BT/V) [∂²g(η̄)/∂η² + κ|k|²]^{−1}   (12.56)

and the gradient energy coefficient can be identified as follows:

κ = (k_BT/V) d[K_{η,V}(k, ∞)]^{−1}/d|k|²   (14.2)

On the other hand, if the structure factor is measured for k = 0 in the domains of stability of the α- and β-phases and then extrapolated into the metastable regions, then we can obtain the value of ∂²g(η̄)/∂η² in the entire region of the thermodynamic parameters (T, P). Recalling that

∂²g(η̄)/∂η² = 2(B² − A) ± 2B√(B² − A)   (8.15)

for the Landau potential and

∂²g(η̄)/∂η² = W(P, T) ± 6D(P, T)   (8.36)

for the tangential potential, where the right-hand side depends on the phase (α or β) of the system, one can resolve these relations for the parameters of the potentials, (A, B) or (D, W). Similarly, the surface tension can be extracted from measurements of the interfacial structure factor; see (12.70). The rate constant γ (Chapter 10) can be extracted from experimental (or atomistic-simulation) measurements of the speed of motion of the interphase interface in a transformation at a specified temperature below the equilibrium point TE and compared to the theoretical expression for the speed, (11E.6). Then

γ = v σ T_E / (κ L (T_E − T))   (14.3)

An alternative approach is to extract the rate constant γ from the expression for the dynamic structure factor; see (12.43) and (12.56). A similar strategy can be used to extract the FTM parameters of a system with the OP conservation constraint (see Sect. 15.1). For instance, in order to find the concentration gradient energy coefficient κ_C, one may identify the most pronounced wavelength of the structure that evolves in experiments or simulations and compare it to the formula for the fastest-growing (“most dangerous”) wavenumber k_d, (15.29).
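A minimal sketch of the identification (14.2), with synthetic data standing in for a measured structure factor (the values of k_BT/V, the curvature g″, and κ below are illustrative assumptions, not material data): generate the equilibrium limit K = (k_BT/V)/(g″ + κ|k|²) and recover κ from the slope of K⁻¹ versus |k|².

```python
import numpy as np

kB_T_over_V = 1.0e-3   # k_B*T/V, arbitrary illustrative units
g2 = 2.0               # curvature d^2 g / d eta^2 of the bulk free energy
kappa_true = 0.5       # gradient energy coefficient to be recovered

k2 = np.linspace(0.0, 10.0, 50)                 # grid of |k|^2 values
K_eq = kB_T_over_V / (g2 + kappa_true * k2)     # equilibrium limit of (12.56)

# (14.2): kappa = (k_B*T/V) * d(K^-1)/d|k|^2, i.e., the slope of 1/K vs |k|^2
slope, intercept = np.polyfit(k2, 1.0 / K_eq, 1)
kappa_est = kB_T_over_V * slope
print(kappa_est)
```

On noisy experimental data the same linear fit would simply be a least-squares estimate of the slope; here the synthetic data are exact, so κ is recovered to round-off.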

14.3 Boundaries of Applicability

One of the greatest advantages of FTM is that the method allows finding its own boundaries of applicability. They come from the thermodynamic constraints and limitations on the speed and dimensions of the evolving structures. Throughout the book we have identified several conditions of applicability. Here we will succinctly summarize these criteria:
1. Equation (8.2b) expresses limitations on the order of the polynomial expansion of the free energy in powers of the OP. It says that all essential (of the same order of magnitude) contributions into the free energy must be accounted for.


2. The method is not applicable when the evolving structure has a very fine scale. For instance, one of the limitations of the method is that the thickness l of the interphase interface or APB must be greater than the interatomic distance a:

l ≥ a   (14.4)

3. To deduce the OP evolution equations in Chaps. 10 and 11, we assumed that other variables of the system do not change significantly during the time period essential to the evolution of the OP. For example, we assume that the pressure equilibrates much faster than the OP, which is a reasonable approximation. However, temperature is known to equilibrate much more slowly than pressure. Let us see what that means for us. For FTM to be applicable, the time of temperature relaxation must be less (not necessarily much less) than the OP relaxation time. The latter was estimated in Sect. 10.2 as

τ = (1/γ) [∂²g(η̄)/∂η²]^{−1}   (10.12)

To estimate the former, we recall that τ_T = l²/α, where l is a characteristic length of the region of space affected by the temperature inhomogeneity and α is the thermal diffusivity. As a natural measure of l, we use the correlation radius:

r_C = [κ / (∂²g(η̄)/∂η²)]^{1/2}   (12.15)

Then, as a criterion of applicability of FTM, we obtain

Ki ≡ α/(γκ) > 1   (13.46b)

This criterion must be taken into consideration when one deals with the thermal effects of phase transformations (see Chap. 13).
4. Mean-field methods do not take into account thermal fluctuations in the system. Hence, fluctuations must be added into the theory “by hand.” In FTM they appear in the form of the Langevin force (see Chap. 12). The Levanyuk-Ginzburg criterion, (12.26, 12.28), identifies constraints on the parameters of the theory. For instance, it says that the method is not applicable in the vicinities of the critical points, which are high-fluctuation domains.
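The thermal criterion of item 3 is simple arithmetic; a hedged sketch (the three numbers are illustrative placeholders, not data for any material):

```python
# Arithmetic check of criterion (13.46b), Ki = alpha/(gamma*kappa) > 1.
alpha = 1.0e-5     # thermal diffusivity, m^2/s (placeholder)
gamma = 1.0e4      # OP rate constant, units fixed by the free energy scaling (placeholder)
kappa = 1.0e-10    # gradient energy coefficient (placeholder)

Ki = alpha / (gamma * kappa)
print(f"Ki = {Ki:.1f}; temperature relaxes faster than the OP: {Ki > 1}")
```

For these placeholder values Ki = 10, so the non-isothermal FTM description would be self-consistent.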


Exercises
1. Suggest an approach to extracting the rate constant γ from the expression for the dynamic structure factor in (12.43) and (12.56).
2. Experiments on crystallization of pure nickel produced the following data: interfacial (solid/liquid) thickness = 1 nm; interfacial energy = 0.464 J/m²; kinetic coefficient of growth = 0.39 m/(s·K). Given these data and assuming that crystallization of nickel can be described by the tangential potential, find the free energy barrier parameter W, the gradient energy coefficient κ, and the relaxation parameter γ.

Part III: Applications

Chapter 15: Conservative Order Parameter: Theory of Spinodal Decomposition in Binary Systems

In Chap. 6 we described spinodal decomposition as a process of spatial separation of species taking place in thermodynamically unstable solutions of solids or liquids. A phase diagram of a spinodally decomposing system has a critical point, where the concentrations of both phases become equal. In this chapter we explore the heterogeneous states of the system, the dynamics of the decomposition process, and the effect of the thermal fluctuations on it. Spinodal decomposition may be looked at as an example of a phase transformation described by an OP that obeys a law of conservation. Then, the mole fraction X may be put into correspondence with the OP η, whose value cannot change freely while “searching” for the free energy minimum but must comply with the mass conservation condition. In this chapter we will be following the presentation of John Cahn, John Hilliard, Mats Hillert, and H. Cook, who laid the foundations of the theoretical description of the problem.

15.1 Equilibrium in Inhomogeneous Systems

To describe inhomogeneities in a system that may appear as a result of the spinodal instability, Cahn and Hilliard [1] introduced a continuously varying molar fraction X(r) of the species B and suggested using the Ginzburg-Landau functional for the total Gibbs free energy of the system:

G = ∫ [g_S(X) + (1/2) κ_C (∇X)²] d³r,   g_S ≡ G_S/v_m   (15.1)

In (15.1) gS(X) is the Gibbs free energy density, vm is the molar volume, κ C is the concentration gradient energy coefficient, and the integration is over the entire volume of the system V. In the following, vm and κ C will be considered constant, that is, independent of the composition or temperature of the solution.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_15


As we know, in the state of thermodynamic equilibrium X̄(r), the total free energy of a system that exchanges energy with the environment approaches a minimum, that is, G{X̄} < G{X} if X ≠ X̄. However, in a closed system, where mass exchange with the environment is not allowed, this condition is subject to the species conservation constraint (6.9), which is expressed as follows:

∫ X̄(r) d³r = X₀V = const   (15.2)

Notice from Appendix H that the proper minimization of the free energy in a closed binary system requires two species conservation constraints. In the case of a system where the molar volume is independent of the composition, the role of the second condition is played by the condition of conservation of the volume of the system:

∫ v_m dn = ∫ d³r = const   (15.3)

If the molar volume depends on the composition, a whole host of other effects, including the coherency strain effect, come about, none of which are of particular importance to the subject of this chapter. In the calculus of variations (see Appendix B), minimization of the functional G, (15.1), under the constraints (15.2), (15.3) is called the isoperimetric problem. From the physics standpoint, the most appealing method to solve the problem is the method of Lagrange multipliers. It says that there exists a constant λ such that the functional

G + λ ∫ X d³r   (15.4)

approaches an unconditional minimum at X = X̄. The volume conservation condition for a system with the concentration-independent molar volume, (15.3), is trivial and does not affect the minimization procedure. Hence,

δG/δX ≡ ∂g_S/∂X − κ_C ∇²X = −λ   (15.5)

The value of the molar fraction X on the boundaries of the system may change freely. In this case the variational procedure yields the natural boundary condition:

n·∇X = 0 on S   (15.6)

To find the Lagrange multiplier λ, one needs to substitute the solution X̄(r) of (15.5), (15.6) into the conservation condition (15.2).


Let us now consider a 1d inhomogeneous binary solution of infinite size, that is, the thermodynamic limit. As we saw in Chap. 9, in this case (15.5) has the first integral (cf. (9.30a)):

g_S(X̄) + λX̄ − (1/2) κ_C (dX̄/dx)² = μ = const(x)   (15.7)

Because in the thermodynamic limit the boundary conditions (15.6) are placed at x → ±∞, all higher derivatives of X̄(r) also vanish on the boundaries. Then (15.5) yields another boundary condition:

∂g_S/∂X + λ = 0 for x → ±∞   (15.8)

Hence, from (15.7), (15.8) we obtain

g_S(X₋) − X₋ ∂g_S/∂X(X₋) = μ = g_S(X₊) − X₊ ∂g_S/∂X(X₊)   (15.9a)

∂g_S/∂X(X₋) = −λ = ∂g_S/∂X(X₊)   (15.9b)

where X± = X̄(x → ±∞) are the terminal (bulk) values of the molar fraction of the solutions. Comparison of (15.9) and (H.12) shows that we recovered the condition of common tangency between the terminal solutions of the system X±. Consequently, they should be identified as X^E_{α(β)}. Then (15.9) can be used to find the constants λ and μ, and constraint (15.2) can be used to find x_E—the equilibrium fraction of α. Furthermore, the solution of (15.7) with the boundary conditions X± = X^E_{α(β)} represents the transition layer between the bulk regions. It is analogous to the transition layer studied in Chap. 9 if the OP η is replaced by the molar fraction X and functional (9.15) by (15.1). The η ↔ X analogy can also be used for the analysis of the multidimensional solutions of (15.5).

Example 15.1 Transition Layer Between Bulk Regions Estimate the thickness and free energy of the transition layer between the bulk solutions of the molar fractions X^E_α and X^E_β in a regular solution with G_A ≈ G_B; see Chap. 6. First notice that in this case (15.9) yields

λ = 0,   μ = g_S(X^E_{α(β)})   (15E.1)

Then, adjusting the definitions of the interfacial thickness, (9.19f), and energy, (9.64), to the binary system, we obtain

l_E ≡ max_{x,x′}|X̄ − X̄′| / max_x(dX̄/dx) = (X^E_β − X^E_α) / (dX̄/dx)(X_C) = (X^E_β − X^E_α) √( κ_C / (2[g_S(X_C) − g_S(X^E_{α(β)})]) )   (15E.2)

σ_E ≡ ∫_{−∞}^{+∞} κ_C (dX̄/dx)² dx = κ_C ∫_{X^E_α}^{X^E_β} (dX̄/dx) dX = ∫_{X^E_α}^{X^E_β} √( 2κ_C [g_S(X) − g_S(X^E_{α(β)})] ) dX   (15E.3)

Notice in (15E.2) that the layer has the greatest slope at the critical concentration. Substitution of the expressions from (6.4, 6.7, 15.1) reveals two important scales: length

l ≡ √( v_m κ_C / (R T_C) )   (15E.4a)

and surface energy

σ ≡ √( κ_C R T_C / v_m )   (15E.4b)

Let us analyze two limiting cases for these expressions: T → 0 and T → T_C. Applying representation (6E.2a) of the equilibrium molar fractions, expanding (6.4) up to the fourth order in small ε_E, and then using the solution in (6E.3a), we obtain

l_E → l × { 1, for T → 0;  [(T_C − T)/T_C]^{−1/2}, for T → T_C }   (15E.5)

and

σ_E → σ × { 2∫₀¹ √(X(1−X)) dX = π/4, for T → 0;  ∫_{−ε_E}^{+ε_E} (ε_E² − ε²) dε = (3π/4) [(T_C − T)/T_C]^{3/2}, for T → T_C }   (15E.6)

Notice the typical mean-field exponents (-½) and (3/2) for the thickness and interfacial energy in the limit of T → TC.

15.2 Dynamics of Decomposition

Description of the dynamics of decomposition in mixtures should take into account the mass conservation constraint (6.9). One way to describe evolution of the mixture is to write down a continuity equation [2, 3]:


∂X/∂t = −∇·J_X   (15.10)

where J_X is the flux of species B. The conservative nature of the variable molar fraction X(r, t) is “automatically” accounted for by the general form of (15.10). Indeed, integrating this equation over the entire system and using the divergence theorem, we obtain

∂/∂t ∫ X d³r = −∮ n·J_X ds   (15.11)

where the integration on the right-hand side is over the boundary of the system S. In a closed system, that is, with no mass exchange with the environment,

n·J_X = 0 on S   (15.12)

Hence, the surface integral in (15.11) vanishes and the total amount of species B does not change in time. This is consistent with the conditions (6.9, 15.2). Notice that (15.10) is analogous to the energy density equation (13.41), which expresses the conservation of energy condition. In the naïve theory of diffusion, the flux J_X in the continuity equation (15.10) obeys Fick’s law: the flux is proportional to the gradient of concentration. To generalize this equation to the case of spinodal decomposition, notice that, in fact, the driving force for diffusion is the difference of the chemical potentials for the species at two nearby points, that is, the chemical potential gradient. Compare (15.5) with (H.12), and notice that in our system the role of the chemical potential is played by the functional derivative δG/δX which, according to (15.5), is equal to a constant when the system is in equilibrium. Hence, a phenomenological flux equation can be written in the form

J_X = −M∇(δG/δX)   (15.13)

where M is called a mobility or linear response coefficient. Substituting (15.13) into (15.10), we obtain an equation that describes the evolution of the composition in a binary system [3]:

∂X/∂t = ∇·[M∇(δG/δX)]   (15.14)

Functional dependence of the linear response coefficient is a subject for discussion. Some authors suggested using a molar-fraction dependence of the type M ∝ X(1 − X) in order to offset the infinite values of the chemical potential gradient at concentrations approaching those of the pure A (X → 0) or pure B (X → 1)


solutions. However, these situations are not typical for spinodal decomposition, which takes place mostly inside the miscibility gap. That is why we are using

M = const(X, x)   (15.15)

Then

∂X/∂t = M∇²(δG/δX)   (15.16a)

δG/δX ≡ ∂g_S/∂X − κ_C∇²X   (15.16b)

The following expression for the flux of species B,

J_X = −M [∂²g_S/∂X² − κ_C∇²] ∇X   (15.17)

shows that the natural BC (15.6) and the mass conservation BC (15.12) are equivalent. To get a sense of the value of the linear response coefficient M, let us calculate the rate of the total free energy change that goes along with the evolution X(r, t) near the equilibrium state X̄(r):

dG/dt = ∫ (δG/δX)(∂X/∂t) d³r = M ∫ (δG/δX) ∇²(δG/δX) d³r   (15.18a)

Here, to obtain the final formula, we used the formula for differentiation of a functional with respect to a parameter (see Appendix B) and condition (15.15). Then, applying a formula from the vector calculus, ∇²(uv) = u∇²v + 2∇u·∇v + v∇²u, to u = v = δG/δX and the divergence theorem, we obtain

dG/dt = M [ −∫ |∇(δG/δX)|² d³r + ∮ (δG/δX) n·∇(δG/δX) ds ]   (15.18b)

Then, applying (15.13) and BC (15.12), we obtain

dG/dt = −M ∫ |∇(δG/δX)|² d³r   (15.18c)


Using the fact that the integral in (15.18c) is nonnegative and that there exists an equilibrium state X̄(r), which minimizes the functional G{X}, we arrive at the condition

M ≥ 0   (15.19)

because otherwise the state X̄(r) is not attainable. Also, we find from (15.18c) that the functional (15.1) is a Lyapunov functional for the system described by (15.14). There is another route to derive the composition equation, which is of interest for us here. Notice that (15.16) can be obtained from (11.1) by the formal substitution

−γ ⟹ M∇²   (15.20)

This tells us that these equations have a common origin: they can be derived from the same master equation for the probability of the state characterized by X(r_i, t) or η(r_i, t) by either imposing or not imposing the condition that the sum of all the changes of the variables is zero [4]. The common origin of the equations may also be understood by considering a general relationship between the rate of the OP change ∂η/∂t and the driving force δG/δη (see Sect. 10.5):

∂η/∂t = ∫ Γ(r − r′) (δG/δη)(r′) d³r′   (10.19)

and imposing different properties, conservative or nonconservative, on the kernel Γ(r).
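As a hedged illustration of the conservative dynamics (15.16), the sketch below integrates a scaled 1D version with a generic double well, g(u) = −u²/2 + u⁴/4 (an assumption standing in for g_S, with u playing the role of X − X_C), by explicit Euler steps. It is a toy discretization, not the production scheme of any cited work; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D sketch of (15.16a, b) in scaled variables:
# du/dt = M d2/dx2 [ g'(u) - kappa_C d2u/dx2 ],  g'(u) = u**3 - u.
N, dx, dt = 128, 1.0, 0.01
M, kappa_C = 1.0, 1.0

def lap(f):
    """Second derivative with periodic boundaries (closed ring, no flux lost)."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

u = 0.01 * rng.standard_normal(N)   # small fluctuations about the unstable state
mass0 = u.sum()

for _ in range(20000):
    mu = u**3 - u - kappa_C * lap(u)   # chemical potential delta G / delta X
    u += dt * M * lap(mu)              # conservative dynamics (15.16a)

# The conservation law (15.2) holds to round-off, while the amplitude grows
# toward the bulk values u = +/-1:
print(abs(u.sum() - mass0) < 1e-7, u.max() > 0.5)
```

The explicit scheme is stable here only because dt is far below the fourth-order limit ~dx⁴/(8Mκ_C); implicit or spectral schemes are the usual choice in practice.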

15.3 Evolution of Small Disturbances

To analyze the dynamics of the system controlled by (15.16), we consider first what happens to small deviations from the uniform state of concentration X₀. Using the method of normal modes, described in Sect. 1.5, we write

X(r, t) = X₀ + u(r, t)   (15.21)

and linearize (15.16) about X₀:

∂u/∂t = M∇² [∂g_S/∂X(X₀) + ∂²g_S/∂X²(X₀) u − κ_C∇²u]   (15.22a)

Notice that, contrary to the expansion of ∂g/∂η about η̄ in (11.6), the term ∂g_S/∂X(X₀) does not vanish now because X₀ may not correspond to the “bottom of the well” of g_S(X). Yet, this term does not affect the behavior of u(r, t) because it is a constant. Hence,

∂u/∂t = M [∂²g_S/∂X²(X₀) ∇²u − κ_C ∇⁴u]   (15.22b)

Fig. 15.1 Amplification rate β_S vs wavenumber k of the normal modes for different values of X₀: dashed line, ∂²g_S/∂X²(X₀) > 0, the system is stable; solid line, ∂²g_S/∂X²(X₀) < 0, the system is unstable

Because of the conservation conditions (15.2, 15.3), the average of the deviation u(r, t) is zero:

∫ u(r, t) d³r = 0   (15.23)

Hence, for the solutions of Eq. (15.22), we may try the normal modes (sine waves) of u:

u(r, t) = A e^{ik·r + βt}   (15.24)

Substituting (15.24) into (15.22b), we find the dispersion relation (see Fig. 15.1):

β_S(k) = −Mk² [∂²g_S/∂X²(X₀) + κ_C k²],   k = |k|   (15.25)

If ∂²g_S/∂X²(X₀) > 0, then β_S(k) < 0 for all k > 0, which means that the normal modes of all wavelengths decrease in amplitude: Ae^{βt} → 0 as t → ∞. Hence, the system is stable. In this case, for the normal modes with

k² ≪ (1/κ_C) ∂²g_S/∂X²(X₀)   (15.26)

we can drop the gradient energy term in (15.22b) and obtain a conventional diffusion equation:

∂u/∂t = D₀∇²u   (15.27a)

with the diffusion coefficient

D₀ ≡ M ∂²g_S/∂X²(X₀)   (15.27b)

The diffusion coefficient is a product of the mobility M and the module of stability of the solution. Because of the condition (15.19), D₀(X₀) is positive for all stable compositions of the solution. On the other hand, if ∂²g_S/∂X²(X₀) < 0, that is, X₀ is in the spinodal region, then there are normal modes with β_S(k) > 0, that is, increasing in amplitude: Ae^{βt} → ∞ as t → ∞. Hence, according to the definition in Sect. 1.2, the system is unstable. In this case the dispersion relation (15.25) takes the form (see Fig. 15.1)

β_S(k) = Mκ_C k² (k_m² − k²)   (15.28a)

k_m² = −(1/κ_C) ∂²g_S/∂X²(X₀)   (15.28b)

where k_m is the wavenumber of the marginally unstable mode, for which the destabilizing (spinodal) and stabilizing (gradient energy) contributions into the free energy cancel out. In the case of ∂²g_S/∂X²(X₀) < 0, formally, the diffusion coefficient in (15.27b) becomes negative—“uphill diffusion”—see Fig. 6.1b. The species acquire a tendency to cluster instead of diffuse, and the dynamic problem takes on an entirely different complexity. Obviously, the gradient energy term in (15.22b) cannot be dropped because it plays the leading role in opposing the clustering forces. The amplification rate β_S in (15.28) shows that clustering prevails at small k’s and spreading at large k’s. As a result, the most rapidly growing mode—the “most dangerous mode”—is the one with wavenumber

k_d = k_m/√2   (15.29)
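A short numerical sketch of (15.28)–(15.29) (with illustrative placeholder values of M, κ_C, and ∂²g_S/∂X²): the maximum of β_S(k) indeed sits at k_d = k_m/√2.

```python
import numpy as np

# Dispersion relation (15.28a): beta_S(k) = M*kappa_C*k^2*(km^2 - k^2),
# with km from (15.28b) and kd = km/sqrt(2) from (15.29).
M, kappa_C = 1.0, 1.0          # illustrative placeholders
g2 = -2.0                      # d2gS/dX2(X0) < 0: inside the spinodal

km = np.sqrt(-g2 / kappa_C)    # marginally unstable wavenumber (15.28b)
kd = km / np.sqrt(2)           # "most dangerous" mode (15.29)

k = np.linspace(0.0, 1.1 * km, 1000)
beta = M * kappa_C * k**2 * (km**2 - k**2)

# The numerical maximum of beta_S(k) sits at kd, as (15.29) states:
k_at_max = k[np.argmax(beta)]
print(km, kd, k_at_max)
```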


Fig. 15.2 Scaled wavenumbers lk of the marginally unstable k_m (solid line) and “most dangerous” k_d (dashed line) normal modes vs scaled temperature T/T_C

Compare (15.28) with (11.8b, 11.10) and Fig. 15.1 with Fig. 11.1, and notice that (1) in the case of spinodal decomposition, the long-wavelength disturbances grow very slowly, while in the case of a first-order transition, the growth is fast—explosive; (2) in the case of spinodal decomposition, the amplification rate of the unstable modes reaches a maximum at a finite wavenumber, leading to the growth of waves, while in the case of the first-order transition, this wavenumber is zero and the incipient structure is homogeneous; and (3) the presence of the finite-wavenumber maximum is a direct consequence of the conservation condition (15.2). On the other hand, compare (15.28) with (13.49) and Fig. 15.1 with Fig. 13.5, and notice that the pattern formation in the presence of the thermal effects in Sect. 13.6 has features of both spinodal decomposition and the isothermal first-order transition. In the regular-solution approximation for G_S(X) (see (6.6, 6.8, 15E.4a)), the marginal wavenumber k_m can be expressed as

k_m² = (1/l²) [4 − T/(T_C X₀(1 − X₀))]   (15.30a)

Comparison of this expression with (6.13a) shows that for a given composition of the solution X₀, the marginal wavenumber k_m decreases with temperature from the maximum value of 2/l at T = 0 to 0 at T = T_S; see Fig. 15.2. Expression (15.30a) can be resolved for the temperature

T = T_C [4 − (l k_m)²] X₀(1 − X₀)   (15.30b)

and presented in the (X₀, T) plane (see Fig. 6.1c), which allows interpretation of the spinodal curve as the locus of points where k_m = 0.
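The temperature dependence in Fig. 15.2 can be tabulated directly from (15.30a) and (15.29); a sketch, with X₀ = 1/2 chosen for illustration (T_S = 4T_C X₀(1 − X₀) follows from setting k_m = 0 in (15.30b)):

```python
import numpy as np

# Marginal and "most dangerous" wavenumbers of the regular solution,
# in the scaled form (l*km)^2 = 4 - T/(TC*X0*(1-X0)) from (15.30a).
X0 = 0.5
TS = 4 * X0 * (1 - X0)          # scaled spinodal temperature T_S/T_C

T = np.linspace(0.0, TS, 6)     # scaled temperature T/T_C
lkm = np.sqrt(4 - T / (X0 * (1 - X0)))   # l*km from (15.30a)
lkd = lkm / np.sqrt(2)                   # l*kd from (15.29)

for t, a, b in zip(T, lkm, lkd):
    print(f"T/TC = {t:.2f}:  l*km = {a:.3f},  l*kd = {b:.3f}")
```

The table reproduces the limits stated above: lk_m = 2 at T = 0 and lk_m → 0 as T → T_S.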

15.4 Pattern Formation

The linear stability analysis presented above suggests the following scenarios of evolution of the concentration in solutions starting from an almost homogeneous initial distribution X(r, t). If the average concentration X₀ is outside the miscibility gap, the solution is stable and will remain almost homogeneous. If X₀ is between the miscibility curve X^E_{α(β)}(T) and the spinodal X^S_{α(β)}(T), the solution is still stable with respect to small variations, that is, locally stable, but globally unstable, that is, with respect to full separation into two solutions of concentrations X^E_α and X^E_β. In this case the process of separation will start with the formation of a small nucleus of concentration close to, e.g., X^E_β surrounded by a large region where the concentration is depleted from X₀ to X^E_α. Various scenarios considered in Chap. 4 are relevant here. If X₀ is inside the spinodal, X^S_α < X₀ < X^S_β, then the solution is unstable even with respect to small variations of concentration, and the initial solution will be decomposing from the beginning. The initial deviations from the average concentration may be decomposed into sine waves. Those with wavenumbers k greater than the marginal one shrink, while those with smaller wavenumbers grow with the amplification factor β_S(k). Because the amplitudes grow as the exponential of β_S(k)t and β_S(k) has a steep maximum, it is possible to concentrate on the growth of the most dangerous mode k_d and those near it. Such a picture may be valid only for as long as the deviations u are small and the nonlinear terms in the expansion of (15.22a) can be ignored, that is, at the early stages of the spinodal decomposition. Later stages of the process must be addressed by more sophisticated theoretical or numerical methods. However, the deep physical analogy between the spinodal decomposition and phase transformation with the energy conservation constraint (see Chap. 13) allows us to expect certain scenarios of the late-stage evolution, e.g., the period doubling in coarsening.

15.5 Role of Fluctuations

The experimental results, however, did not follow the latter scenario closely even when experimenters were pretty sure that they were dealing with the early stages of spinodal decomposition. Among the most notable discrepancies, they named the


following: (1) the amplitudes of the sine waves rise much more slowly than exponentially (Ae^{βt}); (2) the dispersion relation does not follow (15.28); (3) the observed values of k_d decrease with time. The conclusion was that although the linearized thermodynamic analysis of the spinodal instability contained many of the crucial features of the phenomenon, at least one more ingredient was missing. The key to the problem was the thermal fluctuations that may drive the system away from the linear, thermodynamic regime even in the early stages. Following in the steps of Chap. 12, a consistent way to include the thermal fluctuations into the FTM description of the spinodal decomposition is to add a Langevin-type force to the evolution equation:

∂X/∂t = M∇²(δG/δX) + ζ(r, t)   (15.31)

As ζ(r, t) represents the random force exerted by the rapid, thermally equilibrated modes of the system, the average value of ζ(r, t) is zero (cf. (12.31a)). To find the autocorrelation function of ζ(r, t), we may use the substitution (15.20) for (12.48). Then

⟨ζ(r, t)ζ(r′, t′)⟩ = −2Mk_BT ∇² δ(r − r′) δ(t − t′)   (15.32)

Equation (15.31) with G from (15.1) and ζ calibrated by (15.32) describes the evolution of the conserved quantity X, the molar fraction, in a process like spinodal decomposition. Solutions of this equation may be obtained by direct numerical simulations. Some key features of the solutions, however, may be revealed through the analysis of the two-point correlation function (cf. Sect. 12.2):

S(|r − r′|, t) ≡ ⟨u(r, t) u(r′, t)⟩   (15.33)

(In this section, instead of K, we use the designation S for the correlation function and its Fourier transform, the structure factor, because this is the customary designation in the literature on spinodal decomposition.) The averaging in (15.32) and (15.33) is performed with respect to P{u}—the probability distribution function for u(r, t). Notice that the correlation function in (15.33) is assumed to be translation invariant. The Fourier transform of S(r, t), called the structure factor S(k, t), is

S(k, t) = ∫ S(r, t) e^{−ik·r} dr   (15.34a)

S(r, t) = (1/(2π)³) ∫ S(k, t) e^{ik·r} dk   (15.34b)
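In practice S(k, t) is often estimated from the field itself rather than from (15.33) directly. A minimal sketch (synthetic 1D field with an illustrative dominant wavelength): the squared modulus of the discrete Fourier transform of u is, by the Wiener-Khinchin theorem, an estimate of the transform of the two-point correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the structure factor of a 1D field u(x) via the FFT.
N, L = 256, 256.0
x = np.arange(N) * (L / N)

# Illustrative field: one dominant wavelength plus weak noise.
k0 = 2 * np.pi * 8 / L
u = np.cos(k0 * x) + 0.05 * rng.standard_normal(N)
u -= u.mean()                      # enforce the constraint (15.23)

uk = np.fft.rfft(u)
S = (np.abs(uk) ** 2) / N          # structure factor, up to normalization
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)

print(k[np.argmax(S)], k0)         # the peak of S sits at the dominant mode
```

Tracking the position of this peak in time is exactly how the decrease of k_d mentioned above is measured in simulations.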

Expanding (15.31) about X₀, multiplying the product u(r, t)u(r′, t) by the probability distribution function and averaging, taking the Fourier transform of the averaged quantity, and using (15.32), we obtain the evolution equation for S(k, t) (cf. (12.55)):

∂S/∂t = 2β_S(k)S + 2Mk_BT k² − Mk² [ ∂³g_S/∂X³(X₀) S₃ + (1/3) ∂⁴g_S/∂X⁴(X₀) S₄ + ⋯ ]   (15.35)

Here β_S(k) is the amplification factor given in (15.25), and the quantities denoted by S_n(k, t) are the Fourier transforms of the higher-order two-point correlation functions

S_n(|r − r′|, t) ≡ ⟨u^{n−1}(r, t) u(r′, t)⟩   (15.36)

where the subscript is dropped for n = 2. Neglecting the correlation functions of orders higher than two and the fluctuation term (2Mk_BTk²), we obtain the linear evolution equation for the structure factor, which follows from the linear evolution equation for the normal modes (15.22b). Retaining the fluctuation term gives us the so-called Cook’s equation [5], which is an improvement over (15.22, 15.23, 15.24, 15.25) in that the stable modes with k > k_m are predicted to equilibrate at S = k_BT/[κ_C(k² − k_m²)], rather than relax to zero. But the unstable modes in Cook’s approximation still exhibit unlimited exponential growth. Equation (15.35) indicates that the latter can be stopped only by the nonlinear terms of the free energy expansion that couple to the higher-order correlation functions (15.36). The problem in (15.35) is to find a physically reasonable and mathematically tractable way of relating S_n to S. Langer [6] obtained an approximation, which is reasonable for the critical quench, that is, lowering the temperature from T > T_C to T < T_C at X = X_C. In these conditions we may assume that the probability distribution P{u} is always a symmetric Gaussian function of u, centered at u = 0. Then, all S_n’s with odd n vanish and

S₄(k, t) = 3 S(0, t) S(k, t)   (15.37)

where S(0, t) = ⟨u²(t)⟩; see (15.33). If in (15.35) we drop the terms with n > 4, we find that the resulting equation of motion for S has the same form as in the linear theory, but the previously constant module of stability ∂²g_S/∂X²(X₀) is now replaced by the time-dependent expression:

∂²g_S/∂X²(X_C) + (1/2) ∂⁴g_S/∂X⁴(X_C) ⟨u²(t)⟩   (15.38)

Because ∂⁴g_S/∂X⁴(X_C) is a positive constant and ⟨u²(t)⟩ is a positive, increasing function of time, the marginal wavenumber

(k_m^C)² = −(1/κ_C) [∂²g_S/∂X²(X_C) + (1/2) ∂⁴g_S/∂X⁴(X_C) ⟨u²(t)⟩]   (15.39)

must decrease. This implies coarsening of the emerging structure, which is observed in experiments and numerical simulations.
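The Gaussian decoupling (15.37) is Isserlis’ theorem for fourth moments of a zero-mean Gaussian field; it can be spot-checked by sampling (the covariance matrix below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# For zero-mean jointly Gaussian u, Isserlis' theorem gives
# <u^3(r) u(r')> = 3 <u^2> <u(r) u(r')>, which is the content of (15.37).
n = 2_000_000
cov = np.array([[1.0, 0.6], [0.6, 1.0]])       # illustrative covariance
u = rng.multivariate_normal([0.0, 0.0], cov, size=n)

lhs = np.mean(u[:, 0] ** 3 * u[:, 1])          # S4-type correlator
rhs = 3 * np.mean(u[:, 0] ** 2) * np.mean(u[:, 0] * u[:, 1])
print(lhs, rhs)    # agree to sampling error (exact value 3*1*0.6 = 1.8)
```

Away from the critical quench P{u} is not Gaussian, and this factorization is only an approximation, as the text notes.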

Exercises
1. Derive an expression for the wavelength of the “most dangerous mode” of fluctuations in the regular solution, and plot its temperature dependence as a function of the undercooling below the spinodal temperature T_S; see (6.13b).

References
1. J.W. Cahn, J.E. Hilliard, Free energy of a nonuniform system. III. Nucleation in a two-component incompressible fluid. J. Chem. Phys. 31, 688–699 (1959)
2. M. Hillert, Acta Metall. 9, 525 (1961)
3. J.W. Cahn, Acta Metall. 9, 795 (1961)
4. H. Metiu, K. Kitahara, J. Ross, J. Chem. Phys. 65, 393 (1976)
5. H.E. Cook, Acta Metall. 18, 297 (1970)
6. J.S. Langer, Ann. Phys. (N.Y.) 65, 53 (1971)

Chapter 16: Complex Order Parameter: Ginzburg-Landau’s Theory of Superconductivity

In this chapter we lay out the phenomenological theory of superconductivity where the OP is a complex number and demonstrate how the method can help in calculating different properties of a superconductor. This is also an example of the multi-physics coupling in phase transitions. For better understanding of the material, it is useful to review Sects. 3.4 and 8.7.

16.1 Order Parameter and Free Energy

There are systems where the OP should be represented by a complex function:

η = |η| e^{iΦ}   (16.1)

Here Φ is the complex argument, or phase, of η. The most interesting examples of such systems are those where the quantum nature of matter is essential for its macroscopic behavior, for instance, a superconducting metal. The full theory of superconductivity is very complicated and not yet complete. In this section we discuss only the situations where the phenomenological Ginzburg-Landau theory is applicable [1, 2], leaving the question of the physical conditions of applicability to the end. For a superconducting phase of the material, the OP may be associated with the wave function of the superconducting electrons. Then, according to quantum mechanics, the OP is a complex quantity defined up to an arbitrary additive phase, and all observable quantities must depend on η and η* (complex conjugate, CC) in such a way that they do not change if η is multiplied by a constant of the form e^{iθ}:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_16


η → η e^{iθ},  η* → η* e^{−iθ}   (16.2)

The OP should be normalized such that in the superconducting phase |η|² = n_s, where n_s is the density of the "superconducting electrons." Let us first consider a homogeneous superconductor outside of a magnetic field. Then η is independent of the spatial coordinates, and the free energy density of a superconductor can be expanded in powers of the OP as follows:

g(η, η*; P, T) = g₀(P, T) + a(P, T) ηη* + (1/2) b(P, T) (ηη*)² + ⋯   (16.3)

where g₀ is the free energy density of the normal phase (η_n = 0), and a and b > 0 are expansion coefficients. The requirement of invariance to the addition of the arbitrary phase θ, (16.2), yields that the free energy expansion (16.3) cannot contain terms of the third order in η (or η*). Hence, in the absence of the magnetic field, superconductivity is a phase transition of the second kind. In inhomogeneous systems we have to account for the contributions of the components of ∇η to the free energy density. Taking into account that the complex OP is invariant with respect to the transformation (16.2), this contribution may be presented as const × |∇η|². Recall that the OP η is introduced here as an effective wave function of the superconducting electrons. Then the operator (−iħ∇) represents the quantum mechanical momentum operator of the particle, and the contribution of the inhomogeneities to the free energy density of the system is analogous to the density of the kinetic energy in quantum mechanics. It can be presented as

const |∇η|² = (1/2m) |−iħ∇η|²

where ħ = 1.05 × 10⁻³⁴ J·s is the reduced Planck constant and m is an effective mass of a particle. Thus, the free energy density of an inhomogeneous system takes the form

g(η, η*, ∇η; P, T) = g₀ + (ħ²/2m)|∇η|² + a|η|² + (b/2)|η|⁴   (16.4)
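The invariance requirement (16.2) can be checked directly on the homogeneous part of (16.4). A minimal numeric sketch, with assumed coefficients a and b:

```python
import cmath

a, b = -1.0, 2.0   # assumed expansion coefficients, b > 0

def g_hom(eta):
    """Homogeneous part of (16.4): a|eta|^2 + (b/2)|eta|^4."""
    return a * abs(eta) ** 2 + (b / 2) * abs(eta) ** 4

eta = 0.3 + 0.4j
# multiplying eta by exp(i*theta) leaves every term of g unchanged,
# since each term depends on eta only through |eta|^2 = eta*eta_conj
for theta in (0.0, 0.7, 2.1):
    rotated = eta * cmath.exp(1j * theta)
    assert abs(g_hom(rotated) - g_hom(eta)) < 1e-12
```

An odd-order term such as η³ would fail this test, which is why the expansion (16.3) contains only powers of ηη*.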

In quantum mechanics the vectors

i = (iħ/2m)(η∇η* − η*∇η) = (ħ/m)|η|²∇Φ   (16.5a)

j ≡ q i   (16.5b)

are identified, respectively, as the probability flux density and electrical current density due to a particle with the electrical charge q and the wave function η(r, t). These expressions can also be used in our case assuming that η(r, t), m, and q = −|e| are the effective wave function, mass, and charge of the superconducting charge carriers (e is the electron's charge). A few years after the introduction of the GL theory, it was discovered that the electrical current in a superconducting metal is transferred by paired electrons. This means that e and m in the GL theory should be replaced by 2e and 2m. However, because we do not intend to present here a quantitative theory of superconductivity, we will leave e and m in their original form.

In a superconducting state, the conducting electrons break up into two parts, normal and superconducting, with the respective current densities j_n and j_s. Unlike the normal current, the superconducting one does not carry heat or dissipate energy; hence, it can be sustained without external sources.

So far, we have been ignoring the presence of the magnetic field in the material, which appears as a result of the current and external sources. Now we have to ask the following questions: What will happen if the superconductor is placed into a magnetic field H created by the external sources? How do we introduce the magnetic field into the description of the system? What are the additional contributions to the free energy due to the presence of this field?

First, recall that in Sect. 8.7, discussing the influence on the transition of the external real-valued field H (which might be magnetic), we described it by the contribution (−Hη) to the free energy density. However, such an approach does not work for a complex OP because this contribution does not have the right symmetry. Second, as is known [3], if a material is introduced into the external magnetic field of strength H, it will develop the magnetic moment M, which will change the field in the material. The latter may be characterized by the magnetic induction (in the CGS unit system):

B = H + 4πM   (16.6)

B satisfies Maxwell's equations (we consider only the case of the stationary field):

∇ × B = (4π/c) j   (16.7a)

∇ · B = 0   (16.7b)

Third, in order to satisfy the condition of gauge invariance in the magnetic field, the operator ∇ must be transformed as follows [3]:

∇ → ∇ − (ie/ħc) A   (16.8)

where A is the magnetic vector potential related to the induction as

B = ∇ × A   (16.9)

Fourth, the work done on the material to increase the field from B to B + dB is H·dB/4π. As the magnetic moment M of the normal phase (non-ferromagnetic) is practically zero, introduction of the magnetic field changes the free energy density of this phase as follows:


g_n(P, T, H) = g₀ + B²/8π   (16.10)

The "self-contribution" of the magnetic energy density should be included in the free energy density because the magnetic field itself may be influenced by the state of ordering. Thus, the free energy density of an inhomogeneous superconductor in a stationary magnetic field is

g(η, η*, ∇η; P, T, B) = g₀ + B²/8π + (ħ²/2m)|(∇ − (ie/ħc)A)η|² + a|η|² + (b/2)|η|⁴   (16.11)

and the total free energy of the whole system is

G = ∫ g d³x = G₀(P, T) + ∫ [B²/8π + (ħ²/2m)|(∇ − (ie/ħc)A)η|² + a|η|² + (b/2)|η|⁴] d³x   (16.12)

16.2 Equilibrium Equations

At equilibrium the total free energy G must take on the smallest possible value. Hence, equations that describe the fields at equilibrium should be obtained by minimizing G with respect to the three independent functions (η, η*, A). Indeed, η and η* are independent because a complex quantity is characterized by two real ones, and A is independent because the OP field affects the magnetic field through the currents in the system. Varying G in (16.12) with respect to η* and applying the Gauss theorem, we obtain

δG = ∫ δη* [−(ħ²/2m)(∇ − (ie/ħc)A)²η + aη + b|η|²η] d³x + (ħ²/2m) ∮ δη* (∇ − (ie/ħc)A)η · n ds   (16.13)

where n is the unit vector normal to the surface of the superconductor and the second integral is taken over that surface. Setting δG = 0 and assuming that the variation δη* is arbitrary inside the superconductor and on its surface, we obtain the following equation and the BC:

(1/2m)(−iħ∇ − (e/c)A)²η + aη + b|η|²η = 0   (16.14)

(−iħ∇ − (e/c)A)η · n = 0  on S   (16.15)

Varying G with respect to η yields the equations which are CC to (16.14, 16.15). Varying G with respect to A, using the following formula from vector calculus

U · (∇ × V) − V · (∇ × U) = ∇ · (V × U)

and applying the Gauss theorem, we obtain

δG = ∫ δA · [(iħe/2mc)(η*∇η − η∇η*) + (1/m)(e/c)²|η|²A + (1/4π)(∇ × B)] d³x + (1/4π) ∮ δA · (B × n) ds   (16.16)

Setting δG = 0 and assuming that the variation δA is arbitrary inside the superconductor and on its surface, we obtain Maxwell's Eq. (16.7a) with the current density j (cf. (16.15, 16.8)):

j = −(e/m)[(iħ/2)(η*∇η − η∇η*) + (e/c)|η|²A]   (16.17)

and the BC:

B × n = 0  on S   (16.18a)

This is not surprising because the stationary Maxwell's equations can be derived from the variational principle of thermodynamics [3]. Notice that in (16.17) j represents the superconducting current only, because the normal current dies out at equilibrium. The BC (16.18a) yields that on the boundary of the superconductor,

B_n = 0,  B_t = B   (16.18b)

Applying Maxwell's Eq. (16.7a) to BC (16.18a), we obtain a BC on the current:

(4π/c) j · n = (∇ × B) · n = ∇ · (B × n) + B · (∇ × n) = 0   (16.18c)

Notice that this BC can also be derived if BC (16.15) and its CC are applied to the current defined by (16.5a,b, 16.8). Equations (16.7a, 16.14, 16.17) with BC (16.15, 16.18c) define the distributions of the OP and magnetic fields in the superconducting material at equilibrium.

Let us now analyze the homogeneous solutions of the equilibrium equations. First, notice that if B = 0, then we may choose the gauge A = 0, which allows us to define the OP in (16.14, 16.17) as a real number (e.g., the absolute value of the wave function), that is, to reduce our equations to those that have been analyzed in Chap. 8. Second, for a weak but nonvanishing magnetic field, the equations take the form

aη + b|η|²η = 0   (16.19a)

j = −(e²/mc)|η|²A   (16.19b)

The η_n = 0 solution of (16.19a) corresponds to the normal phase; then (16.19b) tells us that j_n = 0. For the superconducting phase,

n_s = |η_s|² = −a/b   (16.20a)

This phase becomes stable when a < 0 (see Chap. 8). Then

j_s = −n_s (e²/mc) A = −(|a|e²/bmc) A   (16.20b)

Applying the operator ∇× to both sides of (16.20b), using Maxwell's Eqs. (16.7a,b, 16.9) and the formula from vector calculus

∇ × (∇ × U) = ∇(∇ · U) − ∇²U

we obtain the London equation [4]:

∇²B = B/δ²,  δ = (c/2|e|) √(bm/π|a|) = (c/2|e|) √(m/πn_s)   (16.21)
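An order-of-magnitude estimate of δ from (16.21), in CGS units, with an assumed carrier density n_s ~ 10²² cm⁻³:

```python
import math

m  = 9.109e-28    # electron mass, g
c  = 2.998e10     # speed of light, cm/s
e  = 4.803e-10    # electron charge, esu
ns = 1.0e22       # assumed density of superconducting electrons, cm^-3

# delta = (c/2|e|) * sqrt(m/(pi*ns)); equivalently delta^2 = m c^2/(4 pi ns e^2)
delta = (c / (2 * abs(e))) * math.sqrt(m / (math.pi * ns))
assert abs(delta - math.sqrt(m * c**2 / (4 * math.pi * ns * e**2))) < 1e-18

print(delta)   # ~5e-6 cm, i.e. tens of nanometers
```

This is the classic scale of the London penetration depth, a few tens of nanometers.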

Notice that the London equation is gauge invariant. Let us use this equation to find the magnetic field distribution close to a plane boundary of a superconductor with vacuum, taken to be the yz-plane with the x-axis directed into the superconductor; see Fig. 16.1a. Then B = B(x) and, as follows from (16.18b), B_x = 0; that is, only the component of the magnetic field parallel to the surface is not zero: B_t ≠ 0. Then (16.21) and BC (16.18a) take the form d²B_t/dx² = B_t/δ² and B_t(0) = B₀, where B₀ is the external field applied parallel to the surface. Hence


Fig. 16.1 (a) A boundary of a superconductor with vacuum. (b) Superconductor/normal phase-transition layer

B_t = B₀ e^{−x/δ}   (16.22)
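The exponential screening (16.22) can be reproduced by solving the one-dimensional London equation d²B/dx² = B/δ² as a finite-difference boundary-value problem. A sketch with δ = 1 and "infinity" truncated at x = 10δ:

```python
import math

delta, B0 = 1.0, 1.0
L, n = 10.0, 1000          # truncate the half-infinite domain at x = L
h = L / n

# Unknowns B_1..B_{n-1}; BCs: B(0) = B0, B(L) ~ 0.
# Discretized equation: B_{i-1} - (2 + (h/delta)^2) B_i + B_{i+1} = 0
diag = [-2.0 - (h / delta) ** 2] * (n - 1)
rhs = [0.0] * (n - 1)
rhs[0] = -B0               # known boundary value moved to the right-hand side

# Thomas algorithm for the tridiagonal system (sub/super-diagonals are all 1)
for i in range(1, n - 1):
    w = 1.0 / diag[i - 1]
    diag[i] -= w
    rhs[i] -= w * rhs[i - 1]
B = [0.0] * (n - 1)
B[-1] = rhs[-1] / diag[-1]
for i in range(n - 3, -1, -1):
    B[i] = (rhs[i] - B[i + 1]) / diag[i]

# compare with (16.22) at x = 3*delta
i3 = round(3.0 / h) - 1
assert abs(B[i3] - B0 * math.exp(-3.0 / delta)) < 1e-3
```

The numerical solution agrees with e^{−x/δ} to a few parts in 10⁵, limited only by the grid spacing and the domain truncation.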

As we can see, the magnetic field penetrates only a thin surface layer of thickness δ of the superconducting sample and decays quickly below this layer. In the theory of superconductivity, this is called the Meissner effect. Now let us calculate the free energy of a superconductor, e.g., a long cylinder, placed into a constant magnetic field parallel to its axis. In this case the equilibrium equations should be obtained by minimizing not the free energy G but the following potential:

Φ({η}; P, T, H) = G({η}; P, T, B) − (1/4π) ∫ H · B d³x   (16.23)

The integral in this potential, as we discussed above, accounts for the work done by the external forces in order to maintain H = const(t). Let us call Φ the Gibbs magnetic potential. Equation (16.23) is an example of the Legendre transformation (B, G) → (H, Φ), analogous to (V, F) → (P, G); see Appendix F. As the OP is not involved in this transformation, minimization of Φ yields the same Eq. (16.14) and BC (16.15). Variation of the additional term in the Gibbs magnetic potential gives

δ ∫ H · B d³x = ∫ H · (∇ × δA) d³x = ∫ δA · (∇ × H) d³x + ∮ (n × H) · δA ds   (16.23a)

Because there are no external currents j_ext, the volumetric term in this expression vanishes:

∇ × H = (4π/c) j_ext = 0   (16.23b)


Hence, the equilibrium Eq. (16.17) remains the same, but all parameters now should be expressed as functions of H instead of B. The surface term will change the BC (16.18) to

(B − H) × n = 0  on S   (16.18d)

For the normal phase, η_n = 0, j_n = 0, M_n = 0, and B = H. Hence,

Φ_n = G₀ − (H²/8π) V   (16.24)

In the bulk of the superconducting phase, |η_s|² = |a|/b and B_s = 0. As the magnetic field penetrates only a thin surface layer of the superconducting material, the contribution of the first two terms in the integral in (16.12) is proportional to the thickness of this layer times the cross-sectional area and can be neglected compared to the contribution of the last two terms, which is proportional to the total volume of the sample V. Hence,

Φ_s = G₀ − (a²/2b) V   (16.25)

Comparing expressions (16.24) and (16.25), we can see that application of a strong enough magnetic field makes the normal phase thermodynamically more favorable than the superconducting one. As the condition of the phase equilibrium is Φ_s = Φ_n, we obtain an expression for the critical field, that is, the external magnetic field strength H_c that renders the superconducting phase thermodynamically (globally) unstable:

H_c = 2|a| √(π/b)   (16.26)

Assuming the linear temperature dependence of the coefficient a:

a = α(T − T_c)   (16.27)

where α > 0 is a constant and T_c is the critical temperature of the transition (cf. Sect. 8.6.2), we obtain

H_c = 2α √(π/b) (T_c − T)   (16.28)

This relation is applicable only for T → T_c − 0 because in the derivation of (16.19a,b) we assumed that the magnetic field is not strong. Notice that our analysis of the superconducting transition yields that it is a second-order transition for H = 0 and a first-order one for H ≠ 0.
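The phase-equilibrium condition behind (16.26) is easy to check numerically: at H = H_c the two potentials per unit volume coincide. Sample values a = −1, b = 2 are assumed:

```python
import math

a, b = -1.0, 2.0                          # assumed Landau coefficients, a < 0, b > 0
Hc = 2 * abs(a) * math.sqrt(math.pi / b)  # critical field (16.26)

phi_n = -Hc**2 / (8 * math.pi)            # (16.24) per unit volume, G0 dropped
phi_s = -a**2 / (2 * b)                   # (16.25) per unit volume, G0 dropped

assert abs(phi_n - phi_s) < 1e-15         # phases are in equilibrium at H = Hc
```

For H < H_c the superconducting phase has the lower potential; for H > H_c the normal phase wins, which is the first-order character noted above.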

16.3 Surface Tension of the Superconducting/Normal Phase Interface

As we concluded in Chap. 9, first-order transitions allow for a state of phase coexistence, that is, a state where two phases, which are at equilibrium with each other, are separated by a transition zone of particular thickness. Hence, coexistence of the superconducting and normal phases is possible at T < T_c and H = H_c. In the theory of superconductivity, such a state is called intermediate. We define the interfacial energy (surface tension) as the excess of the total Gibbs magnetic potential of the system with the interface, per unit area of the interface, compared to that of one of the equilibrium phases. Let us calculate the surface tension σ_ns and thickness of the plane n/s transition layer, which we choose to be parallel to the yz-plane with the x-axis directed into the superconducting phase; see Fig. 16.1b. Then, cf. (9.65a):

σ_ns = ∫_{−∞}^{+∞} (φ − φ_n) dx   (16.29a)

where all quantities depend on x only and

φ = g − H·B/4π  and  φ_n = g₀ − a²/2b   (16.29b)

For the vector potential, which has not been "calibrated" yet, we choose the transverse (or Coulomb, or London) gauge:

∇ · A = 0   (16.30)

Then we obtain that dA_x/dx = 0, and we can choose A_x = 0. The remaining two components of the vector potential may be chosen as A_z = 0 and A_y = A(x). Hence, B_z = dA_y/dx. As the term iA·∇η in the equilibrium Eq. (16.14) and the free energy density (16.11) vanishes, we arrive at the following boundary-value problem:

(1/2m)[−ħ² d²η/dx² + (e²/c²) A_y² η] + aη + b|η|²η = 0   (16.31a)

dB_z/dx = (4π/c)(e/m)[(iħ/2)(η* dη/dx − η dη*/dx) + (e/c)|η|²A_y]   (16.31b)

x → −∞: n-phase, η = 0, B_z = H_c   (16.32a)

x → +∞: s-phase, |η|² = −a/b, B_z = 0   (16.32b)

The surface tension is


σ_ns = ∫_{−∞}^{+∞} [ (1/8π)(B_z² − 2H_cB_z) + (ħ²/2m)((dη/dx)² + (e/ħc)²A_y²|η|²) + a|η|² + (b/2)|η|⁴ + a²/2b ] dx   (16.33)

Given the BC (16.32) and formula (16.26), it is not surprising that the integrand vanishes in the bulk of both phases. Notice that the OP in (16.31, 16.32a,b) may be selected as a real function. Then the variables may be scaled as follows:

x̃ = x/δ,  η̃ = η √(b/|a|),  Ã = A_y/(H_cδ),  B̃ = dÃ/dx̃ = B_z/H_c   (16.34)

and (16.31, 16.32) may be represented in the scaled form:

d²η/dx² = κ² [(A²/2 − 1)η + η³]   (16.35a)

d²A/dx² = A η²   (16.35b)

x → −∞: η = 0, B = dA/dx = 1   (16.36a)

x → +∞: η = 1, B = dA/dx = 0   (16.36b)

where

κ = (mc/|e|ħ) √(b/2π)   (16.37)

and the tildes have been dropped. "These equations, unfortunately, cannot be integrated in quadratures, but we can provide its first integral" [1]:

(2/κ²)(dη/dx)² + (2 − A²)η² − η⁴ + (dA/dx)² = const = 1   (16.38)

where the value of the constant comes from the BC (16.36a). Then expression (16.33) takes the form

σ_ns = (δH_c²/8π) ∫_{−∞}^{+∞} [ (2/κ²)(dη/dx)² + (A² − 2)η² + η⁴ + (dA/dx − 1)² ] dx   (16.39a)

= (δH_c²/4π) ∫_{−∞}^{+∞} [ (2/κ²)(dη/dx)² + (dA/dx)(dA/dx − 1) ] dx   (16.39b)

= (δH_c²/8π) ∫_{−∞}^{+∞} [ (dA/dx − 1)² − η⁴ ] dx   (16.39c)

Notice from (16.39c) that the surface tension may vanish. One can prove (see [1]) that this takes place for the critical value of

κ = 1/√2   (16.40)

Vanishing of the surface tension is an important phenomenon, which comes about as a result of the interaction of the OP and the magnetic fields. It causes many interesting effects in superconductors, including the appearance of electromagnetic vortices. As for the thickness of the transition zone, (16.35a) yields that the length scale of the OP variation is

ξ = δ/κ   (16.41)

This length is equivalent to the correlation radius of Sect. 12.1 because it determines the range of statistical correlations of the OP fluctuations.
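The first integral (16.38) offers a convenient numerical consistency check of the scaled equations (16.35a,b): along any of their solutions the combination below must stay constant. A sketch with an assumed κ = 0.7 and arbitrary initial data, using a hand-written RK4 integrator:

```python
kappa = 0.7                      # assumed GL parameter

def rhs(y):
    """y = [eta, eta', A, A']; right-hand sides of (16.35a,b)."""
    eta, deta, A, dA = y
    return [deta,
            kappa**2 * ((A**2 / 2 - 1) * eta + eta**3),
            dA,
            A * eta**2]

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def first_integral(y):
    """Left-hand side of (16.38)."""
    eta, deta, A, dA = y
    return 2 / kappa**2 * deta**2 + (2 - A**2) * eta**2 - eta**4 + dA**2

y = [0.5, 0.1, 0.3, 0.2]         # arbitrary initial data (not the wall BCs)
E0 = first_integral(y)
for _ in range(2000):            # integrate to x = 2
    y = rk4_step(y, 1e-3)
assert abs(first_integral(y) - E0) < 1e-9
```

For the actual domain-wall solution, the BCs (16.36a) fix the value of the constant to 1.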

Exercise

1. Verify expressions (16.39a,b,c) for the surface tension.

References

1. V.L. Ginzburg, L.D. Landau, Zurn. Exper. Theor. Phys. 20, 1064 (1950), in Russian
2. L.D. Landau, E.M. Lifshitz, Statistical Physics, Part 2 (Pergamon Press, Oxford, 1980)
3. F.E. Low, Classical Field Theory: Electromagnetism and Gravitation (Wiley-Interscience, New York, 1997)
4. F. London, H. London, Proc. Roy. Soc. A 149, 71 (1935)

Chapter 17

Multicomponent Order Parameter: Crystallographic Phase Transitions

As we stated in Chap. 8, a phase transition is associated with a symmetry change. Many cases of phase transitions cannot be described adequately by a scalar order parameter; they require an OP of more complicated structure, e.g., a "vector." The example in the previous chapter is, to some extent, a case in point. Another example is a crystallographic transition, i.e., a symmetry change in a crystalline solid. In this chapter we will consider only equilibrium characteristics of such transformations, which are well described by the Landau theory of phase transitions.

17.1 Invariance to Symmetry Group

Consider a transition from a "high-symmetry" crystalline phase to a "low-symmetry" one, which may be characterized by the densities of atoms ρ₀(r) and ρ(r) so that

ρ(r) = ρ₀(r) + δρ(r)   (17.1)

The functions ρ₀(r) and ρ(r) have different symmetries, that is, sets (groups) of the transformations of the coordinates Γ with respect to which the functions are invariant. In this case Γ(ρ) = Γ(δρ) is a subgroup of Γ(ρ₀) because otherwise no symmetry change occurs at the transformation point. As known from group theory [1], an arbitrary function may be represented as a linear combination of the base functions {φ₁, φ₂, . . .} which transform through each other under any transformation from the group Γ. Moreover, the base functions may be broken into a number of linearly independent sets where the functions from each set transform through the base functions of the same set only and the number of the base functions in the set cannot be reduced any further. Such sets are called irreducible representations and play a special role in the Landau theory because the symmetry change associated with a particular transition may be described entirely by one of the irreducible representations. Then, we represent

δρ(r) = Σ_{i=1}^{f} η_i φ_i(r)   (17.2)

where {φ_i} is the set of normalized base functions of the group Γ(ρ₀) and f is the order of the representation. Because any transformation from the group Γ(ρ₀) transforms the base functions {φ_i} through each other leaving the coefficients {η_i} unchanged, we may think of it as transforming {η_i}'s leaving {φ_i}'s unchanged. If the set {φ_i} is specified, the coarse-grained free energy G of a homogeneous crystal with the density of atoms ρ(r) becomes a function of T, P, and {η_i} and may be expanded about ρ₀(r) if δρ(r) is small compared to ρ₀(r). However, this is an expansion in powers of η_i as opposed to δρ(r) because the coarse-grained free energy of a homogeneous system cannot depend on the space coordinate r. As the free energy of the crystal is independent of the choice of the coordinate system, it must be invariant with respect to the transformations of the group Γ(ρ₀). Hence, the free energy expansion in powers of η_i should contain only invariant combinations of certain powers I^(n)(η_i), and {η_i} may be called a multicomponent order parameter (MOP).

First, notice that the expansion of G in {η_i} contains no linear invariants. Indeed, the existence of such an invariant, e.g., Σ_{i=1}^{f} α_i η_i = const, would mean that the set of base functions {φ_i, i = 1, 2, . . ., f} is not linearly independent, which contradicts our assumption. Second, the only invariant of the second order—a positive definite quadratic form—can always be normalized to the sum of the squares, I^(2)(η_i) = Σ_{i=1}^{f} η_i². Third, the

invariants of different orders I^(n)(η_i) are independent of each other. Hence, the free energy expansion should take the form

G(T, P, {η_i}) = G₀(T, P) + A(T, P) Σ_{i=1}^{f} η_i² + Σ_p B_p(T, P) I_p^(3)(η_i) + Σ_q C_q(T, P) I_q^(4)(η_i) + ⋯

where I_p^(3)(η_i) and I_q^(4)(η_i) are invariants of the third and fourth order, respectively. Crystallographic symmetries of the system determine which invariants should be included in the expansion, which in turn determines what kinds of transitions may take place in the system. The coefficients of the expansion determine the characteristics of the transition, like phase diagrams and thermodynamic quantities. Thus, the proper irreducible representation converts the description of the system through the atomic density into one through the MOP.

17.2 Inhomogeneous Variations

As in the case of a one-component OP, the total free energy of an inhomogeneous system is an integral over the entire volume of the system

G = ∫_V g d³x   (9.15a)

of the free energy density g, which depends on the variables that characterize the state of the system and their spatial derivatives. Hence, the spatial derivatives of the MOP must be included in the free energy density function:

g(T, P, {η_i}, {∂^p η_i/∂x_k^p})

A physically consistent functional dependence of g on ∂^p η_i/∂x_k^p is the subject of this subsection. Because the equilibrium free energy comes from minimization of the integral (9.15a), two points regarding the functional dependence of g on ∂^p η_i/∂x_k^p should be kept in mind. First, only spatial derivatives of the first order should be included in the first approximation of g because, through integration by parts, the higher derivatives can be reduced to lower derivatives plus a surface term. Second, the inhomogeneous contributions may be broken into invariant combinations and considered separately. Hence, g should not contain linear combinations of (∂η_i/∂x_k) because they may be integrated into the surface term (recall the Gauss theorem). The terms bilinear in the MOP and linear in the spatial derivatives may be divided into two contributions, symmetric

η_i ∂η_j/∂x_k + η_j ∂η_i/∂x_k = ∂(η_iη_j)/∂x_k   (17.3a)

and antisymmetric

η_i ∂η_j/∂x_k − η_j ∂η_i/∂x_k   (17.3b)

The former are not essential because they are perfect differentials and can also be integrated into the surface terms. The latter, called Lifshitz invariants, cannot be integrated out and deserve special attention. Notice that the group Γ(ρ₀) transforms the components {∂η_k/∂x_i} as the components of the position vector times the components of the MOP. Hence, the Lifshitz invariants transform as the components of the vector times the "antisymmetric square":

η_k(r)η_l(r′) − η_l(r)η_k(r′)   (17.4)

This means that the Lifshitz invariants are not true scalar invariants of the group Γ(ρ₀), but they can be linearly combined into such. If the system of interest supports only homogeneous equilibrium phases, then the Lifshitz invariants must be absent because their presence does not allow a state homogeneous in the MOP to minimize the total free energy. (Why?) The condition of absence of the Lifshitz invariants in the free energy expansion is equivalent to the condition that the antisymmetric square (17.4) does not transform as a vector. Crystallographic conditions for this are beyond the scope of this book, and they can be found in Ref. [1]. If, on the other hand, the system of interest may sustain heterogeneous equilibrium phases, e.g., the so-called incommensurate phases, then the presence of the Lifshitz invariants is warranted. In the present section, we consider systems that support the homogeneous phases only. Hence, the inhomogeneous part of the free energy density should include only the terms bilinear in the MOP and its spatial derivatives:

g = g(T, P, {η_i}) + κ_{ij}^{kl} (∂η_k/∂x_i)(∂η_l/∂x_j)   (17.5)

where κ_{ij}^{kl} is the fourth-rank tensor of the gradient energy coefficients and summation over the repeated indices is assumed. For the homogeneous state of the crystal to be stable, the inhomogeneous term should allow partitioning into positive definite invariants of the second order in the components of the gradient {∂η_k/∂x_i}. Hence, the transformation properties of the components of the gradient {∂η_k/∂x_i} determine the symmetry of the tensor κ_{ij}^{kl}. Also, the symmetry of the tensor κ_{ij}^{kl} is determined by the symmetry of the system, e.g., crystalline anisotropy (cf. Eq. (9.3) and the discussion afterward).

17.3 Equilibrium States

Equilibrium states of the system described by the free energy (9.15a, 17.5) obey the following simultaneous equations (see Appendix B):

δG/δη_k ≡ ∂g/∂η_k − (∂/∂x_i)[∂g/∂(∂η_k/∂x_i)] = 0   (17.6)

Let us consider a system where κ_{ij}^{kl} = δ_{kl}δ_{ij}κ_i^k/2, that is,

g = g(T, P, {η_i}) + (1/2) κ_i^k (∂η_k/∂x_i)²   (17.7)

Then the simultaneous equations (17.6) take the form

∂g/∂η_k − κ_i^k ∂²η_k/∂x_i² = 0,  k = 1, . . ., f   (17.8)

Notice that in the second term there is no summation over k, but there is summation over i. (Why?) Thermodynamic stability of a heterogeneous equilibrium state η_k^E(x) is determined by the sign of the second variation of the functional:

δ²G = (1/2) ∫_V δη_l Ĥ δη_k d³x   (9.121)

where now δη = (δη_k). The sign of the second variation is determined by the spectrum of its Hamilton's operator:

Ĥ Ψ_{k,n} = Λ_n Ψ_{k,n};  Ĥ ≡ ∂²g/∂η_k∂η_l (η_k^E(x)) − [∂²g/∂(∂η_k/∂x_i)∂(∂η_l/∂x_j)] ∂²/∂x_i∂x_j   (17.9)

Similar to the case of the one-component OP, the gradient

Ψ_{i,k,n} = ∂η_k^E/∂x_i   (17.10)

is the Goldstone mode of the equilibrium state, that is, the eigenfunction with zeroth eigenvalue: Λ_n = 0.

A particularly interesting situation arises in a one-dimensional system where all variables depend on one coordinate only, e.g., x₁ = x. Then (17.8) takes the form

κ_k d²η_k/dx² = ∂g/∂η_k,  k = 1, . . ., f   (17.11)

Using the thermomechanical analogy of Sect. 9.3.3, these equations may be interpreted as describing a conservative mechanical system of f interacting particles with x as the time, η_k the coordinate of the k-th particle, κ_k its mass, and (−g) the potential energy of the whole system. Such a system has a first integral—the total mechanical energy, kinetic plus potential [2]. To see this in our case, we multiply both sides of (17.11) by (dη_k/dx) and sum them up over k. Then

(d/dx) Σ_k (κ_k/2)(dη_k/dx)² = Σ_k (∂g/∂η_k)(dη_k/dx) = dg/dx   (17.12)

Hence, the "conservation of mechanical energy" expression takes the form

Σ_k (κ_k/2)(dη_k/dx)² − g(T, P, {η_i}) = −μ   (17.13)

where μ (the negative of the "total energy") is the value of g inside the homogeneous phases (cf. Eq. (9.30a)). An interesting application of this relation appears if one wants to calculate the interfacial energy σ between two coexisting phases. As we discussed in Sect. 9.3.7,

σ = ∫_{−∞}^{+∞} (g − μ) dx = 2 ∫_{−∞}^{+∞} (g(T, P, {η_i}) − μ) dx   (9.65a, b)

where the first integrand contains the full free energy density (17.7) and the second only its homogeneous part.

Then, using (17.7) for (9.65a) or (17.13) for (9.65b), we obtain the expression

σ = ∫_{−∞}^{+∞} κ_k (dη_k/dx)² dx   (17.14)

which may be interpreted as saying that the interfacial energy is, so to speak, a weighted sum of the squares of all the gradients of the OPs.

As an example of a system with a MOP, let us consider the following free energy density:

g = g₀(T, P) + (A/2)(η₁² + η₂²) + (C₁/2) η₁²η₂² + (C₂/4)(η₁⁴ + η₂⁴)   (17.15)

This potential describes transitions in a crystalline structure that belongs to the crystallographic point group D₄ₕ. It may be visualized by a body-centered tetragonal Bravais lattice in which the transitions constitute displacements of the central atom away from its most symmetric position [3]. The surfaces (g − g₀) as functions of (η₁, η₂) for A = −1, C₂ = 1 and different values of C₁ are depicted in Fig. 17.1. At C₁ = C₂ the surface (g − g₀) is called the "Mexican hat potential" because it is a function of (η₁² + η₂²) only. The homogeneous equilibrium states of this system are obtained from the following system of simultaneous equations:

Fig. 17.1 The surfaces (g − g₀) from (17.15) as functions of (η₁, η₂) for A = −1, C₂ = 1 and different values of C₁: 1.25 (a) and −0.25 (b)

∂g/∂η₁ = η₁(A + C₁η₂² + C₂η₁²) = 0   (17.16a)

∂g/∂η₂ = η₂(A + C₁η₁² + C₂η₂²) = 0   (17.16b)

The system has nine solutions, which may be divided into three groups:

E₀ = (η₁ = η₂ = 0)   (17.17a)

Fig. 17.2 Equilibrium states E₀, E₁, E₂ of the system with the free energy density (17.15), for −A/C₂ = 1 and C₁/C₂ = −0.25 in the order parameter plane (η₁, η₂). Colored trajectories are the domain-wall transition paths for different values of the gradient energy coefficients: κ = 0 (purple), κ = 0.2 (pink), κ = 0.5 (red), κ = 1 (black), κ = 1.5 (blue), and κ = 1 (brown). For C₁ = 0, states E₂ will be in the corners of the square described around the circle "a"; for C₁/C₂ = 1, on the arch of the circle of radius equal to "a"; for C₁/C₂ > 1, on the circle of radius smaller than that of "a." Dashed lines are the trajectories of the domain-wall transition paths for C₁/C₂ > 1

E₁ = (η₁ = 0, η₂² = −A/C₂)  or  (η₁² = −A/C₂, η₂ = 0)   (17.17b)

E₂ = (η₁² = η₂² = −A/(C₁ + C₂))   (17.17c)

In Fig. 17.2 the equilibrium states E₀, E₁, E₂ are depicted in the plane (η₁, η₂). The conditions of local stability of the equilibrium states are the following:

D(E) ≡ (∂²g/∂η₁²)(∂²g/∂η₂²) − (∂²g/∂η₁∂η₂)² > 0  and  g₁₁(E) ≡ ∂²g/∂η₁² > 0   (17.18)
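Criterion (17.18) applied to the states (17.17a,b,c) can also be verified numerically; a sketch with the assumed parameters A = −1, C₁ = −0.25, C₂ = 1 (the case of Fig. 17.2):

```python
A, C1, C2 = -1.0, -0.25, 1.0     # assumed coefficients of (17.15)

def hessian(e1, e2):
    """Second derivatives of the free energy density (17.15)."""
    g11 = A + C1 * e2**2 + 3 * C2 * e1**2
    g22 = A + C1 * e1**2 + 3 * C2 * e2**2
    g12 = 2 * C1 * e1 * e2
    return g11, g22, g12

def is_stable(e1, e2):
    g11, g22, g12 = hessian(e1, e2)
    return g11 * g22 - g12**2 > 0 and g11 > 0   # criterion (17.18)

E0 = (0.0, 0.0)
E1 = (0.0, (-A / C2) ** 0.5)                # one variant of (17.17b)
E2 = ((-A / (C1 + C2)) ** 0.5,) * 2         # one variant of (17.17c)

assert not is_stable(*E0)    # disordered state is unstable for A < 0
assert not is_stable(*E1)    # E1 requires C1/C2 > 1, not satisfied here
assert is_stable(*E2)        # E2 is stable for -1 < C1/C2 < 1
```

Changing C₁/C₂ across the value 1 swaps the roles of E₁ and E₂, in accord with the stability regions listed below.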

Applying this criterion to the states (17.17a,b,c), we obtain their respective regions of local stability:

Fig. 17.3 Projection of the parameter space (A, C₁, κ²) of the system with the free energy density (17.7, 17.15) on the plane: the phase (stability) diagram (A, C₁) for C₂ = const > 0

E0: A > 0  (17.19a)

E1: A < 0, C1/C2 > 1  (17.19b)

E2: A < 0, −1 < C1/C2 < 1  (17.19c)
As you can see, this system has three phase-transition lines: (A = 0, −C2 < C1 < C2), (A = 0, C1 > C2), and (A < 0, C1 = C2). All transitions are of the second kind because, at the transition lines, the stable states exchange their stabilities with the unstable ones, and there are no regions of coexistence of the stable states (phases). Analysis of the inhomogeneous equilibrium states of the system presents another interesting problem for us. As we saw in Sect. 9.3, a stable 1d inhomogeneous state represents a transition zone between two phases separated by an unstable barrier state. Using the free energy (17.15) in (17.11), we obtain

κ1 d²η1/dx² = Aη1 + C1η1η2² + C2η1³  (17.20a)

κ2 d²η2/dx² = Aη2 + C1η1²η2 + C2η2³  (17.20b)

Physically, there are two main types of interfaces: interphase boundaries between two phases of different symmetries, e.g., solid/liquid or fcc/bcc, and antiphase boundaries (APB, domain walls) between two different variants of the same phase, e.g., magnetic or order/disorder. For the system described by the free energy (17.15), only the second type, APB, is possible. (Why?) The APBs exist only for A < 0 because the phase E0 has only one variant. Six different types of APBs may be found in the phases E1 and E2. Their symmetries depend on the kinds of variants that are connected by the APB and the barrier state that separates them. The base symmetries, (E1←E0→E1 or E1←E2→E1) and (E2←E0→E2 or E2←E1→E2), are broken by the gradient energy anisotropy. As a result, there appear six main types of APBs, which may be found in phase E1 or E2 in different modifications; see Fig. 17.2. Their interfacial energies depend on the values of C1, C2, and κ1, κ2. The effect of these parameters on the interfacial energy of the domain wall depends on its symmetry. If C1 = 0, the simultaneous equations (17.20) break down into two independent equations for two different OPs. This means that the domain wall consists of two parts which coexist without interactions. In the scaled units

x → x/√(κ1/C2), σ → σ/√(κ1C2), a ≡ −A/C2, c ≡ C1/C2, κ ≡ κ2/κ1  (17.21)

the dimensionless parameter κ is a measure of the gradient energy anisotropy in the OP space (η1, η2). In the isotropic E2 phase (κ = 1), the OP symmetry is not broken, and the transition path is represented by the straight-line diagonals connecting the equilibrium states in the OP plane; see Fig. 17.2. The energies of the special cases of the (E2←E0→E2) interfaces are

σE2←E0→E2(a, −1 < c < 1, κ) = 2√2(1 + √κ)a^(3/2)/3, if c = 0; 4√2 a^(3/2)/[3(1 + c)], if κ = 1  (17.22a)
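The c = 0 case of (17.22a) can be verified by direct quadrature, since for C1 = 0 the wall is a pair of independent single-OP kinks. The following sketch (an illustrative check, not part of the original text; the tanh kink profile and the parameter values are standard assumptions) sums the two kink energies numerically:

```python
import numpy as np

# Numerical check of the c = 0 case of (17.22a): for C1 = 0 the OPs decouple
# and the wall is a pair of independent kinks eta_i = sqrt(a)*tanh(x/w_i),
# w_i = sqrt(2*kappa_i/a), in the scaled units (17.21) (kappa_1 = 1, kappa_2 = kappa).
def kink_energy(a, kap, L=40.0, n=400001):
    x = np.linspace(-L, L, n)
    w = np.sqrt(2.0 * kap / a)                      # kink width
    deta = np.sqrt(a) / (w * np.cosh(x / w) ** 2)   # d(eta)/dx
    dx = x[1] - x[0]
    return np.sum(kap * deta ** 2) * dx             # sigma_i = int kappa_i (eta')^2 dx

a, kap = 1.0, 0.5
sigma_num = kink_energy(a, 1.0) + kink_energy(a, kap)
sigma_ana = 2.0 * np.sqrt(2.0) * (1.0 + np.sqrt(kap)) * a ** 1.5 / 3.0
print(sigma_num, sigma_ana)
```

The quadrature reproduces the closed form 2√2(1 + √κ)a^(3/2)/3 to numerical precision.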

For the non-special cases of c and κ, the solutions may be obtained numerically using the ideas of the thermomechanical analogy from Sect. 9.3.3. In Fig. 17.2 the results of the computations are presented in the OP plane. Notice two features of the trajectories. First, in both cases they pass through the state E0 as the barrier, which is the local maximum of the free energy; see Fig. 17.1b. Second, they press closer to the “hard” OP axis than to the “soft” one. (Why? Hint: think about the interfacial thicknesses of the “soft” and “hard” OPs separately.) The interfacial energies of the respective domain walls are

Fig. 17.4 Scaled interfacial energy σ of the domain walls in the phase E2 at c = −0.25 as a function of the scaled gradient energy coefficient κ. Blue curve, (E2←E0→E2) type, (17.22); green curve, (E2←E1→E2) type, (17.23)

σE2←E0→E2(a = 1, c = −0.25, κ) = 2.16, if κ = 0.5; 2.80, if κ = 1.5  (17.22b)

For the (E2←E1→E2) interfaces, the symmetry between the OPs is broken even if κ = 1, and the transition-path trajectories are represented by the respective hyperbolae in the OP plane; see Fig. 17.2. The interfacial energies of these layers are

σE2←E1→E2(a, −1 < c < 1, κ) → 2√2(1 − c)a^(3/2)/[3(1 + c)], κ → 0; 2√(2κ)(1 − c)a^(3/2)/[3(1 + c)], κ → ∞  (17.23)

The type of the interface, (E2←E0→E2) (17.22) or (E2←E1→E2) (17.23), depends on the symmetries of the terminal phases that the interface connects. Both types of interfaces are thermodynamically stable, that is, their Hamilton's operators (17.9) do not have negative eigenvalues. In Fig. 17.4 the interfacial energies of the two types of interfaces are plotted as functions of κ. As you can see, interfaces of different types may coexist in the same phase although they have different amounts of energy. In the E1 phase (c > 1), there are two types of APBs: (E1←E0→E1), where only one OP varies while the other one is zero, and (E1←E2→E1), where both OPs vary simultaneously. The symmetry of the (E1←E0→E1) interfaces is broken by the gradient energy coefficients, and the trajectories of these interfaces in the OP plane are, respectively, horizontal and vertical straight lines; see Fig. 17.2. Trajectories of the (E1←E2→E1) interfaces in the OP plane are arches. The interfacial energies of the (E1←E0→E1) interfaces are

σE1←E0→E1(a, c > 1, κ) = 2√2 a^(3/2)/3  (17.24a)

σE1←E0→E1(a, c > 1, κ) = 2√2 √κ a^(3/2)/3  (17.24b)

Exercises

1. Find the equilibrium states of the free energy density that describes transitions from the fcc to the L10 crystalline structure:

g = g0 + (A/2)(η1² + η2² + η3²) + Bη1η2η3 + (C1/3)(η1² + η2² + η3²)² + (C2/4)(η1⁴ + η2⁴ + η3⁴) + (κ/2)(|∇η1|² + |∇η2|² + |∇η3|²)

References

1. L.D. Landau, E.M. Lifshitz, Quantum Mechanics: Non-Relativistic Theory (Pergamon, Oxford, 1958)
2. H. Goldstein, Classical Mechanics (Addison-Wesley, Reading, MA, 1980), p. 5
3. P. Toledano, V. Dmitriev, Reconstructive Phase Transformations (World Scientific, Singapore, 1996); see also L.D. Landau, E.M. Lifshitz, Statistical Physics (Pergamon, Oxford, 1958)

Chapter 18

“Mechanical” Order Parameter

Consider a system that is described by two scalar OP fields, ψ(r, t) and η(r, t), of different nature. The OP ψ represents a generalized coordinate characterized by the mass density ρ, while the OP η(r, t) is "massless," that is, its mass density is zero. Then, in addition to the potential and gradient energies, the system is characterized by kinetic energy. In the spirit of our previous discussion, we will assume that the field η(r, t) is responsible for dissipation of the mechanical energy of the system. The field ψ(r, t) may be called mechanical (Lagrangian) and the field η(r, t) thermodynamic. In the field theory (see Appendix D), such a system is described by the Lorentz-invariant Lagrangian density:

l{ψ, η} = (ρ/2)(∂ψ/∂t)² − (κψ/2)(∇ψ)² − (κη/2)(∇η)² − u(ψ, η)  (18.1)

where κψ, κη are the gradient energy coefficients of the fields ψ(r, t), η(r, t) and u is the potential energy density of the system, assumed to be nongravitational. For the Lagrangian density (18.1), the mechanical energy of the system is (see Eq. (D.18))

E = ∫Ω d³x [ (∂l/∂(∂ψ/∂t)) (∂ψ/∂t) − l ] = ∫Ω d³x [ (ρ/2)(∂ψ/∂t)² + (κψ/2)(∇ψ)² + (κη/2)(∇η)² + u(ψ, η) ]  (18.2)

Many physical systems can be described by the fields ψ(r, t) and η(r, t). The following two examples are of interest to us. Martensitic transformation is a distortion of a crystalline lattice that does not require long-range diffusion of atoms. It is often characterized as a polymorphic (e.g., fcc-to-hcp) diffusionless transformation controlled by shear stress and strain. The field ψ may describe the distribution of shear strain in a material of mass density ρ, with the gradient energy coefficient κψ related to the nonlocal elastic behavior of the lattice. However, a purely mechanical description does not account for the dissipative interatomic interactions, which lead to losses of mechanical energy. These interactions may be described by the field η(r, t). Another example comes from the area of crack propagation in brittle materials. In this case the Lagrangian field ψ may characterize the displacement of the material perpendicular to the plane of the crack; the displacement gradients then correspond to components of strain. The dissipative field η represents defective material and distinguishes between broken (η = 0) and unbroken (η = 1) states.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/10.1007/978-3-031-29605-5_18

For the equation of motion of the field ψ(r, t), we may try to use the Lagrange equations (D.14); their right-hand sides may be expressed as the partial functional derivatives of E with respect to the spatial variations of the respective fields. So, for the field ψ(r, t), we have

ρ ∂²ψ/∂t² = −δE/δψ  (18.3)

For the massless field η(r, t), the Lagrange equation of motion (D.14) turns into an equation of equilibrium, which means that the dynamical evolution of this field is not governed by Hamilton's principle. Hence, the Lagrange equation for this field must be replaced with a dissipative one. For the massless field η(r, t), the deviation of the partial functional derivative of E from zero is a measure of the deviation of the system from equilibrium, that is, the thermodynamic driving force. Hence, we may assume that the rate of change of the field η(r, t) is proportional to this force:

∂η/∂t = −ε δE/δη  (18.4)

This equation is analogous to TDGLE (11.1) with the mechanical energy E replacing the free energy G. Let us calculate the rate of dissipation of the mechanical energy E. Differentiating (18.2) with respect to time as a parameter, integrating by parts, and dropping the boundary term, we obtain the time derivative of the mechanical energy E:

dE/dt = ∫Ω d³x [ (ρ ∂²ψ/∂t² + δE/δψ) ∂ψ/∂t + (δE/δη)(∂η/∂t) ]  (18.5)

Due to the equations of motion (18.3, 18.4), this is

dE/dt = −ε ∫Ω d³x (δE/δη)²  (18.6)


This equation shows that, indeed, changes of the field energy E occur due to the evolution of the field η(r, t) only, with no influence of the field ψ(r, t). As the energy of a mechanical system cannot be produced but may be dissipated, we conclude that ε ≥ 0. According to the Lagrangian field theory (Appendix D), the states of mechanical equilibrium are described by the following simultaneous equations:

δE/δψ ≡ ∂u/∂ψ − κψ∇²ψ = 0  (18.7a)

δE/δη ≡ ∂u/∂η − κη∇²η = 0  (18.7b)

Notice that in the nongravitational system, the mass density ρ has no bearing on the equilibrium states. For a homogeneous equilibrium state, (18.7) yields

∂u/∂ψ(ψ, η) = ∂u/∂η(ψ, η) = 0  (18.8)

For a heterogeneous equilibrium state, let us consider an interface, that is, a 1d transition region between large regions of two different homogeneous equilibrium states (ψα, ηα) and (ψβ, ηβ). Such a state has the first integral (D.19):

(κψ/2)(dψ/dx)² + (κη/2)(dη/dx)² − u(ψ, η) = const(x) ≡ −ν  (18.9)

The first integral (18.9) yields that in the terminal phases

u(ψi, ηi) = ν, i = α, β  (18.10a)

because

dψ/dx = dη/dx = 0, at x → ±∞  (18.10b)

and

du/dx(ψi, ηi) = 0, at x → ±∞  (18.11a)

because

du/dx = (∂u/∂η)(dη/dx) + (∂u/∂ψ)(dψ/dx)  (18.11b)

The interface can be characterized by the surface tension

σ = ∫_{−∞}^{+∞} [ κη(dη/dx)² + κψ(dψ/dx)² ] dx = ση + σψ  (18.12)

where ση and σψ are the partial tensions:

ση ≡ ∫_{−∞}^{+∞} κη(dη/dx)² dx  (18.13a)

σψ ≡ ∫_{−∞}^{+∞} κψ(dψ/dx)² dx  (18.13b)

To analyze the dynamical properties of the system described by (18.3, 18.4), we look first at the evolution of small deviations of the fields from the homogeneous equilibrium {ψi, ηi} described by (18.10):

φ = ψ(r, t) − ψi  (18.14a)

θ = η(r, t) − ηi  (18.14b)

Expanding the potential energy function about the equilibrium state {ψi, ηi}, we obtain

u(ψ, η) = ν + (1/2)uψψφ² + uψηφθ + (1/2)uηηθ² + h.o.t.  (18.15)

where

upq = ∂²u/∂p∂q (ψi, ηi); p, q = ψ, η  (18.15a)

Then, dropping the higher-order terms (h.o.t.), (18.3, 18.4) transform to

ρ ∂²φ/∂t² = κψ∇²φ − (uψψφ + uψηθ)  (18.16a)

(1/ε) ∂θ/∂t = κη∇²θ − (uηψφ + uηηθ)  (18.16b)

Here uψη plays the role of the interaction coefficient in the sense that if uψη = 0, the two fields have no linear interactions and evolve independently.¹ In this case the independent fields ψ and η have significantly different dynamical properties: evolution of the field ψ is described by the wave equation with the phase velocity

cph ≥ cψ ≡ √(κψ/ρ)  (18.17)

and linear dispersion proportional to uψψ, while evolution of the field η is described by the heat equation with Newton's cooling (or heating) proportional to uηη, which determines growth or decay of the field. If the interaction coefficient is not zero, uψη ≠ 0, the properties of the fields described by (18.16) change significantly. To reveal an important property that appears as a result of the interaction, we could represent the waves φ(r, t) and θ(r, t) as Fourier modes and study their spectrum, as we did in Sect. 11.2. However, we find it more instructive to demonstrate a different method here. It will suffice to consider evolution of the homogeneous modes φ(t) and θ(t) only. Then (18.16b) can be resolved as follows:

θ = ∫_{−∞}^{t} e^{εuηη(t′ − t)} [−εuηψ φ(t′)] dt′ = −(uηψ/uηη) Σ_{n=0}^{∞} (d⁽ⁿ⁾φ/dt⁽ⁿ⁾)/(−εuηη)ⁿ  (18.18)

The second equality in this expression is obtained by applying Taylor's formula to φ(t′). Restricting the series in (18.18) to the first three terms and substituting them into (18.16a), we obtain

[ρ − uψη²/(ε²uηη³)] d²φ/dt² + [uψη²/(εuηη²)] dφ/dt + [uψψ − uψη²/uηη] φ = 0  (18.19)

This equation is analogous to an equation of 1d motion of a particle with the generalized coordinate φ in a potential field ~φ² and a medium with a dissipative force that depends on the speed of the particle. The generalized mass of the particle and the potential function are affected by the interaction coefficient uψη, and the dissipation coefficient

¹To be exact, almost independently, because there may still be nonlinear interactions between these fields.

α = uψη²/(εuηη²)  (18.20)

is proportional to the interaction coefficient squared. Similarly, taking into account the gradient energies of the fields ψ and η, we can derive from (18.16) the telegraph equation [1], which describes propagation of damped waves in a medium with damping proportional to the interaction coefficient squared. The series in (18.18) can be approximated by its first three terms only if the characteristic time of oscillations of the field ψ is much greater than that of the field η. If the ψ–η interactions are not strong,

uψη²/(uψψuηη) ≪ 1  (18.21)

then the former time is √(ρ/uψψ) and the latter is (εuηη)⁻¹; see (18.16). Hence, the condition of applicability of (18.19) is

εuηη √(ρ/uψψ) ≫ 1  (18.22)

This means that the dissipation coefficient (18.20) is small:

α ≪ √(ρuψψ)  (18.23)
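The dissipative reduction (18.16)–(18.20) can be illustrated by integrating the homogeneous modes of (18.16) directly and comparing the decay of the φ-oscillation with the rate α/2ρ implied by (18.19, 18.20). The parameter values below are illustrative choices satisfying (18.21) and (18.22), not values from the text:

```python
import numpy as np

# Homogeneous modes of (18.16): rho*phi'' = -(u_pp*phi + u_ph*theta),
# theta'/eps = -(u_ph*phi + u_hh*theta).  With weak coupling and a fast
# theta-field, phi should oscillate with a slowly decaying envelope,
# the decay rate being alpha/(2*rho) with alpha from (18.20).
rho, u_pp, u_hh, u_ph, eps = 1.0, 1.0, 1.0, 0.1, 10.0
alpha = u_ph ** 2 / (eps * u_hh ** 2)               # dissipation coefficient (18.20)

def rhs(y):
    phi, v, th = y
    return np.array([v,
                     -(u_pp * phi + u_ph * th) / rho,
                     -eps * (u_ph * phi + u_hh * th)])

y = np.array([1.0, 0.0, -u_ph / u_hh])              # theta starts at its slow value
dt, t_end = 0.01, 200.0
for _ in range(int(round(t_end / dt))):             # classical RK4 integrator
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

omega = np.sqrt((u_pp - u_ph ** 2 / u_hh) / rho)    # renormalized frequency, cf. (18.19)
amp = np.hypot(y[0], y[1] / omega)                  # envelope of phi at t_end
print(amp, np.exp(-alpha / (2.0 * rho) * t_end))
```

The measured envelope tracks exp(−αt/2ρ), illustrating that the effective damping of the mechanical field indeed scales as the interaction coefficient squared.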

Equations (18.19)–(18.23) shed light on the physical nature of dissipative forces in mechanical systems: they come from interactions with other degrees of freedom, the dynamics of which are dissipative. The dynamic equations (18.3, 18.4, 18.7) support 1d train waves traveling with speed v. The waves are described by the nonlinear equations:

κψ(1 − v²/cψ²) d²ψ/dx² = ∂u/∂ψ  (18.24a)

κη d²η/dx² + (v/ε) dη/dx = ∂u/∂η  (18.24b)

Compared to the system of two Lagrangian fields (see Appendix D), the system of equations (18.24) does not have a first integral. Exact solutions of (18.24) with interacting fields ψ and η are possible but not known to the author. However, there is a lot that we can learn about the system of equations (18.24) without solving it exactly. Let us multiply (18.24a) by dψ/dx and (18.24b) by dη/dx, add them together, and integrate the sum from −∞ to +∞. Then, using (18.12), we obtain the relation for the wave speed

v = (εκη/ση) [ u(ηα, ψα) − u(ηβ, ψβ) ]  (18.25)

Notice that, contrary to the Lagrangian system of Appendix D, for the system (18.24) the selection problem, that is, finding the unique value of the velocity of the train wave, is resolved. This is a consequence of the dissipative nature of the field η. The velocity v is proportional to the potential energy difference between the equilibrium states α and β, with a coefficient of proportionality that may depend on the speed v through the partial tension ση. However, to estimate the kinetic coefficient in (18.25), we can use an equilibrium estimate of the surface tension or abbreviated action as σηl ≈ κη[ηα − ηβ]², where l is the thickness of the transition region; see Eqs. (D.26, D.28) and Example 9.1. Then

v ≈ εl [ u(ηα, ψα) − u(ηβ, ψβ) ] / (ηα − ηβ)²  (18.26)

Reference

1. A.N. Tikhonov, A.A. Samarskii, Equations of Mathematical Physics (Dover, New York, 1963), p. 72

Chapter 19

Continuum Models of Grain Growth

Many solid materials consist of small crystallites called grains, which are domains of the same phase with different spatial orientations of their crystalline lattices. The grains are separated from each other by grain boundaries (GBs), which are 2d extended defects with positive excess free energy and are, therefore, thermodynamically unstable. Due to their global instability, curved GBs are known to move toward the centers of their curvature. However, overall, the GBs move so as to increase the average grain size R̄ and reduce the total GB area, and thus the total free energy of the system. This is another example of the coarsening evolution of thermodynamic systems (cf. Chap. 5).

19.1 Basic Facts About Growing Grains

Simple dimensional analysis provides an approximate time dependence for R̄(t). Indeed, if we assume that the GB energy is the only driving force for its motion, then [1]

dR̄/dt = +a/R̄  (19.1)

where the coefficient a is a product of the GB energy and mobility. Notice that the right-hand side of (19.1) is positive because we apply this formula to the average size R̄, not the individual grain radii (cf. Eqs. (11.34c) and (11.44)). The solution of (19.1) is

R̄² − R̄0² = 2at  (19.2)

where R̄0 is the average grain size at t = 0 (cf. Eqs. (11.36b) and (11.45)). However, R̄ is not the only characteristic that determines the global evolution of the grain structure, because grains of the same size can have different topological

properties. The latter are characterized (in the 2d case) by the topological class of a grain, k: the number of its neighbors (or sides, or triple junctions). The area A of a 2d grain then obeys the following relation [2]:

dA/dt = (aπ/3)(k − 6)  (19.3)

which shows that grains with more than six sides grow, while those with fewer than six sides shrink. A variety of models of grain growth have been proposed with the common feature that the GBs have zero thickness (the sharp-interface models). However, recently several authors proposed rather different models of grain growth in which the GBs are assumed to be diffuse, that is, to have finite thickness. To describe the evolution of the grain structure, these models use the gradient-flow paradigm, which includes the Langevin TDGLE with random forces, with or without the cross terms (cf. Eq. (12.29a)):

∂ηi/∂t = −γij δG/δηj + ξi(r, t)  (19.4)

and the free energy functional

G = ∫Ω g d³x  (9.15a)

with different free energy density functions g, which are the subject of discussion in this section. It is not our intention here to pass judgment on which model is better; this is a complicated question, which is still very much under consideration in the literature. Our goal here will be only to introduce the models and present their advantages and disadvantages.
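Before turning to the models, the kinematic laws (19.1)–(19.3) of Sect. 19.1 are easy to check numerically. The sketch below is illustrative and not part of the original text; the values of a and R̄0 are arbitrary:

```python
import math

# Euler integration of (19.1), dRbar/dt = +a/Rbar, to verify the parabolic
# law (19.2); a and R0 are arbitrary illustrative values.
a, R0, dt, T = 2.0, 1.0, 1.0e-4, 5.0
R = R0
for _ in range(int(round(T / dt))):
    R += dt * a / R
print(R * R - R0 * R0, 2.0 * a * T)        # should nearly coincide

# Von Neumann-Mullins law (19.3): the sign of dA/dt is set by k - 6 alone.
growth_rates = {k: a * math.pi / 3.0 * (k - 6) for k in (4, 5, 6, 7, 8)}
print(growth_rates)
```

The integrated R̄² − R̄0² reproduces 2at, and the tabulated rates are negative for k < 6, zero at k = 6, and positive for k > 6.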

19.2 Multiphase-Field Models

Based on some similarities between the properties of grains and antiphase domains (see Chap. 17), Chen et al. [3] proposed a model in which a polycrystalline microstructure is described by a number of OP fields, {ηi(r, t); i = 1, . . ., N}, that designate different orientations of the grains. The free energy density of the system is

g = g{ηi(r, t)} + (1/2) Σ_{i=1}^{N} κi|∇ηi|²  (19.5)

where κi are the gradient energy coefficients. The main requirement for the homogeneous part of the free energy density is that it have degenerate minima of equal depth. A simple function that satisfies this requirement is

g = g0(P, T) + A Σ_{i=1}^{N} [ −(1/2)ηi² + (1/4)ηi⁴ + ηi² Σ_{j=1, j≠i}^{N} ηj² ]  (19.6)

It has 2N minima located at {ηi = ±1; ηj = 0, j = 1, . . ., i − 1, i + 1, . . ., N} in the N-dimensional space, each one representing a specific crystallographic orientation of a grain. The free energy of a plane GB is

σGB = ∫_{−∞}^{+∞} Σ_{i=1}^{N} κi (dηi/dx)² dx  (19.7)
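A minimal 1d relaxation sketch of a flat boundary between two grains in the spirit of (19.4)–(19.7) is given below (noiseless dynamics with diagonal unit mobility; the grid, the parameter values, and the simplified potential are illustrative assumptions, not taken from [3]):

```python
import numpy as np

# Explicit relaxation of two OPs toward a flat GB profile: TDGLE (19.4)
# without noise, free energy density of the (19.6) type with A = kappa = 1.
A, kappa, dx, dt = 1.0, 1.0, 0.2, 0.01
x = np.arange(-20.0, 20.0, dx)
e1 = 0.5 * (1.0 - np.tanh(x))          # grain 1 occupies the left half
e2 = 0.5 * (1.0 + np.tanh(x))          # grain 2 occupies the right half

def lap(f):                            # 1d Laplacian with frozen end values
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx ** 2
    return out

for _ in range(4000):
    f1 = A * (-e1 + e1 ** 3 + 2.0 * e1 * e2 ** 2)   # dg/d(e1) from (19.6)
    f2 = A * (-e2 + e2 ** 3 + 2.0 * e2 * e1 ** 2)
    e1 += dt * (kappa * lap(e1) - f1)
    e2 += dt * (kappa * lap(e2) - f2)

d1, d2 = np.gradient(e1, dx), np.gradient(e2, dx)
sigma_gb = np.sum(kappa * (d1 ** 2 + d2 ** 2)) * dx  # GB energy, cf. (19.7)
print(e1[0], e2[-1], sigma_gb)
```

After relaxation, each OP stays at its bulk value deep inside its grain, both OPs vary only in a thin transition zone, and the resulting σGB is finite and positive, as the model requires.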

The main contributions to the GB energy between two grains of orientations ηi and ηj come from the gradients of ηi and ηj, because the other OPs practically do not vary in the transition zone. This model provides the correct temporal variation of the average grain size, asymptotically proportional to the square root of time, and reasonably well reproduces the evolution of the grain-size distribution function and the local topological classes (numbers of sides) of individual grains. However, deficiencies of the model are prominent. First, the model allows for a finite number of orientations only, while in a real material any grain orientation is possible. Second, the set of 2N allowed orientations is doubly degenerate, which allows the orientations {ηi = +1; ηj≠i = 0} and {ηi = −1; ηj≠i = 0} to form, so to speak, an anti-orientation GB with a finite GB energy; see Sect. 17.3. Yet in a real material these orientations are totally equivalent and do not form a GB. Overall, the model treats the grains as variants of the same phase, which is not the case. To describe the evolution of multiphase systems, Steinbach et al. [4] developed a method where, instead of the OPs, the main field variables are the partial phase contents pi(r, t), i = 1, . . ., N. These phase-field variables are not independent; they are defined on the Gibbsian simplex

Σ_{i=1}^{N} pi(r, t) = 1  (19.8a)

and allowed to vary only between 0 and 1:

0 ≤ pi ≤ 1  (19.8b)

The free energy density of the system is

g = Σ_{i=1}^{N} gi(P, T)pi + (1/2) Σ_{i,j=1}^{N} [ αij pi²pj² + κij (pi∇pj − pj∇pi)² ]  (19.9)

where gi(P, T) is the free energy density of the individual phase i. Notice that the gradient energy contribution in (19.9) is a weighted sum of the squares of the Lifshitz invariants; see (17.3b). For the dynamics of the system, the authors used the relaxation ansatz (19.4) with the cross terms. The method has a number of inconsistencies. First and foremost, application of the relaxation ansatz (19.4) is not justified here because vanishing of the variational derivative is not the condition of equilibrium. Indeed, because the system is defined on the geometrically bounded simplex (19.8), the total free energy minimum can be achieved on its boundary, where the functional (9.15a) is not differentiable. The method also has a computational disadvantage in modeling a multiphase system, as it needs N field variables pi to describe a system with N phases, while other methods require ~log₂N OP fields. However, the multiphase method had some success in simulations of the evolution of the grain structure in monatomic systems, as the average grain size was found to be proportional to time to the power ~0.38.

19.3 Orientational Order Parameter Field Models

Morin et al. [5] introduced a model of the evolution of a polycrystalline material in d-dimensional space where the grains are characterized by a d-component vector field η(r, t) whose magnitude represents the local crystalline order and whose direction represents the orientation of the grain. The latter is characterized by the angle θ(r, t) ≡ arccos(i·η/|η|) that the vector field η(r, t) makes with a fixed direction i in space. The free energy density of the system is

g = f( |η(r, t)|², c(r, t), ∇η, ∇c, cos[Nθ(r, t)] )  (19.10)

where c(r, t) represents the local atomic concentration and N is the order of breaking of the rotational symmetry of the system, that is, the number of distinguished

orientations of grains. The evolution of the system is governed by d TDGLEs (19.4) for the nonconservative field η(r, t) and a Langevin–Cahn–Hilliard equation for the conservative field c(r, t):

∂c/∂t = M∇²(δG/δc) + ζ(r, t)  (15.31)

The long-time behavior of the studied 2d system was dominated by the scaling regime, when the structure factors of the concentration and ordering exhibited the so-called Porod's law, that is, S(q, t) → q^−(d+1) for q → ∞, where q is the magnitude of the Fourier wavevector. This was attributed to the coupling between the nonconservative η(r, t) and the conservative c(r, t) fields. However, the validity of this result may be called into question by the contribution to the free energy density

cos[Nθ(r, t)] |η(r, t)|²  (19.11)

which violates the isotropy of the system. Indeed, a mere rotation of the reference frame (i, j, k) causes irreducible changes in the free energy differences between grains of different orientations, which in turn should change the dynamics of the system. Lusk [6] suggested a model where the lattices of differing orientations are distinguished by a set of lattice parameters {s1, s2, . . ., sN}, where N is the number of allowed grain orientations. However, in contrast to the previous model, these parameters do not enter the free energy function. Instead, the free energy of the system depends only on the gradients of these parameters:

g = g(η, P, T) + λ(η) Σ_{i=1}^{N} (∇si)²  (19.12)

The lattice parameters obey relaxation dynamics of the type (19.4). Using matched asymptotic analysis, the author was able to recover GB motion by mean curvature. He also obtained 1d analytical and numerical solutions for a stationary GB. However, this solution presents the GB as a layer of melt sandwiched between two solid crystallites (a wetted-GB model), which is not the case in polycrystalline materials far from the melting point. Kobayashi, Warren, and Carter (KWC) [7] introduced a model of grain structure that allows for an arbitrary orientation of grains and is invariant with respect to rotation of the reference frame. In the 2d realization of the model, the grains are characterized by two nonconservative fields: the OP field η(r, t), which is interpreted as the level of local crystalline order, and the field θ(r, t), which is interpreted as the local orientation of the grain lattice with respect to a fixed axis in space, say x. The latter is similar to the model of [5]. However, what is different from that model is that the free energy density in the KWC model depends only on the powers of the gradient of the field θ(r, t):

g = g(P, T; η) + (1/2)κ(∇η)² + α(η)|∇θ| + (1/2)β(η)|∇θ|²  (19.13)

The presence of the linear term |∇θ| is required in the model for the localization of the GB at equilibrium. The equilibrium structure of the GB is accompanied by lowering of the value of the OP η, that is, disordering of the GB. Evolution of the ordering field is governed by relaxation dynamics of the type (12.29) with a relaxation coefficient that has a singular dependence on η:

P(η, ∇η, ∇θ) dθ/dt = −(δG/δθ)P,T = ∇·[ α(η) ∇θ/|∇θ| + β(η) ∇θ ]  (19.14)

The authors were able to reproduce some features of the grain structure in materials, such as the dependence of the GB energy on GB misorientation (the change in lattice orientation across the boundary), GB wetting and motion, and grain rotation (the time change of orientation in the grain interior). However, due to the significant singularity in the free energy density (19.13), its computational implementation requires a number of physically unappealing fixes, such as the singular mobility P(η, ∇η, ∇θ) or smoothing functions to emulate the singular term |∇θ| in (19.14). These measures make the model computationally taxing.

19.4 Phase-Field Crystal

Elder et al. [8] introduced a method for modeling transformations in materials, including effects related to multiple crystal orientations. They called it the phase-field crystal because it describes phase changes as the evolution of the atomic density field according to dissipative dynamics driven by minimization of the free energy of the system. The free energy density is approximated as

g = (1/2) φ [ α + λ(q0² + ∇²)² ] φ + (1/4) βφ⁴  (19.15)

where α, β, λ, and q0 are material parameters and φ(r, t) is the deviation of the atomic density from the density of the liquid state that is at equilibrium with the solid state at the temperature of the transformation. In the framework of this method, formation of a crystalline solid is signaled not by the value of an OP being greater than zero but by the field φ(r, t) being unstable to the formation of a periodic structure. A full nonlinear solution of the minimization problem is very complicated even in 1d. The authors studied a one-harmonic-mode approximation of the linearized (free-field) problem. In the 2d small-α limit, the functional

G = ∫V g d³x  (9.15a)

with the free energy density (19.15) is minimized by the deviation

φm = φ̄ + (4/5)[ φ̄ + (1/3)√(−15α/β − 36φ̄²) ] [ cos(√3x/(2q0)) cos(y/(2q0)) − (1/2) cos(y/q0) ]  (19.16)

which represents a triangular distribution of “particles” with the reciprocal lattice vectors

b1 = (1/q0)( (√3/2) i + (1/2) j ), b2 = (1/q0) j  (19.17)

For this solution the parameter q0 represents the distance between the nearest-neighbor “particles,” which correspond to the atomic positions. To obtain the “phase diagram” in terms of the average density φ̄ and the parameter α, the authors used the Maxwell equal-area construction. Given that the field φ(r, t) is conservative, and assuming that its dynamics is dissipative and driven by minimization of the free energy, the authors used the Cahn-Hilliard equation (15.14) for the evolution of the field. Using the phase-field-crystal method, the authors obtained reasonable scenarios of the evolution of the grain structure in materials, including grain growth and rotation.
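The one-mode field (19.16) with the reciprocal vectors (19.17) is easy to tabulate on a grid. In the sketch below, the parameter values are illustrative, and the amplitude prefactor follows the standard one-mode (Elder–Grant) form, which should be treated as an assumption here:

```python
import numpy as np

# Tabulate the one-mode triangular density of the (19.16) type on one
# modulation cell; alpha, beta, q0, phibar are illustrative, and the
# amplitude prefactor is the standard one-mode form (an assumption).
alpha, beta, q0, phibar = -0.2, 1.0, 1.0, 0.1
amp = 0.8 * (phibar + np.sqrt(-15.0 * alpha / beta - 36.0 * phibar ** 2) / 3.0)

Lx, Ly = 4.0 * np.pi * q0 / np.sqrt(3.0), 4.0 * np.pi * q0   # modulation periods
x = np.linspace(0.0, Lx, 256, endpoint=False)
y = np.linspace(0.0, Ly, 256, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
mod = (np.cos(np.sqrt(3.0) * X / (2.0 * q0)) * np.cos(Y / (2.0 * q0))
       - 0.5 * np.cos(Y / q0))
phi = phibar + amp * mod
print(phi.mean(), amp)      # the modulation averages to zero over the cell
```

Because the density field is conservative, the modulation must average to φ̄ over a full cell, which the tabulation confirms; plotting phi reveals the triangular arrangement of density maxima.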

References

1. I.M. Lifshitz, Sov. Phys. JETP 15(5), 939 (1962)
2. W.W. Mullins, J. Appl. Phys. 27(8), 900–904 (1956)
3. L.-Q. Chen, W. Yang, Phys. Rev. B 50(21), 15752–15756 (1994)
4. I. Steinbach, F. Pezzolla, Phys. D 134, 385–393 (1999)
5. B. Morin, K.R. Elder, M. Sutton, M. Grant, Phys. Rev. Lett. 75(11), 2156–2159 (1995)
6. M.T. Lusk, Proc. R. Soc. Lond. A 455, 677–700 (1999)
7. R. Kobayashi, J.A. Warren, W.C. Carter, Phys. D 140, 141–150 (2000)
8. K.R. Elder, N. Provatas, J. Berry, P. Stefanovic, M. Grant, Phys. Rev. B 75, 064107 (2007)

Appendix A: Coarse-Graining Procedure

Our intention here is to show how a set of many microscopic variables can be converted into a smaller set of mesoscopic variables, or even one continuous function, through the procedure called coarse graining (CG). There is no denying that after CG certain features of the system are lost; the hope is, however, that the essential ones are retained. We will demonstrate CG using a simple model of ferromagnetism. Each atom in a crystal is supposed to have a magnetic moment μ0, which may point in any direction. The concept may be clearly demonstrated on a particularly simple example of a unidirectional ferromagnetic "crystal," that is, a lattice of atoms with magnetic moments pointing in only one of two directions, upward or downward. Such a model is called the Ising model of a ferromagnet, and the atomic magnetic moments are called Ising spins. The state of each atom is represented by a variable σi (i = 1, 2, . . ., N; N being the total number of atoms in the "crystal"), which takes on the values +1 or −1. Neighboring atoms in the lattice experience an exchange interaction of strength J > 0 and a sign that depends on whether the moments are parallel or antiparallel, so that J(σi = σj) = −J and J(σi = −σj) = +J. Therefore, the magnetic interaction energy of the atoms can be expressed by the following Hamiltonian:

H{σi} = −J Σ_{nn⟨ij⟩} σiσj  (A.1)

where the summation is over the nearest-neighbor pairs of atoms. We will look at the one- and two-dimensional Ising models. If the system interacts with a heat reservoir of temperature T, then each of the N Ising spins σ_i is a random variable. According to Boltzmann's principle, the probability distribution of the microstate {σ_i} is as follows:

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/


P\{\sigma_i\} = \frac{1}{Z}\, e^{-\beta H\{\sigma_i\}}    (A.2)

where β = 1/k_BT (k_B is Boltzmann's constant) and Z is the normalization constant called the partition function:

Z = \sum_{\{\sigma_i\}} e^{-\beta H\{\sigma_i\}}; \qquad \sum_{\{\sigma_i\}} \equiv \sum_{\sigma_1 = \pm 1} \sum_{\sigma_2 = \pm 1} \cdots \sum_{\sigma_N = \pm 1}    (A.3)

where the summation is over all N Ising spins. The expectation value (average) of the spins is as follows:

\langle \sigma_j \rangle = \sum_{\{\sigma_i\}} \sigma_j\, P\{\sigma_i\}    (A.4)

As is known [1], the thermodynamic properties of the system can be easily expressed through the partition function. For instance, the Helmholtz free energy is as follows:

F(V, T) = -\frac{\ln Z}{\beta}    (A.5)

For the stability of the system, this free energy must be a convex (concave) function of its extensive (intensive) variables, e.g., ∂²F/∂T² ≤ 0. The concept of a phase transition is associated with nonanalytic, singular behavior of the thermodynamic functions that describe the system. A statistical mechanical approach to a phase transition is to write down the partition function Z of a system of N spins interacting with each other according to the Hamiltonian H{σ_i}. It must be such a function of its variables, volume V and temperature T, that the regions of analytic behavior are bounded by a curve on which the function is not analytic. The problem with the statistical mechanical description of phase transitions is that the partition function of a finite number N of spins is a well-behaved, analytic function of its variables and, hence, does not describe a phase transition. The resolution of this analyticity paradox was found in the thermodynamic limit, that is, in the limit (V, N) → ∞. The trick is that even though Z_N(V, T) is analytic for every finite N, it is still possible for z(V, T) = \lim_{N\to\infty} Z_N(V, T)^{1/N} to be nonanalytic.
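For a small chain, the analyticity of Z_N is easy to see by direct computation. The sketch below (an illustration, not from the text) enumerates all 2^N microstates of the one-dimensional Hamiltonian (A.1) and evaluates (A.3) and (A.5); the closed form 2(2 cosh βJ)^{N-1} used for comparison is the standard transfer-matrix result for an open chain.

```python
import numpy as np
from itertools import product

def partition_function(N, beta, J=1.0):
    """Exact partition function (A.3) of an open 1D Ising chain by enumeration."""
    Z = 0.0
    for spins in product((-1, 1), repeat=N):
        s = np.array(spins)
        H = -J * np.sum(s[:-1] * s[1:])      # Hamiltonian (A.1), open chain
        Z += np.exp(-beta * H)
    return Z

N, beta = 8, 0.7
Z = partition_function(N, beta)
Z_exact = 2.0 * (2.0 * np.cosh(beta))**(N - 1)   # transfer-matrix result, open chain
F = -np.log(Z) / beta                            # Helmholtz free energy (A.5)
print(Z, Z_exact, F)
```

For any finite N, Z is a finite sum of exponentials and is therefore an analytic function of β; only the N → ∞ limit can produce a singularity.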

To describe the spin dynamics let us consider the one-dimensional array of N spins with the probability distribution P{σ i, t}, which is a function of time that obeys the master equation [2]:

\frac{d}{dt} P\{\sigma_i, t\} = -\sum_{j=1}^{N} w_j(\sigma_j)\, P\{\sigma_i, t\} + \sum_{j=1}^{N} w_j(-\sigma_j)\, P\{\sigma_1, \ldots, -\sigma_j, \ldots, \sigma_N; t\}    (A.6)

The first term on the right-hand side describes destruction of the microstate {σ_i} by a flip of any of the spins, while the second term describes creation of the microstate by a spin flip from any of the microstates {σ_1, ..., -σ_j, ..., σ_N}. If we want to describe a tendency for each spin to align itself parallel to its nearest neighbors, we may choose the transition probabilities w_j(σ_j) to be of the following form:

w_j(\sigma_j) = \frac{\gamma}{2}\left[1 - \frac{\alpha}{2}\,\sigma_j\left(\sigma_{j-1} + \sigma_{j+1}\right)\right]    (A.7)

Here γ/2 is the rate per unit time at which the spin would flip from either state to the opposite if it were disconnected from the other spins, and α describes the tendency of spins toward alignment. Positive values of α favor parallel configurations (ferromagnetism), negative values favor antiparallel configurations (antiferromagnetism), and in all cases |α| ≤ 1. Comparing the equilibrium state (A.1, A.2) with the asymptotic state of the master equation (A.6, A.7), we may identify the following:

\alpha = \tanh\frac{2J}{k_B T}    (A.8)
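The flip rule (A.7) with the identification (A.8) can be exercised in a minimal Monte Carlo sketch. This is an illustration, not part of the text: w_j is interpreted here as a discrete-time flip probability (γ = 1), and the comparison value tanh βJ is the standard nearest-neighbor correlation of the 1D Ising chain, quoted rather than derived.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, beta = 200, 1.0, 0.5
alpha = np.tanh(2 * beta * J)          # identification (A.8)
s = rng.choice([-1, 1], size=N)        # random initial chain, periodic boundaries

def sweep(s):
    """One sweep of single-spin-flip dynamics with the rates (A.7), gamma = 1."""
    for _ in range(N):
        j = rng.integers(N)
        nb = s[(j - 1) % N] + s[(j + 1) % N]
        w = 0.5 * (1.0 - 0.5 * alpha * s[j] * nb)   # flip probability from (A.7)
        if rng.random() < w:
            s[j] = -s[j]

for _ in range(200):                   # equilibrate toward the distribution (A.2)
    sweep(s)

corr, n_meas = 0.0, 500
for _ in range(n_meas):
    sweep(s)
    corr += np.mean(s * np.roll(s, 1))
corr /= n_meas
print(corr, np.tanh(beta * J))         # exact 1D nearest-neighbor correlation
```

The measured correlation converges to tanh βJ, confirming that the asymptotic state of the dynamics (A.6, A.7) reproduces the equilibrium distribution (A.2) when α is chosen according to (A.8).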

If we multiply both sides of the master equation (A.6) by σ_j, substitute (A.7), and sum over all values of the spin variables, we obtain the equation:

\frac{1}{\gamma}\frac{d\langle\sigma_j\rangle(t)}{dt} = -\langle\sigma_j\rangle(t) + \frac{\alpha}{2}\left[\langle\sigma_{j-1}\rangle(t) + \langle\sigma_{j+1}\rangle(t)\right] = -(1-\alpha)\langle\sigma_j\rangle(t) + \frac{\alpha}{2}\left[\langle\sigma_{j-1}\rangle(t) - 2\langle\sigma_j\rangle(t) + \langle\sigma_{j+1}\rangle(t)\right]    (A.9)

which shows that the rates of change of the average spins depend on the states of the neighbors. Extension to the 2D case is straightforward. Unfortunately, (A.1–A.9) are not very helpful analytically because the Hamiltonian (A.1) depends on very many (N) independent variables. The number of variables of the Hamiltonian (A.1) can be significantly reduced if we are not interested in all the details of the system's behavior on the scale of the interatomic distance a. The 2D "crystal," for instance, can be divided into square blocks, each one consisting of b × b elementary cells; see Fig. A.1. For each block we define the block spin as the sum of the b² Ising spins divided by b², that is, as the block-mean Ising spin. Designating blocks by the position vectors x of their centers, we obtain the following:


Fig. A.1 Partitioning of the lattice of cell size a into blocks of b × b cells. Each block is associated with a block spin (black arrows) whose value lies between -1 and 1

\sigma(\mathbf{x}) = \langle\sigma\rangle_{b^2} = \frac{1}{b^2}\sum_{j=1}^{b^2}\sigma_j    (A.10)
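A minimal numerical illustration of (A.10), with lattice and block sizes chosen arbitrarily for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
N_side, b = 16, 4                    # a 16 x 16 Ising lattice partitioned into 4 x 4 blocks
sigma = rng.choice([-1, 1], size=(N_side, N_side))

# Block spin (A.10): the mean of the b*b Ising spins inside each block
blocks = sigma.reshape(N_side // b, b, N_side // b, b)
block_spin = blocks.mean(axis=(1, 3))

print(block_spin.shape)              # (N_side/b) x (N_side/b) mesoscopic variables
print(np.unique(block_spin))         # at most b**2 + 1 equally spaced levels in [-1, 1]
```

The output makes the counting explicit: N/b² block variables replace the N binary spins, and each block spin is restricted to b² + 1 equally spaced values in [-1, 1].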

Notice the change in the structure of the independent variables. First, there are N/b² block spins instead of N Ising ones. Second, although the block spins are discrete variables like the Ising ones, their "degree of discreteness" changes. Indeed, while the Ising spin is just a simple binary variable, the block spin can take on (b² + 1) different values, that is, it practically becomes a continuous variable for large b. Hence, coarse-graining smoothes out the variables at the expense of their "information content": the Ising spins describe interactions on the interatomic scale a, while the block spins describe them on the block scale ba. How can we find a block-spin Hamiltonian from the Hamiltonian (A.1) for the Ising spins? Let us find the probability distribution of the block spins P{σ(x)}. This can be done using simple rules of probability theory. Consider the joint probability distribution function p(q₁, q₂) of two random (continuous) variables q₁ and q₂. If we are interested in only one variable, say q₁, we can obtain its probability distribution p′(q₁) by integrating p(q₁, q₂) over q₂:

p'(q_1) = \int dq_2\, p(q_1, q_2)

If we are interested in the mean value of q1 and q2, that is, q = (q1 + q2)/2, then:

p'(q) = \int dq_1\, dq_2\, p(q_1, q_2)\,\delta\!\left(q - \frac{q_1+q_2}{2}\right) = \left\langle \delta\!\left(q - \frac{q_1+q_2}{2}\right)\right\rangle

As long as we consider only functions of q, their average values can be calculated using either p′(q) or p(q₁, q₂), yielding the same result, because the δ function replaces the integration over q with the integration over (q₁, q₂). For instance:

\langle q^2 \rangle = \int dq\, q^2\, p'(q) = \left\langle \left(\frac{q_1+q_2}{2}\right)^{2} \right\rangle

Using these rules, we write the block-spin probability distribution function as follows:

P\{\sigma(\mathbf{x})\} = \sum_{\{\sigma_i\}} P\{\sigma_i\} \prod_{\mathbf{x}} \delta\!\left(\sigma(\mathbf{x}) - \frac{1}{b^2}\sum_{j=1}^{b^2}\sigma_j\right) = \left\langle \prod_{\mathbf{x}} \delta\!\left(\sigma(\mathbf{x}) - \frac{1}{b^2}\sum_{j=1}^{b^2}\sigma_j\right)\right\rangle    (A.11)

Now, using Boltzmann's principle, we can define the block-spin Hamiltonian H{σ(x)}:

P\{\sigma(\mathbf{x})\} \equiv \frac{1}{Z}\, e^{-\beta H\{\sigma(\mathbf{x})\}}    (A.12)

where Z is the partition function of the CG system:

Z = \sum_{\{\sigma(\mathbf{x})\,|\,N/b^2\}} e^{-\beta H\{\sigma(\mathbf{x})\}} \approx \left(\frac{b^2}{2}\right)^{N/b^2} \int_{-1}^{+1} \prod_{i=1}^{N/b^2} d\sigma_i\; e^{-\beta H\{\sigma(\mathbf{x})\}}    (A.13a)

\sum_{\{\sigma(\mathbf{x})\,|\,N/b^2\}} \equiv \sum_{\sigma(\mathbf{x}_1)} \sum_{\sigma(\mathbf{x}_2)} \cdots \sum_{\sigma(\mathbf{x}_{N/b^2})}    (A.13b)

The procedure (A.10–A.13) of reducing the Ising Hamiltonian to the block Hamiltonian is called the Kadanoff transformation:

H\{\sigma(\mathbf{x})\} = K_b\, H\{\sigma_i\}    (A.14)

where we define K₁ = 1. The transformation K_b can be applied multiple times with the property K_b K_{b′} = K_{bb′}. Each application of the Kadanoff transformation smoothes out the spin variables at the expense of reducing the spatial resolution of the Hamiltonian. The block Hamiltonian H{σ(x)} contains parameters


that average out the interactions of the b² Ising spins on the scales of the block (≤ ba). In this respect, the exchange interaction constant J of the Ising Hamiltonian (A.1) represents the interatomic interactions averaged out on the scale ≤ a. There are several advantages in dealing with the smoothed, averaged variables instead of the discrete ones. First, one can use the tools of calculus. Second, one can apply the Fourier transformation to the spin variables and represent the Hamiltonian H as a function of the Fourier components σ(k). In this case the function H{σ(k)} contains only the components with wave vectors |k| ≤ 2π/ba, because the wave vectors outside this sphere describe details of the functional behavior on scales smaller than ba, which were eliminated from the Hamiltonian H{σ(x)}. Third, the dynamics of the smoothed variables is more tractable. How can we transform the block-spin variables into field variables? First, notice that:

\sum_{nn\langle ij\rangle} \sigma_i \sigma_j = -\frac{1}{2}\sum_{nn\langle ij\rangle}\left(\sigma_i - \sigma_j\right)^2 + \sum_i f(\sigma_i)    (A.15)

where the function f(σ_i) depends on the number and orientation of the nearest neighbors, that is, on the symmetry of the "crystal." Also notice that the discrete quantity (σ_i - σ_j)/a is the first approximation to the continuum quantity |∇σ|, where the gradient is taken in the direction j → i. Then, replacing the summation over all the nearest-neighbor pairs by integration over the space of the system, we obtain the so-called Ginzburg-Landau (GL) Hamiltonian:

H_{GL}\{\sigma(\mathbf{x})\} = \int d^2x \left[ f(\sigma) + \frac{\kappa}{2}\,|\nabla\sigma|^2 \right]    (A.16)

which is a continuum equivalent of the discrete Ising Hamiltonian (A.1). The parameter κ in (A.16) is proportional to the number of nearest neighbors:

\kappa \approx J\, N_{nn\langle ij\rangle}    (A.17)

and, hence, determines the radius of correlations in the system. The thermodynamic properties of the system can be expressed through the partition function (A.12), (A.13), and (A.16). Because the GL Hamiltonian (A.16) was obtained through the Kadanoff transformation (A.14), it is expected to possess at least some of the properties of the free energy F(V, T) (A.5). That is why it is often called the CG free energy or the effective Hamiltonian. As H_GL{σ(x)} depends not only on the thermodynamic variables like temperature and volume but also on the internal field variable {σ(x)}, it may not be a convex (concave) function of its extensive (intensive) variables in the entire domain of definition. However, a great advantage of the block GL Hamiltonian H_GL{σ(x)} over the actual free energy F(V, T) is that the former is much easier to calculate than the latter.

Moreover, noticing that the discrete quantity (σ_{j-1} - 2σ_j + σ_{j+1})/a² is the first approximation to the continuum quantity ∇²σ, the dynamic spin variable equation (A.9) hints at the following continuum equivalent:

\frac{1}{\gamma}\frac{\partial\sigma(\mathbf{x},t)}{\partial t} = -\beta\,\sigma(\mathbf{x},t) + \frac{\alpha}{2}\nabla^2\sigma(\mathbf{x},t), \qquad \beta > 0    (A.18)

The GL Hamiltonian (A.16) can be extended to systems more complicated than the 2D free Ising spins. First, the system may be subjected to an external field B, which adds the contribution -\mu_0 B \sum_i \sigma_i to the Ising Hamiltonian and the term (-μ₀Bσ) to the integrand of the GL Hamiltonian (A.16). Second, the magnetization of an atom may not be directed parallel to the same axis, which makes the "spin" a vector with n components. Third, the system may vary in all three spatial directions, which changes the integration in the GL Hamiltonian to three dimensions. Further development of the continuum model may be achieved by noticing the connection between the parameters α and κ through the relations (A.8) and (A.17). However, the CG procedure breaks down the consistency between other modes of the model, e.g., thermal, which must be restored independently.

Appendix B: Calculus of Variations and Functional Derivative

Many physical problems may be reduced to the problem of finding a function y of x (space or time) that delivers a maximum (or minimum) to the integral:

I = I[y(x)] = \int_a^b F\!\left(x, y, \frac{dy}{dx}\right) dx    (B.1)

Such an integral is often called a functional. It is a generalization of a function in that it is a number that depends on a function rather than on another number [3, 4]. To find a maximum (or minimum) of the functional means to find a y(x) which makes the functional I stationary; that is, if the function y(x) is replaced by y(x) + δy(x), I is unchanged to order δy(x), provided δy(x) is sufficiently small. We may reduce this problem to the familiar one of making an ordinary function stationary. Indeed, consider the replacement:

\delta y(x) = \varepsilon\, u(x)    (B.2)

where ε is small and u(x) is arbitrary but such that (Fig. B.1):

u(a) = u(b) = 0    (B.3)

Then, considering x, y, and y′ as independent variables of F, we obtain the following:

I(\varepsilon) = \int_a^b F(x, y + \varepsilon u, y' + \varepsilon u')\, dx = I(0) + \varepsilon \int_a^b \left(\frac{\partial F}{\partial y}\, u + \frac{\partial F}{\partial y'}\, u'\right) dx + O(\varepsilon^2)    (B.4)

If I[y(x)] is to be stationary, then we must have the following:

\left.\frac{dI}{d\varepsilon}\right|_{\varepsilon=0} = 0 \quad \text{for all } u(x)

Thus, we require that for all u:

\int_a^b \left(\frac{\partial F}{\partial y}\, u + \frac{\partial F}{\partial y'}\, u'\right) dx = 0    (B.5)

Integrating the second term in (B.5) by parts, the equation becomes as follows:

\int_a^b \left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right) u\, dx + \left[u\,\frac{\partial F}{\partial y'}\right]_{x=a}^{x=b} = 0    (B.6)

The integrated part vanishes because of (B.3). The fundamental lemma of the calculus of variations says that if the integral is to vanish for arbitrary u(x), then the term in the parentheses must vanish [3]. Therefore we must require the following:

\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0    (B.7)

This differential equation is called the Euler-Lagrange equation; solutions of this equation are called extremals. As expected, this equation is Euclidean-invariant, i.e., invariant under the combination of translations and rotations in space. When combined with the appropriate boundary conditions, (B.7) is equivalent to the original variational problem. Properties of the functional in (B.1) may also be revealed by its variation δI, defined as the change of order not higher than the first in the small parameter ε in (B.2). Let δy(x) be nonzero only in a small neighborhood of x. Then:

\delta I = \int_a^b \frac{\delta I}{\delta y}\, \delta y\, dx    (B.8)

where δI/δy is called the variational or functional derivative of I with respect to y. To find its mathematical expression, we divide δI by \int_a^b \delta y\, dx and find that δI/δy is the left-hand side of (B.7). There is one caveat in the derivation of the Euler-Lagrange equation (B.7). For the expansion in (B.4) to be valid, we assumed that y(x) and y(x) + δy(x) are close not only "point by point" but also with respect to their derivatives. This may be verified by a measure of a function's magnitude, called the norm ‖y(x)‖. If the norm is defined, then ‖y₁(x) - y₂(x)‖ is a measure of the proximity of the functions. The full set of functions y(x) with a norm is called a Hilbert functional space. Although a few different definitions of the norm are possible, we find the following one to be the most useful for our purposes:

Fig. B.1 Function y(x) and its variations of different smoothness, δy₁ and δy₂. Inset: transversality condition (B.13)

\|y(x)\| \equiv \max_{a \le x \le b} |y(x)| + \max_{a \le x \le b} |y'(x)|

g(\tilde\eta) = \frac{a}{2}\eta^2 + \frac{c}{4}\eta^4, \qquad a < 0, \; c > 0    (E.7)

In this case:

\tilde\eta = \sqrt{\frac{-a}{c}}\, \tanh\!\left(\sqrt{\frac{-a}{2\kappa}}\, x\right)    (E.8)

and by scaling the eigenvalues and the independent variable as follows:

Appendix E: Eigenfunctions and Eigenvalues and Sturm’s Comparison Theorem

Q_p = \frac{2}{a}\, E_p^{(1)}, \qquad z = \sqrt{\frac{-a}{2\kappa}}\, x, \qquad f_p(z) = \Psi_p^{(1)}(x)    (E.9)

the problem (E.4) can be reduced to the following:

\left(\frac{d^2}{dz^2} + 2 - 6\tanh^2 z\right) f_p(z) = Q_p\, f_p(z)    (E.10)

Using representation (E.10) and the following formula:

\frac{d\tanh(x)}{dx} = \frac{1}{\cosh^2(x)} = 1 - \tanh^2(x)    (E.11)

we can find two bound states, with p = 1 and 2, and a continuum of scattering states with p ≥ 3 for the Hamiltonian Ĥ. The eigenvalues and non-normalized eigenfunctions of the bound states are as follows (see Fig. E.1):

p = 1, \quad Q_1 = 0, \quad f_1 = 1 - \tanh^2 z    (E.12a)

p = 2, \quad Q_2 = -3, \quad f_2 = \tanh z\, \sqrt{1 - \tanh^2 z}    (E.12b)

Fig. E.1 The "quantum mechanical potential energy" u(z) = ∂²g(η̃)/∂η² for the potential function (E.7) (black line) and two bound states with p = 1, blue line (Goldstone mode), and p = 2, green line. The two bound-state energy levels E_p^{(1)} are shown in the respective colors
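The two bound states (E.12) can be verified numerically by applying the operator of (E.10) on a grid. This is an illustrative finite-difference sketch (the grid parameters are arbitrary):

```python
import numpy as np

z = np.linspace(-10.0, 10.0, 4001)
dz = z[1] - z[0]
t = np.tanh(z)

def residual(f, Q):
    """Max norm of (d^2/dz^2 + 2 - 6 tanh^2 z) f - Q f on interior grid points."""
    d2f = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dz**2
    Lf = d2f + (2.0 - 6.0 * t[1:-1]**2) * f[1:-1]
    return np.max(np.abs(Lf - Q * f[1:-1]))

f1 = 1.0 - t**2                      # Goldstone mode (E.12a), Q1 = 0
f2 = t * np.sqrt(1.0 - t**2)         # second bound state (E.12b), Q2 = -3
print(residual(f1, 0.0), residual(f2, -3.0))
```

Both residuals are at the level of the finite-difference truncation error, confirming Q₁ = 0 and Q₂ = -3.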


The eigenvalues and non-normalized eigenfunctions of the scattering states are as follows:

p \ge 3, \quad Q_p < -4, \quad f_p = e^{i\sqrt{-Q_p - 4}\, z}\left(-Q_p - 3 - 3\tanh^2 z + 3i\sqrt{-Q_p - 4}\,\tanh z\right)    (E.13)

Rescaling (E.12, E.13) back to the Hamiltonian's representation, we can see that the bound state f₁ represents the Goldstone mode of η̃(x). Using the following formula:

\tanh(y) - \tanh(x) = \frac{\sinh(y - x)}{\cosh(y)\cosh(x)}    (E.14)

we can see that the Goldstone mode represents a small shift of η̃(x) in the normal direction. The second bound state is as follows:

p = 2, \quad E_2^{(1)} = -\frac{3}{2}\, a > 0, \quad \Psi_2^{(1)}(x) \propto \tilde\eta\, \sqrt{\frac{d\tilde\eta}{dx}}    (E.15)

The eigenvalues of the scattering states are as follows:

p \ge 3, \quad E_p^{(1)} > -2a > E_2^{(1)} > E_1^{(1)} = 0    (E.16)

If there is a need to solve an eigenfunction/eigenvalue problem for the three-dimensional Hamiltonian operator Ĥ, one may attempt to find particular solutions by separating the variables. If this works, a general solution can be expressed as a linear combination of the separated solutions that satisfy the boundary conditions with |Ψ(|x| → ∞)| < ∞. The sum of the separation constants is called the dispersion relation. For instance, for the following:

\hat H(\tilde\eta; \mathbf{x}) \equiv \frac{\partial^2 g(\tilde\eta)}{\partial\eta^2} - \kappa\nabla^2, \qquad \hat H(\tilde\eta; \mathbf{x})\,\Psi_p^{(3)}(\mathbf{x}) = E_p^{(3)}\,\Psi_p^{(3)}(\mathbf{x})    (E.17)

we find that:

\Psi_{q,p}^{(3)}(\mathbf{x}) \propto e^{i\mathbf{q}\cdot\mathbf{x}_2}\, \Psi_p^{(1)}(x); \qquad E_{q,p}^{(3)} = E_p^{(1)} + \kappa q^2    (E.18)

where x₂ = (y, z), q = (k_y, k_z), and q = |q|. Equation (E.18) means that slight corrugations of η̃(x) are eigenfunctions of the three-dimensional Hamiltonian operator Ĥ. Indeed, using the formula (E.14), the corrugation Δη can be represented as follows:

\Delta\eta \equiv \tilde\eta[x - X(\mathbf{x}_2)] - \tilde\eta(x) \approx -X(\mathbf{x}_2)\,\Psi_1^{(1)}(x), \qquad X(\mathbf{x}_2) = \sum_{\mathbf{q}} A_{\mathbf{q}}\, e^{i\mathbf{q}\cdot\mathbf{x}_2}    (E.19)

The eigenvalue of this eigenfunction is as follows:

E_{q,1}^{(3)} = \kappa\, q^2    (E.20)

Appendix F: Fourier and Legendre Transforms

Fourier Transform

The goal of any functional transformation is to express the information contained in a function in a more convenient way. Let f(x) be a smooth function (continuous, together with its derivatives) in the interval (-X/2, +X/2). Then f(x) may be expanded in the Fourier series:

f(x) = \sum_{n=-\infty}^{+\infty} f_X(n)\, e^{i 2\pi n x / X}    (F.1a)

f_X(n) = \frac{1}{X} \int_{-X/2}^{+X/2} f(x)\, e^{-i 2\pi n x / X}\, dx    (F.1b)

which represents a discrete Fourier transform (FT) in one variable. Actually, the Fourier series is defined also beyond the range (-X/2, +X/2); that is, the function f(x) is periodically continued on (-∞, +∞) with f_X(x + X) = f_X(x). If X → ∞, it is possible to replace:

\int_{-X/2}^{+X/2} dx \to \int_{-\infty}^{+\infty} dx; \qquad \frac{1}{X}\sum_{n=-\infty}^{+\infty} \to \frac{1}{2\pi}\int_{-\infty}^{+\infty} dk; \qquad k \equiv \frac{2\pi n}{X}    (F.2)

and we arrive at the Fourier integral transform in one variable:

f(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \bar f(k)\, e^{ikx}\, dk    (F.3a)

\bar f(k) = \int_{-\infty}^{+\infty} f(x)\, e^{-ikx}\, dx    (F.3b)

The number k is called the wavenumber, the function exp(ikx) the Fourier mode, and the function \bar f(k) the Fourier amplitude or transform. Notice that the Fourier mode with k = 0 is just a constant (equal to 1), while the Fourier modes with large k are periodic functions with small periods. To express the physical information in a function by the Fourier series, the maximum wavenumber is restricted by the interatomic distances: max k ≈ 1/a. Substituting (F.3b) into (F.3a), one can derive a useful relationship:

\delta(x - x_0) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{ik(x - x_0)}\, dk    (F.4)

where δ(x) is the Dirac function. The Fourier amplitudes may be renormalized so that the FT equations look more symmetric with respect to the function and the amplitudes: then in both equations (F.1) there appears the coefficient 1/√X, and in both equations (F.3) the coefficient 1/√(2π). However, in this book we adopt the original normalization of the Fourier amplitudes. If a smooth function f(r) depends on several variables, e.g., r = (x, y, z), then it can be conveniently represented by its discrete FT as follows:

f(\mathbf{r}) = \sum_{\{\mathbf{k}\}} f_V(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}    (F.5a)

f_V(\mathbf{k}) = \frac{1}{V}\int_V f(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r}    (F.5b)

where k turns into a wavevector (reciprocal vector):

\{\mathbf{k}\} \equiv (k_x, k_y, k_z) = \left(\frac{2\pi n_x}{X}, \frac{2\pi n_y}{Y}, \frac{2\pi n_z}{Z}\right)    (F.6)

and summation over {k} means triple summation over {-∞ < n_x < +∞, -∞ < n_y < +∞, -∞ < n_z < +∞}.

The properties of the 3D discrete FT are as follows:

1. The uniform (k = 0) Fourier mode of a function f(r) represents the volume average of this function, that is, the homogeneous part of the function:

f_V(0) = \frac{1}{V}\int_V f(\mathbf{r})\, d\mathbf{r}    (F.7)

2. For a real-valued function f(r), the amplitudes are, in the general case, complex numbers. Their real and imaginary parts are respective measures of the even and odd contributions to the original function. The amplitudes of opposite reciprocal vectors are complex conjugates:

f_V(-\mathbf{k}) = f_V(\mathbf{k})^{*}    (F.8)

3. The gradient of f(r) is as follows:

\nabla f(\mathbf{r}) = i \sum_{\{\mathbf{k}\}} \mathbf{k}\, f_V(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}    (F.9)

4. Using (F.4) three times, one can see that as V → ∞:

\int_V e^{i\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r} \to (2\pi)^3\, \delta(\mathbf{k}) = (2\pi)^3\, \delta(k_x)\,\delta(k_y)\,\delta(k_z)    (F.10)

which means that:

\frac{V}{(2\pi)^3}\, \delta_{\mathbf{k},0} \to \delta(\mathbf{k})    (F.10a)

5. Using properties #2 and #4, one can derive Parseval's equality:

\int_V f^2(\mathbf{r})\, d\mathbf{r} = \sum_{\{\mathbf{k}\}}\sum_{\{\mathbf{k}'\}} f_V(\mathbf{k})\, f_V(\mathbf{k}') \int_V e^{i(\mathbf{k}+\mathbf{k}')\cdot\mathbf{r}}\, d\mathbf{r} \;\xrightarrow[V\to\infty]{}\; \sum_{\{\mathbf{k}\}} f_V(\mathbf{k})\, \frac{V}{(2\pi)^3}\int f_V(\mathbf{k}')\, (2\pi)^3\, \delta(\mathbf{k}+\mathbf{k}')\, d\mathbf{k}' = V \sum_{\{\mathbf{k}\}} f_V(\mathbf{k})\, f_V(-\mathbf{k}) = V \sum_{\{\mathbf{k}\}} \left|f_V(\mathbf{k})\right|^2    (F.11)
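Parseval's equality (F.11) can be checked in one dimension with a periodic sample and the normalization of (F.5b). This is an illustrative sketch in which the FFT plays the role of the discrete FT (the signal and interval are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, X = 256, 2 * np.pi                 # N samples on a periodic interval of length X
x = np.arange(N) * X / N
# A real test signal built from a few low-order Fourier modes
f = sum(rng.normal() * np.cos(k * x + rng.normal()) for k in range(1, 6))

f_k = np.fft.fft(f) / N               # amplitudes with the normalization of (F.5b)

lhs = np.sum(f**2) * (X / N)          # the integral of f^2 over the "volume" X
rhs = X * np.sum(np.abs(f_k)**2)      # V * sum_k |f_V(k)|^2, Parseval's equality (F.11)
print(lhs, rhs)
```

With this normalization, the two sides agree to machine precision; the conjugation property (F.8) also holds for the computed amplitudes.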

Similarly, if X, Y, Z → ∞, it is possible to replace:

\int_V d\mathbf{r} \to \int d\mathbf{r}; \qquad \frac{1}{V}\sum_{\{\mathbf{k}\}} \to \frac{1}{(2\pi)^3}\int d\mathbf{k}    (F.12)

and arrive at the integral FT in three variables:

f(\mathbf{r}) = \frac{1}{(2\pi)^3}\int \bar f(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{k}; \qquad \bar f(\mathbf{k}) = \int f(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r}    (F.13)

6. Borel theorem. We define the convolution of two functions, f(x) and g(x), to be the following integral:

(f * g)(x) \equiv \int_{-\infty}^{+\infty} f(y)\, g(x - y)\, dy

Then (Borel theorem):

\overline{(f * g)} = \bar f(k)\, \bar g(k)    (F.14a)

Reversing this relation, we obtain the following:

\overline{f(x)\, g(x)} = \bar f(k) * \bar g(k)    (F.14b)

Example F.1 FT of a Function of r = |r|. Find the FT of the function e^{-αr}/r, which depends on |r| only.

\int \frac{e^{-\alpha r}}{r}\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r} = \frac{4\pi}{\alpha^2 + |\mathbf{k}|^2}    (F.15a)

\int \frac{d\mathbf{k}}{(2\pi)^3}\, \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\alpha^2 + |\mathbf{k}|^2} = \frac{e^{-\alpha r}}{4\pi r}    (F.15b)

These relations can be obtained by noticing that the function φ = e^{-αr}/r satisfies the differential equation:

\Delta\varphi - \alpha^2\varphi = -4\pi\,\delta(\mathbf{r})

Multiplying both sides of this equation by e^{-ik·r} and integrating over the entire space, we recover (F.15a).
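Relation (F.15a) can also be confirmed by direct quadrature: integrating e^{-ik·r} over angles first reduces the 3D integral to (4π/k)∫₀^∞ e^{-αr} sin(kr) dr. A numerical sketch, with arbitrary values of α and k:

```python
import numpy as np

alpha, k = 1.3, 0.8                  # arbitrary screening constant and wavenumber
r = np.linspace(0.0, 40.0, 400001)
dr = r[1] - r[0]

# Integrating exp(-i k.r) over angles gives 4*pi*sin(kr)/(kr);
# the 1/r kernel of (F.15a) cancels one factor of r in the radial measure
integrand = (4.0 * np.pi / k) * np.exp(-alpha * r) * np.sin(k * r)
ft = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dr   # trapezoid rule

print(ft, 4.0 * np.pi / (alpha**2 + k**2))
```

The quadrature reproduces 4π/(α² + k²) to high accuracy, since the integrand decays as e^{-αr} and the truncation at r = 40 is negligible.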

Legendre Transform

Another way to reveal the information encoded in a function f(x) is through the Legendre transform (LT). If the function f(x) is strictly convex (its second derivative never changes sign or vanishes), then the function's derivative with respect to x,

s = \frac{df}{dx},    (F.16)

can replace x as the independent variable of the function. To reveal the symmetry associated with the LT, it is customary to redefine the function as follows:

g(s) = s\, x(s) - f(x(s))    (F.17)

The redefined function g(s) has the property that its derivative with respect to s is x:

\frac{dg}{ds} = x(s) + s\frac{dx}{ds} - \frac{df}{dx}\frac{dx}{ds} = x(s)    (F.18)

The properties of the LT are as follows:

1. The LT of an LT is the original function. To see this, it is instructive to rewrite (F.17) in a symmetric way:

g(s) + f(x) = s\, x(s)    (F.17a)

2. The extremes of the original function and the transformed one are related as follows:

f(x_{\min}) = -g(0)    (F.19a)

g(s_{\min}) = -f(0)    (F.19b)

3. The higher derivatives of the original and transformed functions also show symmetric relationships:

\frac{d^2 g}{ds^2} \cdot \frac{d^2 f}{dx^2} = 1    (F.20)

The LT can be applied to a function that depends on many variables (the multivariable LT).¹

Example F.2 Find the LT of the quadratic function f(x) = αx²/2. For this function we can easily find that:

s(x) = \alpha x    (F.21a)

x(s) = \frac{s}{\alpha}    (F.21b)

g(s) = \frac{1}{2\alpha}\, s^2    (F.21c)

Substitution of (F.21a) into (F.21c) yields the following:

g(s) = \frac{1}{2}\alpha\, x^2(s)    (F.22)

which shows that the LT of a quadratic function is equal to the original function expressed through the original independent variable. Of course, this is the case of a single classical particle of mass α moving with velocity x, where f(x) and g(s) are the particle's kinetic energy expressed as a function of the velocity x or the momentum s.

¹ An in-depth discussion of the Legendre transform can be found in [7].
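For a convex f, the construction (F.16, F.17) is equivalent to the maximization g(s) = max_x [sx - f(x)] (the Legendre-Fenchel form), which is easy to evaluate on a grid. A sketch checking (F.21c) numerically (the grid bounds and the value of α are arbitrary):

```python
import numpy as np

alpha = 2.0                               # f(x) = alpha * x**2 / 2, strictly convex
x = np.linspace(-10.0, 10.0, 20001)
f = 0.5 * alpha * x**2

def legendre(s):
    """g(s) = max_x [s*x - f(x)]: the Legendre transform of a convex f on a grid."""
    return np.max(s * x - f)

for s in (-3.0, -1.0, 0.0, 0.5, 2.0):
    print(s, legendre(s), s**2 / (2.0 * alpha))   # compare with (F.21c)
```

The maximizer is x = s/α, reproducing (F.21b), and the maximum value reproduces g(s) = s²/2α.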

Appendix G: Stochastic Processes

Suppose a physical system of interest can be found in different states of an ensemble Ω = {ω}, and suppose that we need to measure a quantity f that varies not only from state to state but also with time t. If the states Ω depend on other parameters that are not known precisely, which is usually the case in physical systems, the quantity f appears to be random. In this case, the quantity is said to fluctuate; the function f(Ω, t) is called a stochastic process and may be characterized by various average quantities. There are different ways to define these quantities. For instance, one may define a "time average" of f over a certain period of time T:

\bar f_T \equiv \frac{1}{T}\int_0^T f(\Omega, t)\, dt    (G.1)

Instead of averaging over the period of time T, we may identify the different states of the ensemble Ω = {ω} and introduce a probability density P(ω) of a state such that the probability for the system to be in the states with {ω < Ω < ω + dω} is:

P(\omega)\, d\omega    (G.2)

and define the average as follows:

\langle f(\omega, t)\rangle \equiv \int_{-\infty}^{+\infty} f(\omega, t)\, P(\omega)\, d\omega    (G.3)

Of course, the probability density P(ω) is normalized such that:

\int_{-\infty}^{+\infty} P(\omega)\, d\omega = 1    (G.4)

The quantity (G.3) is called the "ensemble average." The probability density may itself depend on time, P(ω, t), making the process time-dependent. A process is called stationary (not to be confused with equilibrium) if all the functions that characterize it are invariant in time, that is, they do not change if every time variable t is replaced by t + s, where s is an arbitrary time interval. For very many systems, which are called ergodic, the time average over a long period of time is equal to the ensemble average:

\bar f_\infty = \langle f \rangle    (G.5)

The Master and Fokker-Planck Equations

To describe a time-dependent stochastic process, one can write an evolution equation for the probability density of the state ω, P(ω, t). In a short period of time dt, this quantity will decrease if the system makes a transition from the state ω to one of the states between ω′ and ω′ + dω′; let us denote the probability of this event by W(ω′|ω)dω′dt. However, during the same period of time, P(ω, t) will increase if one of the states between ω′ and ω′ + dω′ makes a transition to ω. The probability of the latter event is W(ω|ω′)dω′dt. Here we assumed the "Markovian" property of the system, expressed by the fact that the "jump probability" W depends on ω and ω′ but not on the previous states of the system. Integrating over all states ω′, we find the desired equation, which is called the master equation:

\frac{\partial P(\omega, t)}{\partial t} = \int_{-\infty}^{+\infty} d\omega' \left[ W(\omega|\omega')\, P(\omega', t) - W(\omega'|\omega)\, P(\omega, t) \right]    (G.6)

The transition probability W may be expressed as a function of the starting point and the size of the jump:

W(\omega|\omega') = W(\omega'; \lambda), \qquad \lambda \equiv \omega - \omega'    (G.7)

To derive the Fokker-Planck equation from (G.6), we use the basic assumption about the system that only small jumps between neighboring states occur, that is, W(ω′; λ) is a sharply peaked function of λ but varies slowly with ω′. Then we may expect the solution P(ω, t) to vary slowly with ω also. It is then possible to expand the first term in the integrand of (G.6) into a Taylor series near ω and retain only the terms up to second order in λ:

\frac{\partial P(\omega, t)}{\partial t} = \int_{-\infty}^{+\infty} d\lambda \left[ W(\omega; \lambda)\, P(\omega, t) - \lambda\, \frac{\partial}{\partial\omega}\{W(\omega; \lambda)\, P(\omega, t)\} + \frac{\lambda^2}{2}\frac{\partial^2}{\partial\omega^2}\{W(\omega; \lambda)\, P(\omega, t)\} - W(\omega; -\lambda)\, P(\omega, t) \right]    (G.8)

Note that the dependence of W(ω; λ) on λ is fully maintained because we do not expand it with respect to this argument. The integrals of the first and fourth terms cancel. The integrals of the other terms can be written with the aid of the jump moments:

a_\nu(\omega) \equiv \int_{-\infty}^{+\infty} \lambda^\nu\, W(\omega; \lambda)\, d\lambda    (G.9)

The result is the Fokker-Planck equation:

\frac{\partial P(\omega, t)}{\partial t} = -\frac{\partial}{\partial\omega}\{a_1(\omega)\, P(\omega, t)\} + \frac{1}{2}\frac{\partial^2}{\partial\omega^2}\{a_2(\omega)\, P(\omega, t)\}    (G.10)

This equation can be presented as a continuity equation for the probability density:

\frac{\partial P(\omega, t)}{\partial t} = -\frac{\partial J(\omega, t)}{\partial\omega}    (G.11)

together with a "constitutive equation" for the probability flux J(ω, t):

J(\omega, t) = a_1(\omega)\, P(\omega, t) - \frac{1}{2}\frac{\partial}{\partial\omega}\{a_2(\omega)\, P(\omega, t)\}    (G.12)

Such a representation helps interpret the first term of the flux as the drift term and the second one as the diffusion term. The Fokker-Planck equation has a stationary solution, that is, one with J(ω, t) = 0:

P_s(\omega) = \frac{\mathrm{const}}{a_2(\omega)} \exp\left[ 2\int_0^\omega \frac{a_1(\omega')}{a_2(\omega')}\, d\omega' \right]    (G.13)

Many features of the process can be illuminated by using a potential U(ω):

U(\omega) \equiv -\int a_1(\omega)\, d\omega    (G.14)

Using the Fokker-Planck equation, one can derive the following equations for the evolution of the average quantities:

\frac{d\langle\omega\rangle}{dt} = \langle a_1(\omega)\rangle    (G.15a)

\frac{d\langle\omega^2\rangle}{dt} = \langle a_2(\omega)\rangle + 2\langle \omega\, a_1(\omega)\rangle    (G.15b)

Indeed, differentiating (G.3) with respect to time, using the Fokker-Planck equation, and integrating by parts twice, we have the following:

\frac{d\langle\omega\rangle}{dt} = \int_{-\infty}^{+\infty} \omega\,\frac{\partial P}{\partial t}\, d\omega = -\int_{-\infty}^{+\infty} \omega\,\frac{\partial J}{\partial\omega}\, d\omega = \left[-\omega J - \frac{1}{2} a_2 P\right]_{-\infty}^{+\infty} + \int_{-\infty}^{+\infty} a_1 P\, d\omega

As the flux J and the density P vanish at ±∞, we obtain (G.15a). A similar procedure leads to (G.15b). Defining the average and the variance of the random variable ω as follows:

\varpi \equiv \langle\omega\rangle, \qquad \sigma^2 \equiv \langle(\omega - \varpi)^2\rangle,    (G.16)

expanding a_ν(ω) near the value of ϖ, and retaining only the leading terms, one can derive a system of coupled evolution equations:

\frac{d\varpi}{dt} = a_1(\varpi) + \frac{\sigma^2}{2}\,\frac{\partial^2 a_1}{\partial\omega^2}(\varpi)    (G.17a)

\frac{d\sigma^2}{dt} = a_2(\varpi) + 2\sigma^2\,\frac{\partial a_1}{\partial\omega}(\varpi)    (G.17b)

Although the retained terms in (G.17a) were obtained as expansion terms of the first and second order, they can be of the same order of magnitude. This is a consequence of the fact that averaging an alternating quantity yields greater cancellation than averaging its square. It is important to notice here that the system (G.17) can be derived directly from the master equation (G.6), which means that these equations are accurate up to at least second order. Equation (G.17a) shows that the evolution of the average value is not determined by its own value only but is also influenced by the fluctuations around it. The macroscopic approximation consists in ignoring these fluctuations, hence keeping only the first term in the expansion (G.17a). The zeroes of the first jump moment (G.9) may be called the nodal points. Equation (G.17a) shows that in the macroscopic approximation of a stationary process the average value tends to one of the nodal points, if such points exist. Equation (G.17b) shows that the tendency of the variance σ² to increase at a rate a₂ > 0, see (G.9), will be kept in check by the second term if ∂a₁/∂ω(ϖ) < 0. Then:

\sigma^2 \to \frac{a_2}{2\,|\partial a_1/\partial\omega|}    (G.17c)

and the criterion of validity of the macroscopic approximation becomes the following:

a_2 \ll 4\left|\frac{a_1\,\partial a_1/\partial\omega}{\partial^2 a_1/\partial\omega^2}\right|    (G.18)

This criterion shows that the fluctuations are always important for the evolution of the physical system near a nodal point. However, the level of importance of the fluctuations is different in different systems. In the following sections, we consider systems in which the second jump moment does not vary significantly:

\[
a_2(\omega) = \mathrm{const}(\omega) > 0
\tag{G.19}
\]

This condition corresponds to additive fluctuations, as opposed to multiplicative ones, for which the fluctuation strength depends on the stochastic variable itself. The latter case gives rise to the so-called Itô-Stratonovich dilemma [8], which is beyond the scope of this book.
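For additive fluctuations, the closed moment system (G.17) is easy to integrate numerically. Below is a minimal sketch (all parameter values are illustrative, not from the text) for a linear first jump moment a₁(ω) = −λ(ω − ω₀), for which the curvature term in (G.17a) vanishes and the variance should saturate at the value given by (G.17c):

```python
# Sketch: forward-Euler integration of the moment equations (G.17a,b)
# for an illustrative linear first jump moment a1(w) = -lam*(w - w0)
# (so a1'' = 0) and a constant second jump moment a2, i.e., additive
# fluctuations as in (G.19). Parameter values are arbitrary.
lam, w0, a2 = 2.0, 1.0, 0.4

def a1(w):
    return -lam * (w - w0)

def a1_prime(w):
    return -lam

dt, nsteps = 1e-3, 20_000
mean, var = 3.0, 0.0            # initial average and variance
for _ in range(nsteps):
    mean += dt * a1(mean)                            # (G.17a), a1'' = 0 here
    var  += dt * (a2 + 2.0 * var * a1_prime(mean))   # (G.17b)

# (G.17c): since a1' = -lam < 0, the variance saturates at a2/(2*lam)
print(mean, var, a2 / (2.0 * lam))
```

The average relaxes to the nodal point ω₀, while the variance settles at a₂/(2λ), in line with (G.17c).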

Decomposition of Unstable States

Consider a process of decomposition of an unstable state in a physical system. Mathematically, it can be described as switching of ∂a₁/∂ω in (G.17) from a negative to a positive value, so that the system finds itself in a state that suddenly becomes unstable although it was stable before. Suppose that the first jump moment of the process vanishes at ω = ω₀ and has an essential nonlinearity:

\[
a_1(\omega) = a_{11}(\omega - \omega_0) + \frac{1}{2}\, a_{12}(\omega - \omega_0)^2,
\qquad a_{11} \neq 0, \quad a_{12} \neq 0
\tag{G.20}
\]

This process has two nodal points:

\[
\omega_0 \qquad \text{and} \qquad \omega_1 = \omega_0 - \frac{2 a_{11}}{a_{12}}
\tag{G.20a}
\]

which are the critical points of the potential U(ω), (G.14):

\[
U(\omega) = \mathrm{const} - \frac{a_{11}}{2}(\omega - \omega_0)^2 - \frac{a_{12}}{6}(\omega - \omega_0)^3
\tag{G.21}
\]

Let us first analyze the process with the following:

\[
a_{11} > 0
\tag{G.22}
\]

Then, introducing the scaled variables:

\[
\tau = a_2 t; \qquad
v(\tau) = \frac{a_{12}}{a_{11}}(\varpi - \omega_0); \qquad
w(\tau) = \frac{a_{12}^2}{a_{11}^2}\,\sigma^2;
\tag{G.23a}
\]

the scaled potential:

\[
u(v) = \frac{a_{12}^2}{a_{11}^3}\, U(\omega) = -\frac{1}{2}v^2 - \frac{1}{6}v^3;
\tag{G.23b}
\]

and the scaled parameters:

\[
\alpha = \frac{a_{11}}{a_2} > 0; \qquad \beta = \frac{a_{12}}{a_{11}},
\tag{G.23c}
\]

we obtain the scaled form of the evolution equations (G.17):

\[
\dot v = \frac{\alpha}{2}\, v(2 + v) + \frac{\alpha}{2}\, w
\tag{G.24a}
\]

\[
\dot w = \beta^2 + 2\alpha\, w (1 + v)
\tag{G.24b}
\]

where the dot means differentiation with respect to τ. The (v,w)-phase plane of the system of equations (G.24) and the potential (G.23b) are depicted in Fig. G.1. The evolutionary system (G.24) has stationary points (\(\dot v_s = \dot w_s = 0\)) that satisfy the following conditions:

\[
w_s = -v_s(2 + v_s)
\tag{G.25a}
\]

\[
\beta^2 = -2\alpha\, w_s (1 + v_s)
\tag{G.25b}
\]

Substituting ws in (G.25b) with (G.25a), we transform the simultaneous equations (G.25) into a single nonlinear equation for vs, which is also depicted in Fig. G.1:


Fig. G.1 Phase plane (v, w) of the dynamical system (G.24) with α > 0. Purple line, potential u(v), (G.23b); black line, stationary points ws(v), (G.25a); red line, function z(v), (G.26); blue lines, stable stationary branches; o, the nodal points; horizontal dash, graphical solution of (G.26)


\[
z(v_s) \equiv v_s(1 + v_s)(2 + v_s) = \frac{\beta^2}{2\alpha}
\tag{G.26}
\]

Analysis of this equation shows that there are three branches of stationary points: s = 0, 1, 2. If β = 0, which is the case if a₁₂ = 0 but a₁₁ ≠ 0, then the stationary points are as follows:

\[
(v_1 = -2,\; w_1 = 0), \quad (v_2 = -1,\; w_2 = 1), \quad (v_0 = 0,\; w_0 = 0)
\tag{G.27}
\]

that is, the nodal points of (G.20) with zero variance and the intermediate point with finite variance; see Fig. G.1. If β ≠ 0, then:

\[
(-2 < v_1 < v_-,\; w_1 > 0); \quad (v_- < v_2 < -1,\; w_2 \approx 1); \quad (v_0 > 0,\; w_0 < 0)
\tag{G.28}
\]

where:

\[
v_\pm \equiv -1 \pm \frac{\sqrt{3}}{3}
\tag{G.29}
\]

As σ² > 0, the third relation in (G.23a) yields the following constraint:

\[
w > 0
\tag{G.30}
\]

which makes the “0” branch superfluous. On the “1” branch, the average of the stationary process ϖ₁ deviates from the node ω₁. To evaluate the deviation, we need to compare it with the square root of the variance σ₁ at the stationary point. Using (G.23a) and (G.25), we obtain the following:

\[
\varpi_1 - \omega_1 = \frac{\beta\,\sigma_1^2}{(-v_1)}
\tag{G.31a}
\]

which means that:

\[
\varpi_1 - \omega_1 \ll \sqrt{\sigma_1^2}
\qquad \text{if} \qquad
\frac{\beta^2}{2\alpha} \ll 1
\tag{G.31b}
\]

There is a limit to the nonlinearity beyond which the evolutionary system (G.24) has no stationary points. Indeed, differentiating z(v) and equating it to zero, we obtain the following:

\[
z'(v_s) = 2 + 6 v_s + 3 v_s^2 = 3 (v_s - v_+)(v_s - v_-) = 0
\tag{G.32}
\]

Substituting the extreme values of vs into (G.26), we find that stationary points with −2 < vs < 0 exist only if:

\[
\beta^2 \le 2\left(\max_{-2 < v_s < 0} |z(v_s)|\right) \alpha = \frac{4}{3\sqrt{3}}\,\alpha
\tag{G.33}
\]
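The numerical value of the bound in (G.33) is easy to confirm: on −2 < v < 0, |z(v)| attains its maximum 2/(3√3) at the extreme points (G.32). A quick sketch (the grid resolution is illustrative):

```python
import numpy as np

# Sketch: locate the maximum of |z(v)| = |v*(1+v)*(2+v)| on (-2, 0),
# which sets the existence bound (G.33) for stationary points of (G.24).
v = np.linspace(-2.0, 0.0, 2_000_001)
z = v * (1.0 + v) * (2.0 + v)

zmax = np.abs(z).max()
print(zmax, 2.0 / (3.0 * np.sqrt(3.0)))   # both approximately 0.3849
```

so that β² ≤ 2·zmax·α = 4α/(3√3), as stated in (G.33).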

Although σ² (or w) explicitly represents the fluctuations of the physical system, the stationary points of the evolutionary system (G.24) need to be analyzed for their stability because this system is an expansion of the master equation. Representing the variables as v = vs + δv, w = ws + δw, substituting this into (G.24), linearizing the system, and taking into account (G.26), we obtain the following:

\[
\dot{\delta v} = \alpha (1 + v_s)\,\delta v + \frac{\alpha}{2}\,\delta w
\tag{G.34a}
\]

\[
\dot{\delta w} = 2\alpha w_s\,\delta v + 2\alpha (1 + v_s)\,\delta w
\tag{G.34b}
\]

The characteristic equation of this system is as follows:

\[
k^2 - 3\alpha (1 + v_s)\, k + \alpha^2 \left(2 + 6 v_s + 3 v_s^2\right) = 0
\tag{G.35}
\]

Notice that the free term of the characteristic equation is proportional to z′(vs). (Why?)


As is known, for the stability of a system of linear differential equations, the real parts of all the roots of the characteristic equation must be negative. For (G.35) with the condition (G.22), this is the case if:

\[
v_s < -1
\tag{G.36a}
\]

\[
(v_s - v_-)(v_s - v_+) > 0
\tag{G.36b}
\]

Condition (G.36a) is a simple consequence of condition (G.30) and brings no additional restrictions on the stationary branches. Condition (G.36b) shows that the “1” branch is stable while the “2” branch is unstable; see Fig. G.1. Hence:

\[
v_1:\ \text{stable}; \qquad v_2:\ \text{unstable}; \qquad v_0:\ \text{superfluous}
\tag{G.37}
\]

If condition (G.22) does not hold, the branches “1” and “0” switch roles, while branch “2” remains unstable. Summarizing the results for the decomposition process, the evolutionary system (G.24) shows that the average value of the process moves from (v ≈ v₀), where it was stable before the switch, toward the stable stationary state (v₁, w₁). For a weakly nonlinear process (β → 0), this value (v₁) stays well inside the root-mean-square range (w₁) of the nodal point ω₁ (v₁ = −2), while for a strongly nonlinear process this is not true. During this transition, the variance w passes through a maximum when the system is about halfway to the final state. The dimensionless time scale of the process is 1/α, and the dimensional one is 1/|a₁₁|.
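This scenario can be observed by integrating (G.24) directly. A sketch for a weakly nonlinear process; the values of α and β, the time step, and the small initial fluctuation that selects the v < 0 side are all illustrative:

```python
import numpy as np

# Sketch: forward-Euler integration of the scaled moment system (G.24)
# for a weakly nonlinear decomposition. Parameters are illustrative.
alpha, beta = 1.0, 0.05
dt, nsteps = 1e-3, 30_000
v, w = -0.2, 0.0        # small initial fluctuation off the unstable point v = 0
w_hist = np.empty(nsteps)
for i in range(nsteps):
    dv = 0.5 * alpha * (v * (2.0 + v) + w)        # (G.24a)
    dw = beta**2 + 2.0 * alpha * w * (1.0 + v)    # (G.24b)
    v += dt * dv
    w += dt * dw
    w_hist[i] = w

# The average settles near the nodal point v1 ~ -2, the scaled variance
# near w1 ~ beta**2/(2*alpha), and w passes through a maximum on the way.
print(v, w, w_hist.max())
```

The recorded history of w shows the transient growth and subsequent decay of the variance during the transition, as described above.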

Diffusion in Bistable Potential

Consider a stochastic process that can be described by the Fokker-Planck equation (G.10) with a₂ = const(ω), as in (G.19), and a potential function U(ω), (G.14), which has three nodal points ωa, ωb, ωc, of which ωa and ωc are locally stable and ωb is unstable, cf. (G.21). The potential function U(ω) and the stationary probability density, (G.13), are depicted in Fig. G.2. A system at ωb would be caused to move into either ωa or ωc by the smallest external perturbation, via a decomposition process similar to the one described in the previous section. Potentials having such characteristics are called bistable. Analysis of the evolution equations (G.17) for such a system shows that there is a domain of attraction Da such that if ϖ(0) ∈ Da, then ϖ(t) → ωa for t → ∞. Of course, the same applies to ωc. Yet this description is not entirely correct, because even when the system is inside Da there is still a probability, however small, for a giant fluctuation to occur, which takes it across ωb into Dc. Thus, fluctuations give rise to a macroscopic effect. The problem of evolution of the system may be reduced to a first-passage problem: suppose at t = 0 a system starts out at ωa; how long will it take it to reach the state ωc for the first time? If at ωc we set the absorbing boundary condition,


Fig. G.2 Potential U (black line) and probability distribution P (blue line) as functions of the state variable ω


the average or mean first-passage time is called the escape time τac. For the Fokker-Planck equation (G.10), one finds the following:

\[
\tau_{ac} = \frac{2}{a_2} \int_{\omega_a}^{\omega_c} \frac{d\omega}{P_s(\omega)}
\int_{-\infty}^{\omega} P_s(\omega')\, d\omega'
\tag{G.38a}
\]

where Ps(ω) should be taken from (G.13). In the case of weak fluctuations:

\[
a_2 \ll U(\omega_b) - U(\omega_a)
\tag{G.39}
\]

(Ps)⁻¹ of (G.13) is sharply peaked at ω = ωb; that is, the statistical probability of the unstable state ω = ωb is much smaller than that of the stable ones; see Fig. G.2. In this case the escape time τac is much longer than the time needed to establish local equilibrium in each separate valley, and the integration in (G.38a) can be performed using the Laplace method of asymptotic expansion [9]:

\[
\tau_{ac} = \frac{2}{a_2}\, Z \pi_{sa}\, e^{\frac{2U(\omega_b)}{a_2}}
\int_{\omega_a}^{\omega_c}
e^{-\frac{|U''(\omega_b)|\,(\omega - \omega_b)^2}{a_2}}\, d\omega
= 2 Z \pi_{sa}\, e^{\frac{2U(\omega_b)}{a_2}}
\sqrt{\frac{\pi}{a_2\, |U''(\omega_b)|}}
\tag{G.38b}
\]

where:


\[
\pi_{sa} = \frac{1}{Z} \int_{-\infty}^{\omega_b} P_s(\omega)\, d\omega
\tag{G.38c}
\]

is the splitting probability, that is, the probability for the particle to be found to the left of the potential barrier at ωb, and

\[
Z = \int_{-\infty}^{+\infty} P_s(\omega)\, d\omega
\tag{G.38d}
\]

is the normalization constant (partition function) of the distribution (G.13). In (G.38b), the exponential (Arrhenius factor) is inversely proportional to the probability of the barrier state, U″ is the second derivative of the potential, and the square root (the Zeldovich factor) expresses the probability for the variable ω to return back from the region beyond the barrier. In the parabolic approximation:

\[
U(\omega) = U(\omega_a) + \frac{1}{2} U''(\omega_a)(\omega - \omega_a)^2 + O\!\left((\omega - \omega_a)^3\right)
\tag{G.40a}
\]

and

\[
Z \pi_{sa} = \int_{-\infty}^{\omega_b} e^{-\frac{2U(\omega)}{a_2}}\, d\omega
\approx e^{-\frac{2U(\omega_a)}{a_2}}
\int_{-\infty}^{+\infty} e^{-U''(\omega_a)(\omega - \omega_a)^2 / a_2}\, d\omega
= e^{-\frac{2U(\omega_a)}{a_2}}\, \sqrt{\frac{\pi a_2}{U''(\omega_a)}}
\tag{G.40b}
\]

Substitution of (G.40b) into (G.38b) yields the following:

\[
\tau_{ac} = \frac{2\pi}{\sqrt{U''(\omega_a)\,|U''(\omega_b)|}}
\exp\!\left[\frac{U(\omega_b) - U(\omega_a)}{a_2/2}\right]
\tag{G.40c}
\]

Notice that the escape time is very sensitive to the height of the potential barrier, U(ωb) − U(ωa). In fact, the concept of the escape time can be extended to non-smooth potentials that obey the condition (G.39). Depending on the analytical properties of the potential U(ω) at the point of minimum ωa, the splitting probability and escape time take on different values. For the potential:

\[
U(\omega) =
\begin{cases}
U(\omega_a) + U'(\omega_a)(\omega - \omega_a) + O\!\left((\omega - \omega_a)^2\right), & \text{for } \omega \ge \omega_a \\
\infty, & \text{for } \omega < \omega_a
\end{cases}
\tag{G.41a}
\]

we have the following:


\[
Z \pi_{sa} = \int_{-\infty}^{\omega_b} e^{-2U(\omega)/a_2}\, d\omega
\approx e^{-2U(\omega_a)/a_2}
\int_{\omega_a}^{+\infty} e^{-2U'(\omega_a)(\omega - \omega_a)/a_2}\, d\omega
= e^{-\frac{2U(\omega_a)}{a_2}}\, \frac{a_2}{2U'(\omega_a)}
\tag{G.41b}
\]

and

\[
\tau_{ac} = \sqrt{\frac{\pi a_2}{|U''(\omega_b)|}}\; \frac{1}{U'(\omega_a)}\,
\exp\!\left[\frac{U(\omega_b) - U(\omega_a)}{a_2/2}\right]
\tag{G.41c}
\]

Compare the expressions of the escape time, (G.40c) and (G.41c), and notice that the difference in “smoothness” of the potential U(ω) at the point of minimum ωa, (G.40a) or (G.41a), does not change the exponential but leads to different dependences of the prefactor on the fluctuation strength a2.
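The asymptotic result (G.40c) can also be checked against the exact expression (G.38a) for a concrete smooth potential. A sketch, using an illustrative bistable quartic potential (not one from the text), with a₂ chosen to satisfy (G.39):

```python
import numpy as np

# Sketch: compare the weak-noise escape-time formula (G.40c) with the
# exact double integral (G.38a) for the illustrative bistable potential
# U = (w^2 - 1)^2 / 4: minima at wa = -1, wc = +1, barrier at wb = 0
# of height 1/4.
a2 = 0.05                                  # a2 << U(wb) - U(wa), cf. (G.39)
U   = lambda w: 0.25 * (w**2 - 1.0)**2
Upp = lambda w: 3.0 * w**2 - 1.0           # U''

# Direct evaluation of (G.38a); the normalization Z of Ps cancels
# between the two factors, leaving bare weights exp(-2U/a2).
w = np.linspace(-4.0, 1.0, 400_001)
dw = w[1] - w[0]
inner = np.cumsum(np.exp(-2.0 * U(w) / a2)) * dw   # int_{-inf}^{w} exp(-2U/a2)
mask = w >= -1.0                                   # outer integral from wa to wc
tau_exact = (2.0 / a2) * np.sum(np.exp(2.0 * U(w[mask]) / a2) * inner[mask]) * dw

# Kramers-type estimate (G.40c)
tau_kramers = (2.0 * np.pi / np.sqrt(Upp(-1.0) * abs(Upp(0.0)))
               * np.exp((U(0.0) - U(-1.0)) / (a2 / 2.0)))

print(tau_exact, tau_kramers, tau_exact / tau_kramers)
```

At this fluctuation strength the two values already agree to within a modest correction, and the agreement improves as a₂ is reduced further.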

Autocorrelation Function

The average values do not characterize a stochastic process completely. For instance, they say nothing about the internal mechanism that makes the quantity fluctuate. We need a measure of the influence of the value of the fluctuating variable at the moment t₁ on its value at the moment t₂ > t₁. Such a measure is provided by the time-average autocorrelation function:

\[
\overline{f(\omega_1, t_1)\, f(\omega_2, t_2)} \equiv
\lim_{T \to \infty} \frac{1}{T} \int_0^T f(\omega_1, t_1)\, f(\omega_2, t_2)\, dt_1
\tag{G.42}
\]

One can also introduce a correlation function between two different processes, \(\overline{f(\omega_1, t_1)\, g(\omega_2, t_2)}\), but we will not need that. The ensemble-average autocorrelation function can be introduced with the help of the joint distribution function P(ω₁,t₁; ω₂,t₂) such that P(ω₁,t₁; ω₂,t₂)dω₁dω₂ is the probability for the system to be in the states with {ω₁ < Ω < ω₁ + dω₁} at time t₁ and in the states with {ω₂ < Ω < ω₂ + dω₂} at time t₂. Then:

\[
\left\langle f(\omega_1, t_1)\, f(\omega_2, t_2) \right\rangle \equiv
\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
f(\omega_1, t_1)\, f(\omega_2, t_2)\, P(\omega_1, t_1; \omega_2, t_2)\, d\omega_1\, d\omega_2
\tag{G.43}
\]


In the spirit of the ergodic hypothesis, the time-average and ensemble-average autocorrelation functions are equal:

\[
\overline{f(\omega_1, t_1)\, f(\omega_2, t_2)} = \left\langle f(\omega_1, t_1)\, f(\omega_2, t_2) \right\rangle
\tag{G.44}
\]

Below are some of the properties of the autocorrelation function of a stationary process:

1. For a stationary process, the average ⟨f(ω, t)⟩ is time-independent, and the autocorrelation function ⟨f(ω₁, t₁)f(ω₂, t₂)⟩ depends only on the time interval s = t₂ − t₁.
2. For s = 0 the autocorrelation function is the mean square value of f, ⟨f²(ω, t)⟩, and, hence, must be positive. In a stationary system, it is independent of t, that is, a constant.
3. The function ⟨f(ω₁, t)f(ω₂, t + s)⟩ is symmetric about the value s = 0, that is, it is a function of |s| only. Indeed:

\[
\langle f(t)\, f(t+s) \rangle = \langle f(t-s)\, f(t) \rangle = \langle f(t)\, f(t-s) \rangle
\tag{G.45}
\]

The first equality in (G.45) is true because the system is stationary.
4. For any value of s, the autocorrelation function satisfies:

\[
\left| \langle f(t)\, f(t+s) \rangle \right| \le \left\langle f^2(t) \right\rangle
\tag{G.46}
\]

Indeed, since:

\[
\left\langle [f(t) \pm f(t+s)]^2 \right\rangle
= \left\langle f^2(t) \right\rangle + \left\langle f^2(t+s) \right\rangle \pm 2 \langle f(t)\, f(t+s) \rangle
= 2\left[ \left\langle f^2(t) \right\rangle \pm \langle f(t)\, f(t+s) \rangle \right] \ge 0,
\]

the function ⟨f(t)f(t + s)⟩ cannot go outside the limits ±⟨f²(t)⟩.
5. As is known, the joint probability distribution function of statistically independent stochastic processes factors into the product of the probability distribution functions of the individual processes. For our system this means that if (ω₁, t₁) is, for some reason, statistically independent of (ω₂, t₂), then:

\[
P(\omega_1, t_1; \omega_2, t_2) = P(\omega_1, t_1)\, P(\omega_2, t_2)
\tag{G.47}
\]

and hence:

\[
\langle f(\omega_1, t_1)\, f(\omega_2, t_2) \rangle = \langle f(\omega_1, t_1) \rangle\, \langle f(\omega_2, t_2) \rangle
\tag{G.48}
\]

One may introduce a function, which is called the two-time irreducible autocorrelation function of f:


\[
K_f(t_1, t_2) \equiv \langle f(\omega_1, t_1)\, f(\omega_2, t_2) \rangle
- \langle f(\omega_1, t_1) \rangle\, \langle f(\omega_2, t_2) \rangle
\tag{G.49}
\]

This function characterizes the statistical dependence of the values of the stochastic process f(ω, t) at different moments in time. For a stationary process, it depends on |s| = |t₂ − t₁| only:

\[
K_f(t_1, t_2) = K_f(|s|)
\tag{G.50}
\]

6. If statistical correlations diminish with |s|, that is:

\[
K_f(|s|) \to 0 \quad \text{for } |s| \to \infty,
\tag{G.51}
\]

then the stochastic process f(ω, t) may be characterized by the correlation time τcor:

\[
\tau_{\mathrm{cor}} \equiv \frac{1}{K_f(0)} \int_0^{\infty} K_f(s)\, ds
\tag{G.52}
\]

The magnitude of the function Kf(s) is significant only when the variable s is of the same order of magnitude as τcor. In other words, as s becomes large in comparison with τcor, the values f(t) and f(t + s) become uncorrelated; that is, the “memory” of the physical activity during a given interval of time around t is completely lost after a lapse of time large in comparison with τcor. A useful exercise in correlation functions is the evaluation of the double integral:

\[
I = \int_0^t \int_0^t e^{(t_1 + t_2)/\tau}\, K_f(t_2 - t_1)\, dt_1\, dt_2
\tag{G.53}
\]

Changing to the variables T = ½(t₁ + t₂) and s = (t₂ − t₁), the integrand takes the form exp(2T/τ)Kf(s); integration over (dt₁dt₂) is replaced by (dTds), while the limits of integration, in terms of the variables T and s, can be read from Fig. G.3; we find that, for 0 ≤ T ≤ t/2, s goes from −2T to +2T, while for t/2 ≤ T ≤ t, it goes from −2(t − T) to +2(t − T). Accordingly, we have the following:

\[
I = \int_0^{t/2} dT\, e^{2T/\tau} \int_{-2T}^{+2T} K_f(s)\, ds
+ \int_{t/2}^{t} dT\, e^{2T/\tau} \int_{-2(t-T)}^{+2(t-T)} K_f(s)\, ds
\]

In view of the properties #5 and 6 of the function Kf(s), the integrals over s draw significant contributions only from a very narrow region, of the order of τcor, around


Fig. G.3 Limits of integration of the double integral in (G.53) in terms of (T, s)


the central value s = 0; contributions from regions with larger values of |s| are negligible. Therefore, if t ≫ τcor, the limits of integration for s may be replaced by −∞ and +∞, with the following result:

\[
I \approx \int_0^t e^{2T/\tau}\, dT \int_{-\infty}^{+\infty} K_f(s)\, ds
= \frac{\tau}{2}\left(e^{2t/\tau} - 1\right) \int_{-\infty}^{+\infty} K_f(s)\, ds
\tag{G.54}
\]
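In practice the correlation time (G.52) is often estimated from a single long realization, relying on the time average (G.42) and the ergodic equality (G.44). A sketch for an illustrative linear Langevin-type process with relaxation rate γ, for which Kf(s) decays as e^(−γ|s|) and τcor = 1/γ; all parameter values are assumptions for the demonstration:

```python
import numpy as np

# Sketch: estimate the correlation time (G.52) from one long sample path
# of an illustrative linear process dx = -gamma*x*dt + noise, generated
# with a simple Euler scheme. For this process tau_cor = 1/gamma.
rng = np.random.default_rng(0)
gamma, K, dt, n = 1.0, 2.0, 0.01, 400_000

x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(K * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]

x -= x.mean()                        # irreducible correlator, cf. (G.49)
max_lag = int(10.0 / (gamma * dt))   # integrate K_f out to ~10 correlation times
acf = np.array([np.mean(x[:n - k] * x[k:]) for k in range(max_lag)])
tau_cor = np.sum(acf) * dt / acf[0]  # discretization of (G.52)

print(tau_cor)   # should be close to 1/gamma = 1
```

The estimate carries a statistical error that shrinks as the record length grows relative to τcor, which is exactly the "loss of memory" discussed above.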

The Langevin Approach

If the properties of the jump moments (G.9) are not known, an alternative way of writing an evolution equation for the time-dependent stochastic process is to include the fluctuations explicitly in the phenomenological equation for the state variable ω(t):

\[
\frac{d\omega}{dt} = A(\omega) + \xi(t)
\tag{G.55}
\]

Here A(ω) is a phenomenological force that depends on the state of the system ω, and ξ(t) is a fluctuating force statistically independent of A. Such an equation is called the Langevin equation. We are not looking for an exact solution of the Langevin equation, only for a statistical one. Another way to include the fluctuations in the evolutionary problem is to consider stochastic initial conditions; this alternative, however, will not be pursued in this book. The properties of the fluctuating force can be deduced from the following calculations. An obvious short-time solution of the Langevin equation is as follows:

\[
\omega(t + \Delta t) = \omega(t)
+ \int_t^{t+\Delta t} A(\omega(t'))\, dt'
+ \int_t^{t+\Delta t} \xi(t')\, dt'
\]

Hence the average of the increment Δω is as follows:

\[
\langle \Delta\omega \rangle = A(\omega(t))\, \Delta t + \langle \xi(t) \rangle\, \Delta t + O(\Delta t)^2
\tag{G.56}
\]

This relation shows that for the phenomenological force to be representative of the dynamics of the system, the ensemble average of the fluctuating force must be zero:

\[
\langle \xi(t) \rangle = 0
\tag{G.57}
\]

Next:

\[
\left\langle (\Delta\omega)^2 \right\rangle =
\left\langle \left( \int_t^{t+\Delta t} A(\omega(t'))\, dt' \right)^{\!2} \right\rangle
+ 2 \int_t^{t+\Delta t}\! dt' \int_t^{t+\Delta t}\! dt''\, \langle A(\omega(t'))\, \xi(t'') \rangle
+ \int_t^{t+\Delta t}\! dt' \int_t^{t+\Delta t}\! dt''\, \langle \xi(t')\, \xi(t'') \rangle
\]

The first term on the right-hand side is of order (Δt)²; the second term vanishes due to (G.57) and the statistical independence of A and ξ; in the third term, we have the autocorrelation function of ξ, Kξ(s) = ⟨ξ(t)ξ(t + s)⟩, which is a measure of the stochastic correlation between the value of the fluctuating variable ξ at time t and its value at time t + s. If the process ξ(t) has some sort of regularity, then the correlator Kξ(s) would extend over a range of the time interval τcor, (G.52). On the contrary, if we assume that ξ(t) is extremely irregular, then τcor is zero, and we may choose the following:

\[
\langle \xi(t_1)\, \xi(t_2) \rangle = K\, \delta(t_2 - t_1)
\tag{G.58}
\]

Then:


\[
\left\langle (\Delta\omega)^2 \right\rangle = K\, \Delta t + O(\Delta t)^2
\tag{G.59}
\]

Comparison of (G.56) with (G.15a) and of (G.59) with (G.17b) shows that the Fokker-Planck and Langevin descriptions are equivalent if:

\[
A(\omega) = a_1(\omega); \qquad K = a_2(\omega) = \mathrm{const}(\omega)
\tag{G.60}
\]

Of course, if the potential function of the system U(ω) is known, the phenomenological force may be written in the following form:

\[
A(\omega) = -\frac{\partial U(\omega)}{\partial \omega}
\tag{G.61}
\]

A comment regarding the derivative of the Langevin force is in order here. In principle, ξ(t) is so irregular that its derivative is not defined. However, the rate of change of the stochastic process can be defined through its moments. This approach to the time derivative of ξ(t) is used in the text.
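The equivalence (G.60) suggests a simple numerical consistency check: propagate an ensemble of trajectories by the Euler-Maruyama discretization of (G.55) and compare the ensemble moments with the Fokker-Planck moment equations (G.17). A sketch with illustrative parameters, using A(ω) = −ω, i.e., the potential U = ω²/2 in (G.61):

```python
import numpy as np

# Sketch: an ensemble of Langevin trajectories (G.55) with A(w) = -w and
# delta-correlated force of strength K, cf. (G.58), propagated by the
# Euler-Maruyama rule dw = A*dt + sqrt(K*dt)*N(0,1). By (G.60) the
# ensemble moments should follow (G.17) with a1 = A, a2 = K, whose
# stationary variance here is K/2. Parameter values are illustrative.
rng = np.random.default_rng(1)
K, dt, nsteps, nens = 0.8, 1e-3, 10_000, 5_000

w = np.full(nens, 2.0)               # ensemble starts sharply at w = 2
for _ in range(nsteps):
    w += -w * dt + np.sqrt(K * dt) * rng.normal(size=nens)

# After t = 10 relaxation times: mean -> 0, variance -> K/2
print(w.mean(), w.var())
```

The relaxed ensemble mean and variance match the stationary solution of (G.17a,b) up to statistical and discretization errors.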

Appendix H: Two-Phase Equilibrium in a Closed Binary System

Consider a closed system, which consists of two species A and B and is capable of existing as a phase α or β or a combination of both. The number of moles of different species in different phases will be designated as \(n_i^k\), where i, j = A or B is the species index and k, l = α or β is the phase index. The fact that species do not transform into each other (no nuclear transformations) and remain in the same quantities (closed system) is expressed in the form of the species conservation conditions:

\[
n_i^{\alpha} + n_i^{\beta} = \mathrm{const}
\tag{H.1}
\]

However, because the α ↔ β phase transformations may occur, there is no conservation of the phase amounts: \(n_A^k + n_B^k \neq \mathrm{const}\). The composition of a phase may be described by specifying the weight, volume, atomic, or molar fraction of one of the species. We find the latter quantity to be the most illuminating. The molar fraction of the species B in each phase is:

\[
X^k \equiv \frac{n_B^k}{n_A^k + n_B^k}
\tag{H.2}
\]

Considering the mole numbers of different species in different phases independent of each other, we obtain the following:

\[
\frac{\partial n_i^k}{\partial n_j^l} = \delta_{ij}\, \delta_{kl}
\tag{H.3}
\]

where δij is Kronecker’s symbol. Then:

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Umantsev, Field Theoretic Method in Phase Transformations, Lecture Notes in Physics 1016, https://doi.org/


\[
\frac{\partial X^k}{\partial n_j^l} = \delta_{kl}\, \frac{p(j) - X^k}{n_A^k + n_B^k},
\qquad
p(i) =
\begin{cases}
0, & \text{if } i = A \\
1, & \text{if } i = B
\end{cases}
\tag{H.4}
\]
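Relation (H.4) is easy to verify by finite differences; a short sketch for one phase, with arbitrary test values of the mole numbers:

```python
# Sketch: finite-difference check of (H.4) for a single phase. With
# X = nB/(nA + nB), the derivative w.r.t. nA should be (0 - X)/(nA + nB)
# (since p(A) = 0) and w.r.t. nB should be (1 - X)/(nA + nB) (p(B) = 1).
# The mole numbers below are arbitrary test values.
def X(nA, nB):
    return nB / (nA + nB)

nA, nB, h = 3.0, 1.5, 1e-7
X0 = X(nA, nB)

dX_dnA = (X(nA + h, nB) - X(nA - h, nB)) / (2 * h)   # p(A) = 0
dX_dnB = (X(nA, nB + h) - X(nA, nB - h)) / (2 * h)   # p(B) = 1

print(dX_dnA, (0 - X0) / (nA + nB))   # both approximately -0.0741
print(dX_dnB, (1 - X0) / (nA + nB))   # both approximately  0.1481
```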

Each phase can be characterized by its molar Gibbs free energy \(G^k\), which is a function of this phase’s molar fraction only:

\[
G^k \equiv G^k\!\left(X^k\right)
\tag{H.5}
\]

The total Gibbs free energy of the whole system is as follows:

\[
G^S = \left(n_A^{\alpha} + n_B^{\alpha}\right) G^{\alpha}
+ \left(n_A^{\beta} + n_B^{\beta}\right) G^{\beta}
\tag{H.6}
\]

Then the condition of thermodynamic equilibrium of the system may be formulated in the form of a constrained extremum:

\[
G^S \to \min \qquad \text{for } n_i^{\alpha} + n_i^{\beta} = \mathrm{const}, \quad i = A, B.
\tag{H.7}
\]

To find the mole numbers \(n_i^k\) that deliver the constrained minimum of \(G^S\), we use the method of Lagrange multipliers, according to which there exist constants a and b such that the function:

\[
G^S + a\left(n_A^{\alpha} + n_A^{\beta}\right) + b\left(n_B^{\alpha} + n_B^{\beta}\right)
\tag{H.8}
\]

has an unconstrained minimum with respect to the variables \(n_i^k\). Then, differentiating function (H.8) with respect to \(n_i^k\), we obtain the following:

\[
\frac{\partial G^S}{\partial n_i^k}
+ a\left(\frac{\partial n_A^{\alpha}}{\partial n_i^k} + \frac{\partial n_A^{\beta}}{\partial n_i^k}\right)
+ b\left(\frac{\partial n_B^{\alpha}}{\partial n_i^k} + \frac{\partial n_B^{\beta}}{\partial n_i^k}\right) = 0
\tag{H.9}
\]

Using (H.3)–(H.6) in (H.9) and the following relations:

\[
\delta_{Ai} + \delta_{Bi} = 1, \qquad \delta_{\alpha k} + \delta_{\beta k} = 1,
\tag{H.10}
\]

we obtain four simultaneous equations for the four unknowns Xα, Xβ, a, and b:

\[
\delta_{\alpha k}\left[G^{\alpha} + \left(p(i) - X^{\alpha}\right)\frac{dG^{\alpha}}{dX^{\alpha}}\right]
+ \delta_{\beta k}\left[G^{\beta} + \left(p(i) - X^{\beta}\right)\frac{dG^{\beta}}{dX^{\beta}}\right]
+ a\,\delta_{Ai} + b\,\delta_{Bi} = 0
\tag{H.11}
\]


To exclude the constants a and b, we first compare the equation for (k = α, i = A) with that for (k = β, i = A). Then we subtract the equation for (k = α, i = A) from the equation for (k = α, i = B) and compare it with the difference of the equations for (k = β, i = A) and (k = β, i = B). The result is a system of two simultaneous equations for Xα and Xβ:

\[
G^{\alpha} - X^{\alpha}\frac{dG^{\alpha}}{dX^{\alpha}}
= G^{\beta} - X^{\beta}\frac{dG^{\beta}}{dX^{\beta}}
\tag{H.12a}
\]

\[
\frac{dG^{\alpha}}{dX^{\alpha}} = \frac{dG^{\beta}}{dX^{\beta}}
\tag{H.12b}
\]

Mathematically, (H.12) expresses the condition of common tangency for the functions Gα(Xα) and Gβ(Xβ). Physically, it means that the chemical potentials of the species A (−a) and B (−b) in both phases, α and β, are equal. Notice that the derivative of the molar Gibbs free energy with respect to the fraction in (H.12b) is the difference of the chemical potentials of the species. Both equations can be conveniently expressed by a unified relation:

\[
\frac{dG^{\alpha}}{dX^{\alpha}} = \frac{dG^{\beta}}{dX^{\beta}}
= \frac{G^{\alpha} - G^{\beta}}{X^{\alpha} - X^{\beta}}
\tag{H.13}
\]

which says that there are two ways to find the difference of the chemical potentials of the species at equilibrium. Example H.1 Common Tangent Construction in Parabolic Approximation In many cases, it is quite sufficient to represent the Gibbs free energy of an alloy as a parabolic function of its concentration. Find the conditions of common tangency for such case. Without any loss of generality, the free energies of the α- and β-phases can be represented as follows: Gα =

Gα2 ðΔX α Þ2 ; 2

Gβ = Gβ0 þ

Gβ2 ΔX β 2

2

ðHE:1Þ

where:

\[
\Delta X^{\alpha} = X^{\alpha} - \bar{X}^{\alpha};
\qquad
\Delta X^{\beta} = X^{\beta} - \bar{X}^{\beta}
\tag{HE.2}
\]

\(G_2^{\alpha}\), \(G_2^{\beta}\), \(G_0^{\beta}\) are constants, and \(\bar{X}^{\alpha}\), \(\bar{X}^{\beta}\) are the coordinates of the minima of the respective functions. On the latter, the following relations are imposed:
