Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:38 PM
De Gruyter Studies in Mathematical Physics

Edited by
Michael Efroimsky, Bethesda, Maryland, USA
Leonard Gamberg, Reading, Pennsylvania, USA
Dmitry Gitman, São Paulo, Brazil
Alexander Lazarian, Madison, Wisconsin, USA
Boris Smirnov, Moscow, Russia
Volume 24
Igor O. Cherednikov, Tom Mertens, Frederik Van der Veken
Wilson Lines in Quantum Field Theory 
2nd edition
Physics and Astronomy Classification 2010: 11.15.-q, 11.15.Tk, 12.38.Aw, 02.10.Hh, 02.20.Qs, 02.40.Hw, 03.65.Vf

Authors
Dr. Igor O. Cherednikov, Universiteit Antwerpen, Departement Fysica, Groenenborgerlaan 171, 2020 Antwerpen, Belgium, [email protected]
Dr. Frederik Van der Veken, CERN, Beams Department, Esplanade des Particules 1, 1211 Geneva, Switzerland, [email protected]
Dr. Tom Mertens, Abram-Joffe-Str. 6, 12489 Berlin, Germany, [email protected]
ISBN 978-3-11-065092-1
e-ISBN (PDF) 978-3-11-065169-0
e-ISBN (EPUB) 978-3-11-065103-4
ISSN 2194-3532

Library of Congress Control Number: 2019951784

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2020 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Science Photo Library / Parker, David
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com
Preface

Aristotle held that human intellectual activity, or philosophical (in a broad sense) knowledge, can be seen as a threefold research program. This program contains metaphysics, the most fundamental branch, which tries to find the right way to deal with “Being” as such; mathematics, an exact science studying calculable (at least in principle) abstract objects and the formal relations between them; and, finally, physics, the science working with changeable things and the causes of the changes. Physics is, therefore, the science of evolution, in the first place evolution in time. To put it in more ‘contemporary’ terms, at any energy scale there are things which a physicist has to accept as being ‘given from above’ and then try to formulate a theory of how these things, whatever they are, change. Of course, by increasing the energy and, therefore, by improving the resolution of the experimental facility, one discovers that those things emerge, in fact, as a result of the evolution of other things, which should now be considered as ‘given from above’.1 The very possibility that the evolution of material things, whatever they are, can be studied quantitatively is highly nontrivial. First of all, to introduce changes of something, one has to secure the existence of something that does not change. Indeed, changes can be observed only with respect to something permanent. Kant proposed that what is permanent in all changes of phenomena is substance. Although phenomena occur in time, and time is the substratum wherein coexistence or succession of phenomena can take place, time as such cannot be perceived. Relations of time are only possible against the background of the permanent. Given that changes ‘really’ take place, one derives the necessity of the existence of a representation of time as the substratum and defines it as substance. Substance is, therefore, the permanent thing only with respect to which all time relations of phenomena can be identified.
Kant then gave a proof that all changes occur according to the law of the connection between cause and effect, that is, the law of causality. Given that the requirement of causality is fulfilled, at least locally, we are able to use the language of differential equations to describe quantitatively the physical evolution of things. There is, however, a hierarchy of levels of causality. For example, Newton’s theory of gravitation is causal only if we do not ask how the gravitational force gets transported from one massive body to another. The concept of a field as an omnipresent mediator of all interactions allows us to step up to a higher level of causality. The field approach to the description of the natural forces culminated in the 20th century in the creation of the quantum field theoretic approach as an (almost) universal framework for studying physical phenomena at the level of the most elementary constituents of matter.

1 It is worth noticing that this scheme is one of the most consistent ways to introduce the concept of the renormalization group, which is crucial in the quantum field theoretical approach to describing the three fundamental interactions.

https://doi.org/10.1515/9783110651690201
To be more precise, the quantitative picture of the three fundamental interactions is provided by the Standard Model, the quantum field theory of the strong, weak and electromagnetic forces. The aesthetic attractiveness and unprecedented predictive power of this theory are due to the most successful, and nowadays commonly accepted, way of introducing the interactions by adopting the principle of local (gauge) symmetry. This principle allows us to make use of the local field functions, which depend on the choice of the specific gauge and, as such, do not represent any observables, to construct a mathematically consistent and phenomenologically useful theory. In any gauge field theory we need, therefore, gauge-invariant objects, which are supposed to be the fundamental ingredients of the Lagrangian of the theory, and which can be consistently related, at least in principle, to physical observables. The most straightforward implementation of the idea of a scalar gauge-invariant object is provided by the traced product of field strength tensors

Tr [Fμν(x) F^μν(x)],  (1)

where

Fμν(x) = 𝜕μAν(x) − 𝜕νAμ(x) ± ig[Aμ(x), Aν(x)],  (2)

with Aμ(x) being the local gauge potentials belonging to the adjoint representation of the N-parametric group of local transformations U(x), and g a coupling constant. Field strength tensors are also local quantities, which change covariantly under the gauge transformations:

Fμν(x) → U(x) Fμν(x) U†(x).  (3)
Interesting nonlocal realizations of gauge-invariant objects emerge from Wilson lines, defined as path-ordered (𝒫) exponentials² of contour (path, loop, line) integrals of the local gauge fields Aμ(z):

Uγ[y, x] = 𝒫 exp [±ig ∫_x^y dz^μ Aμ(z)]γ .  (4)

The integration goes along an arbitrary path γ: z ∈ γ, from the initial point x to the end point y. The notion of a path will be one of the crucial issues throughout the book.

2 The terminology and the choice of the signature ± will be explained below.
The Wilson line (4) is gauge covariant but, in contrast to the field strength, its transformation law reads

Uγ[y, x] → U(y) Uγ[y, x] U†(x),  (5)

so that the transformation operators U, U† are defined at different spacetime points. For closed paths, x = y, we speak of a Wilson loop:

Uγ ≡ Uγ[x, x] = 𝒫 exp [±ig ∮_γ dz^μ Aμ(z)],  (6)

which transforms similarly to the field strength:

Uγ = Uγ[x, x] → U(x) Uγ U†(x).  (7)

The simplest scalar gauge-invariant objects made from Wilson loops are, therefore, the traced Wilson loops 𝒲γ = Tr Uγ.
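The gauge invariance of the traced Wilson loop follows from nothing more than the cyclicity of the trace applied to the transformation law (7). A minimal numerical sketch of this fact (the SU(2) stand-ins and all names are our own illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2(rng):
    """exp(i a.sigma) for random real a: a random SU(2) matrix."""
    H = sum(a * s for a, s in zip(rng.normal(size=3), PAULI))
    w, V = np.linalg.eigh(H)                       # H is Hermitian
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

U_loop = random_su2(rng)     # stand-in for a Wilson loop U_gamma
U_x = random_su2(rng)        # gauge transformation U(x) at the base point

U_transformed = U_x @ U_loop @ U_x.conj().T        # transformation law (7)
print(np.isclose(np.trace(U_transformed), np.trace(U_loop)))  # True
```

The same cyclicity argument fails for the open line (5), since there the two endpoint factors do not cancel under the trace.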
From the mathematical point of view, one can construct a loop space whose elements are the Wilson loops defined on an infinite set of contours. Recasting a quantum gauge field theory in loop space is supposed to enable one to utilize the scalar gauge-invariant field functionals as the fundamental degrees of freedom, instead of the traditional gauge-dependent boson and fermion fields. Physical observables are then supposed to be expressed in terms of the vacuum averages of products of Wilson loops:

𝒲^(n)_{γ} = ⟨0| Tr Uγ1 Tr Uγ2 ⋯ Tr Uγn |0⟩.  (8)
The concept of Wilson lines finds an enormously wide range of applications in a variety of branches of modern quantum field theory, from condensed matter and lattice simulations to quantum chromodynamics, high-energy effective theories, and gravity. However, there exist surprisingly few reviews or textbooks which contain a more or less comprehensive pedagogical introduction to the subject. Even the basics of the theory of Wilson lines can put students and non-experts in significant difficulty. In contrast to generic quantum field theory, which can be taught with the help of plenty of excellent textbooks and lecture courses, the theory of Wilson lines and loops still lacks such support. The objective of the present book is, therefore, to collect, overview and present in an appropriate form the most important results available in the literature, with the aim of familiarizing the reader with the theoretical and mathematical foundations of the concept of Wilson lines and loops. We also intend to give an introductory idea of how to implement elementary calculations utilizing Wilson lines within the context of modern quantum field theory, in particular in Quantum Chromodynamics.
The target audience of our book consists of graduate and postgraduate students working in various areas of quantum field theory, as well as curious researchers from other fields. Our lettore modello is assumed to have already followed standard university courses in advanced quantum mechanics, theoretical mechanics, classical fields and the basics of quantum field theory, elements of differential geometry, etc. However, we give all the information about those subjects necessary to keep the exposition logically self-contained. Chapters 2, 3, and 4 were written by T. Mertens, Chapter 5 by F. F. Van der Veken. The Preface, the Introduction and the general editing are due to I. O. Cherednikov. In our exposition we have used extensively the results, theorems, proofs and definitions given in many excellent books and original research papers. For the sake of uniformity, we usually refrain from citing the original works in the main text. We hope that the dedicated literature guide in Appendix D will do this job better. Besides this, we have benefited from presentations made by our colleagues at conferences and workshops, and from informal discussions with a number of experts. Unfortunately, it is not possible to mention everybody without the risk of missing many others who deserve mentioning as well. However, we are happy to thank our current and former collaborators, from whom we have learned a lot: I. V. Anikin, E. N. Antonov, U. D’Alesio, A. E. Dorokhov, E. Iancu, A. I. Karanikas, N. I. Kochelev, E. A. Kuraev, J. Lauwers, L. N. Lipatov, O. V. Teryaev, F. Murgia, N. G. Stefanis, and P. Taels. Our special thanks go to I. V. Anikin, M. Khalo, and P. Taels for reading parts of the manuscript and making valuable critical remarks on its content. We greatly appreciate the inspiring atmosphere created by our colleagues from the Elementary Particle Physics group at the University of Antwerp, where this book was written. We are grateful to M. Efroimsky and L. Gamberg for their invitation to write this book, and to the staff of De Gruyter for their professional assistance in the course of the preparation of the manuscript.
Antwerp, May 2014
I. O. Cherednikov T. Mertens F. F. Van der Veken
Contents

Preface  V

1 Introduction: What are Wilson lines?  1
2 Prolegomena to the mathematical theory of Wilson lines  6
2.1 Shuffle algebra and the idea of algebraic paths  7
2.1.1 Shuffle algebra: Definition and properties  7
2.1.2 Chen’s algebraic paths  22
2.1.3 Chen iterated integrals  41
2.2 Gauge fields as connections on a principal bundle  48
2.2.1 Principal fiber bundle, sections and associated vector bundle  48
2.2.2 Gauge field as a connection  53
2.2.3 Horizontal lift and parallel transport  59
2.3 Solving matrix differential equations: Chen iterated integrals  61
2.3.1 Derivatives of a matrix function  61
2.3.2 Product integral of a matrix function  63
2.3.3 Continuity of matrix functions  66
2.3.4 Iterated integrals and path ordering  67
2.4 Wilson lines, parallel transport and covariant derivative  70
2.4.1 Parallel transport and Wilson lines  70
2.4.2 Holonomy, curvature and the Ambrose–Singer theorem  71
2.5 Generalization of manifolds and derivatives  76
2.5.1 Manifold: Fréchet derivative and Banach manifold  77
2.5.2 Fréchet manifold  82
3 The group of generalized loops and its Lie algebra  86
3.1 Introduction  86
3.2 The shuffle algebra over Ω = ⋀M as a Hopf algebra  86
3.3 The group of loops  94
3.4 The group of generalized loops  94
3.5 Generalized loops and the Ambrose–Singer theorem  100
3.6 The Lie algebra of the group of the generalized loops  101
4 Shape variations in the loop space  108
4.1 Path derivatives  108
4.2 Area derivative  116
4.3 Variational calculus  127
4.4 Fréchet derivative in a generalized loop space  130
5 Wilson lines in high-energy QCD  137
5.1 Eikonal approximation  137
5.1.1 Wilson line on a linear path  137
5.1.2 Wilson line as an eikonal line  146
5.2 Deep inelastic scattering  148
5.2.1 Kinematics  149
5.2.2 Invitation: the free parton model  150
5.2.3 A more formal approach  152
5.2.4 Parton distribution functions  160
5.2.5 Operator definition for PDFs  163
5.2.6 Gauge invariant operator definition  165
5.2.7 Collinear factorization and evolution of PDFs  169
5.3 Semi-inclusive deep inelastic scattering  176
5.3.1 Conventions and kinematics  176
5.3.2 Structure functions  178
5.3.3 Transverse momentum dependent PDFs  180
5.3.4 Gauge-invariant definition for TMDs  183
A Mathematical vocabulary  187
A.1 General topology  187
A.2 Topology and basis  188
A.3 Continuity  193
A.4 Connectedness  195
A.5 Local connectedness and local path-connectedness  198
A.6 Compactness  199
A.7 Countability axioms and Baire theorem  203
A.8 Convergence  205
A.9 Separation properties  207
A.10 Local compactness and compactification  209
A.11 Quotient topology  210
A.12 Fundamental group  212
A.13 Manifolds  216
A.14 Differential calculus  219
A.15 Stokes’ theorem  224
A.16 Algebra: Rings and modules  225
A.17 Algebra: Ideals  228
A.18 Algebras  229
A.19 Hopf algebra  231
A.20 Topological, C*, and Banach algebras  240
A.21 Nuclear multiplicative convex Hausdorff algebras and the Gel’fand spectrum  241
B Notations and conventions in quantum field theory  249
B.1 Vectors and tensors  249
B.2 Spinors and gamma matrices  250
B.3 Light-cone coordinates  252
B.4 Fourier transforms and distributions  254
B.5 Feynman rules for QCD  255
C Color algebra  257
C.1 Basics  257
C.1.1 Representations  257
C.1.2 Properties  257
C.2 Advanced topics  259
C.2.1 Calculating products of fundamental generators  259
C.2.2 Calculating traces in the adjoint representation  262
D Brief literature guide  265
Bibliography  266
Index  269
1 Introduction: What are Wilson lines?

The idea of gauge symmetry suggests that any field theory must be invariant under local (i.e., depending on the spacetime point) transformations of the field functions

ψ(x) → U(x)ψ(x),  ψ̄(x) → ψ̄(x)U†(x),  (1.1)
where the matrices U(x) belong to the fundamental representation of a given Lie group. In other words, the Lagrangian has to exhibit local symmetry. We shall mostly deal with the special unitary groups SU(Nc), which are used in Yang–Mills theories. Although a number of important results can be obtained by using only the general form of this gauge transformation, it will sometimes be helpful to use the parameterization¹

U(x) = e^{±igα(x)},  (1.2)

where α(x) = t^a α^a(x), t^a = λ^a/2, and the λ^a are the generators of the Lie algebra of the group U. Consider for simplicity the free Lagrangian for a single massless fermion field ψ(x):

ℒfermion = ψ̄(x) i∂̸ ψ(x),  ∂̸ = γ^μ 𝜕/𝜕x^μ.  (1.3)
We easily observe that the local transformations (1.1) do not leave this Lagrangian intact. The reason is that the derivative of the field transforms as

𝜕μψ(x) → U(x)[𝜕μψ(x)] + [𝜕μU(x)]ψ(x).  (1.4)

The minimal extension of the Lagrangian (1.3) which exhibits the property of gauge invariance consists in the introduction of the set of gauge fields

Aμ(x) = t^a A^a_μ(x),  (1.5)

belonging to the adjoint representation of the same gauge group, which is required to transform as

Aμ(x) → U(x)Aμ(x)U†(x) ± (i/g) U(x)𝜕μU†(x).  (1.6)
1 The coupling constant g can be chosen to have a positive or a negative sign. As this is merely a matter of convention, we leave the choice open and will write ±g. https://doi.org/10.1515/9783110651690001
Thus, the usual derivative has to be replaced by the covariant derivative

Dμ = 𝜕μ ∓ igAμ,  (1.7)

which transforms as

Dμ → U(x)DμU†(x).  (1.8)

Then

Dμψ(x) → U(x)[Dμψ(x)].  (1.9)

This procedure obviously makes the minimally extended Lagrangian gauge invariant:

ℒgauge inv. = ψ̄(x) iD̸ ψ(x).  (1.10)
Consider now a slightly more complicated object, namely, the bilocal product of two matter fields

Δ(y, x) = ψ̄(y)ψ(x).  (1.11)

Such products arise in various correlation functions in quantum field theory;² in particular, they determine the most fundamental quantities, Green’s functions, via

G(y, x) = ⟨0| 𝒯 ψ(y)ψ̄(x) |0⟩,  (1.12)

where the symbol 𝒯 stands for the time-ordering operation. It is evident that in such a naive form the bilocal field products and Green’s functions are not gauge invariant:

Δ(y, x) → ψ̄(y)U†(y) U(x)ψ(x).  (1.13)
Therefore, the problem arises of how to find an operator T[y,x] which transports the field ψ(x) to the point y, so that

T[y,x]ψ(x) → U(y)[T[y,x]ψ(x)].  (1.14)

In this case, we have

ψ̄(y)T[y,x]ψ(x) → ψ̄(y)U†(y) U(y)[T[y,x]ψ(x)] = ψ̄(y)T[y,x]ψ(x),  (1.15)

so that the product (1.13) becomes gauge invariant. Consider first the Abelian gauge group U(1). In this case,

U(x) = e^{±igα(x)},  (1.16)
2 See, in particular, references in the section ‘Gauge invariance in particle physics’, Appendix D.
where α(x) is a scalar function. Then³

Aμ(x) → Aμ(x) + 𝜕μα(x),  (1.17)

so it is straightforward to see that the ‘transporter’ T[y,x] is given by⁴

T[y,x] = exp [sign ig ∫_x^y dz^μ Aμ(z)].  (1.18)

Indeed, the product (1.15) transforms as

ψ̄(y)Uγ[y, x]ψ(x) → ψ̄(y) e^{∓igα(y)} exp [sign ig ∫_x^y dz^μ (Aμ(z) + 𝜕μα(z))] e^{±igα(x)} ψ(x).  (1.19)
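The cancellation behind this transformation rests on the identity ∫_x^y dz 𝜕α = α(y) − α(x): the pure-gauge part of the line integral produces exactly the phase needed to compensate the endpoint factors. A small numerical check in one dimension (the gauge function α, the coupling and the path endpoints are our own arbitrary choices for illustration):

```python
import numpy as np

g = 0.7
alpha = lambda z: np.sin(3.0 * z) + 0.5 * z        # arbitrary smooth gauge function

# discretized path from x = 0 to y = 2
z = np.linspace(0.0, 2.0, 20_001)
dalpha = np.gradient(alpha(z), z)                  # integrand: d(alpha)/dz

# trapezoidal line integral of the pure-gauge field A = d(alpha)
integral = np.sum(0.5 * (dalpha[1:] + dalpha[:-1]) * np.diff(z))

phase_from_line = np.exp(1j * g * integral)                      # from the transporter
phase_from_ends = np.exp(1j * g * (alpha(2.0) - alpha(0.0)))     # endpoint factors
print(np.isclose(phase_from_line, phase_from_ends))   # True
```

In the Abelian case this works for any path between x and y, since only the endpoints of the gradient integral survive.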
It is instructive to see that the choice of the sign in equation (4) depends on the parameterization of the symmetry transformation U(x). Taking the line integral of the integrand 𝜕μα explicitly, one concludes that, in order to preserve gauge invariance, the sign should be chosen as sign = ±. In what follows we shall always specify the signature conventions we adopt. In the non-Abelian case the situation is more involved. The fields at different spacetime points, Aμ(z) and Aμ(z′), equation (1.5), do not commute, so the exponential of non-commuting functions is ill-defined. An infinitesimal version of equation (1.15) suggests the following equation for the transporter T[y,x]:

(d/dt) T[y,x] = 𝒜γ(t) T[y,x],  (1.20)

where we introduce an arbitrary path γ along which T[y,x] ‘transports’ a field ψ(x) from the point x to the point y. The need for this path stems from the fact that we do not know (unless, for some reason, stated otherwise) along which trajectory we have to transfer the fields from one point to another. The requirement of gauge invariance is not affected by the choice of path but, as we will see, the transporter becomes a functional of the path. The path γ is assumed to be parameterized by the coordinate z ∈ γ

3 Note that the sign in front of 𝜕μα(x) is independent of the sign choice of g.
4 No ordering of the field functions is needed in the classical Abelian case.
depending on the parameter t in such a way that dz^μ = ż^μ(t)dt, z(0) = x, z(t) = y. The operator 𝒜γ(t) on the r.h.s. of equation (1.20) is given by

𝒜γ(t) = ±ig Aμ[z(t)] ż^μ(t).  (1.21)
It is easy to see that (1.18) solves equation (1.20) in the classical Abelian case. Integrating equation (1.20) from 0 to t yields an integral equation instead of a differential one:

T[y,x] − T[x,x] = T(t) − T(0) = ∫_0^t 𝒜γ(t1) T(t1) dt1.  (1.22)
Imagine that the coupling constant g can be considered small and let us solve this equation perturbatively. Namely, we assume that a solution can be presented as an infinite series

T[y,x](t) = T^(0) + T^(1) + T^(2) + ⋯ + T^(n) + ⋯ .  (1.23)

Suppose we have an initial condition

T(0) = T[x,x] = T^(0).  (1.24)

Then, for the first nontrivial term in the expansion (1.23) we obtain

T^(1)(t) = [∫_0^t 𝒜γ(t1) dt1] T(0).  (1.25)
T(0) is t1-independent by construction and thus can be separated out of the integration. The next order gives

T^(2)(t) = ∫_0^t 𝒜γ(t1) T^(1)(t1) dt1 = [∫_0^t dt1 𝒜γ(t1) ∫_0^{t1} dt2 𝒜γ(t2)] T(0).  (1.26)

We can rewrite equation (1.26) as

T^(2)(t) = (1/2) [𝒫 ∫_0^t ∫_0^t 𝒜γ(t1) 𝒜γ(t2) dt1 dt2] T(0),  (1.27)
where the path-ordering operator reads

𝒫 𝒜γ(t1)𝒜γ(t2) = θ(t1 − t2) 𝒜γ(t1)𝒜γ(t2) + θ(t2 − t1) 𝒜γ(t2)𝒜γ(t1).  (1.28)
It is straightforward to see that the generic n-th order term is given by

T^(n)(t) = (1/n!) 𝒫 ∫_0^t ⋯ ∫_0^t [𝒜γ(t1) ⋯ 𝒜γ(tn) dt1 dt2 ⋯ dtn] T(0),  (1.29)
with the obvious generalization of the path-ordering to n functions 𝒜. Therefore, the final solution can be presented in the form

T(t) = ∑_{n=0}^∞ (1/n!) 𝒫 ∫_0^t ⋯ ∫_0^t [𝒜γ(t1) ⋯ 𝒜γ(tn) dt1 dt2 ⋯ dtn] T(0) ≡ 𝒫 exp [∫_0^t 𝒜γ(t′) dt′] T(0).  (1.30)

Remembering the definition of 𝒜, equation (1.21), and taking the natural initial condition T(0) = T[x,x] = 1, we finally have

T[y,x] = 𝒫 exp [±ig ∫_x^y Aμ[z] dz^μ]γ,  (1.31)

that is, the Wilson line (4):

T[y,x] = Uγ[y, x],  (1.32)

with arbitrary path γ. In other words, the function (1.15) is gauge invariant but path dependent. The rest of the book will be devoted to the mathematical motivation of the above manipulations and to some applications of the Wilson line approach in quantum field theory.
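The expansion above can be checked numerically. The sketch below (our own toy example: an SU(2)-valued 𝒜γ built from Pauli matrices, with an arbitrary coupling and path parameterization) approximates 𝒫 exp as an ordered product of small-step exponentials, the discrete form of solving (1.20), and shows that the naive, unordered exponential of the integral differs once the 𝒜γ(t) at different times fail to commute:

```python
import numpy as np

S1 = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli matrices
S3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = 0.9

def H(t):
    """A_gamma(t) = i*H(t) with H(t) Hermitian, so every step below is unitary."""
    return g * (np.cos(t) * S1 + np.sin(t) * S3)

def expi(M):
    """exp(i M) for Hermitian M, via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def path_ordered_exp(n_steps, t_max=1.0):
    """P exp[int_0^t A dt']: product of step exponentials, later times on the LEFT,
    as dictated by dT/dt = A(t) T."""
    dt = t_max / n_steps
    T = np.eye(2, dtype=complex)
    for i in range(n_steps):
        T = expi(H((i + 0.5) * dt) * dt) @ T       # new factor multiplies from the left
    return T

T_ordered = path_ordered_exp(4000)

# naive guess: exponentiate the unordered integral of A(t)
dt = 1.0 / 4000
H_integrated = sum(H((i + 0.5) * dt) * dt for i in range(4000))
T_naive = expi(H_integrated)

print(np.allclose(path_ordered_exp(2000), T_ordered, atol=1e-6))  # converged
print(np.allclose(T_naive, T_ordered, atol=1e-3))                 # False: ordering matters
```

The difference between the two results is precisely what the θ-functions in (1.28) keep track of; in the Abelian case the two expressions coincide.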
2 Prolegomena to the mathematical theory of Wilson lines

In this part of the book we give the necessary conceptual thesaurus and overview the main steps towards the construction of the mathematical theory of Wilson lines and loops. To be more precise, a goal of this exposition is to demonstrate that gauge theories can be consistently formulated in the principal fiber bundle setting, where the gauge fields (or potentials) are identified with the pullbacks of a connection one-form by local sections of the gauge bundle. The gauge potentials give rise to a parallel transport equation in the gauge bundle that can be solved by using product integrals. As a result, we shall show that the solution of the parallel transport equation can be presented as a Wilson line. We shall also discuss its relation to the standard covariant derivative in gauge theories. Then an alternative way to construct a gauge theory will be discussed, which is based on the use of the holonomies in the gauge bundle instead of the gauge potentials. This possibility rests on the Ambrose–Singer theorem, which claims that the entire gauge-invariant content of a gauge theory is contained in the holonomies. However, the issues of overcompleteness, reparameterization invariance, and additional algebraic constraints, coming from the matrix representation of the Lie algebra associated with the gauge group, impede the straightforward application of the standard loop space approach to gauge field theories. An interesting solution to these problems arises if one extends this setting to the so-called generalized loops, first proposed by Chen and further studied by Tavares (for references, see the section ‘Algebraic paths’ in Literature Guide D), within the framework of the generalized loop space (GLS) approach. Our exposition is based mostly on the original works by these authors.

Aiming towards the appropriate formulation of the generalized loop space framework, and having in mind the demonstration of its relation to Wilson lines and loops, we start with an introductory discussion of the most relevant algebraic concepts. Then we make use of these concepts to construct Chen’s algebraic d-paths and, consequently, the generalized loop space. We end the chapter with a discussion of the differential operators which can be defined in generalized loop space. Assuming that gauge field theories can be recast within the GLS framework, and given that, to this end, a suitable action could be found, one can use the relevant differential operators to generate the variations of the generalized degrees of freedom in the GLS and, hence, to construct the appropriate equations of motion in the GLS. Let us mention that the ambitious program of reformulating gauge theories in the GLS setting has never been fully accomplished and thus remains a challenge. Note that we give complete definitions of the notions, formulations of theorems and their proofs only when we find it necessary for the consistency of the exposition. For an extended list of definitions and some helpful theorems and statements we refer to Appendix A.

https://doi.org/10.1515/9783110651690002
2.1 Shuffle algebra and the idea of algebraic paths

2.1.1 Shuffle algebra: Definition and properties

2.1.1.1 Algebraic preliminaries

For our purposes it is sufficient to describe an n-dimensional manifold as a topological space wherein a neighborhood of each point is equivalent (strictly speaking, homeomorphic) to the n-dimensional Euclidean space. The fundamental geometrical object in a manifold we will be concerned with is a path. One has a natural intuitive idea of what a path or a loop in a manifold is. Mathematically one usually defines a path γ in a manifold M as the map

γ : [0, 1] → M,  t ↦ γ(t).

For closed paths, which are called loops, one just adds the extra condition that the initial and final points of the path coincide: γ(0) = γ(1) ∈ M. The straightforward idea of paths and loops can be generalized to the so-called algebraic d-paths, where the d-paths are constructed as algebraic objects possessing certain (desirable) properties. The resulting algebraic structure can then be supplied with a topology, turning it into a topological algebra. The topology is used to complete the algebraic properties with analytic ones, allowing the introduction of the necessary differential operators.¹

Several algebraic structures must be introduced before we begin the main discussion. Without going too deep into details, we define a ring as a set wherein two binary operations of multiplication and addition are defined. Putting it another way, a ring is an Abelian group (with addition being the group operation) supplied with an extra operation (multiplication). If the second operation is commutative, the ring is also called commutative; the set of integers provides one of the simplest examples of a commutative ring. Otherwise we speak of noncommutative rings; the set of square matrices is an example of a noncommutative ring. Having introduced the notion of a ring, we are able to introduce another algebraic structure, namely a field, which is defined as a commutative ring where division by a nonzero element is allowed. It is evident that the nonzero elements of a field make up an Abelian group under multiplication. For example, the set of real numbers forms a field.

1 Most of the material in this section is based on the original works by Chen (see the Literature Guide), where the proofs of a number of the stated theorems can also be found. For the sake of brevity we skip those proofs which do not bring more insight than needed into the subject of the book.
One can then construct a vector space over a field. In this case, the elements of the vector space are called vectors, while the elements of the field are scalars, and two binary operations (addition of two vectors and multiplication of a vector by a scalar) acting within the vector space should be defined. One easily captures the idea of a vector space by thinking of the usual Euclidean vectors of velocities or forces. The notion of a module over a ring generalizes the concept of a vector space over a field: now the scalars only have to form a ring, not necessarily a field. For example, any Abelian group is a module over the ring of integers. In what follows, ‘K-module’ stands for a module over a ring K.

2.1.1.2 Shuffle algebra

The generalization of the concept of paths in a manifold calls for the introduction of the notion of a shuffle algebra. The shuffle algebra is an algebra based on the shuffle product, which in its turn is defined via (k, l)-shuffles. Let us start with the definition of these shuffles.

Definition 2.1 ((k, l)-shuffle). A (k, l)-shuffle is a permutation P of the k + l letters such that

P(1) < ⋯ < P(k)  and  P(k + 1) < ⋯ < P(k + l).

Exercise 2.2. How can one explain a (k, l)-shuffle using a deck of cards?
Using these (k, l)-shuffles we can introduce the shuffle multiplication, symbolically represented by the symbol ∙. Let us consider a set of arbitrary objects Zi.

Definition 2.3 (Shuffle multiplication). Using the notation

Z1 ⋯ Zk = Z1 ⊗ ⋯ ⊗ Zk ∈ ⨂^k ⋀^{nk} M,  k ≥ 1,

and, by convention, Z1 ⋯ Zk = 0 for k = 0, we write the shuffle multiplication as

Z1 ⋯ Zk ∙ Zk+1 ⋯ Zk+l = ∑_{Pk,l} ZP(1) ⋯ ZP(k+l),  (2.1)

where ∑_{Pk,l} denotes the sum over all (k, l)-shuffles, ⋀^{nk} M stands for the nk-th exterior power of the exterior algebra ⋀M over the manifold M, and nk is the exterior algebra degree of the factor Zk.
Several examples will be instructive to make the shuffle multiplication clear.
Example 2.4 (Shuffle multiplication).
– The situation with two objects is evident:

  Z1 ∙ Z2 = Z1 Z2 + Z2 Z1.

– Shuffle multiplication of three objects reads:

  Z1 ∙ Z2 Z3 = Z1 Z2 Z3 + Z2 Z1 Z3 + Z2 Z3 Z1.

– Four objects multiply as:

  Z1 Z2 ∙ Z3 Z4 = Z1 Z2 Z3 Z4 + Z1 Z3 Z2 Z4 + Z1 Z3 Z4 Z2 + Z3 Z1 Z2 Z4 + Z3 Z1 Z4 Z2 + Z3 Z4 Z1 Z2.  (2.2)
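Definition 2.3 is easy to mechanize: choosing the k slots P(1) < ⋯ < P(k) for the letters of the first word enumerates all (k, l)-shuffles. A short sketch (the function name and encoding of words as tuples are our own), reproducing Example 2.4:

```python
from itertools import combinations

def shuffle(u, v):
    """All (k, l)-shuffles of the words u (length k) and v (length l), as in eq. (2.1)."""
    k, n = len(u), len(u) + len(v)
    words = []
    for slots in combinations(range(n), k):   # positions P(1) < ... < P(k) for letters of u
        word = [None] * n
        for slot, letter in zip(slots, u):
            word[slot] = letter
        rest = iter(v)                        # remaining slots keep the order of v
        words.append(tuple(x if x is not None else next(rest) for x in word))
    return words

print(shuffle(('Z1',), ('Z2', 'Z3')))
# [('Z1', 'Z2', 'Z3'), ('Z2', 'Z1', 'Z3'), ('Z2', 'Z3', 'Z1')]
print(len(shuffle(('Z1', 'Z2'), ('Z3', 'Z4'))))   # 6 terms, matching eq. (2.2)
```

In general the product of a k-letter word and an l-letter word has binom(k+l, k) terms, one per choice of slots.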
If we consider the objects Z to be one-forms ω (or linear functionals) defined on some manifold and compare their shuffle products with the usual antisymmetric wedge products, then the shuffle product can be treated as a symmetric counterpart of the wedge product. Let now M be a manifold and let

Ω = ⋀¹M

be the set of one-forms on M. We interpret Ω as a K-module, where for the moment we assume that K is a general ring of scalars with a multiplicative unity. Introducing the shuffle product on the K-module Ω defines the shuffle K-algebra.²

Definition 2.5 (Shuffle K-algebra). Consider a K-module Ω and the regular tensor algebra over K based on Ω, denoted by T(Ω). Then T^r(Ω) represents the degree-r components of the algebra. It is easy to see that T^0(Ω) = K. Replacing the tensor product by the shuffle multiplication generates a new algebra, called the shuffle K-algebra Sh(Ω) based on the K-module Ω.

In this algebra the shuffle product plays the role of the algebra multiplication m, so that one can write m ≡ ∙ : Sh ⊗ Sh → Sh, and for the algebra unit map one has

u : K → Sh,  1K ↦ 1Sh.

2 The shuffle product acts here like the vector product on the usual Euclidean vector space over ℝ.
It is now possible to extend the algebraic structure based on the shuffle product by introducing the K-linear maps ϵ, Δ.

Definition 2.6 (Counit and comultiplication).
– Counit ϵ ∈ Alg(Sh(Ω), K) is defined by
$$ \epsilon(1_{Sh}) = 1_K , \qquad \epsilon(\omega_1 \cdots \omega_n) = 0 \quad \text{for } n > 0 . \tag{2.3} $$
– Comultiplication Δ : Sh(Ω) → Sh(Ω) ⊗ Sh(Ω) acts as
$$ \Delta(\omega_1 \cdots \omega_n) = \begin{cases} 1 \otimes 1 & \text{for } n = 0 \\[4pt] \displaystyle\sum_{i=0}^{n} (\omega_1 \cdots \omega_i) \otimes (\omega_{i+1} \cdots \omega_n) & \text{for } n > 0 . \end{cases} \tag{2.4} $$
The map Δ can be considered as a K-module morphism and is also an associative comultiplication, since (1 ⊗ Δ)Δ = (Δ ⊗ 1)Δ.

Exercise 2.7. Prove the above statement.
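The coassociativity claimed in Exercise 2.7 can also be checked mechanically on small words. The sketch below (ours, not the authors'; words are tuples) models the deconcatenation coproduct (2.4) and the counit (2.3):

```python
def coproduct(word):
    """Delta of eq. (2.4): all cuts (w[:i], w[i:]) of a word, i = 0..n."""
    return [(word[:i], word[i:]) for i in range(len(word) + 1)]

def counit(word):
    """epsilon of eq. (2.3): 1_K on the empty word, 0 on longer words."""
    return 1 if not word else 0

w = ('w1', 'w2', 'w3')

# Coassociativity (1 (x) Delta)Delta = (Delta (x) 1)Delta:
lhs = sorted((a, b, c) for a, rest in coproduct(w) for b, c in coproduct(rest))
rhs = sorted((a, b, c) for first, c in coproduct(w) for a, b in coproduct(first))
print(lhs == rhs)

# Counit axiom: contracting one tensor leg with epsilon returns the word.
recovered = [b for a, b in coproduct(w) if counit(a) == 1]
print(recovered == [w])
```

Both checks enumerate the same set of ordered three-part splittings of the word, which is exactly why coassociativity holds for deconcatenation.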
The comultiplication Δ and counit ϵ introduce a coalgebra structure on the shuffle algebra, so that it becomes a bialgebra. We can go a step further and show that the shuffle algebra is also a Hopf algebra.³ For that reason we introduce the notion of an antipode.

Definition 2.8 (Antipode). A K-linear map J : Sh → Sh is called the shuffle algebra antipode provided that
$$ J(\omega_1 \cdots \omega_n) = (-1)^n \, \omega_n \cdots \omega_1 . \tag{2.5} $$
It is evident that J(1) = 1 and J² = 1.

Consider now the shuffle multiplication m_s : Sh ⊗ Sh → Sh and the unit map u : K → Sh. Let the transposition map or flipping operation T : Sh ⊗ Sh → Sh ⊗ Sh be defined as
$$ T(v_1 \otimes v_2) = v_2 \otimes v_1 . \tag{2.6} $$
Then, for all v_1, v_2 ∈ Sh, the antipode can be shown to possess the following properties:
$$ J(v_1 \bullet v_2) = J(v_2) \bullet J(v_1) , \qquad m_s \circ (J \otimes 1) \circ \Delta = m_s \circ (1 \otimes J) \circ \Delta = u \circ \epsilon , \qquad T \circ (J \otimes J) \circ \Delta = \Delta \circ J , \qquad \epsilon \circ J = \epsilon . \tag{2.7} $$

3 A Hopf algebra is at the same time an algebra and a coalgebra, see Appendix A.
Therefore, the following theorem holds:

Theorem 2.9 (Sh(Ω) is a Hopf algebra). The shuffle algebra Sh(Ω) is a Hopf K-algebra provided that its comultiplication Δ, counit ϵ and antipode J are defined as in equations (2.3), (2.4), and (2.5).

Keeping in mind the algebraic structure of the shuffle algebra discussed above, we can go on with the study of the algebra homomorphisms⁴ Alg(Sh(Ω), K).

Definition 2.10 (Group multiplication on Alg(Sh(Ω), K)). Consider the algebra homomorphisms α_i ∈ Alg(Sh(Ω), K). Define the multiplication α_1 α_2 ∈ Alg(Sh(Ω), K) as
$$ \alpha_1 \alpha_2 = (\alpha_1 \otimes \alpha_2) \Delta . $$
For this multiplication we have ϵα_1 = α_1 ϵ = α_1 and
$$ \alpha_1 (\alpha_2 \alpha_3) = (\alpha_1 \otimes \alpha_2 \otimes \alpha_3)(1 \otimes \Delta)\Delta = (\alpha_1 \otimes \alpha_2 \otimes \alpha_3)(\Delta \otimes 1)\Delta = (\alpha_1 \alpha_2)\alpha_3 . $$
4 It suffices here to describe a homomorphism as a map between two sets which preserves their algebraic structures.
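The defining Hopf property m_s ∘ (J ⊗ 1) ∘ Δ = u ∘ ϵ from (2.7) can be verified term by term on small words. The following sketch (an illustration under our own tuple encoding of words) does exactly that:

```python
from itertools import combinations
from collections import Counter

def shuffle(u, v):
    """All (k, l)-shuffles of two words (tuples), with multiplicity."""
    k, n = len(u), len(u) + len(v)
    words = []
    for slots in combinations(range(n), k):
        w, iu, iv = [], 0, 0
        for i in range(n):
            if i in slots: w.append(u[iu]); iu += 1
            else:          w.append(v[iv]); iv += 1
        words.append(tuple(w))
    return words

def antipode(word):
    """J of eq. (2.5): sign (-1)^n and reversal of the word."""
    return (-1) ** len(word), word[::-1]

def ms_J1_Delta(word):
    """m_s o (J (x) 1) o Delta applied to a word; a Counter of words."""
    acc = Counter()
    for i in range(len(word) + 1):          # deconcatenation coproduct
        sign, left = antipode(word[:i])     # J on the left tensor leg
        for w in shuffle(left, word[i:]):   # shuffle-multiply back
            acc[w] += sign
    return Counter({w: c for w, c in acc.items() if c != 0})

# u o epsilon kills any nonempty word and fixes the unit:
print(ms_J1_Delta(('w1', 'w2', 'w3')))   # everything cancels
print(ms_J1_Delta(()))                   # only the empty word survives
```

For a two-letter word the cancellation is easy to see by hand: ab − (ab + ba) + ba = 0, exactly the alternating sum the code accumulates.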
Figure 2.1: Multiplication of algebra morphisms.
The multiplication of algebra morphisms is depicted in Figure 2.1. It is easy to observe that:

Proposition 2.11. The multiplication in Definition 2.10, defined on the algebra morphisms Alg(Sh(Ω), K), turns it into a group.

We can now study the properties of the algebra homomorphisms Alg(Sh(Ω), Sh(Ω)). To this end, let us define an algebra morphism which might look a bit strange at the moment, but will turn out to be valuable when considering the group structure of algebraic paths and loops.

Definition 2.12 (L-operator). For α ∈ Alg(Sh(Ω), K) define
$$ \tilde L_\alpha = (\alpha \otimes 1)\Delta \in \mathrm{Alg}(Sh(\Omega), Sh(\Omega)) \tag{2.8} $$
and
$$ \hat L_\alpha = \tilde L_\alpha \otimes 1 \in \mathrm{Hom}(Sh(\Omega) \otimes \Omega, Sh(\Omega) \otimes \Omega) . \tag{2.9} $$
This operator has the following interesting property with respect to the products of elements of Alg(Sh(Ω), K):

Property 2.13. If α_1, α_2 ∈ Alg(Sh(Ω), K), then, by Definition 2.10,
$$ \alpha_2 \tilde L_{\alpha_1} = \alpha_1 \alpha_2 $$
and
$$ (\tilde L_{\alpha_1 \alpha_2}, \hat L_{\alpha_1 \alpha_2}) = (\tilde L_{\alpha_2} \tilde L_{\alpha_1}, \hat L_{\alpha_2} \hat L_{\alpha_1}) . \tag{2.10} $$

The proof of the above statement is straightforward:

Proof.
$$ \tilde L_{\alpha_1 \alpha_2} = (\alpha_1 \alpha_2 \otimes 1)\Delta = [(\alpha_1 \otimes \alpha_2)\Delta \otimes 1]\Delta = (\alpha_1 \otimes \alpha_2 \otimes 1)(1 \otimes \Delta)\Delta = (\alpha_1 \otimes \tilde L_{\alpha_2})\Delta = \tilde L_{\alpha_2}(\alpha_1 \otimes 1)\Delta = \tilde L_{\alpha_2} \tilde L_{\alpha_1} . \tag{2.11} $$
One also obtains
$$ \hat L_{\alpha_1 \alpha_2} = \tilde L_{\alpha_1 \alpha_2} \otimes 1 = \hat L_{\alpha_2} \hat L_{\alpha_1} . \tag{2.12} $$
Notice that L̃_ϵ and L̂_ϵ are identity morphisms on Sh(Ω) and Sh(Ω) ⊗ Ω, respectively.

2.1.1.3 Shuffle differentiations
We wish to discuss generalized or algebraic paths and loops which are based on shuffle algebra morphisms. Because we are ultimately interested in a mathematically consistent formalism for variations of these paths and loops, we need the operations of differentiation to be well-defined. The appropriate introduction of these differentiations requires some basic knowledge of category theory. We now give a brief introduction to category theory, restricting ourselves only to those concepts which will be explicitly used in our discussion. Define first the concept of a category.

Definition 2.14 (Category). A category C includes:
1. a class ob(C) of objects a_i;
2. a class Hom(C) of morphisms F_l, which can be interpreted as maps between the objects; a morphism has a unique source object a_i and target object a_j: F_l : a_i → a_j;
3. a binary operation called the composition of morphisms, such that
$$ \mathrm{Hom}(a_1, a_2) \times \mathrm{Hom}(a_2, a_3) \to \mathrm{Hom}(a_1, a_3) , $$
which exhibits the properties of
1. identity: for every object a_i there exists an identity morphism 1_{a_i}, such that F 1_{a_i} = F = 1_{a_j} F for any F : a_i → a_j;
2. associativity: if F_i : a_i → a_{i+1}, then F_3 ⋅ (F_2 ⋅ F_1) = (F_3 ⋅ F_2) ⋅ F_1.
It is easy to show that there exists a unique identity map.
We also need maps between categories, which are captured by the notion of a functor.

Definition 2.15 (Functor). Let C_1 and C_2 be two categories. A functor F from C_1 to C_2 is a map with the following properties:
1. The mapping rule: for each X_1 ∈ C_1, there exists X_2 ∈ C_2, such that X_1 → X_2 = F(X_1).
2. For a covariant functor: for each f : X_1 → X_2 in C_1 there exists F(f), such that F(X_1) → F(X_2) in C_2.
3. For a contravariant functor, correspondingly: F(X_2) → F(X_1) in C_2,
such that the identity and composition of morphisms are preserved. Namely, we have:
1. for the unity 1_{C_1} in C_1: 1_{C_1} → F(1_{C_1}) = 1_{C_2} ∈ C_2;
2. for a covariant functor: F(f_1 ∘ f_2) = F(f_1) ∘ F(f_2);
3. for a contravariant functor: F(f_1 ∘ f_2) = F(f_2) ∘ F(f_1).

One calls a functor F : C_1 → C_2 full (faithful, fully faithful) if, for all objects a_1 and a_2 of C_1, the map
$$ \mathrm{Hom}(a_1, a_2) \to \mathrm{Hom}(F(a_1), F(a_2)) \tag{2.13} $$
is surjective (injective, bijective).
There exist some special types of functors, of which we only mention the forgetful functor, since we shall deal with it in further discussions.

Definition 2.16 (Forgetful functor). Suppose two categories C_1 and C_2 are given, and the object X ∈ C_1 can be regarded as an object of C_2 by ignoring certain mathematical structures of X. Then a functor U : C_1 → C_2 which 'forgets' such mathematical structure is called a forgetful functor.
Now we are ready to introduce differentiations. Let us begin with the notion of a K-module differentiation.

Definition 2.17 (K-module differentiation). Consider a K-module U and a U-module Ω. Let F_1, F_2 ∈ U. A differentiation d is a morphism
$$ d : U \to \Omega $$
which obeys the rule
$$ d(F_1 F_2) = F_2 (dF_1) + F_1 (dF_2) . \tag{2.14} $$
Extending to K-algebras, K-module differentiations form a category:

Definition 2.18 (Category of K-algebra differentiations). Consider K-algebras U and U′ and the differentiations
$$ d : U \to \Omega \quad \text{and} \quad d' : U' \to \Omega' . $$
Denote by Diff(d, d′) the set of all pairs (ϕ̃, ϕ̂), where ϕ̃ ∈ Alg(U, U′) and ϕ̂ ∈ Hom_K(Ω, Ω′), such that
$$ d' \tilde\phi = \hat\phi \, d $$
and, if F ∈ U and w ∈ Ω, then
$$ \hat\phi(Fw) = (\tilde\phi F)\, \hat\phi w . $$
In what follows, 𝒟 stands for the category of differentiations of unitary commutative K-algebras, with the category morphisms defined above.
The next category of differentiations we shall introduce is the category of pointed differentiations.

Definition 2.19 (Category of pointed differentiations). Consider a pair (d, p), with the operation d : U → Ω being a differentiation and p ∈ Alg(U, K). Such a pair is said to be a pointed differentiation. The corresponding category 𝒫𝒟 can be introduced, such that the morphisms Diff(d, p; d′, p′) in 𝒫𝒟,
$$ (d, p) \to (d', p') , $$
are given by pairs (ϕ̃, ϕ̂) ∈ Diff(d, d′), such that
$$ p' \tilde\phi = p . $$
The category morphisms then define equivalences of differentiations.

Anticipating a forthcoming discussion, we notice that it is precisely this property of the morphisms, p′ϕ̃ = p, which guarantees the uniqueness of the initial point of a generalized path. A subcategory of pointed differentiations is generated if one imposes the constraint of surjectivity on the K-module differentiation.

Definition 2.20 (Surjective pointed differentiation). We call a pointed differentiation (d, p) surjective if d is surjective.
The last subcategory of pointed differentiations we need to define is the category of splitting pointed differentiations. To define these differentiations we also need to introduce the notion of a short exact sequence. In general, ker F stands for the kernel of a map F : A_1 → A_2, that is, the subset of A_1 which maps under F into the zero of A_2. In other words, the image of ker F under F is zero in A_2. Then we define:

Definition 2.21 (Exact sequence). Consider a sequence of homomorphisms of K-modules
$$ U_1 \xrightarrow{F_0} U_2 \xrightarrow{F_1} U_3 . $$
It is said to be exact at U_2 if Im F_0 = ker F_1. If a sequence
$$ U_0 \xrightarrow{F_0} U_1 \xrightarrow{F_1} U_2 \to \cdots \xrightarrow{F_{n-1}} U_n $$
is exact at each term except the first and the last one, then we speak about an exact sequence. A five-term exact sequence
$$ 0 \to U_1 \to U_2 \to U_3 \to 0 $$
is short exact.
Using short exact sequences we can finally define the splitting pointed differentiations.

Definition 2.22 (Splitting pointed differentiation). A pointed differentiation (d, p) is called splitting if for a K-module U
$$ U = \ker d \oplus \ker p . \tag{2.15} $$
That is, (d, p) is splitting if and only if ker d ∩ ker p = 0. (d, p) is splitting and surjective if and only if
$$ 0 \to K \xrightarrow{u} U \xrightarrow{d} \Omega \to 0 \tag{2.16} $$
is a short exact sequence, where u : K → U is the unit map in the algebra U.
In what follows, 𝒮𝒫𝒟 stands for the subcategory of splitting surjective differentiations of the category 𝒫𝒟.

Consider an application of the above differentiations to the specific case of shuffle algebras. Applying the K-module differentiation d from Definition 2.17 with U = Sh(Ω) and K-module Ω yields
$$ Sh(\Omega) \xrightarrow{d} Sh(\Omega) \otimes \Omega , $$
where we treat Ω as an Sh(Ω)-module with the properties, for ω_i ∈ Sh(Ω) and w_i ∈ Ω:
1. 1_{Sh(Ω)} w = w;
2. ω(w_1 +_Ω w_2) = ωw_1 +_Ω ωw_2;
3. (ω_1 +_{Sh(Ω)} ω_2)w = ω_1 w +_Ω ω_2 w;
4. (ω_1 ∙ ω_2)w = ω_1 w ⋅_Ω ω_2 w.

Similarly we can consider the surjective differentiations:

Definition 2.23 (Surjective shuffle module differentiation). Suppose we have the K-module Sh(Ω) ⊗ Ω. Consider it as an Sh(Ω)-module, so that for u_1, u_2 ∈ Sh(Ω) and u_3 ∈ Ω we have
$$ u_1 \bullet (u_2 \otimes u_3) = (u_1 \bullet u_2) \otimes u_3 . $$
The surjective differentiation δ ∈ Hom(Sh(Ω), Sh(Ω) ⊗ Ω) is defined by
$$ \delta(u_1 u_3) = u_1 \otimes u_3 , \qquad \delta(1) = 0 . $$

To see that δ is a differentiation, it is instructive to first consider an example of a shuffle product of tensor products.

Example 2.24 (Shuffle product of tensor products). Consider u_1, u_2 ∈ T¹(Ω)
and w_1, w_2 ∈ T¹(Ω). We have
$$ (u_1 w_1) \bullet (u_2 w_2) = (u_1 \otimes w_1) \bullet (u_2 \otimes w_2) = u_1 w_1 u_2 w_2 + u_1 u_2 w_1 w_2 + u_1 u_2 w_2 w_1 + u_2 u_1 w_2 w_1 + u_2 w_2 u_1 w_1 + u_2 u_1 w_1 w_2 = (u_1 w_1 \bullet u_2) w_2 + (u_2 w_2 \bullet u_1) w_1 . \tag{2.17} $$
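The computation in Example 2.24, and the Leibniz property of δ it encodes, can be replayed mechanically. The sketch below (our illustration; words are tuples and formal sums are `Counter`s) checks eq. (2.17) and the rule δ(a ∙ b) = a ∙ δ(b) + b ∙ δ(a):

```python
from itertools import combinations
from collections import Counter

def shuffle(u, v):
    """Shuffle product of two words as a Counter of words."""
    k, n = len(u), len(u) + len(v)
    out = Counter()
    for slots in combinations(range(n), k):
        w, iu, iv = [], 0, 0
        for i in range(n):
            if i in slots: w.append(u[iu]); iu += 1
            else:          w.append(v[iv]); iv += 1
        out[tuple(w)] += 1
    return out

def delta(word):
    """delta of Definition 2.23: u1 u3 -> u1 (x) u3, and delta(1) = 0."""
    return Counter() if not word else Counter({(word[:-1], word[-1:]): 1})

def act(u, tens):
    """Module action u . (u2 (x) u3) = (u . u2) (x) u3, extended linearly."""
    out = Counter()
    for (u2, u3), c in tens.items():
        for w, m in shuffle(u, u2).items():
            out[(w, u3)] += c * m
    return out

a, b = ('u1', 'w1'), ('u2', 'w2')

# eq. (2.17): each of the six shuffles ends either on w2 or on w1
ends = Counter(w[-1] for w in shuffle(a, b).elements())
print(ends)

# Leibniz rule: delta(a . b) = a . delta(b) + b . delta(a)
lhs = Counter()
for w, m in shuffle(a, b).items():
    for pair, c in delta(w).items():
        lhs[pair] += m * c
rhs = act(a, delta(b)) + act(b, delta(a))
print(lhs == rhs)
```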
Therefore, we obtain:

Theorem 2.25. δ is a differentiation.

Proof. We have
$$ (u_1 w_1) \bullet (u_2 w_2) = (u_1 w_1 \bullet u_2) w_2 + (u_2 w_2 \bullet u_1) w_1 \tag{2.18} $$
by the properties of the shuffle multiplication as discussed above. Applying δ one gets
$$ \delta[(u_1 w_1) \bullet (u_2 w_2)] = (u_1 w_1 \bullet u_2) \otimes w_2 + (u_2 w_2 \bullet u_1) \otimes w_1 = (u_1 w_1) \bullet (u_2 \otimes w_2) + (u_2 w_2) \bullet (u_1 \otimes w_1) = (u_1 w_1) \bullet \delta(u_2 w_2) + (u_2 w_2) \bullet \delta(u_1 w_1) . \tag{2.19} $$
Thus δ obeys the Leibniz rule, showing that δ is indeed a differentiation, according to (2.17).

For splitting pointed differentiations we have the following lemma:

Lemma 2.26 (Splitting pointed differentiation homomorphisms). Suppose we have a splitting pointed differentiation (d, p) (Definition 2.22). A commutative diagram of K-module morphisms is shown in Figure 2.2 (the double line between the K's indicates that their values are equal). Assuming that
$$ \tilde\theta(1) = 1 $$
Figure 2.2: Splitting pointed differential.
and, for all u_1 ∈ Sh(Ω), u_2 ∈ Ω,
$$ \hat\theta(u_1 \otimes u_2) = (\tilde\theta u_1)\, \hat\theta(1 \otimes u_2) , $$
we obtain
$$ (\tilde\theta, \hat\theta) \in \mathrm{Diff}(\delta, \epsilon; d, p) . $$

Suppose that θ ∈ Hom(Ω, Ω′) generates a homomorphism between the tensor algebras T(Ω) and T(Ω′). Denote this homomorphism by T(θ). This algebra morphism is shuffle-product preserving, so that we can write it as Sh(θ). The following special functor can be defined:

Definition 2.27 (Covariant functor to 𝒮𝒫𝒟). Let Δ_F denote the covariant functor (Definition 2.15) to the category of splitting pointed differentiations (Definition 2.22) on the category of K-modules, which exhibits the properties
$$ \Delta_F(\Omega) = (\delta, \epsilon) = (\delta(\Omega), \epsilon(\Omega)) $$
and, for θ ∈ Hom(Ω, Ω′),
$$ \Delta_F(\theta) = (Sh(\theta), Sh(\theta) \otimes \theta) . $$
A diagrammatic representation of this definition is given in Figure 2.3.
Figure 2.3: Covariant functor to the category of 𝒮𝒫𝒟.
A theorem holds which states that the morphism (θ̃, θ̂) is a unique homomorphism in the category of splitting pointed differentiations:
Theorem 2.28 (Uniqueness). Suppose we have a K-module Ω and a splitting surjective pointed differentiation (d′, p′), where d′ : U′ → Ω′. Then, for θ ∈ Hom(Ω, Ω′), there exists a unique pair
$$ (\tilde\theta, \hat\theta) \in \mathrm{Diff}(\delta, \epsilon; d', p') $$
such that for all w ∈ Ω
$$ \hat\theta(1 \otimes w) = \theta w . $$

This theorem demonstrates that Δ_F is adjoint to the forgetful functor (Definition 2.16) to the category of K-modules on the category of splitting pointed differentiations, which assigns to each (d, p) the K-module Ω and to each
$$ (\tilde\phi, \hat\phi) \in \mathrm{Diff}(d, p; d', p') $$
the morphism ϕ̂ of K-modules.

Continuing along the same line and taking into account that Sh(Ω) ⊗ Sh(Ω) ⊗ Ω is a Sh(Ω) ⊗ Sh(Ω)-module, and that ϵ ⊗ ϵ ∈ Alg(Sh(Ω) ⊗ Sh(Ω), K), we have the following properties:
1. The morphism of K-modules
$$ 1 \otimes \delta : Sh(\Omega) \otimes Sh(\Omega) \to Sh(\Omega) \otimes Sh(\Omega) \otimes \Omega $$
is a differentiation.
2. The pair (1 ⊗ δ, ϵ ⊗ ϵ) is a splitting surjective pointed differentiation.

We end the discussion of shuffle algebras and their differentiations by stating the following property of the L-operator, Definition 2.12, with respect to the category of differentiations defined on the shuffle algebras:

Proposition 2.29. (L̃_α, L̂_α) is an equivalence in the category of differentiations 𝒟. That is,
$$ \hat L_\alpha \, \delta = \delta \, \tilde L_\alpha . $$
2.1.2 Chen's algebraic paths

In this part we introduce the d-paths and d-loops, generalizing the intuitive notion of paths and loops, as originally proposed by Chen.

2.1.2.1 Algebraic paths
The whole concept of d-paths is schematically visualized in Figure 2.4. This diagram merges the properties of the shuffle product and algebra on the K-module of one-forms on a manifold M into a unified structure, allowing the construction of algebraic or generalized paths on M.
Figure 2.4: Path diagram.
Now we shall discuss the properties of the maps shown in Figure 2.4 and the way they generate the d-paths. The entire structure is built starting from a given pointed differentiation (d, p), which is mapped to the pointed differentiation (δ, ϵ) by the equivalence of differentiations we introduced in Definition 2.19. The δ in the figure is the differentiation introduced in Definition 2.23, and ϵ is the counit from the coalgebra structure on Sh(Ω). Anticipating that the notion of an ideal will play an important role, let us give the definition of an ideal and review some of its properties related to kernels of maps.

Definition 2.30 (Ring ideal). Consider a ring K. Its subset A ⊂ K is called an ideal if:
1. A is a subgroup of K under the addition;
2. for a ∈ A and k ∈ K one has ka ∈ A.
We shall denote an ideal of Sh(Ω) by I = I(d, p). In the diagram of Figure 2.4, (δ_1, ϵ_1) stands for the pointed differentiation induced by (δ, ϵ) after dividing out this ideal. χ̃_0 and χ̂_0 are the K-module morphisms as defined in Lemma 2.26, such that χ̃_0 F = pF + dF and χ̂_0 w = 1 ⊗ w. Given the maps in the above diagram, a d-path from p can be defined as a K-algebra morphism Sh(Ω) → K, factorizable through Sh(Ω)/I. We can also consider sums and products of ideals.

Definition 2.31. Consider two ideals A_1 and A_2 in K. Then, for a_1 ∈ A_1, a_2 ∈ A_2, the set {a_1 + a_2} is an ideal, written as A_1 + A_2. Similarly, the set {a_1 a_2} is an ideal A_1 A_2.
Note that A1 A2 ⊂ A1 ∩ A2 . The following property of ideals is important for our purposes.
Proposition 2.32. The kernel of a homomorphism F : A_1 → A_2 is an ideal in A_1.

To prove this powerful statement we first need the following theorem:

Theorem 2.33 (Kernel is a subring). Consider two rings K_1 and K_2 with the binary operations {+_{1,2}, ∘_{1,2}}. Suppose that we have a ring homomorphism Φ : K_1 → K_2. Then the kernel of Φ is a subring in K_1.

Proof. With respect to addition a ring homomorphism is a group homomorphism, and the kernel is a subgroup, ker Φ ≤ K_1, where ≤ denotes a subgroup. Let now x_1, x_2 ∈ ker Φ; then
$$ \Phi(x_1 \circ_1 x_2) = \Phi(x_1) \circ_2 \Phi(x_2) = 0_{K_2} \circ_2 0_{K_2} = 0_{K_2} . $$
Therefore, x_1 ∘_1 x_2 ∈ ker Φ, so that the conditions for a subring are fulfilled.

Now we are in a position to show that the kernel of a homomorphism is indeed an ideal.

Theorem 2.34 (Kernel is an ideal). Let K_{1,2} again be rings with the corresponding binary operations. Consider a ring homomorphism Φ : K_1 → K_2. Then the kernel of Φ is an ideal in K_1.

Proof. By Theorem 2.33, ker Φ is a subring of K_1. Consider x_1 ∈ ker Φ, such that Φ(x_1) = 0_{K_2}.
Suppose now that x_2 ∈ K_1. Then, given that x_1 ∈ ker Φ,
$$ \Phi(x_2 \circ_1 x_1) = \Phi(x_2) \circ_2 \Phi(x_1) = \Phi(x_2) \circ_2 0_{K_2} = 0_{K_2} . $$
Taking into account that (now obviously) also Φ(x_1 ∘_1 x_2) = 0_{K_2}, the theorem is proven.

With the aid of ideals we can introduce d-closed differentiations.

Definition 2.35 (d-closed differentiation). Consider a differentiation d : U → Ω. An ideal J of U is called d-closed if dJ is a U-submodule of Ω and if JΩ ⊂ dJ. If J is a d-closed ideal for U, then d induces a differentiation d_J which maps
$$ U/J \to \Omega/dJ . \tag{2.20} $$
This definition calls for a more detailed explanation. We take U to be a K-module, so that U supplied with addition, (U, +), is an Abelian group, and we can use elements of K as scalars in the multiplication with elements of U. Expressing this multiplication as a map, we can write K × U → U. We similarly take (Ω, +) to be an Abelian group and a U-module, such that the elements of U now act as scalars. The differentiation d then maps the ideal J to a subset dJ of Ω, which is moreover a U-(sub)module: U × dJ → Ω. The term 'closed' refers to the fact that JΩ ⊂ dJ in Ω, where the elements of J are multiplied by the elements of Ω.

As discussed before, kernels of homomorphisms generate ideals. The proposition below explains how a d-closed ideal is related to the kernel of a homomorphism between pointed differentiations.
Proposition 2.36. Consider the pointed differentiations (d, p) and (d′, p′), of which (d, p) is surjective and (d′, p′) splitting. Hence, if
$$ (\tilde\Phi, \hat\Phi) \in \mathrm{Diff}(d, p; d', p') , $$
then ker Φ̃ is a d-closed ideal of U. Therefore, we see that J from Definition 2.35 becomes J → ker Φ̃.

2.1.2.2 Chen iterated integrals as extension of line integrals
Given that the concept of ideals is introduced and their relation to kernels of homomorphisms is clarified, we are in a position to study an ideal in the shuffle algebra. To this end, we define an extension of the notion of line integrals, the so-called Chen iterated integrals.

Definition 2.37 (Chen iterated integrals). Consider a line integral along the path γ(t) from the point p ∈ γ to q ∈ γ:
$$ I_i[\gamma] = \int_p^q dx_i(t) = x_i(q) - x_i(p) . \tag{2.21} $$
Being generalized by recursion for n ≥ 2, it gives
$$ I_{i_1 \cdots i_n}[\gamma] = \int_p^q dx_{i_n}(t) \, I_{i_1 \cdots i_{n-1}}(\gamma^t) , \tag{2.22} $$
where γ^t represents the part of the path γ for which the path parameter runs from 0 to t (or, equivalently, the coordinates along the path vary from the point p to the point γ(t)).
The above definition depends, however, on the coordinates, which is not desirable in a manifold setting with coordinate-independent equations. One can give an alternative definition that is free of explicit coordinate dependence.

Definition 2.38 (Chen iterated integrals without coordinates). Consider a smooth n-dimensional manifold M and the set 𝒫ℳ of piecewise-smooth paths
$$ \gamma : I \to M , $$
where I = [0, 1], and real one-forms ω_1, ω_2, ..., ω_n ∈ ⋀M. Using the notation
$$ \omega_1 \otimes \omega_2 \otimes \cdots \otimes \omega_n = \omega_1 \cdots \omega_n , \qquad \omega_k(t) \equiv \omega_k[\gamma(t)] \cdot \dot\gamma(t) , $$
and
$$ \gamma^t : I \to M , \qquad \gamma^t(t') \equiv \gamma(t t') , $$
we can define the iterated line integrals by induction:
$$ \int_\gamma \omega_1 = \int_0^1 \omega_1(t) \, dt , $$
$$ \int_\gamma \omega_1 \omega_2 = \int_0^1 \Big[ \int_0^t \omega_1(t_1) \, dt_1 \Big] \, \omega_2(t) \, dt , $$
and, generically,
$$ \int_\gamma \omega_1 \omega_2 \cdots \omega_n = \int_0^1 \Big[ \int_{\gamma^t} \omega_1 \cdots \omega_{n-1} \Big] \, \omega_n(t) \, dt . \tag{2.23} $$
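The recursion (2.23) translates directly into numerics. The sketch below (our own illustration, with an arbitrarily chosen quadrature; the one-forms enter only through their pullbacks ω_k(t)) evaluates the nested integrals with a midpoint rule:

```python
def chen(pullbacks, steps=4000):
    """Iterated line integral of eq. (2.23).

    pullbacks[k] is the pulled-back one-form omega_k(t); vals[k] tracks
    the k-fold inner integral over gamma^t as t sweeps [0, 1], and the
    empty word integrates to 1."""
    n, h = len(pullbacks), 1.0 / steps
    vals = [1.0] + [0.0] * n
    for i in range(steps):
        t = (i + 0.5) * h
        for k in range(n, 0, -1):     # update outermost integrals first
            vals[k] += vals[k - 1] * pullbacks[k - 1](t) * h
    return vals[n]

# Illustrative path gamma(t) = (t, t^2) in R^2:
dx = lambda t: 1.0          # pullback of dx along gamma
dy = lambda t: 2.0 * t      # pullback of dy along gamma

print(chen([dx]))           # analytically x(1) - x(0) = 1
print(chen([dx, dy]))       # analytically int_0^1 t * 2t dt = 2/3
print(chen([dy, dx]))       # analytically int_0^1 t^2 dt = 1/3
```

Note that the two length-two words integrate to different values: iterated integrals see the order of the letters, which is exactly what the shuffle algebra keeps track of.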
The following property of the Chen iterated integrals helps us to derive the form of the elements of an ideal of the shuffle algebra.

Proposition 2.39 (Chen iterated integrals preserve multiplication). Consider again a piecewise linear path γ in the manifold M and the set of one-forms Ω on M. If we interpret γ as the map
$$ \gamma : T(\Omega) \to \mathbb{R} , \qquad \omega_1 \omega_2 \cdots \omega_n \mapsto \int_\gamma \omega_1 \cdots \omega_n , $$
then this map preserves multiplication:
$$ \int_\gamma \omega_1 \cdots \omega_k \int_\gamma \omega_{k+1} \cdots \omega_{k+l} = \int_\gamma \omega_1 \cdots \omega_k \bullet \omega_{k+1} \cdots \omega_{k+l} . \tag{2.24} $$
This means that the map γ is a homomorphism.
Exercise 2.40. Prove Proposition 2.39. Hint: use induction.
This proposition can be straightforwardly extended to one-forms taking values in ℂ or in GL(n, ℂ). Then the shuffle multiplication is replaced by the matrix multiplication, where the matrix entries get multiplied by means of the shuffle multiplication. It is worth noting that the extension of Proposition 2.39 to Lie-algebra valued one-forms allows one to use Chen iterated integrals in the principal fiber bundle setting when solving the parallel transport equation in gauge theory. Let us give some simple examples of how the above proposition actually works.

Example 2.41.
$$ \int_\gamma \omega_1 \int_\gamma \omega_2 = \int_\gamma \omega_1 \bullet \omega_2 = \int_\gamma (\omega_1 \omega_2 + \omega_2 \omega_1) , \tag{2.25} $$
$$ \int_\gamma \omega_1 \int_\gamma \omega_2 \omega_3 = \int_\gamma \omega_1 \bullet \omega_2 \omega_3 = \int_\gamma (\omega_1 \omega_2 \omega_3 + \omega_2 \omega_1 \omega_3 + \omega_2 \omega_3 \omega_1) . \tag{2.26} $$
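Proposition 2.39 and Example 2.41 can be probed numerically on a concrete path. In the sketch below (our illustration; the path gamma(t) = (t, t²) and the midpoint quadrature are arbitrary choices) the words are built from the pullbacks of dx and dy:

```python
def chen(pullbacks, steps=4000):
    """Iterated integral along gamma(t) = (t, t^2), midpoint rule;
    pullbacks[k] is the pulled-back one-form omega_k(t)."""
    n, h = len(pullbacks), 1.0 / steps
    vals = [1.0] + [0.0] * n
    for i in range(steps):
        t = (i + 0.5) * h
        for k in range(n, 0, -1):
            vals[k] += vals[k - 1] * pullbacks[k - 1](t) * h
    return vals[n]

dx = lambda t: 1.0
dy = lambda t: 2.0 * t

# eq. (2.25): int w1 * int w2 = int (w1 w2 + w2 w1)
lhs = chen([dx]) * chen([dy])
rhs = chen([dx, dy]) + chen([dy, dx])
print(abs(lhs - rhs))     # ~0 up to the quadrature error

# eq. (2.26): int w1 * int w2 w3 = int (w1 w2 w3 + w2 w1 w3 + w2 w3 w1)
lhs3 = chen([dx]) * chen([dy, dy])
rhs3 = chen([dx, dy, dy]) + chen([dy, dx, dy]) + chen([dy, dy, dx])
print(abs(lhs3 - rhs3))
```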
Suppose now that γ is a map as defined in Proposition 2.39 and F ∈ U. We obtain
$$ F[\gamma(t)] = F[\gamma(0)] + \int_0^t dF = pF + \int_0^t dF , \tag{2.27} $$
$$ \int_\gamma F\omega_1 = \int_\gamma \Big( pF + \int_0^t dF \Big) \omega_1 = \int_\gamma dF\, \omega_1 + pF \int_\gamma \omega_1 , \tag{2.28} $$
$$ \int_\gamma \omega_1 (F\omega_2) = \int_0^1 \Big( \int_0^t \omega_1 \int_0^t dF \Big) \, \omega_2(t) \, dt + pF \int_0^1 \Big( \int_0^t \omega_1 \Big) \, \omega_2(t) \, dt \tag{2.29} $$
$$ = \int_\gamma (\omega_1 \bullet dF)\, \omega_2 + pF \int_\gamma \omega_1 \omega_2 , \tag{2.30} $$
where we defined pF ≡ F[γ(0)]. Therefore, the general expression reads
$$ \int_\gamma \omega_1 \cdots \omega_{i-1} (F\omega_i) \omega_{i+1} \cdots \omega_n = \int_\gamma \big[ (\omega_1 \cdots \omega_{i-1}) \bullet dF \big] \, \omega_i \cdots \omega_n + pF \int_\gamma \omega_1 \cdots \omega_n , \tag{2.31} $$
where the integrals are Chen iterated integrals as defined in Definition 2.38. This expression can again be extended to other one-forms. The following proposition holds:
Proposition 2.42. For all F ∈ C^∞(M) and ω_1, ..., ω_n ∈ ⋀M, one has
$$ \int_\gamma dF \cdot \omega_1 \cdots \omega_n = \int_\gamma (F \cdot \omega_1)\, \omega_2 \cdots \omega_n - F[\gamma(0)] \cdot \int_\gamma \omega_1 \cdots \omega_n , $$
$$ \int_\gamma \omega_1 \cdots \omega_n \cdot dF = \Big( \int_\gamma \omega_1 \cdots \omega_n \Big) \cdot F[\gamma(1)] - \int_\gamma \omega_1 \cdots \omega_{n-1} \cdot (\omega_n \cdot F) , $$
$$ \int_\gamma \omega_1 \cdots \omega_{i-1} \cdot (dF) \cdot \omega_{i+1} \cdots \omega_n = \int_\gamma \omega_1 \cdots \omega_{i-1} \cdot (F \cdot \omega_{i+1}) \cdot \omega_{i+2} \cdots \omega_n - \int_\gamma \omega_1 \cdots (\omega_{i-1} \cdot F) \cdot \omega_i \cdots \omega_n , $$
$$ \int_\gamma \omega_1 \cdots \omega_{i-1} \cdot (F \cdot \omega_i) \cdot \omega_{i+1} \cdots \omega_n = F[\gamma(0)] \cdot \int_\gamma \omega_1 \cdots \omega_n + \int_\gamma \big[ (\omega_1 \cdots \omega_{i-1}) \bullet dF \big] \cdot \omega_i \cdots \omega_n . $$
In Section 2.1.3 we shall discuss these integrals and their further properties in detail, but for the moment the above is sufficient to construct an ideal in the shuffle algebra. We learned from Proposition 2.39 that the map γ : T(Ω) → ℝ is a homomorphism. If one brings all the terms in equation (2.31) to the left-hand side, then the right-hand side becomes zero. In this way the new left-hand side becomes an element of the kernel of the homomorphism γ. From Theorem 2.34 we know that this operation generates an ideal on T(Ω), and hence also on the shuffle algebra Sh(Ω), according to Proposition 2.39. This suggests that
$$ I(d, p) \equiv u_1 (Fw) u_2 - (u_1 \bullet dF)\, w u_2 - (pF)\, u_1 w u_2 \tag{2.32} $$
is an ideal in Sh(Ω), with p ∈ Alg(U, K)
and u_1, u_2 ∈ T(Ω), w ∈ T¹(Ω), F ∈ U.

We still need to prove that this is actually an ideal of the K-algebra Sh(Ω).

Lemma 2.43. The K-submodule I(d, p) is an ideal of the K-algebra Sh(Ω).

Proof. Given u_1, u_2 ∈ T(Ω), w ∈ T¹(Ω), F ∈ U and w_1, ..., w_n ∈ T¹(Ω), we set
$$ W_i = w_1 \cdots w_{n-i} , \qquad W^i = w_{n-i+1} \cdots w_n , $$
and W_n = W^0 = 1. Then:
$$ (w_1 \cdots w_n) \bullet (u_1 (Fw) u_2) = \sum_i (W_i \bullet u_1)(Fw)(W^i \bullet u_2) , $$
$$ (w_1 \cdots w_n) \bullet \big[ (u_1 \bullet dF) w u_2 \big] = \sum_i (W_i \bullet u_1 \bullet dF)\, w\, (W^i \bullet u_2) , $$
$$ (w_1 \cdots w_n) \bullet (u_1 w u_2) = \sum_i (W_i \bullet u_1)\, w\, (W^i \bullet u_2) , $$
as is clear from the definition of the shuffle product. This allows us to write
$$ F_p(F, w, u_1, u_2) = u_1 (Fw) u_2 - (u_1 \bullet dF)\, w u_2 - (pF)\, u_1 w u_2 $$
and
$$ (w_1 \cdots w_n) \bullet F_p(F, w, u_1, u_2) = \sum_i F_p(F, w, W_i \bullet u_1, W^i \bullet u_2) \in I , \tag{2.33} $$
which by Definition 2.30 turns it into an ideal. Note that 1 ∉ I, so that the factor algebra Sh(Ω)/I is again a commutative unitary K-algebra.

Having the above ideal of Sh(Ω), we are now ready to introduce Chen's d-paths.

Definition 2.44 (d-path). A d-path γ from p is an element of Alg(Sh(Ω), K), such that γI = {0}.
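Definition 2.44 can be made concrete with the numerics used above: the Chen-integral functional of a fixed path should annihilate the ideal generator (2.32), taken here with u_1 = u_2 = 1, which is just relation (2.28). A hedged sketch, with an arbitrarily chosen path gamma(t) = (t, t²) and function F:

```python
def chen(pullbacks, steps=4000):
    """Iterated integral along gamma(t) = (t, t^2), midpoint rule;
    each entry of pullbacks is a function t -> pulled-back value."""
    n, h = len(pullbacks), 1.0 / steps
    vals = [1.0] + [0.0] * n
    for i in range(steps):
        t = (i + 0.5) * h
        for k in range(n, 0, -1):
            vals[k] += vals[k - 1] * pullbacks[k - 1](t) * h
    return vals[n]

F  = lambda t: t * t + 1.0   # F(x, y) = x^2 + 1 restricted to gamma
dF = lambda t: 2.0 * t       # pullback of dF = 2x dx
dy = lambda t: 2.0 * t       # pullback of the one-form w = dy
pF = F(0.0)                  # F at the initial point gamma(0)

# gamma[Fw - (dF)w - (pF)w] should vanish, i.e. eq. (2.28):
Fw   = chen([lambda t: F(t) * dy(t)])
rest = chen([dF, dy]) + pF * chen([dy])
print(Fw, rest)              # both approach the analytic value 3/2
```

In other words, the concrete functional "integrate along this fixed path" kills every generator of I(d, p), which is exactly the algebraic condition γI = {0}.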
From the discussion on Chen integrals and their relation to the shuffle algebra's ideal, it is clear that if one takes such an integral over an element of the ideal I(d, p), it will return zero, which in addition makes the link with the ideal being the kernel of the map ∫_γ. This is consistent with the definition of a d-path γ, for which one needs to have
$$ \gamma[I(d, p)] = \int_\gamma I(d, p) = 0 . $$
In other words, Chen integrals can be considered as d-paths. We shall come back to this point in Section 2.1.3. Notice that the homomorphisms induced by the Chen iterated integrals also preserve the algebraic (shuffle) structure. Hence, they can also be considered as algebra morphisms, where we denote the resulting algebra by 𝒜_p. This leads us to the following remark, which will become relevant when introducing generalized loop space.

Remark 2.45. The kernel of the algebra map Sh(Ω) → 𝒜_p, when considering closed d-paths (i.e., loops), not only contains the ideal of the shuffle algebra, but also dC^∞(M), which we denote by ⟨dC⟩. This generates a new ideal in Sh(Ω) when considered on the space of closed paths at p:
$$ J_p = I_p + \langle dC \rangle , $$
such that for d-loops we have the isomorphism Sh(Ω)/J_p ≅ 𝒜_p.

The following proposition relates the d-closed property to the ideal.

Proposition 2.46 (Least δ-closed ideal). I is the least δ-closed ideal of Sh(Ω) which is contained in ker ϵ and, for F ∈ U, w ∈ Ω, contains all Fw − dF w − (pF)w.

Proof. We start by noting that the intersection of two δ-closed ideals is not necessarily δ-closed. It is easy to see that ϵI = 0 and that
$$ I \bullet (Sh(\Omega) \otimes \Omega) \subset (I \bullet Sh(\Omega)) \otimes \Omega \subset I \otimes \Omega \subset \delta I , \tag{2.34} $$
with δ as defined in Definition 2.23, next to
$$ Sh(\Omega) \bullet \delta I \subset \delta(Sh(\Omega) \bullet I) + I \bullet \delta Sh(\Omega) \subset \delta I . \tag{2.35} $$
We conclude that I is a δ-closed ideal of Sh(Ω). Consider now another δ-closed ideal I′ of Sh(Ω), which is contained in ker ϵ and itself contains all elements of the form F_p(F, w, 1, 1) = Fw − dF w − (pF)w. Then δI′ includes all
$$ u \bullet \delta F_p(F, w, 1, 1) = \delta F_p(F, w, u, 1) , \tag{2.36} $$
such that F_p(F, w, u, 1) ∈ I′, by virtue of I′ ⊂ ker ϵ. Taking into account that for n ≥ 1
$$ \delta F_p(F, w, u, w_1 \cdots w_n) = F_p(F, w, u, w_1 \cdots w_{n-1}) \otimes w_n \in I' \otimes w_n \subset \delta I' , $$
we obtain by induction
$$ F_p(F, w, u, w_1 \cdots w_n) \in I' . \tag{2.37} $$
We now introduce the following notation for the canonical morphisms:
$$ \delta_I = \delta_1 = \delta_1(d, p) : Sh(\Omega)/I \to (Sh(\Omega) \otimes \Omega)/\delta I , \qquad \tilde\rho : Sh(\Omega) \to Sh(\Omega)/I , \qquad \hat\rho : Sh(\Omega) \otimes \Omega \to (Sh(\Omega) \otimes \Omega)/\delta I . \tag{2.38} $$
By virtue of ϵI = 0, ϵ has a unique factorization through the ideal:
$$ \epsilon = \epsilon_1 \tilde\rho , \qquad \epsilon_1 \in \mathrm{Alg}(Sh(\Omega)/I, K) . $$
Obviously,
$$ \ker \delta_1 \cap \ker \epsilon_1 = 0 , $$
so that the pair (δ_1, ϵ_1) = (δ_1(d, p), ϵ_1(d, p)) is a splitting surjective pointed differentiation, and (ρ̃, ρ̂) ∈ Diff(δ, ϵ; δ_1, ϵ_1). Using the definition displayed in Figure 2.4 we find δχ̃_0 = χ̂_0 d. Note that in general this is not the case:
$$ (\tilde\chi_0, \hat\chi_0) \notin \mathrm{Diff}(\delta, \epsilon; \delta_1, \epsilon_1) . $$
This indicates that we need an extra condition, provided by the following theorem.

Theorem 2.47. Consider a splitting pointed differentiation (d′, p′) and let
$$ (\tilde\theta, \hat\theta) \in \mathrm{Diff}(\delta, \epsilon; d', p') . $$
We find that (θ̃χ̃_0, θ̂χ̂_0) ∈ Diff(d, p; d′, p′) if and only if
$$ I \subset \ker \tilde\theta . $$

From this theorem one can derive two corollaries that make the diagram in Figure 2.4, defining the mathematical construct for d-paths, consistent.

Corollary 2.48. From χ̃ = ρ̃χ̃_0 and χ̂ = ρ̂χ̂_0 it follows that (χ̃, χ̂) ∈ Diff(d, p; δ_1, ϵ_1).
Corollary 2.49. Given a splitting pointed differentiation (d′, p′) and
$$ (\tilde\theta, \hat\theta) \in \mathrm{Diff}(\delta, \epsilon; d', p') , $$
such that (θ̃χ̃_0, θ̂χ̂_0) ∈ Diff(d, p; d′, p′), we get that there is a unique
$$ (\tilde\Theta, \hat\Theta) \in \mathrm{Diff}(\delta_1, \epsilon_1; d', p') $$
with
$$ (\tilde\Theta \tilde\chi, \hat\Theta \hat\chi) = (\tilde\theta \tilde\chi_0, \hat\theta \hat\chi_0) . $$

Using the ideal I(d, p) of the shuffle algebra, any d-path γ which begins at the point p can be factorized through γ′ ∈ Alg(Sh(Ω)/I, K) as γ = γ′ρ̃. With the aid of this factorization we obtain
$$ q = \gamma \tilde\chi_0 = \gamma' \tilde\chi \in \mathrm{Alg}(U, K) . $$
We call p and q the initial and end (terminal) points of γ, with γ being the d-path from p to q. If γ is such a d-path from p to q, then
$$ \gamma(dF) = \gamma(\tilde\chi_0 F - pF) = qF - pF , $$
which follows from the factorization through the ideal I(d, p). The following proposition states that, under certain assumptions about the scalars in K, the initial point of the d-path is unique.

Proposition 2.50 (Unique initial point). Consider an integral domain K (that is, a commutative ring wherein the product of any nonzero elements is nonzero). The initial point of a d-path γ, provided that γ ≠ ϵ, is unique.
Proof. Consider γ to be a d-path from p as well as from p′. Since γ ≠ ϵ, there exist
w ∈ T1(Ω), w′ ∈ T(Ω)
for which γ(ww′) ≠ 0. If now F is an element of U, we get
γ[(Fw)w′] = γ(dF ww′) + (pF)γ(ww′) = γ(dF ww′) + (p′F)γ(ww′).
From this it follows that pF = p′F. We indeed have a unique initial point for the d-path γ. It might seem that the algebraic structures introduced above depend on the choice of the initial point of the d-path. The following lemma shows that this is not the case, i. e., the algebraic structure is preserved under translation of the path to another initial point. Lemma 2.51. Consider a d-path γ from p to q. Then the L-operator acts as
L̃γ I(d, p) = I(d, q)
(2.39)
Proof. We have learned from Proposition 2.29 that (L̃γ, L̂γ) is an equivalence in the category of differentiations 𝒟, such that L̃γ I(d, p) is indeed a δ-closed ideal of Sh(Ω). Hence,
L̃γ(Fw − dF w − (pF)w) = (γ ⊗ 1)Δ(Fw − dF w − (pF)w).
Then one gets
Δ(Fw) = Fw ⊗ 1 + 1 ⊗ Fw,
(γ ⊗ 1)Δ(Fw) = γ(Fw) + Fw,
Δ(dF w) = 1 ⊗ dF w + dF w ⊗ 1 + dF ⊗ w,
(γ ⊗ 1)Δ(dF w) = dF w + γ(dF w) + γ(dF)w,
Δ(pF w) = pF w ⊗ 1 + 1 ⊗ pF w,
(γ ⊗ 1)Δ(pF w) = (pF)γ(w) + pF w.
(2.40)
Summing all the above and taking into account that γ is a d-path, we get
L̃γ(Fw − dF w − (pF)w)
= γ(Fw) + Fw − dF w − γ(dF w) − γ(dF)w − (pF)γ(w) − (pF)w
= Fw − dF w − (pF)w + γ(Fw − dF w − (pF)w) − (qF)w + (pF)w
= Fw − dF w − (qF)w,
(2.41)
where we used γ(I) = 0, so that
I(d, q) ⊂ L̃γ I(d, p).
Similarly, for L̃γ⁻¹ we obtain
I(d, p) ⊂ L̃γ⁻¹ I(d, q),
so that
L̃γ I(d, p) ⊂ I(d, q).
This shows that L̃γ I(d, p) = I(d, q). The meaning of the L-operator is now clear: it is the operator, associated with a path γ from p to q, that translates the algebra ideal I(d, p) at p to the algebra ideal I(d, q) at q, the endpoint of the d-path γ. With Definition 2.10 for the product of two algebra homomorphisms γ1, γ2 ∈ Alg(Sh(Ω), K) we can introduce products of d-paths and inverses of d-paths. As this multiplication turned the algebra homomorphisms into a group, the same will also be true for d-paths. Theorem 2.52. Suppose we have γ1 a d-path from p to q and γ2 a d-path from q to q′. In this case, γ12 = γ1 γ2 is a d-path from p to q′, and γ1⁻¹ is a d-path from q to p.
Proof. Given that
γ12 I(d, p) = γ1 γ2 I(d, p) = γ2 L̃γ1 I(d, p) = γ2 I(d, q) = 0,
(2.42)
we see that γ12 = γ1 γ2 is a d-path from p. For F ∈ U,
γ12(dF) = (γ1 γ2)(dF) = γ1(dF) + γ2(dF) = q′F − pF.
(2.43)
That is, q′ is the endpoint of γ12. Next to this we also have
γ1⁻¹ I(d, q) = γ1⁻¹ L̃γ1 I(d, p) = (γ1 γ1⁻¹)I(d, p) = ϵ I(d, p) = 0
(2.44)
and
γ1⁻¹(dF) = −γ1(dF) = pF − qF.
(2.45)
Hence, γ1⁻¹ is a d-path from q to p.
Given that the d-paths form a group under the above multiplication, one is able to construct the group of generalized loops.
2.1.2.3 Connectedness
In the previous sections we have formally introduced Chen's generalization of the intuitive idea of paths in a given space. Similarly to the case of intuitive paths, we can now ask whether some space is connected with respect to these generalized paths. If this turns out to be true, we shall say that such spaces are d-connected, as compared to the path-connected ones. Definition 2.53 (d-connectedness). U is called d-connected if for arbitrary p and q, such that p, q ∈ Alg(U, K), there always is a d-path from p to q.
From topology we know that continuous maps transform path-connected spaces into path-connected spaces. The d-path counterpart of this statement is given by the following proposition:
Proposition 2.54 (Maps between d-connected spaces). Consider
(ϕ̃, ϕ̂) ∈ Diff(d, d′).
If U′ is d′-connected and if ϕ̃ generates a surjective map
Alg(U′, K) → Alg(U, K),
then U is d-connected.
Recall that the d-paths are defined by means of a differentiation (d, p) that returns the ideal I(d, p) on which the d-path vanishes. Not surprisingly, we see that if two points lie in a d-connected space, i. e., they can be connected by a d-path, their differentiations are equivalent. Proposition 2.55 (Equivalence of differentiations). Suppose that U is d-connected. For all p, q ∈ Alg(U, K) the differentiation operations δ1(d, p) and
δ1 (d, q)
are equivalent. Exercise 2.56. Although this statement seems trivial at first sight, why is it not? How can one interpret this equivalence in the principal fiber bundle context?
Similarly to usual topological spaces, we can define discrete points with respect to d-paths. Definition 2.57 (d-discrete point). A point p ∈ Alg(U, K) is a d-discrete point if there is no d-path γ ≠ ϵ starting at it.
Using these discrete points we can determine when a w ∈ Ω may be called trivial.
Definition 2.58 (P-triviality). Suppose that P ⊂ Alg(U, K) contains at least one d-nondiscrete point. Then w ∈ Ω is called P-trivial if, for all d-paths γ from a point p ∈ P and for all
u1, u2 ∈ T(Ω),
F ∈ U,
we have γ(u1(Fw)u2) = 0. Then F ∈ U is called P-trivial if dF is P-trivial and if Fw is P-trivial for all w ∈ Ω.
Exercise 2.59. Show that not every element of Ω is P-trivial and that, specifically, 1 ∈ U is such a non-P-trivial element.
Let UP stand for the quotient K-algebra of U over the ideal of the P-trivial elements of U, and ΩP stand for the quotient of the U-module Ω over the U-submodule of the P-trivial elements of Ω. Then ΩP is a UP-module. The differentiation d maps the ideal of the P-trivial elements of U into the submodule of the P-trivial elements of Ω and thus generates the following differentiation: Definition 2.60.
dP : UP → ΩP .
Consider the canonical homomorphisms π̃P ∈ Alg(U, UP) and π̂P ∈ Hom(Ω, ΩP).
Hence, we get πP = (π̃P, π̂P) ∈ Diff(d, dP). UP, ΩP and πP depend only on the d-nondiscrete points of P. An injective map
Alg(UP, K) → Alg(U, K)
is generated by the projection π̃P. Proposition 2.61. Consider an integral domain K. The set of all the d-nondiscrete points of P is contained in the image of the injective map
Alg(UP, K) → Alg(U, K).
Since we shall only be interested in the nontrivial elements, we can introduce reduced spaces that contain only such nontrivial elements. The purpose of this reduction will become clear when we return to the properties of Chen iterated integrals and their relation to d-paths in Section 2.1.3. Definition 2.62 (d-reduced). U is called d-reduced if in Alg(U, K) there exists at least one d-nondiscrete point and the only Alg(U, K)-trivial element of U is zero.
2.1.2.4 d-Loops
Before returning to Chen iterated integrals we comment a bit more on d-loops. Generalized loops, or d-loops, can be naturally defined as d-paths whose initial and end points coincide, but where one needs to complete the ideal with the set {dU}. This becomes clear when one considers Chen integrals as d-paths, since they return zero on this set, so that {dU} can indeed be added to the algebra ideal I(d, p).5 Definition 2.63 (d-loop). A d-loop from p is defined as a d-path which begins and ends at the same point p. Then {dU} stands for the ideal of Sh(Ω) generated by dU ⊂ T1(Ω). Therefore,
γ ∈ Alg(Sh(Ω), K)
5 See also Remark 2.45.
is a d-loop from p if and only if γ annihilates the ideal I(d, p) + {dU} of Sh(Ω).
In what follows we shall use the notation Shc(d, p) for the quotient K-algebra
Sh(Ω)/(I = I(d, p) + {dU}).
Notice that Shc(d, p) is commutative and unitary. There exists a canonical bijective map from the set Alg(Shc(d, p), K) to the set of d-loops. Using again the multiplication from Definition 2.10 and Theorem 2.52, it is easy to see that the d-loops also form a group. In Section 2.1.1.2 we have shown that Sh(Ω) is a Hopf algebra. The same is true for Shc: Theorem 2.64. Shc(d, p) is a Hopf K-algebra with a comultiplication Δc, a counit ϵc and an antipode Jc, generated by Δ, ϵ and J. Considering loops in topological spaces, one usually discusses the fundamental group. One of the nice properties of the fundamental group is that it is independent of the base point of the loops. In the case of d-loops we have similar properties, namely that the Hopf-algebra structure and the group structure of Shc are independent of the base point of the loops. The following proposition holds: Proposition 2.65. Suppose we have a d-path from p to q. Then the Hopf K-algebras Shc(d, p) and Shc(d, q) are isomorphic. It follows directly from this proposition that, for the same path, the group of d-loops from p is isomorphic to the group of d-loops from q. We have already introduced Chen's d-paths and d-loops as algebra morphisms. We also discussed some of their properties, emphasizing ideals of algebra morphisms. The shuffle algebra ideal was constructed by using Chen's generalization of line integrals. We have presented some of the properties of Chen iterated integrals that will be used for introducing the group of generalized loops.
2.1.3 Chen iterated integrals
2.1.3.1 d-loops and Chen iterated integrals
Let us discuss the relationship between Chen's integrals and the d-loops. From Remark 2.45 we learn that the integral algebra 𝒜p is isomorphic to the algebra Shc(d, p).
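The shuffle-algebra structure behind this isomorphism can be made concrete numerically: the product of two Chen iterated integrals expands as a sum over all shuffles of their words. The following sketch is our own illustration, not taken from the book; the helper names (`shuffles`, `chen`) and the convention that ω1 is integrated at the earliest parameter value are assumptions.

```python
import numpy as np

def shuffles(u, v):
    """All interleavings of the words u and v preserving their internal order."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [u[:1] + w for w in shuffles(u[1:], v)] + \
           [v[:1] + w for w in shuffles(u, v[1:])]

def chen(word, t):
    """Iterated integral of one-forms f(s) ds along the grid t (trapezoid rule)."""
    acc = np.ones_like(t)
    for f in word:
        vals = f(t) * acc
        steps = 0.5 * (vals[1:] + vals[:-1]) * np.diff(t)
        acc = np.concatenate(([0.0], np.cumsum(steps)))
    return acc[-1]

t = np.linspace(0.0, 1.0, 4001)
f1, f2, f3 = np.sin, np.cos, np.exp

# shuffle relation: I[w1] * I[w2] = sum of I over all shuffles of w1 and w2
lhs = chen((f1,), t) * chen((f2, f3), t)
rhs = sum(chen(w, t) for w in shuffles((f1,), (f2, f3)))
assert abs(lhs - rhs) < 1e-6
```

This is exactly the property that makes the map from the shuffle algebra to the integral algebra an algebra homomorphism.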
A d-loop γ is then considered as an algebra morphism in Alg(Sh(Ω), K) that vanishes on the ideal I(d, p) + {dU}. On the other hand, this ideal is also, by definition, an ideal in the algebra of Chen iterated loop integrals 𝒜p. The isomorphism of both algebras then enables one to identify a d-loop with an element of 𝒜p∗, the dual space of 𝒜p formed by the real (complex, GL(n, ℂ)) valued linear functionals on 𝒜p, inducing the identification
Alg(Shc(d, p), K) ∋ γ → ∮_γ ∈ 𝒜p∗ .
(2.46)
This property, in combination with its relevance to the solution of the parallel transport equation in a principal fiber bundle (see Section 2.4), is the reason that we are interested in the Chen integrals. In the principal fiber bundle setting we shall assume the one-forms, used in the functionals X^{ω1⋯ωn} (see equation (2.52)), to be Lie algebra-valued. In other words, we shall assume ωi ∈ ⋀ M ⊗ gl(g), where gl is a matrix representation (i. e., an element of GL(n, ℂ)) of the Lie algebra g, which explains the presence of ωi ∈ ⋀ M ⊗ GL(n, ℂ) in many of the previous and following definitions and properties of Chen iterated integrals.
2.1.3.2 Chen iterated integrals: properties
In Section 2.1.2 we introduced Chen iterated integrals (Definitions 2.37 and 2.38) and discussed some of their properties. Here we extend the list of properties of these integrals, which will be relevant for the construction of the generalized loop space. Let us start by answering several elementary questions concerning the behavior of the Chen integrals. Exercise 2.66. What is the behavior of the Chen integrals if we take into account intermediate points along the path?
This question is answered by the following lemma: Lemma 2.67 (Intermediate points). Suppose we have the three points
p ≤ c ≤ q
and the line integrations along the paths γ^c and γ_c defined as
γ^c ∼ ∫ from p to c and γ_c ∼ ∫ from c to q.
Then we find that
I_{i1⋯in}[γ] = I_{i1⋯in}[γ^c] + I_{i1⋯in−1}[γ^c] I_{in}[γ_c] + ⋅ ⋅ ⋅ + I_{i1⋯ik}[γ^c] I_{ik+1⋯in}[γ_c] + ⋅ ⋅ ⋅ + I_{i1⋯in}[γ_c].
(2.47)
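Equation (2.47) can be checked numerically for a word of length two. The sketch below is our own construction, not taken from the book: grid-based trapezoid integration, with the convention that ω1 is integrated at the earliest parameter value.

```python
import numpy as np

def chen(word, t):
    """Iterated integral along the grid t, earliest one-form first (trapezoid rule)."""
    acc = np.ones_like(t)
    for f in word:
        vals = f(t) * acc
        steps = 0.5 * (vals[1:] + vals[:-1]) * np.diff(t)
        acc = np.concatenate(([0.0], np.cumsum(steps)))
    return acc[-1]

f1, f2 = np.sin, np.cos
full  = np.linspace(0.0, 1.0, 4001)   # the whole path gamma, from p to q
first = np.linspace(0.0, 0.4, 4001)   # gamma^c : from p to the intermediate point c
last  = np.linspace(0.4, 1.0, 4001)   # gamma_c : from c to q

# I_{12}[gamma] = I_{12}[gamma^c] + I_1[gamma^c] I_2[gamma_c] + I_{12}[gamma_c]
lhs = chen((f1, f2), full)
rhs = chen((f1, f2), first) + chen((f1,), first) * chen((f2,), last) + chen((f1, f2), last)
assert abs(lhs - rhs) < 1e-6
```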
Notice that these integrals, as extensions of line integrals, are reparameterization invariant if the reparameterization preserves the orientation. Proposition 2.68 (Reparameterization).
∫_γ ω1 ⋅ ⋅ ⋅ ωn
is invariant under orientation-preserving reparameterizations. Having this property in mind, we shall investigate how the integrals behave when we combine two paths γ1 and γ2, with the endpoint of γ1 being the starting point of γ2, which we denote by c. Combining the paths we create the path γ12 = γ1 γ2, where γ2 goes after γ1. Applying Lemma 2.67 to the new path γ12 with the intermediate point c, we immediately find out how to deal with combined paths: Lemma 2.69 (Combining paths).
I_{i1⋯in}[γ12] = I_{i1⋯in}[γ1] + I_{i1⋯in−1}[γ1] I_{in}[γ2] + ⋅ ⋅ ⋅ + I_{i1⋯ik}[γ1] I_{ik+1⋯in}[γ2] + ⋅ ⋅ ⋅ + I_{i1⋯in}[γ2]
(2.48)
or, in the notation of Definition 2.38: Proposition 2.70 (Composition of paths). Given γ1, γ2 ∈ 𝒫ℳ, the space of paths in the real smooth manifold M, i. e., γ1, γ2 : [0, 1] → M with γ1(1) = γ2(0), we can compose the paths using equation (2.47).
Let ω1, . . . , ωn ∈ ⋀ M and for n = 0 it is assumed that
∫_γ ω1 ⋅ ⋅ ⋅ ωn = 1.
Then, under composition of the paths, the Chen integrals change in the following way (where we also introduce the notion of an inverse path):
∫_{γ1⋅γ2} ω1 ⋅ ⋅ ⋅ ωn = ∑_{i=0}^{n} ∫_{γ1} ω1 ⋅ ⋅ ⋅ ωi ⋅ ∫_{γ2} ωi+1 ⋅ ⋅ ⋅ ωn ,
(2.49)
∫_{γ1⁻¹} ω1 ⋅ ⋅ ⋅ ωn = (−1)^n ∫_{γ1} ωn ⋅ ⋅ ⋅ ω1 .
(2.50)
When applied to ω1, . . . , ωn ∈ ⋀ M ⊗ GL(n, ℂ) (i. e., general linear group complex matrix-valued one-forms), equation (2.50) is replaced by
∫_{γ1⁻¹} ω1 ⋅ ⋅ ⋅ ωn = (−1)^n ∫_{γ1} [ωn^T ⋅ ⋅ ⋅ ω1^T]^T
(2.51)
with ω^T the transpose of the matrix ω. The matrix-valued one-forms can be considered as matrix functions, which will be discussed in more detail in Section 2.3.1. Within the principal fiber bundle approach to the formulation of gauge theories (Section 2.2) we shall identify the gauge potentials Aμ with such one-forms, where the matrices form a representation of the Lie algebra. The Chen integrals will then be applied to solving the parallel transport equation. In what follows we adopt the notation
X^{ω1⋯ωn}[γ] = ∫_γ ω1 ⋅ ⋅ ⋅ ωn = γ[ω1 ⋅ ⋅ ⋅ ωn],
(2.52)
where ∫_γ is interpreted as a d-path and where we consider the one-forms ωi to be complex-valued, ωi ∈ ⋀ M ⊗ ℂ. This notation extends straightforwardly to complex matrix-valued one-forms, and thus to Lie algebra-valued one-forms as well. Let us give a simple example:
Example 2.71.
X^{ω1ω2}[γ] = ∫_γ ω1 ω2
is a matrix in GL(n, ℂ) with the elements given by
(∫_γ ω1 ω2)^i_j = ∫_γ (ω1)^i_k ⊗ (ω2)^k_j ,
(2.53)
where ω1, ω2 ∈ ⋀ M ⊗ GL(n, ℂ) are matrices of one-forms on M. Recall now that Chen's integrals can be considered as d-paths/d-loops. The above properties of these integrals allow us to give some extra notions related to d-paths. Definition 2.72 (Elementary equivalent paths). Two paths are called elementary equivalent if
γ1 γ2 γ2⁻¹ γ3 = γ1 γ3 .
This equivalence induces an equivalence relation on the d-paths, and thus also induces equivalence classes of paths [γ1 γ3].
Definition 2.73 (Piecewise regular paths). A piecewise regular path is a path in 𝒫ℳ with nonvanishing tangent vectors.
Definition 2.74 (Reduced paths). A path is called a reduced path if it is piecewise regular and if it is not of the form γ1 γ2 γ2⁻¹ γ3 for any γ2.
From these definitions, together with (2.49) and (2.50), it is clear that the functionals X defined in (2.52) depend only on the equivalence class and not on the specific path representing the class. As a specific example we see that γ and γγ′γ′⁻¹ are representatives of the same class, and from the composition-of-paths property of the Chen integrals we get X^{ω1⋯ωn}[γ] = X^{ω1⋯ωn}[γγ′γ′⁻¹].
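The reversal rule (2.51) for matrix-valued one-forms can also be verified numerically. The sketch below is our own illustration for a word of length two, with toy 2 × 2 matrix-valued one-forms and a midpoint-rule integrator; all names and conventions are assumptions, not taken from the book.

```python
import numpy as np

def chen2(A1, A2, t):
    """∫ A1 A2 along the grid t, with A1 integrated at the earlier parameter value."""
    n = A1(t[0]).shape[0]
    acc = np.zeros((n, n))          # running inner integral of A1
    out = np.zeros((n, n))
    for a, b in zip(t[:-1], t[1:]):
        h = b - a
        m = 0.5 * (a + b)
        out += (acc + 0.5 * h * A1(m)) @ A2(m) * h   # midpoint rule, outer integral
        acc += h * A1(m)                             # midpoint rule, inner integral
    return out

A1 = lambda s: np.array([[s, 1.0], [0.0, s * s]])
A2 = lambda s: np.array([[np.cos(s), s], [1.0, np.sin(s)]])

# the inverse path s -> 1 - s pulls the form A(s) ds back to -A(1 - s) ds
B1 = lambda s: -A1(1.0 - s)
B2 = lambda s: -A2(1.0 - s)

t = np.linspace(0.0, 1.0, 8001)
lhs = chen2(B1, B2, t)                                      # ∫_{γ⁻¹} ω1 ω2
rhs = (-1) ** 2 * chen2(lambda s: A2(s).T,
                        lambda s: A1(s).T, t).T             # (−1)² [∫_γ ω2ᵀ ω1ᵀ]ᵀ
assert np.allclose(lhs, rhs, atol=1e-5)
```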
Figure 2.5: The property of path reduction.
In the literature on loop space, this property is sometimes graphically represented as in Figure 2.5; the functionals X satisfying the property X^{ω1⋯ωn}[γ] = X^{ω1⋯ωn}[γγ′γ′⁻¹] are sometimes referred to as Stokes functionals. The properties of Chen integrals can be used to prove the following lemma: Lemma 2.75 (Nonvanishing Chen integrals). Consider a reduced piecewise regular path γ ≠ ϵ in 𝒫ℳ. For n ≥ 1, there exist one-forms ω1, ω2, . . . , ωn ∈ ⋀ M such that X^{ω1⋯ωn}[γ] ≠ 0. From the above lemma we can derive an important separation property of the functionals X^{ω1⋯ωn}:
Theorem 2.76 (Separation Property Theorem). Two piecewise regular paths γ, γ′ are equivalent if and only if, for all sets of one-forms (n ≥ 1)
ω1, ω2, . . . , ωn ∈ ⋀ M,
the corresponding Chen integrals coincide:
X^{ω1⋯ωn}[γ] = X^{ω1⋯ωn}[γ′].
(2.54)
Exercise 2.77. Prove Theorem 2.76 by using the definitions and properties of Chen integrals and reduced paths.
This lemma and theorem state that each d-path defined by Chen integrals is equivalent to exactly one reduced path. The theorem also says that the functionals X can be used to separate d-paths. Remark 2.78. With the principal fiber bundle approach in mind, we can go a step further. The following theorem states that if two d-paths γ1 and γ2 return the same value of the exponential homomorphism Θ, then γ1 and γ2 only differ by parameterization and left translation, provided they are reduced (see Definition 2.74): Theorem 2.79. Introduce the exponential homomorphism Θ:
Θ[γ1] = 1 + ∑_{n=1}^{∞} ∑_{i1,…,in} ∫_{γ1} ωi1 ⋅ ⋅ ⋅ ωin Xi1 ⋅ ⋅ ⋅ Xin ,
(2.55)
where the Xj are noncommutative indeterminates with respect to a basis ω1, . . . , ωn of the Maurer–Cartan forms (g⁻¹dg) of a real Lie group G, and Θ[γ1] is an element of G. Then one of two irreducible piecewise regular continuous paths γ1 and γ2 can be obtained from the other by left translation and change of parameter if and only if Θ[γ1] = Θ[γ2]. Identifying the exponential homomorphism with the parallel transporter or Wilson line, the above theorem strengthens the equivalence relation on d-paths and d-loops induced by path reduction from Definition 2.74. It also permits a stronger separation of paths compared to the functionals X^{ω1⋯ωn}. In other words, the parallel transporter can be used to distinguish or separate d-paths and d-loops, a fact that will be quite helpful when introducing a topology on the algebra 𝒜p.
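In physics language the exponential homomorphism is the parallel transporter (Wilson line): the path-ordered solution of U′(t) = U(t)A(t) with U(0) = 1, whose Taylor expansion reproduces the Chen series term by term. The sketch below is our own, not taken from the book (a toy gl(2, ℝ)-valued component A(t) and a standard RK4 integrator), and checks the homomorphism property Θ[γ1 γ2] = Θ[γ1] Θ[γ2].

```python
import numpy as np

def A(t):
    """A toy gl(2, R)-valued one-form component along the path."""
    return np.array([[0.0, np.sin(t)], [-np.sin(t), 0.3 * t]])

def transport(t0, t1, steps=2000):
    """Solve U' = U A(t) with RK4; U(t0) = identity (earliest factors leftmost)."""
    U = np.eye(2)
    h = (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * h
        k1 = U @ A(t)
        k2 = (U + 0.5 * h * k1) @ A(t + 0.5 * h)
        k3 = (U + 0.5 * h * k2) @ A(t + 0.5 * h)
        k4 = (U + h * k3) @ A(t + h)
        U = U + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return U

# composing the path [0, 0.5] with [0.5, 1] multiplies the transporters
assert np.allclose(transport(0.0, 1.0),
                   transport(0.0, 0.5) @ transport(0.5, 1.0), atol=1e-8)
```

The ordering convention (new factors multiplied on the right as t grows) matches the convention in which ω1 sits at the earliest parameter value.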
2.2 Gauge fields as connections on a principal bundle
A mathematical point of view on Quantum Field Theory suggests that the fundamental interactions between matter fields can be conveniently expressed in a geometrical setting using principal fiber bundles.6 In this section we present some of the basic concepts of fiber bundle theory, which will be used to derive the parallel transport equation. Then we shall link the solution of this equation to the concept of Wilson lines.
2.2.1 Principal fiber bundle, sections and associated vector bundle
The simplest structure we wish to define is a fiber bundle.7 A fiber bundle P(Y, π) consists of the base space Y, another set P, and the projection π : P → Y, which maps the fiber bundle P to the base space Y. Therefore, for each element y ∈ Y there exists a set of elements (the fiber above y) Py = π⁻¹(y) ⊂ P. So far the sets P and Y are arbitrary. More useful structures arise if we let them be differentiable manifolds and, correspondingly, π be a differentiable projection. Adding also a group G (which we shall assume to be the Lie group of a Yang–Mills theory) enables us to define a principal (Yang–Mills) fiber bundle. Definition 2.80 (Yang–Mills principal fiber bundle). A principal fiber bundle P(Y, G, π) is a set of the following ingredients:
1. a base space Y, which is assumed in what follows to be a four-dimensional Minkowskian manifold M4;
2. a differentiable manifold P;
3. a surjective projection π : P → Y;
6 Notice that principal fiber bundles are not the only method to provide a geometrical description of physical interactions. Other approaches, which in some sense are closer to the ideas of Quantum Mechanics, are given by (Lie) algebroids and noncommutative geometry. 7 We base our exposition mostly on the works given in the section ‘Gauge theory and the principal fiber bundle approach’ in the Literature Guide.
4.
a structure group G (in the Yang–Mills case it is a gauge (Lie) group), which is equivalent to a fiber Py, so that the inverse image yields the fiber at y:
π⁻¹(y) ≡ Gy ≅ G.
Notice the following properties:
1. The Lie group G acts on the fibers from the left.
2. There exists an open cover {Ui} of Y together with diffeomorphisms Φi : Ui × G → π⁻¹(Ui), so that (π ∘ Φi)(y, g) = y, where g is an element of G. The Φi are referred to as the local gauge or local trivialization, since Φi⁻¹ acts as π⁻¹(Ui) → Ui × G.
3.
The mapping Φi(y) : G → Gy is a diffeomorphism. On Ui ∩ Uj ≠ ∅ it is required that Sij(y) ≡ Φi⁻¹(y)Φj(y), which maps G → G, is an element of the structure group G. The maps Φi and Φj are related by a smooth map Sij : Ui ∩ Uj → G, so that Φj(y, g) = Φi(y, Sij(y)g). We refer to the Sij as the transition functions or passive gauge transformations.
Figure 2.6: Right action of G on a fiber.
We also have a right action of G (see Figure 2.6) on the fiber, which does not depend on the local gauges. Given that the structure group is equivalent to the fiber, the right action of G on π⁻¹(Ui) reads
Φi⁻¹(π⁻¹(y)g) = (y, gi g)
or π⁻¹(y)g = Φi(y, gi g). To see that this is independent of local gauges, let us consider a y ∈ Ui ∩ Uj, for which
π⁻¹(y)g = Φj(y, gj g) = Φj(y, Sji gi g) = Φi(y, gi g).
(2.56)
Besides principal fiber bundles we shall also need the concept of a section: Definition 2.81 (Section). A smooth map
S : Y → P
is called a section when π ∘ S = 1Y.
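For a trivial bundle P = Y × G the content of Definition 2.81 can be made explicit in a few lines. The toy sketch below is our own (Y sampled at finitely many points, group elements represented by labels); it only illustrates the condition π ∘ S = 1Y.

```python
Y = [0.0, 0.25, 0.5, 0.75, 1.0]       # a finite sample of the base space

def pi(p):
    """The projection P -> Y for the trivial bundle P = Y x G."""
    y, g = p
    return y

def S(y):
    """A section Y -> P: choose one group element above every base point."""
    return (y, "g(%.2f)" % y)

assert all(pi(S(y)) == y for y in Y)  # pi ∘ S = 1_Y
```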
Suppose we have a section Si(y) over Ui. Then we can construct a corresponding local gauge Φi. To this end, let us consider, for y ∈ Ui, a point p ∈ π⁻¹(y), for which there exists a unique element gp ∈ G such that p = Si(y)gp. Now we define Φi through its inverse
Φi⁻¹(p) = (y, gp).
Notice that in this specific gauge (often referred to as the canonical local trivialization) we get Si(y) = Φi(y, ϵ), where ϵ is the identity element in the structure group G. The gauge potentials can be defined in these principal fiber bundles, which form an appropriate geometrical space. Exercise 2.82. How does one include a matter field ψ(x); put differently, how does one geometrize ψ(x)?
The geometrization is performed with the aid of another structure called the associated vector bundle E(Y, G, V, P, πE). The vector bundle E is constructed using an n-dimensional vector space V ((p, v) ∈ P × V), on which the gauge group G acts:
(p, v) → (pg, ρ⁻¹(g)v)
(2.57)
with ρ the n-dimensional unitary representation of G. Therefore, the vector bundle E(Y, G, V, P, πE) forms an equivalence class
P × V/G,
such that (p, v) ≡ (pg, ρ⁻¹(g)v). The bundle E now also has a fiber bundle structure E = P ×ρ V, where
πE : E → Y,
πE (p, v) = π(p)
with local trivialization Ψi : Ui × V → πE⁻¹(Ui). Again we have transition functions, which are now ρ(Sij(y)), with the Sij the transition functions on P. A local section Si on P can then be used to determine a local gauge not only on P, but also on E:
Φi⁻¹(y) ∘ Si(y) = 1G
(2.58)
Ψi⁻¹(y) ∘ Si(y) = 1V ,
(2.59)
with
Ψi⁻¹(y) : πE⁻¹(y) → V.
Now the associated vector bundle E(Y, G, V, P, πE) allows us to geometrize matter fields. Definition 2.83 (Matter field). A matter field of type (ρ, V) is defined as a section
ψ(y) : Y → E.
Expressed in a gauge-independent way, it reads: Definition 2.84 (Gauge-independent definition of a matter field). A matter field of type (ρ, V) (see Figure 2.7) is defined as a map
ψ̃ : P → V.
This map is equivariant under the structure group G: for each p ∈ P,
ψ̃(pg) = ρ(g⁻¹)ψ̃(p).
Figure 2.7: Definition of matter field.
2.2.2 Gauge field as a connection
In a gauge theory the fields (or potentials) can be introduced as Lie algebra-valued one-forms on the principal fiber bundle associated to the gauge theory.8 Now we shall motivate and discuss this identification.9 We shall give two equivalent definitions of a connection, the first of which is more used by mathematicians, while the second one is more favored by physicists. Definition 2.85 (Connection (math)). Consider a principal fiber bundle P(Y, G, π). Then a connection on P is defined as a unique splitting of the tangent space Tp P into the vertical subspace Vp P and the horizontal subspace Hp P, such that
1. Tp P = Hp P ⊕ Vp P.
2. A smooth vector field X on P can be split into horizontal and vertical fields X = X^H + X^V, where X^H ∈ Hp P and X^V ∈ Vp P.
8 Let us emphasize that the identification of fields, as defined in quantum field theory in physics, with sections of the principal (gauge) fiber bundle is only valid in the perturbative sector. In the nonperturbative regime the situation becomes much more involved. Sometimes one runs into problems of uniqueness, even in the perturbative sector. An example of this is, for instance, the U(1)-bundle over the sphere S2.
9 A gauge field Aμ (also referred to as a gauge potential) transforms under a gauge transformation U(x)
as was discussed in the Introduction: Aμ → U(x)Aμ(x)U†(x) ∓ (i/e0)𝜕μU(x)U†(x), with e0 the coupling constant. This obviously differs from the transformation law for vectors and looks more like the transformation of a connection, ω → g⁻¹ωg + g⁻¹𝜕μg.
3.
For p ∈ P and g ∈ G one has Hpg P = Rg∗ Hp P.
The vertical space is considered to be tangent to Gy at p, which we shall discuss in more detail below. The last statement in the definition says that the horizontal spaces Hpg P and Hp P on the same fiber are related by a linear transformation generated by the right action of the structure group. Most physicists, however, would prefer the definition of a connection one-form introduced by Ehresmann. Before we give that definition, let us first overview some facts about Lie groups and Lie algebras.
2.2.2.1 Lie groups and Lie algebras
Consider a Lie group G, to which we can associate a left (Lg) and a right action (Rg), defined respectively as Lg h = gh and Rg h = hg for g, h ∈ G. The left action Lg generates the map (pushforward)10
Lg∗ : Th(G) → Tgh(G)
between tangent spaces at different points in the Lie group G. This allows us to define a left-invariant vector field X by demanding that
Lg∗ Xh = Xgh.
These left-invariant vector fields generate the Lie algebra of G, which we write as g. Now
X ∈ g
is specified by its value at the Lie group's unit element e, and vice versa. This means there exists a vector space isomorphism between the Lie algebra g and the tangent space of G at the unit element, i. e., g ≅ Te G.
10 Notice that this map is well-defined due to the fact that this action is an automorphism of G.
From Lie theory we learn that the Lie algebra g has a set of generators {Tα} that also define the structure constants f^γ_{αβ}:
[Tα, Tβ] = f^γ_{αβ} Tγ.
Besides the left and right action, Lie groups also allow for an adjoint action
ad : G → G,
h → adg h ≡ ghg⁻¹,
which in its turn generates the adjoint map
Adg : Th(G) → Tghg⁻¹(G)
between tangent spaces. By choosing h ∈ G in the adjoint map to be the unit element e, we immediately see that Adg maps Te(G) ≅ g onto itself. With the aid of this information on Lie groups and algebras, we now understand how to construct the vertical subspace Vp P, defined in Definition 2.85, of the tangent space Tp P of the principal fiber bundle. Suppose we have A ∈ g and p ∈ P. Then the right action
R_{e^{tA}} p = p e^{tA},
(2.60)
defines a curve through p parameterized by t. Noticing that π(p) = π[p e^{tA}] = y, we see that the curve lies in Gy, the fiber above y ∈ Y. Using an arbitrary smooth function
F : P → ℝ
we define the vector A♯ ∈ Tp P as
A♯F(p) = (d/dt) F(p e^{tA}) |_{t=0} .
(2.61)
This vector is tangent to P at p and by definition tangent to the fiber Gy, such that we have A♯ ∈ Vp P. Constructing such an A♯ at each point of P builds a vector field, also denoted A♯ and referred to as the fundamental vector field generated by A. We obtain, therefore, the isomorphism
♯ : g → Vp P : A → A♯.
We identify the complement of Vp P with Hp P from Definition 2.85. We are now in a position to define the Ehresmann connection one-form: Definition 2.86 (Ehresmann connection one-form). A connection one-form
ω ∈ T∗P ⊗ g
is a projection of Tp P onto the vertical component Vp P ≅ g, the Lie algebra of G. This one-form possesses the following properties:
1. ω(A♯) = A with A ∈ g;
2. Rg∗ω = Adg⁻¹ ω, or, for X ∈ Tp P,
Rg∗ωpg(X) = ωpg(Rg∗X) = g⁻¹ωp(X)g.
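Equation (2.61) can be checked directly for a matrix group, where F(p e^{tA}) is easy to differentiate numerically. The sketch below is our own toy example, not taken from the book (the series-based `expm` helper is an assumption): for F(p) = tr(p) one has, analytically, A♯F(p) = tr(pA).

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its power series (adequate for small matrices)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.5]])        # an element of the Lie algebra
p = expm(np.array([[0.2, -0.3], [0.1, 0.0]]))  # a point of the matrix group

def F(q):
    return np.trace(q)

eps = 1e-6
# A#F(p) = d/dt F(p e^{tA}) |_{t=0}, by a central finite difference
numeric = (F(p @ expm(eps * A)) - F(p @ expm(-eps * A))) / (2 * eps)
assert abs(numeric - np.trace(p @ A)) < 1e-8
```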
Using this definition, the horizontal subspace Hp P can also be identified with the kernel of ω. Recall now that we wish to relate Aμ to a Lie algebra-valued one-form. We have constructed a one-form, so the following question naturally arises: Exercise 2.87. How does one find the relation between the gauge fields Aμ and the Ehresmann connection one-form?
Take an open covering {Ui} of Y and let Si be a local section on each Ui. Using the Ehresmann connection ω, define the Lie algebra-valued one-form Ai on Ui by11
Ai ≡ Si∗ω ∈ ⋀(Ui) ⊗ g.
(2.62)
Now it is also possible, given a gauge field and a section Si : Ui → π⁻¹(Ui), to reconstruct a connection one-form ω. The following theorem helps us to proceed: Theorem 2.88. Given a g-valued one-form Ai on Ui and a local section Si : Ui → π⁻¹(Ui), there exists a connection one-form ω whose pullback by Si∗ is Ai. It is worth mentioning, however, that the connection one-form ω can be defined globally, while the Lie algebra-valued one-form Ai cannot, because of the need for the local sections Si. Theorem 2.88 states that given a gauge potential Ai in Ui there exists a connection one-form ω, but it does not say whether it is unique. If we wish this connection one-form to be unique, it needs to satisfy an extra condition, called the compatibility condition. This condition follows from the fact that, for ω to be unique, one needs
ωi = ωj on Ui ∩ Uj
with ωi = ω|Ui.
11 The indices i refer to the covering and not to the spacetime indices μ that accompany each Ai in Ui for a specific i.
From manifold theory it is clear that this restriction has something to do with the transition function associated with the transformation from Ui to Uj, so we can expect a statement that restricts the transition functions. The explicit form of this condition can be derived by applying the connection one-form ω to (2.63) in the following lemma:

Lemma 2.89. Consider a principal fiber bundle P(Y, G, π) and local sections Si, Sj over Ui and Uj, such that Ui ∩ Uj ≠ ∅. For X ∈ TpM with p ∈ Ui ∩ Uj, the pushforwards Si∗X and Sj∗X satisfy

Sj∗X = Rtij∗(Si∗X) + (tij⁻¹ dtij(X))♯,  (2.63)

where tij : Ui ∩ Uj → G is the transition function.

After application of ω to (2.63), using ω(Sj∗X) = (Sj∗ω)(X) together with the second property of Definition 2.86, we obtain

Aj = tij⁻¹ Ai tij + tij⁻¹ dtij.  (2.64)
Identifying the Ai with gauge potentials, we obtain for the components

A2μ(p) = g⁻¹(p) A1μ(p) g(p) + g⁻¹(p) ∂μg(p),  (2.65)
which is identical to a gauge transformation in gauge theory. In local coordinates it reads

Ai = (−ig A^a_μ t^a dx^μ)_i,  (2.66)

where g is now the coupling constant and t^a are the Lie algebra generators.
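The transformation law (2.65) can be checked numerically. The following minimal sketch (not from the book; all function names, the rotation g(x), the constant matrix A1 and the test field psi are our own illustrative choices) verifies in one dimension that, with A2 = g⁻¹A1g + g⁻¹ dg/dx, the covariant derivative D = d/dx + A transforms covariantly: (d/dx + A2)(g⁻¹ψ) = g⁻¹(d/dx + A1)ψ.

```python
# Finite-difference check of the gauge transformation law (2.65) in 1D.
# Assumed/illustrative choices: g(x) a rotation by angle x^2, A1 a fixed
# 2x2 matrix, psi(x) an arbitrary smooth test field.

import math

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def g(x):
    c, s = math.cos(x * x), math.sin(x * x)
    return [[c, -s], [s, c]]

def g_inv(x):
    m = g(x)
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]   # transpose of a rotation

def dg(x):
    c, s, dth = math.cos(x * x), math.sin(x * x), 2 * x
    return [[-s * dth, -c * dth], [c * dth, -s * dth]]

A1 = [[0.1, 0.5], [-0.2, 0.0]]

def A2(x):
    """The gauge-transformed potential of equation (2.65)."""
    conj = matmul(g_inv(x), matmul(A1, g(x)))
    shift = matmul(g_inv(x), dg(x))
    return [[conj[i][j] + shift[i][j] for j in range(2)] for i in range(2)]

psi = lambda x: [math.sin(x), math.cos(2 * x)]
phi = lambda x: matvec(g_inv(x), psi(x))              # gauge-rotated field

def ddx(f, x, h=1e-5):
    """Central finite difference of a vector-valued function."""
    fp, fm = f(x + h), f(x - h)
    return [(fp[i] - fm[i]) / (2 * h) for i in range(2)]

x0 = 0.8
lhs = [a + b for a, b in zip(ddx(phi, x0), matvec(A2(x0), phi(x0)))]
rhs = matvec(g_inv(x0), [a + b for a, b in zip(ddx(psi, x0), matvec(A1, psi(x0)))])
```

The two sides agree to the accuracy of the finite difference, which is the content of (2.65): the inhomogeneous term g⁻¹∂g precisely compensates the derivative of the gauge rotation.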
2.2.3 Horizontal lift and parallel transport

Now that we have defined the splitting of the tangent space TP of the principal fiber bundle P(Y, G, π), we can define the horizontal lift of a curve in the base manifold Y = M4.

Definition 2.90 (Horizontal lift). Consider a principal fiber bundle P(Y, G, π) and a curve γ : [0, 1] → Y. A curve γ̃ : [0, 1] → P is called a horizontal lift of γ if π ∘ γ̃ = γ and the tangent vector to γ̃(t) is contained in Hγ̃(t)P.
With this definition we have the following theorem:

Theorem 2.91. Consider again a curve γ : [0, 1] → Y and a point p ∈ π⁻¹[γ(0)]. One can show that there is a unique horizontal lift γ̃(t) in P (see Figure 2.8), such that γ̃(0) = p.

It follows from this statement that if γ̃′ is another horizontal lift of γ, such that γ̃′(0) = γ̃(0)g, then for all t ∈ [0, 1] one gets γ̃′(t) = γ̃(t)g. The last statement demonstrates the global gauge symmetry: a global right action does not change the connection on the principal fiber bundle. Consider now X, the tangent vector to γ(t) at γ(0). Using the horizontal lift, we have that

X̃ = γ̃∗X
Figure 2.8: Horizontal lifts of a curve.
is tangent to γ̃ at p = γ̃(0). Given that this lifted tangent vector is horizontal by definition, we get

ω(X̃) = 0.

Rewriting equation (2.63), using the fact that the transition functions are elements of G, returns

X̃ = gi⁻¹(t) (Si∗X) gi(t) + (gi⁻¹(t) dgi(X))♯.  (2.67)

Applying ω to this result, we have

0 = ω(X̃) = gi⁻¹(t) ω(Si∗X) gi(t) + gi⁻¹(t) dgi(t)/dt.  (2.68)

Exercise 2.92. Derive equation (2.68) by applying ω to equation (2.67).
This result can now be used to answer the question:

Exercise 2.93. What is the parallel transport equation in gauge theory?

From the expression for the gauge potentials,

ω(Si∗X) = (Si∗ω)(X) = Ai(X),
in equation (2.68) it follows that the parallel transport equation in the local form reads

dgi(t)/dt = −Ai(X) gi(t).  (2.69)
Thus we have discussed the relation between gauge potentials and connection one-forms on principal fiber bundles. This eventually allowed us to derive the parallel transport equation in gauge theory. In the next section we shall introduce the mathematical tools needed to solve equations of this type.
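As a preview of those tools, here is a minimal numerical sketch (not from the book; the function names and the nilpotent matrix A(t) are our own illustrative choices) that solves dg/dt = −A(t)g(t), g(0) = I, by accumulating ordered factors (I − A(t)dt) — a discrete version of the product integrals introduced below.

```python
# Discrete solution of the parallel transport equation dg/dt = -A(t) g(t).
# Illustrative choice: A(t) = [[0, t], [0, 0]]; all A(t) commute here, so
# the exact answer is g(1) = exp(-[[0, 1/2], [0, 0]]) = [[1, -1/2], [0, 1]].

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transport(A, steps=10000, t0=0.0, t1=1.0):
    """Accumulate ordered factors (I - A(t) dt), later times acting from the left."""
    dt = (t1 - t0) / steps
    g = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(steps):
        a = A(t0 + (k + 0.5) * dt)           # midpoint evaluation
        factor = [[(1.0 if i == j else 0.0) - a[i][j] * dt
                   for j in range(2)] for i in range(2)]
        g = matmul(factor, g)
    return g

g = transport(lambda t: [[0.0, t], [0.0, 0.0]])
```

For non-commuting A(t) the ordering of the factors matters; this is exactly what the path-ordered exponential and the product integrals of the next section encode.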
2.3 Solving matrix differential equations: Chen iterated integrals

The main goal of this section is to clarify the relationship between iterated integrals, solutions of the parallel transport equation in the perturbative sector of a gauge theory, and Wilson lines. We shall see that this parallel transport equation is a linear differential matrix equation (LDME), given that we work with a matrix representation of the gauge group generators. Since we will be interested in the solutions of this equation, it is instructive to discuss in detail the procedure of their construction. Finally, the solutions will be expressed in terms of product integrals and Chen iterated integrals. We restrict ourselves to the definitions and properties that are necessary for our exposition, with the aim of making clear how Chen iterated integrals emerge in the solution of the parallel transport equation.

2.3.1 Derivatives of a matrix function

We assume that the reader is familiar with the basics of matrix theory, so we only define the derivative and product integral of a matrix-valued function A : [a, b] → ℝ^{n×n}. For the moment we restrict ourselves to real-valued matrices, but most definitions and properties can be straightforwardly extended to complex matrices. The first concept we need is that of differentiability of a matrix function.

Definition 2.94 (Differentiability of a matrix function). A matrix function A : [a, b] → ℝ^{n×n} is called differentiable at a point x ∈ (a, b) if all its entries aij, i, j ∈ {1, . . . , n}, are differentiable at x, where the entries are considered to be real-valued functions aij : [a, b] → ℝ.
If the matrix function A is differentiable, we use the notation

A′(x) = {a′ij(x)},   i, j = 1, . . . , n.  (2.70)
Building on the differentiability of the matrix function A, we can now define not one but two derivatives.

Definition 2.95 (Left and right derivative of a matrix function). Let A : [a, b] → ℝ^{n×n} be a differentiable and regular (single-valued and analytic) matrix function at x ∈ (a, b). We define the left derivative of A at x as

(d/dx) A(x) ≡ A′(x) A⁻¹(x) = lim_{Δx→0} [A(x + Δx) A⁻¹(x) − I] / Δx,  (2.71)

and similarly the right derivative as

A(x) (d/dx) ≡ A⁻¹(x) A′(x) = lim_{Δx→0} [A⁻¹(x) A(x + Δx) − I] / Δx.  (2.72)

Derivatives at the endpoints of the interval [a, b] are defined in the same way left and right¹² derivatives are defined for scalar functions, again by using the matrix entries aij.

Both the left and right derivatives of a matrix function share many properties with the common derivatives of functions, but in some cases we still have to be careful. To demonstrate this, we mention the application to a product.

Theorem 2.96. Let A1, A2 : [a1, a2] → ℝ^{n×n} be differentiable and regular matrix functions at x ∈ (a1, a2). Then one gets

(d/dx)(A1 A2) = (d/dx)A1 + A1 [(d/dx)A2] A1⁻¹,  (2.73)

(A1 A2)(d/dx) = A2 (d/dx) + A2⁻¹ [A1 (d/dx)] A2.  (2.74)
Exercise 2.97. Prove Theorem 2.96.
12 Here the left and right refer to approaching the endpoints of the interval from the left or the right and not to the derivatives of the matrix function.
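The limit (2.71) is easy to probe numerically. The following sketch (not from the book; the diagonal matrix function is our own illustrative choice, for which the left derivative can be computed entrywise) shows the finite-difference quotient approaching A′(x)A⁻¹(x).

```python
# Finite-difference check of the left derivative (2.71).
# Illustrative choice: A(x) = diag(e^x, e^{2x}), for which
# A'(x) A^{-1}(x) = diag(1, 2) at every x.

import math

def left_derivative(A, x, dx=1e-6):
    """Quotient [A(x+dx) A^{-1}(x) - I] / dx for a 2x2 *diagonal* matrix
    function, where the product reduces to entrywise division."""
    a, b = A(x), A(x + dx)
    return [(b[i] / a[i] - 1.0) / dx for i in range(2)]

A = lambda x: [math.exp(x), math.exp(2 * x)]   # diagonal entries of A(x)
d = left_derivative(A, 0.7)                    # approaches [1.0, 2.0]
```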
Exercise 2.98. Show that

(d/dx)(CA) = C [(d/dx)A] C⁻¹,

where C is a constant regular matrix.
Exercise 2.99. Demonstrate that

(d/dx)(A⁻¹) = −A (d/dx),   (A⁻¹)(d/dx) = −(d/dx)A.
Theorem 2.100. Suppose that A1, A2 : [a, b] → ℝ^{n×n} are differentiable and regular matrix functions at x ∈ (a, b), such that

(d/dx)A1 = (d/dx)A2.  (2.75)

Then there exists a constant matrix A3 ∈ ℝ^{n×n} such that for all x ∈ [a, b]

A2(x) = A1(x) A3.

Exercise 2.101. Prove Theorem 2.100. (Hint: use A3 = A1⁻¹ A2.)
2.3.2 Product integral of a matrix function

Having introduced the derivatives of a matrix function, we now turn to integrals of matrix functions. Consider a matrix function A : [a, b] → ℝ^{n×n} and a partition D of the interval [a, b] defined as

a = t0 ≤ ξ1 ≤ t1 ≤ ξ2 ≤ ⋅⋅⋅ ≤ tm−1 ≤ ξm ≤ tm = b.  (2.76)
Next we introduce the notation

Δti = ti − ti−1,   i = 1, . . . , m,  (2.77)

ν(D) = max_{1≤i≤m} Δti,  (2.78)

and

P(A, D) = ∏_{i=m}^{1} (I + A(ξi)Δti) = (I + A(ξm)Δtm) ⋅⋅⋅ (I + A(ξ1)Δt1),  (2.79)

P∗(A, D) = ∏_{i=1}^{m} (I + A(ξi)Δti) = (I + A(ξ1)Δt1) ⋅⋅⋅ (I + A(ξm)Δtm).  (2.80)
Volterra then defined the left and right integrals of the matrix function A as

∫_a^b {aij} = lim_{ν(D)→0} P(A, D),   (left integral)  (2.81)

{aij} ∫_a^b = lim_{ν(D)→0} P∗(A, D),   (right integral)  (2.82)

where the limit

lim_{ν(D)→0} M(D) = M  (2.83)

is defined as follows: for every ϵ > 0 there exists δ > 0 such that |M(D)ij − Mij| < ϵ for every partition D of [a, b], as defined in equation (2.76), with ν(D) < δ.

This allows us to define the left and right product integrals:

Definition 2.102 (Left and right product integrals). Consider a matrix function A : [a, b] → ℝ^{n×n}. If the limits

lim_{ν(D)→0} P(A, D) = ∏_a^b (I + A(t)dt),  (2.84)

lim_{ν(D)→0} P∗(A, D) = (I + A(t)dt) ∏_a^b  (2.85)

exist, then they are called, correspondingly, the left and right product integral of A over [a, b].
In order to link this operation with the usual Riemann integrals, we observe that a matrix function A is Riemann integrable if its matrix entries aij are Riemann integrable functions on [a, b]. In this case one has

∫_a^b A(t)dt = { ∫_a^b aij(t)dt },   i, j = 1, . . . , n.  (2.86)
Riemann integrability allows us to expand the integrals of a matrix function in order to relate them to the Chen iterated integrals. This expansion is captured by the following theorem:

Theorem 2.103. Introduce a Riemann integrable matrix function A : [a, b] → ℝ^{n×n}. Then the left and right product integrals exist and are given by¹³

∏_a^x (I + A(t)dt) = I + ∑_{k=1}^∞ ∫_a^x ∫_a^{t_k} ⋅⋅⋅ ∫_a^{t_2} A(t_k) ⋅⋅⋅ A(t_1) dt_1 ⋅⋅⋅ dt_k,  (2.87)

(I + A(t)dt) ∏_a^x = I + ∑_{k=1}^∞ ∫_a^x ∫_a^{t_k} ⋅⋅⋅ ∫_a^{t_2} A(t_1) ⋅⋅⋅ A(t_k) dt_1 ⋅⋅⋅ dt_k,  (2.88)

where the series converge absolutely and uniformly for x ∈ [a, b]. The following theorem also takes place:

Theorem 2.104. Consider a Riemann integrable matrix function A : [a, b] → ℝ^{n×n} and
Y1(x) = ∏_a^x (I + A(t)dt),  (2.89)

Y2(x) = (I + A(t)dt) ∏_a^x.  (2.90)

Then for all x ∈ [a, b] the integral equations

Y1(x) = I + ∫_a^x A(t) Y1(t) dt,  (2.91)

Y2(x) = I + ∫_a^x Y2(t) A(t) dt  (2.92)

are satisfied.

13 Notice the ordering of the matrix functions under the integral signs.
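The ordering noted in footnote 13 is visible numerically. In the following sketch (not from the book; the function names and the nilpotent matrix A(t) are our own illustrative choices), the Chen series (2.87)–(2.88) terminates, so the left and right product integrals can be compared against exact iterated integrals — which differ.

```python
# Left vs. right product integral for the strictly upper-triangular
# A(t) = [[0, 1, 0], [0, 0, t], [0, 0, 0]] on [0, 1]. The (1,3) entry is
#   left  (2.87): ∫∫_{t1<t2} 1 * t1 dt1 dt2 = 1/6,
#   right (2.88): ∫∫_{t1<t2} 1 * t2 dt1 dt2 = 1/3.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def product_integrals(A, a=0.0, b=1.0, steps=4000):
    """Riemann-product approximations of P(A, D) and P*(A, D)."""
    n = len(A(a))
    dt = (b - a) / steps
    left = [[float(i == j) for j in range(n)] for i in range(n)]
    right = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(steps):
        At = A(a + (k + 0.5) * dt)
        F = [[float(i == j) + At[i][j] * dt for j in range(n)]
             for i in range(n)]
        left = matmul(F, left)     # new factor on the left  -> P(A, D)
        right = matmul(right, F)   # new factor on the right -> P*(A, D)
    return left, right

A = lambda t: [[0.0, 1.0, 0.0], [0.0, 0.0, t], [0.0, 0.0, 0.0]]
L, R = product_integrals(A)
```

The (1,2) entries of both results agree (they are the single integral ∫dt = 1), while the (1,3) entries approach 1/6 and 1/3 respectively — the two orderings of the same iterated integral.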
2.3.3 Continuity of matrix functions

In order to continue toward our goal of finding solutions to the type of matrix differential equation that emerged as the parallel transport equation in gauge theory, we need to consider the continuity of matrix functions. Just as differentiability of a matrix function was defined using the differentiability of its entries aij, we do the same for continuity.

Definition 2.105. Consider a matrix function A : [a, b] → ℝ^{n×n}. Then A is called continuous if the entries aij of A are continuous functions on [a, b].
With this definition we can write down the types of differential equations we require, which are obtained by differentiating the integral equations of Theorem 2.104.

Theorem 2.106. Consider a continuous matrix function A : [a, b] → ℝ^{n×n}. Then for x ∈ [a, b] the product integrals

Y1(x) = ∏_a^x (I + A(t)dt),  (2.93)

Y2(x) = (I + A(t)dt) ∏_a^x  (2.94)

satisfy the conditions

Y1′(x) = A(x) Y1(x),  (2.95)

Y2′(x) = Y2(x) A(x).  (2.96)

Written in the notation of the left and right derivatives defined in Section 2.3.1, equations (2.95) and (2.96) can be rewritten as

(d/dx) ∏_a^x (I + A(t)dt) = A(x),   [(I + A(t)dt) ∏_a^x] (d/dx) = A(x).  (2.97)
Moreover, we have

Corollary 2.107. Consider a continuous matrix function A and a function Y : [a, b] → ℝ^{n×n}. Then Y is a solution of the equation

Y′(x) = A(x) Y(x)   for x ∈ [a, b],  (2.98)

satisfying Y(a) = I, if and only if Y solves the integral equation

Y(x) = I + ∫_a^x A(t) Y(t) dt.  (2.99)
From the above it is now evident that the solutions of equations (2.95) and (2.96) can be presented as

Y1(x) = I + ∑_{k=1}^∞ ∫_a^x ∫_a^{x_k} ⋅⋅⋅ ∫_a^{x_2} A(x_k) ⋅⋅⋅ A(x_1) dx_1 ⋅⋅⋅ dx_k,  (2.100)

Y2(x) = I + ∑_{k=1}^∞ ∫_a^x ∫_a^{x_k} ⋅⋅⋅ ∫_a^{x_2} A(x_1) ⋅⋅⋅ A(x_k) dx_1 ⋅⋅⋅ dx_k,  (2.101)

to be compared to the expressions given in Example 2.108. All the above properties and theorems can be readily extended to matrix functions A : [a, b] → ℂ^{n×n}, so this is not an obstacle when considering matrix representations of gauge groups such as, for example, SU(N).

2.3.4 Iterated integrals and path ordering

In this section we shall rewrite the product integrals presented above, in their iterated-integral form (Theorem 2.103), in a notation more familiar in the context of Wilson lines. To this end we start with a well-known example:
Example 2.108. Consider the Schrödinger equation for the quantum evolution operator in the interaction representation:

i∂t U(t) = H(t) U(t),   U(0) = 1,  (2.102)

where H(t) is the interaction Hamiltonian — an operator function acting in the Hilbert space. This unitary operator can also be treated as a complex-valued scalar matrix function U : [0, t] → ℂ. The iterated integrals which contribute to the solution of equation (2.102) can be rewritten as

∫_0^t ∫_0^{t_1} ⋅⋅⋅ ∫_0^{t_{l−1}} H(t_1) ⋅⋅⋅ H(t_l) dt_1 ⋅⋅⋅ dt_l = (1/l!) ∫_0^t dt_1 ⋅⋅⋅ dt_l T{H(t_1) ⋅⋅⋅ H(t_l)},  (2.103)

where T indicates the time-ordering operation for the Hamilton operator H(t); that is, this operator orders the product H(t_1) ⋅⋅⋅ H(t_l) in time. The previous expression then allows for the formal notation for the unitary operator U(t)

Uτ(t) ≡ 𝒫 e^{−i ∫_0^t dt′ H(t′)},  (2.104)
which can be interpreted as a parallel propagator along a path through the time axis τ = [0, t].

We now wish to do the same thing, but replace the time integration variable t with the variable that parameterizes a curve (path) in a smooth real manifold M. More specifically, we consider the matrix function A : [0, 1] → ℂ^{n×n}, so that A can be written as A = S ∘ ϕ, where

ϕ : [0, 1] → M,   t ↦ ϕ(t) = x^μ(t)
and

S : M → ℂ^{n×n},   x^μ ↦ S(x^μ) = A(x(t)).

Applying the same reasoning as in Example 2.108, we see that the equation

Y′(t) = A(t) Y(t)  (2.105)

has a unique solution

Y(t) = T e^{∫_0^t dt′ A(t′)} = 𝒫 e^{∫_0^y dx S(x)},  (2.106)

given the initial condition Y(0) = 1, where the time-ordering is replaced with the path-ordering, which orders the operators S(x) along the path in the manifold M. We shall return to this type of equation in what follows, after a brief discussion of the relation between product integrals and the Chen integrals from Section 2.1.3. Investigating equation (2.23) more closely, it is easy to see that the operators ωi are ordered under the integral sign. Hence, we can rewrite it as
∫_0^1 ( ∫_{γ_t} ω_1 ⋅⋅⋅ ω_{n−1} ) ω_n(t) dt = 𝒫 { ∫_γ ⋅⋅⋅ ∫_γ ω_1 ⋅⋅⋅ ω_n },  (2.107)

where the integrals between the braces are considered as ordinary integrals and not as Chen iterated integrals. Using this result we can rewrite the function Y(t) from equation (2.106) with Chen iterated integrals:

Y(t) = 𝒫 e^{∫_0^y dx S(x)} = e^{∫_γ S},  (2.108)

if one identifies the operator S(x) dx (interpreted as a form) with the forms ω = ω_1 = ⋅⋅⋅ = ω_n from (2.23).

Exercise 2.109. One needs to be careful with this last statement about the ωi. We can indeed identify them all with ω, which will still depend on the coordinates x^μ after having chosen a coordinate chart. Consider the simple example ω_1 ω_2 → ω(x_1) ω(x_2) to clarify this statement.
Now that the relation between product integrals, Chen integrals and path ordering has been explained we are ready to investigate the parallel transport equation in gauge theory and its connection with Wilson lines.
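The necessity of the path (time) ordering can be made concrete. The sketch below (not from the book; the function names and the piecewise-constant A(t) are our own illustrative choices) compares the ordered product of factors (I + A(t)dt) with the naive unordered exponential exp(∫A dt); for non-commuting matrices the two differ.

```python
# Path ordering matters: for A(t) = B on [0, 1/2) and A(t) = C on [1/2, 1],
# with [B, C] != 0, the ordered product converges to e^{C/2} e^{B/2},
# not to exp((B + C)/2).

import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[0.0, 1.0], [0.0, 0.0]]          # A(t) = B on the first half
C = [[0.0, 0.0], [1.0, 0.0]]          # A(t) = C on the second half

def ordered_exp(steps=4000):
    dt = 1.0 / steps
    Y = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(steps):
        m = B if (k + 0.5) * dt < 0.5 else C
        F = [[(1.0 if i == j else 0.0) + m[i][j] * dt for j in range(2)]
             for i in range(2)]
        Y = matmul(F, Y)              # later times act from the left
    return Y

def expm(M, terms=30):
    """Matrix exponential via its (quickly converging) Taylor series."""
    Y = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[sum(term[i][k] * M[k][j] for k in range(2)) / n
                 for j in range(2)] for i in range(2)]
        Y = [[Y[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return Y

P = ordered_exp()                      # -> e^{C/2} e^{B/2} = [[1, .5], [.5, 1.25]]
E = expm([[0.0, 0.5], [0.5, 0.0]])     # exp((B+C)/2), entries cosh/sinh(1/2)
```

The (1,1) entry of P is exactly 1, while that of E is cosh(1/2) ≈ 1.128 — the unordered exponential is simply the wrong object.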
2.4 Wilson lines, parallel transport and covariant derivative

2.4.1 Parallel transport and Wilson lines

We return now to the parallel transport equation in gauge theory, equation (2.69),

dgi(t)/dt = −Ai(X) gi(t),  (2.109)

where Ai is a Lie algebra-valued one-form (i.e., a complex matrix, when considering matrix representations of the Lie algebra). Given the initial condition gi(0) = e, a solution can be expressed using product integrals or Chen integrals, yielding (locally) the formal solution in the form of a functional of an arbitrary path γ(t):

gi[γ(t)] = 𝒫 exp[ − ∫_0^t Aiμ(x(t′)) (dx^μ/dt′) dt′ ]  (2.110)
         = 𝒫 exp[ − ∫_{γ(0)}^{γ(t)} Aiμ(x) dx^μ ] = 𝒫 exp[ − ∫_γ Ai ],  (2.111)

where Aiμ = ig 𝒜^a_{iμ} t^a, with horizontal lift

γ̃(t) = si[γ(t)] gi[γ(t)].  (2.112)
Note that the integrals in equation (2.111) are interpreted as Chen iterated integrals. More specifically, we find that if u0 ∈ π⁻¹[γ(0)], then u1 ∈ π⁻¹[γ(1)] is the parallel transport of u0 along the curve γ:

Γ(γ̃) : π⁻¹[γ(0)] → π⁻¹[γ(1)],   u0 ↦ u1.

Introducing a coordinate chart, we can thus write locally

u1 = si(1) 𝒫 exp[ − ∫_0^1 Aiμ (dx^μ/dt) dt ].  (2.113)
Exercise 2.110. Why is the formal solution (2.110) only valid locally?
The relation with Wilson lines is now straightforward when considering equation (2.110): the Wilson line along a path γ is the parallel transporter along this path. Because of this relationship a Wilson line is sometimes also referred to as a gauge link. Using the properties of the principal fiber bundle we obtain

Rg Γ(γ̃)(u0) = u1 g  (2.114)

and

Γ(γ̃) Rg(u0) = Γ(γ̃)(u0 g),  (2.115)

which, together with the fact that γ̃(t)g is the horizontal lift through u0 g and u1 g, implies that Γ(γ̃) commutes with the right action.

Exercise 2.111. Using the properties of Chen integrals, prove that

Γ(γ̃⁻¹) = (Γ(γ̃))⁻¹.

Exercise 2.112. Again using the properties of Chen integrals, prove that if we have two curves α1,2 : [0, 1] → M, such that α1(1) = α2(0), then

Γ(α̃1 α2) = Γ(α̃2) ∘ Γ(α̃1).
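The composition property of Exercise 2.112 has an exact discrete counterpart. In the sketch below (not from the book; the function names and the matrix A(t) are our own illustrative choices), the ordered product over [0, 1] factorizes into the product over [1/2, 1] composed with the one over [0, 1/2].

```python
# Discrete check of Γ(α1 α2) = Γ(α2) ∘ Γ(α1): the transporter over [0, 1]
# equals the transporter over [0.5, 1] applied after the one over [0, 0.5].

import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transport(A, a, b, steps):
    """Ordered product of factors (I - A(t) dt) from t = a to t = b."""
    dt = (b - a) / steps
    U = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(steps):
        m = A(a + (k + 0.5) * dt)
        F = [[(1.0 if i == j else 0.0) - m[i][j] * dt for j in range(2)]
             for i in range(2)]
        U = matmul(F, U)
    return U

A = lambda t: [[0.0, math.sin(t)], [math.cos(t), 0.0]]
whole = transport(A, 0.0, 1.0, 4000)
composed = matmul(transport(A, 0.5, 1.0, 2000), transport(A, 0.0, 0.5, 2000))
```

With the step sizes matched as above, the two computations multiply exactly the same factors in the same order, so they agree to rounding accuracy.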
2.4.2 Holonomy, curvature and the Ambrose–Singer theorem

2.4.2.1 Holonomy

In the previous section we have clarified the relation between Wilson lines and the parallel transport equation. Now we wish to discuss the relation between Wilson loops and holonomies. Consider a fiber bundle P(Y, G, π) and two curves γ1 and γ2 in Y, such that

γ1(0) = γ2(0) = p0
and

γ1(1) = γ2(1) = p1.

If we consider the horizontal lifts of these curves for which γ̃1(0) = γ̃2(0) = u0, then we do not necessarily get γ̃1(1) = γ̃2(1). This means that if we consider a loop γ in Y, i.e., γ(0) = γ(1), then in general the horizontal lift does not close: γ̃(1) can differ from γ̃(0). In other words, a loop γ induces a transformation τγ : π⁻¹(p) → π⁻¹(p) on the fiber at p. Because the horizontal lift Γ(γ̃) commutes with the right action, we obtain

τγ(ug) = τγ(u) g.  (2.116)

Fixing a point p in the manifold Y and considering all loops for which this point is the base point, written as Cp(Y), τγ can only reach certain elements of G. The set of elements that can be reached forms a subgroup of the structure group G, the holonomy group at u, where π(u) = p:

Φu = {g ∈ G | τγ(u) = ug, γ ∈ Cp(Y)}.  (2.117)
Exercise 2.113. Show that the elements of Φu form a group.
An interesting fact is that τ_{γ⁻¹} = (τγ)⁻¹, inducing g_{γ⁻¹} = (gγ)⁻¹. From the discussion on parallel transport, we find that the elements of the holonomy group can be treated as Wilson loops:

gγ = 𝒫 exp[ − ∮_γ Aiμ(x) dx^μ ].  (2.118)
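For a small loop, the exponent in (2.118) is governed by the curvature (introduced in the next subsection) through the loop. The sketch below (not from the book; the function names and the abelian potential are our own illustrative choices) computes ∮ Aμ dx^μ around a small square for A_x = 0, A_y = B x, for which the only nonzero curvature component is F_xy = B; the loop integral equals B times the enclosed area.

```python
# Abelian example: the holonomy of a small square loop is
# exp(-∮ A_mu dx^mu) with ∮ A_mu dx^mu = F_xy * (loop area).

import math

def line_integral(A, start, end, steps=500):
    """∫ A_mu dx^mu along the straight segment from start to end (midpoint rule)."""
    (x0, y0), (x1, y1) = start, end
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) / steps
        x, y = x0 + s * (x1 - x0), y0 + s * (y1 - y0)
        ax, ay = A(x, y)
        total += (ax * (x1 - x0) + ay * (y1 - y0)) / steps
    return total

B = 0.7
A = lambda x, y: (0.0, B * x)        # A_x = 0, A_y = B x, so F_xy = B

eps = 0.01
corners = [(0.0, 0.0), (eps, 0.0), (eps, eps), (0.0, eps), (0.0, 0.0)]
phase = sum(line_integral(A, corners[i], corners[i + 1]) for i in range(4))
holonomy = math.exp(-phase)          # ≈ 1 - B * eps**2 for a small loop
```

This is the mechanism behind the Ambrose–Singer theorem below: the curvature evaluated on horizontal vectors generates the Lie algebra of the holonomy group.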
2.4.2.2 Curvature

Before we continue the discussion of holonomies, we need to introduce the curvature two-form in gauge theory.

Definition 2.114 (Covariant derivative). Suppose we have a vector space V of dimension k, with a basis in V denoted by {eα}. Let ϕ : TP ∧ ⋅⋅⋅ ∧ TP → V and X1, . . . , Xn+1 ∈ TuP. The covariant derivative acting on

ϕ = ∑_{α=1}^k ϕ^α ⊗ eα

is then defined as

Dϕ(X1, . . . , Xn+1) ≡ dP ϕ(X1^H, . . . , Xn+1^H),  (2.119)

with dP ϕ ≡ dP ϕ^α ⊗ eα, where dP is the exterior differential for the fiber bundle P and X^H denotes the horizontal component of X.
The curvature can then be introduced using this definition of the covariant derivative:

Definition 2.115 (Curvature two-form). The curvature two-form Ω is the covariant derivative of the Ehresmann connection one-form ω:

Ω ≡ Dω ∈ ⋀²P ⊗ g.  (2.120)

The right action on the curvature is expressed by the following proposition:

Proposition 2.116. The curvature transforms under the right action of an element g ∈ G as

Rg∗Ω = g⁻¹ Ω g.  (2.121)

Exercise 2.117. Prove this proposition starting from the observation that Rg∗ preserves horizontal subspaces, and that dP Rg∗ = Rg∗ dP.
In gauge theory notation this can be rewritten as

Rg∗Fμν = g⁻¹ Fμν g,

where Fμν is the gauge-covariant field strength. The above notation allows us to introduce Cartan's structure equation, which will also be familiar when written with field strength tensors.

Theorem 2.118 (Cartan's structure equation). Consider X1, X2 ∈ TuP. The curvature Ω and the Ehresmann connection ω satisfy the Cartan structure equation

Ω(X1, X2) = dP ω(X1, X2) + [ω(X1), ω(X2)].  (2.122)

It can also be written in the form

Ω = dP ω + ω ∧ ω.  (2.123)

Now the field strength tensor (also called the gauge curvature) can be written as

F = dP A + A ∧ A,   Fμν = ∂μAν − ∂νAμ + [Aμ, Aν],  (2.124)
which should look more familiar to physicists.

2.4.2.3 The Ambrose–Singer theorem

The connection of Wilson loops with holonomies should allow one, in principle, to recast gauge theory in the space of generalized loops. The Ambrose–Singer theorem is the cornerstone of this program.

Theorem 2.119 (Ambrose–Singer). Consider a principal fiber bundle P(Y, G, π) with connection ω and curvature form Ω. Let Φ(u) be the holonomy group with reference point u ∈ P(Y, G, π) and P(u) the holonomy bundle of ω through u. Then the Lie algebra of Φ(u) is equal to the Lie subalgebra of g generated by all elements of the form Ωp(v1, v2), for p ∈ P(u) and v1, v2 horizontal vectors at p, where g is the Lie algebra of G.

Expressed in words, this theorem says that the physical content of the principal fiber bundle theory P with connection ω can also be found in the holonomy group Φ(u). In other words, there exists an equivalent loop space representation of a gauge
theory. A downside of this approach is that the holonomy group is infinite dimensional, so we carry redundant information or, said differently, the free loop space is overcomplete. Furthermore, the holonomy group is gauge dependent, so that if we want to express physical observables as functions of the holonomies, these functions will need to be gauge invariant. Fortunately, we shall see that considering generalized loops, in the sense of Chen integrals as d-loops, enables us to deal with these issues.

2.4.2.4 Wilson loop functional

Let us summarize and recapitulate some of the properties of Wilson lines and loops from a gauge theory point of view, and introduce the gauge invariant Wilson loop functionals, which in the next sections will be used to introduce and study generalized loop space. Remember that a Wilson line

Uγ = 𝒫 e^{∫_γ Aμ}  (2.125)

is a solution of the parallel transport equation. When γ is a closed path (a loop), this becomes

Uγ = 𝒫 e^{∮_γ Aμ}.  (2.126)

Notice that this infinite series, when one expands the exponential, converges to an element g ∈ G. As we have seen before, the gauge link is not gauge invariant, but transforms as

U^g_γ = g_y⁻¹ Uγ g_x,  (2.127)

for a path γ from x to y, or as

U^g_γ = g_x⁻¹ Uγ g_x,  (2.128)

when γ is a loop with base point x = γ(0). Since observables are by definition gauge invariant and, as we will see, the advantage of using generalized loop space is its gauge invariance, we define the gauge invariant Wilson path/loop functional 𝒲 : ℒℳ → ℂ by

W(γ) = (1/N) Tr Uγ,  (2.129)
where ℒℳ represents the space of all loops in M. By continuity of the trace and the expansion of the exponential in Chen integrals, we get

W(γ) = (1/N) ∑_{n≥0} Tr ∫_γ ω ω ⋅⋅⋅ ω   (n factors of ω),  (2.130)

with, as before, the convention that

∫_γ ω ω ⋅⋅⋅ ω = Id

if n = 0. Expressed with the gauge potentials Aμ, this Wilson loop functional can be written as

𝒲γ = Tr 𝒫 e^{∫_γ Aμ},  (2.131)

for open paths, and

𝒲γ = Tr 𝒫 e^{∮_γ Aμ},  (2.132)

for loops. Both expressions are now gauge invariant due to the traces, so that Wilson loop functionals are indeed gauge invariant functions of the holonomies. In terms of d-paths, these Wilson loop functionals are complex-valued d-paths,

𝒲γ ∈ Alg(Sh(Ω), ℂ),

i.e., they vanish on the ideal I(d, p) defined in Section 2.1.2.
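The gauge invariance of the traced loop (2.132) has a simple discrete analogue, sketched below (not from the book; the function names and the particular matrices are our own illustrative choices). A loop through four sites is modeled by link matrices U_0..U_3; a gauge transformation sends U_i → g_{i+1 mod 4}⁻¹ U_i g_i, the discrete version of (2.127)–(2.128), so the ordered product around the loop becomes g_0⁻¹ W g_0 and its trace is unchanged.

```python
# Discrete check: the trace of an ordered product of link matrices around
# a closed loop is invariant under site-wise gauge transformations.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

# arbitrary links U_0..U_3 and invertible gauge factors g_0..g_3
links = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 1.0], [1.0, 1.0]],
         [[2.0, 1.0], [1.0, 2.0]], [[1.0, 0.0], [2.0, 1.0]]]
gauge = [[[1.0, 1.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 1.0]],
         [[1.0, 0.0], [1.0, 1.0]], [[3.0, 1.0], [1.0, 1.0]]]

def loop_trace(us):
    W = [[1.0, 0.0], [0.0, 1.0]]
    for U in us:                     # ordered product U_3 U_2 U_1 U_0
        W = matmul(U, W)
    return W[0][0] + W[1][1]

# gauge-transform each link: U_i -> g_{i+1 mod 4}^{-1} U_i g_i
new_links = [matmul(inv2(gauge[(i + 1) % 4]), matmul(links[i], gauge[i]))
             for i in range(4)]
t0, t1 = loop_trace(links), loop_trace(new_links)
```

The individual links change, but the intermediate gauge factors cancel pairwise around the loop, leaving only the conjugation by g_0 — which the trace removes.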
2.5 Generalization of manifolds and derivatives

We wish to show that the generalized loop space exhibits a manifold structure, which is not, however, the usual one. Namely, this space is not locally homeomorphic to the Euclidean space ℝⁿ, as is required for ordinary manifolds. To describe the manifold-like structure we need to generalize the manifold concept to allow for spaces that are modeled on, for instance, Banach spaces. This generalization allows us to extend the manifold concept to infinite-dimensional spaces. With the aid of the generalized manifolds one can generalize derivatives. The most important generalization for our purposes is the Fréchet derivative. In the last section of the present chapter we shall discuss this derivative and some of its nice properties in more detail; here we only present the necessary mathematical preliminaries.
2.5.1 Manifold: Fréchet derivative and Banach manifold

A real smooth manifold is a topological space that is locally homeomorphic to ℝⁿ. This manifold concept can be extended to a larger class, where the manifold is no longer modeled on a Euclidean but on a Banach space.¹⁴ Put differently, the underlying topological space is locally homeomorphic to an open set in a Banach space, which allows one to extend the manifold concept to infinite dimensions. A more formal definition will be given below, but first we need to generalize the derivative concept to the so-called Fréchet derivative. This derivative is defined on Banach spaces and can be interpreted as a generalization of the derivative of a one-parameter real-valued function to the case of a vector-valued function depending on multiple real variables. This is what we will need to define derivatives on the generalized loop space and, as we will see, it is actually necessary to define the functional derivative in this space. To give the definition of the Fréchet derivative we need the concept of a bounded linear operator.

Definition 2.120 (Bounded linear operator). A bounded linear operator is a linear transformation L between normed vector spaces X and Y for which the ratio of the norm of L(v) to that of v is bounded by the same number over all nonzero vectors v ∈ X. Therefore, there exists M > 0 such that for all v ∈ X

‖L(v)‖_Y ≤ M ‖v‖_X.

The smallest such M is called the operator norm ‖L‖_op of L.
A bounded linear operator is generally not a bounded function, which would require that the norm of L(v) be bounded for all v, which is not possible unless Y is the zero vector space. Put more correctly, a bounded linear operator is a locally bounded function. Let us recall that a linear operator on a metrizable vector space is bounded if and only if it is continuous. With the above we are now ready to define the Fréchet derivative.
14 Complete normed vector spaces.
Definition 2.121 (Fréchet derivative). Consider Banach spaces X1, X2, and let U ⊂ X1 be an open subset. A function F : U → X2 is called Fréchet differentiable at x ∈ U if there exists a bounded linear operator Ax : X1 → X2 such that

lim_{Δ→0} ‖F(x + Δ) − F(x) − Ax(Δ)‖_{X2} / ‖Δ‖_{X1} = 0,  (2.133)

where the limit is defined in the usual sense. If this limit exists, then DF(x) = Ax stands for the Fréchet derivative. We call the function F of class C¹ if

DF : U → B(X1, X2),   x ↦ DF(x) = Ax,  (2.134)

is continuous; B here highlights the fact that this is the space of bounded linear operators.
Note the difference with the continuity of DF(x) in the previous definition. The usual derivative of a real function is easily recovered from this definition: for F : ℝ → ℝ, DF(x) is the linear map t ↦ t F′(x). The Fréchet derivative can be extended to arbitrary topological vector spaces (TVSs). The latter are defined as vector spaces with a topology that makes the addition and scalar multiplication operations continuous, i.e., the topology is consistent with the linear structure of the vector space.
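In finite dimensions the Fréchet derivative is the Jacobian, acting as a bounded linear operator. The sketch below (not from the book; the function names and the map F are our own illustrative choices) evaluates the ratio in (2.133) and shows it vanishing linearly with ‖Δ‖.

```python
# The Fréchet derivative of F(x, y) = (x^2, x*y) at a point p is its
# Jacobian; the ratio (2.133) then shrinks like ||Δ|| as Δ -> 0.

import math

def F(v):
    x, y = v
    return (x * x, x * y)

def DF(p):
    """Jacobian of F at p, returned as a linear map on R^2."""
    x, y = p
    return lambda d: (2 * x * d[0], y * d[0] + x * d[1])

def ratio(p, d):
    """||F(p+d) - F(p) - DF(p)(d)|| / ||d||, the quotient in (2.133)."""
    fp = F(p)
    fpd = F((p[0] + d[0], p[1] + d[1]))
    ld = DF(p)(d)
    num = math.hypot(fpd[0] - fp[0] - ld[0], fpd[1] - fp[1] - ld[1])
    return num / math.hypot(d[0], d[1])

p = (1.0, 2.0)
r1 = ratio(p, (1e-2, 1e-2))     # ≈ 1e-2
r2 = ratio(p, (1e-4, 1e-4))     # ≈ 1e-4: the ratio -> 0, so DF(p) is the derivative
```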
Definition 2.122 (Fréchet derivative for topological vector spaces). Let now X1, X2 be topological vector spaces, with U ⊂ X1 an open subset that contains the origin, and consider a function F : U → X2 preserving the origin, F(0) = 0. To continue, it is necessary to explain what it means for this function to have 0 as its derivative. We call the function F tangent to 0 if for every open neighborhood V2 ⊂ X2 of 0_{X2} there is an open neighborhood V1 ⊂ X1 of 0_{X1}, together with a function H : ℝ → ℝ satisfying

lim_{Δ→0} H(Δ)/Δ = 0,

such that for all Δ

F(ΔV1) ⊂ H(Δ)V2.

This somewhat strange condition is then used to define F to be Fréchet differentiable at a point x0 ∈ U if there exists a continuous linear operator λ : X1 → X2 such that F(x0 + Δ) − F(x0) − λΔ, considered as a function of Δ, is tangent to 0.
It can further be demonstrated that if the Fréchet derivative exists, then it is unique. Similarly to the usual properties of differentiable functions, we find that:
– if a function is Fréchet differentiable at a point, it is necessarily continuous at this point;
– sums and scalar multiples of Fréchet differentiable functions are again differentiable.

Hence we conclude that the space of functions Fréchet differentiable at some point x forms a subspace of the functions that are continuous at that point x. Moreover, the chain rule also holds, as does the Leibniz rule whenever Y is an algebra and a topological vector space in which multiplication is continuous. This will turn out to be exactly the case for the space of generalized loops, where the algebra multiplication is the shuffle product.

Using the above generalization of the derivative, we can extend the manifold concept to that of a Banach manifold:

Definition 2.123 (Banach manifold). Take a set X. An atlas of class Cⁿ, n ≥ 0, on X is defined as a collection of pairs (charts) (Ui, ϕi), i ∈ I, such that
1. for each i ∈ I, Ui ⊂ X and ⋃_i Ui = X;
2. for each i ∈ I, ϕi is a bijection from Ui onto an open subset ϕi(Ui) of some Banach space Ei, and ϕi(Ui ∩ Uj) is open in Ei;
3. the crossover map ϕj ∘ ϕi⁻¹ : ϕi(Ui ∩ Uj) → ϕj(Ui ∩ Uj) is an n-times continuously differentiable function for all i, j ∈ I, meaning that the nth Fréchet derivative

Dⁿ(ϕj ∘ ϕi⁻¹) : ϕi(Ui ∩ Uj) → Lin(Eiⁿ; Ej)
exists and is a continuous function with respect to the norm topology on subsets of Ei and the operator norm topology (i.e., the topology induced by a norm on the space of bounded linear operators, Definition (2.120)) on the space of linear operators Lin(Ei^n; Ej), where the power Ei^n takes into account that the n-times iterated application of the derivative defines the n-th Fréchet derivative.
It can be shown that there is a unique topology on X such that for all i ∈ I, Ui is open and ϕi is a homeomorphism. This topological space is assumed to be a Hausdorff space in most cases, but this is not necessary from the point of view of the formal definition. In the cases where all the Ei are equal to the same space E, the atlas is called an E-atlas. However, it is not necessary that the Banach spaces Ei be the same space, or even isomorphic as topological vector spaces. But, if two charts (Ui, ϕi) and (Uj, ϕj) are such that Ui ∩ Uj ≠ ∅, it follows from the derivative of the crossover map ϕj ∘ ϕi^{−1} : ϕi(Ui ∩ Uj) → ϕj(Ui ∩ Uj) that Ei ≅ Ej, that is, they are isomorphic as topological vector spaces. It is important to realize that the set of points x ∈ X for which there is a chart (Ui, ϕi) with x ∈ Ui and Ei isomorphic to a given Banach space E is both open and closed. Hence, one can assume that, on each connected component of X, the atlas is an E-atlas for some fixed E. Similarly to the common differentiable manifolds, a new chart (U, ϕ) is called compatible with a given atlas {(Ui, ϕi) | i ∈ I} if the crossover map ϕi ∘ ϕ^{−1} : ϕ(U ∩ Ui) → ϕi(U ∩ Ui)
is an r-times continuously differentiable function for all i ∈ I. Two atlases are compatible when each chart in one atlas is compatible with the other atlas. Compatibility of atlases defines an equivalence relation on the class of all possible atlases on X. Just like in the situation with real smooth manifolds, a C^r manifold structure on X is defined as a choice of an equivalence class of atlases on X of class C^r. If all the Banach spaces Ei are isomorphic as topological vector spaces (as is guaranteed to be the case if X is connected), then an equivalent atlas can be found for which they are all equal to some Banach space E. X is then called an E-manifold, or one says that X is modeled on E. We end this discussion by making the remark that a Hilbert manifold is a special case of a Banach manifold in which the manifold is locally modeled on Hilbert spaces.
2.5.2 Fréchet manifold

The concept of Banach manifolds can be further generalized by making use of Fréchet spaces, which are a special kind of topological vector spaces. Fréchet spaces are locally convex spaces which are complete with respect to a translation invariant metric; this metric does not need to be generated by a norm. Notice that this means that not every Fréchet space is a Banach space, which does require a norm. Typical examples are spaces of infinitely differentiable functions. We give below two equivalent definitions of a Fréchet space, one using translation invariant metrics and one using a family of seminorms.

Definition 2.124 (Fréchet spaces via translation invariant metrics). A topological vector space X is a Fréchet space if and only if it satisfies the following three properties:
1. It is locally convex, i.e., there is a local basis of convex sets for its topology at every point.
2. Its topology can be induced by a translation invariant metric d, meaning that a subset U ⊂ X is open if and only if for all u1 ∈ U there exists ϵ > 0 such that {u2 : d(u2, u1) < ϵ} ⊂ U.
3. It is a complete metric space.
Note that there is no natural notion of distance between two points of a Fréchet space: many different translation invariant metrics may induce the same topology. The second definition is built on a family of seminorms.

Definition 2.125 (Fréchet spaces via a family of seminorms). A topological vector space X is a Fréchet space if and only if it satisfies the following three properties:
1. It is a Hausdorff space.
2. Its topology may be induced by a countable family of seminorms ‖·‖_l, l = 0, 1, 2, .... This means that a subset U ⊂ X is open if and only if for all u1 ∈ U there exist K ≥ 0 and ϵ > 0 such that {u2 : ‖u2 − u1‖_l < ϵ, ∀l ≤ K} ⊂ U.
3. It is complete with respect to the family of seminorms.

A sequence (x_n) in X converges to x in the Fréchet space defined by a family of seminorms if and only if it converges to x with respect to each of the given seminorms.
Note that every Banach space is a Fréchet space, as the norm induces a translation invariant metric and the space is complete with respect to this metric. The following examples show how the shuffle algebra can be made topological by seminorms, turning it into a Fréchet space.

Example 2.126. The vector space C^∞([0, 1]) of infinitely differentiable functions F : [0, 1] → ℝ becomes a Fréchet space with the seminorms

‖F‖_{(l)} = sup { |d^l F(x)/dx^l| : x ∈ [0, 1] },   (2.135)

for all l ∈ ℕ, l ≥ 0. A sequence (F_n) of functions converges to F ∈ C^∞([0, 1]) if and only if for all l ≥ 0 the sequence (F_n^{(l)}) converges uniformly to F^{(l)}, where F^{(l)} = d^l F/dx^l.
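Convergence in this topology can be probed numerically. The sketch below is our own illustration (seminorms evaluated on a finite grid, with the derivatives supplied analytically): F_n(x) = sin(nx)/n tends to 0 in the seminorm with l = 0 but not in the one with l = 1, so the sequence does not converge to 0 in C^∞([0, 1]).

```python
import math

def seminorm(deriv, l, grid):
    """l-th seminorm of Example 2.126 approximated on a grid:
    sup over [0,1] of |d^l F/dx^l|; `deriv(l, x)` returns the l-th derivative."""
    return max(abs(deriv(l, x)) for x in grid)

def deriv_Fn(n):
    """Analytic derivatives of F_n(x) = sin(n x)/n:
    d^l/dx^l [sin(n x)/n] = n^(l-1) * sin(n x + l*pi/2)."""
    def d(l, x):
        return n ** (l - 1) * math.sin(n * x + l * math.pi / 2)
    return d

grid = [k / 1000 for k in range(1001)]
for n in (10, 100, 1000):
    d = deriv_Fn(n)
    print(n, seminorm(d, 0, grid), seminorm(d, 1, grid))
# The l = 0 column shrinks like 1/n while the l = 1 column stays close to 1.
```

This is exactly why a single norm is not enough: uniform smallness of a function says nothing about its derivatives.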
Considering differentiation of maps between Fréchet spaces, one has to be careful. Take Fréchet spaces X1, X2. The set L(X1, X2) of all continuous linear maps X1 → X2 is not a Fréchet space in any natural manner. This is where the theory of Banach spaces and that of Fréchet spaces strongly deviate, and we need a different definition for continuous differentiability of functions defined on Fréchet spaces, the Gâteaux derivative:

Definition 2.127 (Gâteaux derivative). Suppose X1, X2 are Fréchet spaces, U ⊂ X1 open, P : U → X2 a function, x ∈ U, and V ∈ X1. Then P is called differentiable at x in the direction V if the following limit exists:

D_V[P(x)] = lim_{Δ→0} [P(x + ΔV) − P(x)] / Δ.   (2.136)

Then P is called continuously differentiable in U if

D[P] : U × X1 → X2,  (x, V) ↦ D_V[P(x)],   (2.137)

is continuous.
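For a nonlinear operator the limit in (2.136) can be approximated by a small but finite Δ. A minimal sketch (our own toy example: the squaring operator P(f) = f², acting on functions represented by their samples, for which D_V[P(f)] = 2fV):

```python
# Directional (Gateaux) derivative of P(f) = f^2 on sampled functions:
# ((f + t V)^2 - f^2)/t = 2 f V + t V^2  ->  2 f V  as t -> 0.
def P(f):
    return [v * v for v in f]

def gateaux(P, f, V, t=1e-6):
    Pf = P(f)
    Pft = P([a + t * b for a, b in zip(f, V)])
    return [(u - w) / t for u, w in zip(Pft, Pf)]

f = [0.0, 0.5, 1.0, 1.5]          # samples of f on a grid
V = [1.0, -1.0, 2.0, 0.5]         # direction
num = gateaux(P, f, V)
exact = [2 * a * b for a, b in zip(f, V)]
print(max(abs(u - w) for u, w in zip(num, exact)))  # small, of order t
```

The discrepancy shrinks linearly with t, in line with the exact remainder tV².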
Since the product of Fréchet spaces is again a Fréchet space, we can then differentiate D[P] and define the higher derivatives of P in this fashion. The differentiation operator P : C^∞([0, 1]) → C^∞([0, 1]) defined by P(x) = x′ is itself infinitely differentiable. The first derivative reads

D_V[P(x)] = V′   (2.138)
for any two elements x, V ∈ C^∞([0, 1]). This is an important advantage of the Fréchet space C^∞([0, 1]) as compared to the Banach space C^k([0, 1]) for finite k. If P : U → X2 is a continuously differentiable function, then the differential equation

x′(t) = P(x(t)),  x(0) = x0 ∈ U,   (2.139)

need not have any solutions, and even if it does, the solutions need not be unique, in strong contrast to the situation in Banach spaces.15 One can now define Fréchet manifolds as spaces that locally look like Fréchet spaces, and one can then extend the concept of Lie groups to these manifolds, leading to a Fréchet Lie group. Such a Lie group is a group G which is also a manifold, now an (infinite-dimensional) Fréchet manifold, such that the map

G × G → G,  (g, h) ↦ gh^{−1}   (2.140)

is continuous. This is useful because, for a given (ordinary) compact C^∞ manifold M, the set of all C^∞ diffeomorphisms F : M → M forms a generalized Lie group in this sense, and this Lie group captures the symmetries of M. Some of the relations between Lie algebras and Lie groups remain valid in this setting, which will be used when studying the group structure and Lie algebra structure of generalized loop space.
15 We emphasize that the inverse function theorem does not hold in Fréchet spaces. A partial substitute to it is the Nash–Moser theorem, which extends the notion of an inverse function from Banach spaces to a class of Fréchet spaces. In contrast to the Banach space case, in which the invertibility of the derivative (where the derivative is interpreted as a linear operator) at a point is sufficient for a map to be locally invertible, the Nash–Moser theorem requires the derivative to be invertible in a vicinity of a point. The theorem is widely used to prove local uniqueness for nonlinear partial differential equations in spaces of smooth functions.
3 The group of generalized loops and its Lie algebra

3.1 Introduction

In the previous chapter we introduced d-paths and d-loops as algebra morphisms. We have already demonstrated that Shc(d, p) forms a group with respect to the multiplication introduced in Definition 2.10 and that this algebra is isomorphic to the (Chen) integral algebra 𝒜p generated by all functionals X^{ω1⋯ωn}, such that d-loops can be identified with elements of the algebra morphisms Alg(𝒜p, K). From now on we set K ≡ ℂ. The algebra Shc(d, p) can be supplied with a topology turning it into a topological algebra, more specifically into a locally multiplicative convex (LMC) algebra. This topology is built from seminorms, a construction that is due to Tavares. We shall explicitly discuss the construction of this topology, next to a diagrammatic overview of the different steps in this topologization process. Equipped with such a topology, Shc(d, p) turns into a Fréchet space and, combined with the fact that the generalized loops form a group, this will also yield a Fréchet Lie group and the corresponding Lie algebra. The algebraic properties, combined with the differential operations from Section 2.1.1 and the fact that limits are well defined in this new space, allow us to extend differential calculus on manifolds to the generalized manifolds discussed before. Several differential operators, which generate variations of the loops, will be introduced in Section 2.5. The exposition in this chapter is based mostly on the works by Tavares (see References).
3.2 The shuffle algebra over Ω = ⋀M as a Hopf algebra

The main advantage of d-paths is that they can be considered as algebraic paths, in the sense that they have a rich algebraic structure that can be used to derive many interesting properties. In this section we investigate this in more detail. We start by restating the comultiplication and counit of the shuffle algebra and their properties:

Δ(ω1⋯ωn) = ∑_{i=0}^{n} ω1⋯ωi ⊗ ω_{i+1}⋯ωn,
ϵ(ω1⋯ωn) = 0 if n ≥ 1,  = 1 if n = 0.   (3.1)
Properties of the comultiplication and counit are the following:

(Δ ⊗ 1) ∘ Δ = (1 ⊗ Δ) ∘ Δ   (coassociative law)
Δ(u ∙ v) = Δ(u) ∙ Δ(v)   (Δ is an algebra morphism)
(1 ⊗ ϵ) ∘ Δ = (ϵ ⊗ 1) ∘ Δ = 1   (counitary property)
ϵ(u ∙ v) = ϵ(u) ∙ ϵ(v)   (ϵ is an algebra morphism)   (3.2)

for all u, v ∈ Sh. A complete Hopf algebra structure is given by a multiplication (the shuffle product), a unit, a comultiplication, a counit and an antipode. The antipode was defined in Definition 2.8 as a K-linear map J : Sh → Sh and is restated here for convenience:

J(ω1⋯ωn) = (−1)^n ωn⋯ω1,   (3.3)
with the properties given before, see equation (2.7).

Exercise 3.1. Using the above definitions and properties prove that

∑_{i=0}^{n} (−1)^i ωi⋯ω1 ∙ ω_{i+1}⋯ωn = ∑_{i=0}^{n} (−1)^{n−i} ω1⋯ωi ∙ ωn⋯ω_{i+1} = ϵ(ω1⋯ωn).   (3.4)
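The identity (3.4) is the antipode axiom of the shuffle Hopf algebra, and it can be verified mechanically for small words. A minimal sketch (our own implementation: words are tuples of abstract letters, formal sums are dictionaries mapping words to integer coefficients):

```python
from itertools import combinations

def shuffle(u, v):
    """Shuffle product of two words (tuples); returns {word: coefficient}."""
    result = {}
    n, m = len(u), len(v)
    for pos in combinations(range(n + m), n):   # slots occupied by the letters of u
        pos_set = set(pos)
        ui, vi = iter(u), iter(v)
        word = tuple(next(ui) if k in pos_set else next(vi) for k in range(n + m))
        result[word] = result.get(word, 0) + 1
    return result

def antipode_sum(w):
    """sum_{i=0}^{n} (-1)^i (w_i...w_1) . (w_{i+1}...w_n) as a formal sum,
    i.e. the left-hand side of (3.4) with J(u) = (-1)^|u| * reversal of u."""
    total = {}
    for i in range(len(w) + 1):
        left = tuple(reversed(w[:i]))
        for word, c in shuffle(left, w[i:]).items():
            total[word] = total.get(word, 0) + (-1) ** i * c
    return {k: v for k, v in total.items() if v != 0}

print(antipode_sum(('a', 'b', 'c')))   # {} : the sum vanishes, matching eps = 0 for n >= 1
print(antipode_sum(()))                # {(): 1} : for the empty word, matching eps(1) = 1
```

The same cancellation happens for every word length, which is exactly the statement of (3.4).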
The above part describes the Hopf algebra structure of Sh(Ω), but we wish to extend this structure to the algebra 𝒜p generated by the functionals X^{ω1⋯ωn} from equation (2.52). This extension follows from Proposition 2.39 that turns the surjective map Sh(Ω) → 𝒜p, defined by

1 ↦ 1  and  ω1⋯ωn ↦ X^{ω1⋯ωn},

into a homomorphism of algebras. Since this map is now an algebra morphism, the algebraic structure of Sh(Ω) is preserved under this map. Proposition 2.42 and Theorem 2.34 imply that the kernel of this morphism contains the ideal I(d, p). This ideal reads

ω1⋯ω_{i−1}(Fωi)ω_{i+1}⋯ωn − F(p) ω1⋯ωn − ((ω1⋯ω_{i−1}) ∙ dF) ωi⋯ωn,   (3.5)
or in reduced notation

u1 (Fω) u2 − (u1 ∙ dF) ω u2 − F(p) u1 ω u2   (3.6)

for u1, u2 ∈ Sh, ω ∈ ⋀M, F ∈ C^∞(M). With this algebra morphism we obtain that d-paths can be seen as elements of the set of algebra morphisms Alg(𝒜p, ℂ); that is, a d-path is an algebra morphism γ ∈ Alg(𝒜p, ℂ), where γ : 𝒜p → ℂ vanishes on the ideal I(d, p) by definition. In the case of d-loops, we need to extend the ideal to include dC^∞(M). However, in the integral algebra this is included by definition, since

∫_γ dF = 0

for γ ∈ ℒℳ, and thus dC^∞(M) ∈ ker(Sh(Ω) → 𝒜p). As before we denote this ideal by Jp:

Jp = I(d, p) + ⟨dC^∞(M)⟩,   (3.7)

where I(d, p) is the shuffle algebra ideal associated to the pointed differentiation (d, p). We have already seen that this new ideal induces the algebra isomorphism

Sh(Ω)/Jp ≃ 𝒜p.   (3.8)
The algebra 𝒜p has an induced Hopf algebra structure, where the unit and multiplication follow from Proposition 2.39, and the comultiplication, counit and antipode follow from these operations on Sh(Ω) as

Δ(X^{ω1⋯ωn}) = ∑_{i=0}^{n} X^{ω1⋯ωi} ⊗ X^{ω_{i+1}⋯ωn},
ϵ(X^{ω1⋯ωn}) = 0 if n ≥ 1,  = 1 if n = 0,
J(X^{ω1⋯ωn}) = (−1)^n X^{ωn⋯ω1}.   (3.9)
Exercise 3.2. How can one understand the comultiplication Δ : 𝒜p → 𝒜p ⊗ 𝒜p in equation (3.9), taking into account Proposition 2.39?
This explains the Hopf algebra structure, but the integral algebra can be equipped with a much richer structure, namely that of a nuclear locally multiplicative-convex (NLMC) algebra. This structure is generated by a topology, giving it the structure of a Fréchet space. Figure 3.1 gives a diagrammatic overview of how the different topologies are constructed on the involved algebras. Let us derive a topology on the tensor powers ⨂ⁿ ⋀M, which can then be used to obtain a topology on Sh(Ω) consistent with its linear structure. We write Ω = ⋀M as before. The construction of the topology will give us more than just a topology: it will enrich Sh(Ω) with the structure of a nuclear locally multiplicative-convex topological vector space (TVS), or Fréchet space, that is also Hausdorff, Banach and Hopf. The construction of the topology starts from the Riemannian metric and connection on M. The connection allows us to define a covariant derivative D, and the metric induces a norm ‖·‖. On the other hand, we know that M as a manifold has a topology induced by its Riemannian metric. Combining this with the atlas of M, we get a local basis (Uk)_{k∈ℕ} for this topology.
Figure 3.1: Topology on Sh(Ω).
Using this local basis it is possible to construct a sequence of nested compacts {K_m^U}_{m≥1} in a local coordinate chart (U, x), such that

⋃_{m≥1} K_m^U = U.

We can then define a first family of seminorms on (U, x) by using the norm induced by the Riemannian metric and the covariant derivative:

‖ωi‖_{m,p} = sup_{x ∈ K_m^U} ‖D^p ωi(x)‖,   (3.10)

where the ωi ∈ C^∞(U) are the coefficients of the one-form

ω = ∑_{i=1}^{n} ωi dx^i ∈ ⋀M,

and D^p denotes the p-th covariant derivative with respect to the connection. A second family of seminorms is now constructed from the first family ‖ωi‖_{m,p} by

N_m^U(ω) = max_{1≤i≤n} ρ_m^U(ωi),  ω ∈ ⋀U,   (3.11)

where

ρ_m^U(ωi) = sup_{p≤m} ( sup_{x ∈ K_m^U} ‖D^p ωi(x)‖ ).   (3.12)
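The role of the exhaustion by compacts can be made concrete in a one-dimensional toy model (our own illustration, with a flat connection so that D^p is the ordinary p-th derivative, and our own choice of exhaustion): the coefficient w(x) = 1/x on U = (0, 1) has a finite seminorm on each compact K_m, yet no single norm could bound it on all of U.

```python
import math

# Toy version of (3.10)-(3.12) on the chart U = (0,1): take the single
# coefficient w(x) = 1/x, with d^p w/dx^p = (-1)^p p!/x^(p+1), and exhaust U
# by the nested compacts K_m = [1/(m+2), 1 - 1/(m+2)].
def sup_deriv(p, m):
    a = 1.0 / (m + 2)                       # left endpoint of K_m, where |d^p w| peaks
    return math.factorial(p) / a ** (p + 1)

def rho(m):                                 # the seminorm (3.12): sup over p <= m
    return max(sup_deriv(p, m) for p in range(m + 1))

print([rho(m) for m in range(1, 5)])
# Each seminorm is finite, but the values grow without bound with m: this is
# why the topology is described by a whole family of seminorms, not one norm.
```

The same phenomenon on the manifold level is what forces ⋀M to be a Fréchet space rather than a Banach space.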
As a result we obtain a family of seminorms on the local coordinate chart (U, x). The next step is to extend this to the entire manifold M. This will be implemented by means of the inclusion map on the local basis for the (Riemannian) topology on M. Consider again the local basis {Uk }k∈ℕ which can now also be interpreted as local charts. Define the map ik : Uk → M
as the inclusion map which embeds each basis element into M. The linear pullback maps

i_k^* : ⋀M → ⋀U_k

restrict one-forms on M to one-forms on the elements of this local basis. Endowing ⋀M with the initial topology defined by these maps successfully equips it with a topology induced by seminorms. Notice that by definition this topology is the weakest topology for which all the maps i_k^* are continuous, and a local topology basis consists of sets of the form

⋂_{j=1}^{r} (i_{k_j}^*)^{−1}(𝒪_{k_j}),

where the sets 𝒪_{k_j} run over a local basis of ⋀U_{k_j}. Therefore, ⋀M becomes a nuclear locally convex topological vector space (Fréchet space), with the topology that is given by the family of seminorms

p_{k,m,l}(ω) = max_{1≤j≤l} N_m^{U_{k_j}}(i_{U_{k_j}}^* ω).   (3.13)
From elementary calculus one learns that the definition of differentiation depends on taking limits, which in turn rests on the notion of convergence of sequences. Given that we will eventually be interested in well-defined derivatives, let us briefly consider convergence with respect to the above family of seminorms. With such a family of seminorms, a sequence converges only if it converges for all seminorms in the family. In other words, a sequence of one-forms (ωn)_{n≥1} in ⋀M converges to zero if and only if, in a vicinity of every point of M, each derivative of each coefficient of ωn converges uniformly to zero. The tensor powers

⨂ⁿ ⋀M
now get a topology from the projective tensor product topology, and become Banach spaces when we also complete them with respect to the seminorms that describe this tensor topology. In other words, this topology is described by the seminorms N^{(r)}_{k,m,l}, which are the tensor products of the above ones. To make this explicit, consider the example where r = 2.

Example 3.3. Let u ∈ ⋀M ⊗ ⋀M, for which we have

N^{(2)}_{k,m,l}(u) = inf ∑_{i=1}^{n} p_{k,m,l}(ωi) · p_{k,m,l}(ηi),   (3.14)

where the infimum is taken over all expressions of the element u in the form

u = ∑_{i=1}^{n} ωi ⊗ ηi.

Extending now to elements in ⨁_{n≥0} (⨂ⁿ ⋀M), which are finite sums u = ∑_n u_n with u_n ∈ ⨂ⁿ ⋀M, we get the seminorms

N_{k,m,l}(u) = ∑_n N^{(n)}_{k,m,l}(u_n),   (3.15)

inducing a nuclear locally convex topology on T(⋀M). Due to the fact that all the above topologies are consistent with the linear structures of the algebras, the shuffle product is a continuous map in this last topology. Moreover, the shuffle product is commutative, so that Sh(Ω) inherits the structure of a commutative LMC algebra from T(⋀M) that is also Hopf, Banach and Hausdorff. We continue to write Sh(Ω) for this algebra. We end this section with the remark that the integral algebra 𝒜p inherits the same structure through the isomorphism (3.8).
3.3 The group of loops

The (naive) piecewise smooth loops based at p form a loop space ℒℳp, which is a semigroup with respect to the product γ1 · γ2 for γ1, γ2 ∈ ℒℳp. Looking again at the equivalence relation introduced in equation (2.54), we can introduce a multiplication

[γ1] ∗ [γ2] = [γ1 · γ2]   (3.16)

on the set ℒℳp/∼ of equivalence classes, where [γ1] denotes the equivalence class of γ1, turning this set into a group. The inverse of an element [γ] ∈ ℒℳp/∼ is clearly given by [γ]^{−1} = [γ^{−1}], and the unit element reads ϵ = [p], the class of the constant loop at the point p. Therefore, we have described the group

(ℒℳp/∼, ∗),

referred to as the group of loops on the manifold M based at p. In what follows, we symbolically represent this group by LMp.
3.4 The group of generalized loops

In order to be able to introduce the space of generalized loops, or equivalently the space of d-loops, as the algebra morphisms from Sh(Ω) to ℂ that vanish on the ideal Jp, we need to extend the consideration of the algebra 𝒜p. The main concept that we need in this extension is that of the spectrum of a commutative Banach algebra.1

1 Where "commutative" refers to the shuffle product, which is commutative.
Definition 3.4 (Gel'fand space or spectrum). Consider a commutative Banach algebra A. Let Δ(A) stand for the collection of nonzero complex homomorphisms H : A → ℂ. Elements of the Gel'fand space Δ(A) are called characters.
Applying this definition to the algebra2 𝒜p and writing Δp for its spectrum, we find that ϕ ∈ Δp is also an element of the dual space 𝒜p* of 𝒜p. We can now also consider the dual space 𝒜p** of the dual space, in which we can embed the original space 𝒜p by the map x ↦ Φx, with Φx(ϕ) = ϕ(x). With the maps Φx we can define the coarsest topology on 𝒜p* such that all the Φx are continuous maps 𝒜p* → ℂ. This topology is referred to as the weak* topology, in which the characters are now continuous by definition. From Section 3.2 we know that 𝒜p inherits a seminorm structure from Sh(Ω), such that by the Banach–Alaoglu theorem 𝒜p is reflexive,

𝒜p** ≡ 𝒜p.

From this it follows that every bounded sequence has a weakly converging subsequence, similar to the case in regular calculus. Under weak* convergence, a sequence ϕn ∈ 𝒜p* converges if and only if ϕn(x) → ϕ(x) for all x ∈ 𝒜p. The Hausdorff property of 𝒜p can be understood from the separation property (2.76) of the functionals X^{ω1⋯ωn}. As a consequence the d-loops γ̃ : Sh(Ω) → ℂ can be identified with elements of 𝒜p*.

2 For the moment considering the one-forms to be complex valued.
Notice that up until now we have only considered complex valued one-forms, but in a gauge theory setting, using the principal fiber bundle formalism, we need to deal with Lie-algebra valued one-forms. Choosing to represent the Lie algebra by matrices, the algebra elements form a subalgebra of GL(n, ℂ). So let us consider the case of GL(n, ℂ)-valued one-forms. In this case the nuclear property of Sh(Ω) comes to the rescue. Because this algebra is of the nuclear, or trace, class, taking the trace of the matrices does not spoil the algebraic or topological structures, so that convergence is still well defined. Thus, by adding the trace operator to the integrals in the functionals of 𝒜p in the case of matrix-valued one-forms, we again get a set of continuous (complex valued!) characters. The nuclear property also assures that there exists a well-defined trace operator on the linear bounded operators used to define the Fréchet derivative in (2.121); moreover, it assures that this trace is finite. We thus find that d-loops are identified in this way with the spectrum of 𝒜p, with the remark that if the ω ∈ ⋀¹M are GL(n, ℂ)-valued, we need to take the trace to reduce the GL(n, ℂ)-valued matrix to an element of ℂ. Let us now extend the previously introduced equivalence relation (2.76) on d-loops to:

𝒲γ1 = Tr Uγ1 = Tr Uγ2 = 𝒲γ2,
(3.17)
for two d-loops γ1, γ2 ∈ ℒℳp, with U and 𝒲 defined in equations (2.125) and (2.129). These form a subset of the d-loops, and also of the generalized loops, that are still separable by Theorem 2.79. Weak* convergence also still applies, due to the fact that convergence requires convergence for all elements in 𝒜p. The continuity of the trace now allows us to define generalized loops.

Definition 3.5 (Generalized loop). A generalized loop based at p ∈ M is a character of the algebra 𝒜p or, equivalently, a continuous complex algebra homomorphism γ̃ : Sh(Ω) → ℂ that vanishes on the ideal Jp.
By making use of the weak∗ topology we can define convergence on the space of generalized loops as above. With this new space we can now ask the question: Exercise 3.6. How are the naive loops from the previous section embedded in the space of generalized loops?
This embedding is realized by the Dirac map

δ : LMp → Δp,  [γ] ↦ δ[γ],   (3.18)

defined by

δ[γ](X^{ω1⋯ωn}) = X^{ω1⋯ωn}([γ]),   (3.19)
[γ] ∈ LMp .
We now have an injective embedding due to Theorem 2.76. Identifying LMp with its image under δ in Δp, it also inherits an induced topology. Another question one can pose at this point is 'How to compose loops?'.

Exercise 3.7. What is the definition of the composition, or multiplication, operation in the space of generalized loops?
The multiplication for the generalized loops is introduced as a convolution multiplication γ̃1 ⋆ γ̃2 of the two elements γ̃1 , γ̃2 ∈ p defined by γ̃1 ⋆ γ̃2 ≡ (γ̃1 ⊗ γ̃2 ) ∘ Δ,
(3.20)
which gives p a group structure and where we used K ⊗ K ≃ K,
ℂ ⊗ ℂ ≃ ℂ.
Hence, we have obtained the group of generalized loops. Writing this out explicitly with the definition of Δ from equation (3.9) we have r
γ̃1 ⋆ γ̃2 (X ω1 ⋅⋅⋅ωn ) = ∑ γ̃1 (X ω1 ⋅⋅⋅ωi ) ⋅ γ̃2 (X ωi+1 ⋅⋅⋅ωn ). i=0
(3.21)
With the aid of the Dirac map we can rewrite equation (3.21) on LMp as n
n
i=0
i=0 n
∑ γ̃1 (X ω1 ⋅⋅⋅ωi ) ⋅ γ̃2 (X ωi+1 ⋅⋅⋅ωn ) = ∑ γ1 (X ω1 ⋅⋅⋅ωi ) ⋅ γ2 (X ωi+1 ⋅⋅⋅ωn ) = ∑ (X ω1 ⋅⋅⋅ωi )(γ1 ) ⋅ (X ωi+1 ⋅⋅⋅ωn )(γ2 ) i=0
= ∑_{i=0}^{n} ∫_{γ1} ω1⋯ωi · ∫_{γ2} ω_{i+1}⋯ωn = ∫_{γ1·γ2} ω1⋯ωn,   (3.22)
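The last equality is Chen's identity for iterated integrals, and it can be verified numerically. The sketch below is our own discrete illustration (paths sampled as lists of increment vectors in ℝ², with the letters 0 and 1 standing for the one-forms dx and dy); iterated sums over strictly increasing indices satisfy the concatenation identity exactly, up to floating-point rounding.

```python
def iterated_sum(increments, word):
    """Discrete Chen iterated integral: sum over index tuples j1 < ... < jn of
    d_{w1}[j1] * ... * d_{wn}[jn], where increments[j] is the j-th step vector
    and each letter of `word` selects a coordinate (0 = dx, 1 = dy)."""
    F = [1.0] * (len(increments) + 1)          # value for the empty word
    for letter in word:
        G = [0.0] * (len(increments) + 1)
        for j, step in enumerate(increments, start=1):
            G[j] = G[j - 1] + F[j - 1] * step[letter]
        F = G
    return F[-1]

# gamma1: straight segment (0,0) -> (1,0); gamma2: straight segment (1,0) -> (1,1)
g1 = [(0.01, 0.0)] * 100
g2 = [(0.0, 0.01)] * 100
word = (0, 1)                                   # the word "dx dy"

lhs = iterated_sum(g1 + g2, word)               # integral along the concatenation
rhs = sum(iterated_sum(g1, word[:i]) * iterated_sum(g2, word[i:])
          for i in range(len(word) + 1))        # the convolution sum of (3.21)
print(lhs, rhs)                                 # both approximately 1.0
```

The same check passes for longer words and arbitrary piecewise-linear paths, which is precisely the content of the convolution product on loops.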
which also shows that the convolution product defined on the generalized loop space makes sense as a composition of d-loops. The inverse of a group element γ̃ ∈ Δp reads γ̃ ∘ J, so that

γ̃^{−1}(ω1⋯ωn) = (−1)^n γ̃(ωn⋯ω1),
(3.23)
with ϵ the unit element. Considering the topologization of the previous sections, the group of generalized loops can also be considered as a topological group.

Definition 3.8 (Generalized loop space as topological group). The paths γ̃1 ⋆ γ̃2, γ̃1^{−1} and ϵ
belong to the set of generalized loops based at p. In other words, they are continuous characters on the algebra 𝒜p. In addition, (Δp, ⋆) has the properties of a topological group.
Definition 3.9 (Group of generalized loops). This topological group (Δp, ⋆) is then called the group of generalized loops of M at p ∈ M. It will be denoted by L̃ℳp.
The Dirac map preserves group operations, so that LMp is a topological subgroup of L̃ℳp. The above discussion clearly shows that the naive loops form a subset of the generalized loops. Let us clarify the distinction with an example.
Example 3.10. Consider the manifold M = S¹. Then one has LS¹p = ℤ and L̃S¹p = ℝ. Given that H¹(S¹, ℝ) = ℝ, each one-form ω on S¹ equals a constant multiple of ω0 ≡ dθ, i.e., the volume form of S¹, modulo an exact form:

ω = c ω0 + dF,  c ∈ ℝ.

Therefore, we obtain

⋀S¹ = ℝω0 ⊕ dC^∞(S¹).   (3.24)

Now we are in a position to prove that 𝒜p, being a Hopf algebra, is isomorphic to the polynomial ring ℝ[t] in one variable, with t ↔ X^{ω0}. The Hopf operations on ℝ[t] read

Δ(t) = 1 ⊗ t + t ⊗ 1,  J(t) = −t,  ϵ(t) = 0.

Hence, we find L̃S¹p = ℝ.
Remark 3.11 (Generalized paths). Consider the path space 𝒫ℳp of paths based at p ∈ M, and the algebra ℬp generated by all the functions X^{ω1⋯ωn}, considered now as functions on 𝒫ℳp. Similarly to the previous case, there exists an algebra isomorphism Sh(Ω)/Ip ≃ ℬp, which allows us to consider ℬp as an LMC algebra and to define generalized paths, based at p, as continuous characters on Sh(Ω) that vanish on Ip. These generalized paths, however, do not form a group but only a semigroup.
3.5 Generalized loops and the Ambrose–Singer theorem

The equivalence relation, equation (3.17), has its origin in the Ambrose–Singer theorem (2.119). In our discussion of this theorem we argued that a naive loop space is overcomplete, and that we would solve this by introducing an equivalence relation, which is exactly realized by the definition above. We also mentioned algebraic constraints and nonlinear constraints coming from the fact that it has to be possible to write the complex value of the Wilson loop functional as the trace of an N × N SU(N) matrix; both kinds of constraints are combined in the Mandelstam constraints.3 Below we discuss how this equivalence takes care of these constraints. Recall that, due to the translation invariance of d-paths and path reduction, the algebraic structure is independent of the chosen base point for the d-loops, just like the fundamental group was base-point independent. Choosing now a fixed base point for the loops, we look for the expression in Wilson loop variables equivalent to the following property of the holonomy:

Uγ[[x2, x1] ∘ [x3, x2]] = Uγ[x2, x1] Uγ[x3, x2],
(3.25)
where γ : [x2, x1] represents the path between the points x1 and x2 in the base manifold. This eventually gives rise to the so-called Mandelstam constraints. Writing down the equivalence relationship introduced by the Wilson loop functionals for n loops4 reproduces the Mandelstam constraints and returns the Wilson loop variant of equation (3.25). These

3 Giles [39] demonstrated this for the classical case; in the quantum field case there is no strict proof that this is really the case.
4 More explicitly, one considers a loop formed by n loops, where the equivalence relation then states that they form the same loop if their Wilson loop functionals have the same complex value.
constraints now also allow the reconstruction of the N × N matrices, and thus of the gauge fields Aμ up to a gauge transformation, starting from characters of the spectrum. In other words, adopting this equivalence is equivalent to taking into account the Mandelstam constraints. Note that this equivalence reduces the infinite-dimensional group algebra of the holonomy to the finite-dimensional matrix representation of the holonomy group. This means that many elements of the holonomy group algebra are represented by the same matrix, thus taking care of the overcompleteness, which actually has its origin in this infinite dimensionality of the holonomy group algebra. Hence, we have found an alternative representation of gauge theory that does not make use of gauge potentials, where the fundamental degrees of freedom are gauge invariant due to the trace operation. Although this is a very nice property, one has to keep in mind that we have paid a price for it: instead of the gauge dependence we are charged with the path dependence, which is also sometimes hard to deal with.5 However, if we are able to keep the path dependence under control, this does not produce any serious problems. A further advantage is that the relations we construct in this way are gauge invariant by definition.
3.6 The Lie algebra of the group of the generalized loops

In the previous section we found that the generalized loops form a topological group, namely a Fréchet Lie group. Now we investigate whether we can also construct the associated Lie algebra. As we know from Section 2.2.1, Lie algebras have a close connection with right/left invariant vector fields. With this in mind we repeat here the definition of a left invariant derivation (respectively, right invariant derivation) on 𝒜_p.

Definition 3.12 (Left invariant derivation). A K-linear map d : 𝒜_p → 𝒜_p is called a left invariant derivation (respectively, right invariant derivation) on 𝒜_p if d satisfies the following two conditions:

d(X^{u_1} X^{u_2}) = X^{u_1} d(X^{u_2}) + d(X^{u_1}) X^{u_2},  (3.26)
Δ ∘ d = (1 ⊗ d) ∘ Δ  (3.27)

(respectively, Δ ∘ d = (d ⊗ 1) ∘ Δ), for all u_1, u_2 ∈ Sh.
5 In many cases one considers paths and loops on the light cone, where there is only one possible path between two sequential points on the same light cone if one assumes that the complete path needs to stay on the light cone.
Using the topological group property of L̃ℳ_p and the above invariant derivations, we have the following definition from the general theory of affine K-groups:

Definition 3.13 (Lie algebra of L̃ℳ_p). The Lie algebra of the group L̃ℳ_p is defined as the K-linear space l̃ℳ_p of all continuous left invariant derivations on 𝒜_p, with the Lie bracket defined as

[d_1, d_2] = d_1 d_2 − d_2 d_1.  (3.28)

Note that these fields are left invariant vector fields on L̃ℳ_p. Just as in the case of principal fiber bundles, one can show that this Lie algebra is isomorphic to the tangent space T_ϵ L̃ℳ_p at the unit element ϵ.
To justify this relation and make the derivations more explicit, let us consider the convolution product F_1 ⋆ F_2 of two elements F_1, F_2 ∈ 𝒜^∗_p, the topological (weak) dual of 𝒜_p:

F_1 ⋆ F_2 (X^{ω_1⋯ω_n}) = (F_1 ⊗ F_2) ∘ Δ(X^{ω_1⋯ω_n}) = ∑_{i=0}^{n} F_1(X^{ω_1⋯ω_i}) ⋅ F_2(X^{ω_{i+1}⋯ω_n}).  (3.29)
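The convolution product (3.29) is easy to experiment with concretely: representing a word ω_1⋯ω_n as a tuple and a functional F ∈ 𝒜^∗_p as a function on words, the deconcatenation coproduct Δ becomes a sum over splittings of the word. The following Python sketch (all names are illustrative, not from the text) implements (3.29) and checks that the counit-type functional ϵ, which sends the empty word to 1 and everything else to 0, is the unit of ⋆:

```python
def convolve(F1, F2):
    """Convolution (3.29): (F1 * F2)(w) = sum over all deconcatenations of w."""
    return lambda w: sum(F1(w[:i]) * F2(w[i:]) for i in range(len(w) + 1))

def epsilon(w):
    """Counit: 1 on the empty word, 0 on everything else."""
    return 1 if len(w) == 0 else 0

def F(w):
    """A toy functional on words, standing in for an element of the dual of A_p."""
    return len(w) + 1

w = ('w1', 'w2', 'w3')
assert convolve(epsilon, F)(w) == F(w)  # epsilon is a left unit for the convolution
assert convolve(F, epsilon)(w) == F(w)  # ... and a right unit
```

Only the i = 0 (respectively i = len(w)) term survives when one factor is ϵ, which is exactly why ϵ acts as the unit.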
With this convolution product we can define left- and right-invariant endomorphisms on 𝒜^∗_p:

Lemma 3.14. (𝒜^∗_p, ⋆) is a topological K-algebra, isomorphic (anti-isomorphic) to the topological algebra End_{LL}(𝒜_p) (End_{RL}(𝒜_p)) of all left (right) invariant K-linear endomorphisms of 𝒜_p (i.e., K-linear morphisms σ : 𝒜_p → 𝒜_p that satisfy the left (right) invariance condition)

Δ ∘ σ = (1 ⊗ σ) ∘ Δ  and, respectively,  Δ ∘ σ = (σ ⊗ 1) ∘ Δ,  (3.30)
and endowed with the topology of pointwise convergence. The elements of End_{LL}(𝒜_p) commute with the elements of End_{RL}(𝒜_p).

To understand these definitions and properties better, it is instructive to start with some examples.

Example 3.15. Let γ ∈ Lℳ_p and let δ_γ ∈ 𝒜^∗_p be the Dirac map as defined before. Then Ψ_{δ_γ} is the automorphism X^u → γ ⋅ X^u corresponding to the action of γ on Lℳ_p from the right.⁶ In fact, the right action of Lℳ_p on itself, through right translations r_{γ_1} : γ_2 → γ_2 ⋅ γ_1, induces a left action of Lℳ_p on 𝒜_p by

(γ_1 ⋅ X^u)(γ_2) ≡ X^u(γ_2 ⋅ γ_1).  (3.31)
By the identification γ_1 → δ_{γ_1}, we can write the right-hand side of the above equation in the form

X^u(γ_2 ⋅ γ_1) = δ_{γ_2⋅γ_1}(X^u) = δ_{γ_2} ⋆ δ_{γ_1}(X^u) = δ_{γ_2}((1 ⊗ δ_{γ_1})ΔX^u) = δ_{γ_2}(Ψ_{δ_{γ_1}}(X^u)),  (3.32)
6 X^u(β) → γ ⋅ X^u(β) = ∫_{βγ} X^u; note the order change of the paths, which makes it into a right action on Lℳ_p although it is written as a product from the left.
while the left-hand side is simply δ_{γ_2}(γ_1 ⋅ X^u), which allows the above-mentioned identification Ψ_{δ_{γ_1}}(X^u) ≃ γ_1 ⋅ X^u. In the same way we can prove that Λ_{δ_{γ_1}} is the automorphism X^u → X^u ⋅ γ_1 corresponding to the action of γ_1 on Lℳ_p from the left.

Taking now the σ defined in Lemma 3.14 to be a left invariant derivation σ = d, we can write Φ(d) = F_d = ϵ ∘ d ∈ 𝒜^∗_p with

F_d(X^{u_1} X^{u_2}) = ϵ d(X^{u_1} X^{u_2}) = ϵ(X^{u_1} dX^{u_2} + dX^{u_1} X^{u_2}) = ϵ(X^{u_1}) F_d(X^{u_2}) + F_d(X^{u_1}) ϵ(X^{u_2}),  (3.33)

demonstrating that the Lie algebra l̃ℳ_p is isomorphic, as a K-linear space, to the subspace of 𝒜^∗_p consisting of the pointed derivations (2.19) at ϵ:

l̃ℳ_p ≅ {δ ∈ 𝒜^∗_p : δ(X^{u_1} X^{u_2}) = ϵ(X^{u_1}) δ(X^{u_2}) + δ(X^{u_1}) ϵ(X^{u_2})}.  (3.34)
It is this K-linear space of pointed derivations at ϵ that is considered to be the tangent space T_ϵ L̃ℳ_p, just as we wanted. To motivate why we call this space the tangent space, consider a curve γ̃_Δ of generalized loops such that

γ̃_0 = ϵ,   lim_{Δ→0} γ̃_Δ = ϵ,   lim_{Δ→0} (1/Δ)(γ̃_Δ − ϵ) = δ ∈ 𝒜^∗_p,  (3.35)
where δ ∈ 𝒜^∗_p and the limits are defined in the weak (topology) sense: for all u ∈ Sh(Ω),

lim_{Δ→0} γ̃_Δ(X^u) = ϵ(X^u).
Applying δ to X^{u_1} X^{u_2} returns

δ(X^{u_1} X^{u_2}) = lim_{Δ→0} (1/Δ)[γ̃_Δ(X^{u_1} X^{u_2}) − ϵ(X^{u_1} X^{u_2})]
  = lim_{Δ→0} [γ̃_Δ(X^{u_1}) (1/Δ)(γ̃_Δ(X^{u_2}) − ϵ(X^{u_2})) + (1/Δ)(γ̃_Δ(X^{u_1}) − ϵ(X^{u_1})) ϵ(X^{u_2})]
  = ϵ(X^{u_1}) δ(X^{u_2}) + δ(X^{u_1}) ϵ(X^{u_2}),  (3.36)
where we used the property that the Chen integrals preserve multiplication (2.39):

γ̃_Δ(X^u X^v) = (X^u X^v)(γ̃_Δ) = X^u(γ̃_Δ) X^v(γ̃_Δ) = γ̃_Δ(X^u) γ̃_Δ(X^v).  (3.37)
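The multiplicative property (3.37) rests on the shuffle identity for Chen iterated integrals: for one-forms ω_1, ω_2 one has X^{ω_1}(γ) X^{ω_2}(γ) = X^{ω_1ω_2}(γ) + X^{ω_2ω_1}(γ). This can be checked numerically on a concrete path; the discretization and the particular one-forms below are illustrative choices, not from the text:

```python
import numpy as np

# Path gamma(t) = (t, t^2), discretized on [0, 1].
N = 4000
t = np.linspace(0.0, 1.0, N + 1)
x, y = t, t**2
dx, dy = np.diff(x), np.diff(y)

# Pullback increments of two one-forms along the path:
# omega1 = y dx and omega2 = x dy (illustrative choices).
xm, ym = 0.5 * (x[:-1] + x[1:]), 0.5 * (y[:-1] + y[1:])
a = ym * dx          # omega1 increments
b = xm * dy          # omega2 increments

X1, X2 = a.sum(), b.sum()                  # X^{w1}, X^{w2}
X12 = np.sum(np.cumsum(a)[:-1] * b[1:])    # X^{w1 w2}: ordered s < t
X21 = np.sum(np.cumsum(b)[:-1] * a[1:])    # X^{w2 w1}

# Shuffle identity behind (3.37), up to O(1/N) discretization error:
assert abs(X1 * X2 - (X12 + X21)) < 1e-3
```

The discrete defect is exactly the diagonal sum Σ a_i b_i, which vanishes as the mesh is refined.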
Therefore, we obtain a K-linear isomorphism

T_ϵ L̃ℳ_p ≅ l̃ℳ_p,  (3.38)

given by δ → d_δ = (1 ⊗ δ) ∘ Δ. The Lie bracket on T_ϵ L̃ℳ_p for the pointed derivations is defined by

[δ, η] ≡ ϵ ∘ [d_δ, d_η] = δ ⋆ η − η ⋆ δ.  (3.39)
Notice that for any pointed derivation δ at ϵ, and for all n, m ≥ 1,

δ(X^{ω_1⋯ω_n} X^{ω_{n+1}⋯ω_{n+m}}) = 0,  (3.40)

which stems from the product properties of the X^u and from the definition of ϵ(X^u) that show up when taking δ of a product (see Proposition 2.39, equations (3.9) and (3.39)). From the above result we also conclude that for all m > n ≥ 0

δ^m(X^{ω_1⋯ω_n}) = 0,  (3.41)

where for m ≥ 1, δ^m ≡ δ^{m−1} ⋆ δ.
The exponential map e^δ can now be defined for each δ ∈ T_ϵ L̃ℳ_p ≅ l̃ℳ_p:

e^δ ≡ ϵ + ∑_{n≥1} δ^n/n!,  (3.42)

where for each X^{ω_1⋯ω_n}, e^δ(X^{ω_1⋯ω_n}) is given by

(ϵ + ∑_{m≥1} δ^m/m!)(X^{ω_1⋯ω_n}),  (3.43)

which is only valid under the assumption that the series converges. By equation (3.41) this series is finite, so that e^δ is well-defined. Interestingly, we can show that e^δ is a generalized loop, again similar to the situation with the usual Lie groups and algebras. Considering the inverse case, given γ̃ ∈ L̃ℳ_p, we can define

log γ̃ ≡ ∑_{n≥1} ((−1)^{n−1}/n) (γ̃ − ϵ)^n,

with

(γ̃ − ϵ)^n ≡ (γ̃ − ϵ)^{n−1} ⋆ (γ̃ − ϵ).

By virtue of the fact that for m > n ≥ 0, (γ̃ − ϵ)^m(X^{ω_1⋯ω_n}) = 0, log γ̃ is well-defined and is an element of T_ϵ L̃ℳ_p ≅ l̃ℳ_p. The formal power series (for k ∈ ℤ) allow for

exp(k log γ̃) = γ̃^k,   log(exp δ) = δ,

which can be extended to define, for each Δ ∈ K,

γ̃^Δ ≡ exp(Δ log γ̃).  (3.44)
It is now not so hard to show that Δ → γ̃^Δ is a one-parameter subgroup of L̃ℳ_p, generated by log γ̃, i.e.,

γ̃^0 = ϵ,   γ̃^{Δ_1} ⋆ γ̃^{Δ_2} = γ̃^{Δ_1+Δ_2},   lim_{Δ→0} (1/Δ)[γ̃^Δ − ϵ] = log γ̃ = δ,

such that γ̃^Δ = exp(Δ log γ̃) is a generalized loop, where in the last line the limit is taken in the weak (topology) sense.

Now that we have introduced the left and right invariant derivations, discussed their relation with the derivations defined on the shuffle algebra, and defined a Lie algebra, we can move on in the next section to differential calculus in this loop space.
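Because of the nilpotency property (3.41), the exponential series (3.42) and the logarithm series terminate on any fixed word, so both maps can be implemented exactly. The following Python sketch (all names illustrative, not from the text) builds exp and log from the convolution product on words and checks log(exp δ) = δ on short words:

```python
from math import factorial

def convolve(F1, F2):
    """Convolution product (3.29) on functionals of words."""
    return lambda w: sum(F1(w[:i]) * F2(w[i:]) for i in range(len(w) + 1))

def epsilon(w):
    """Unit of the convolution algebra: 1 on the empty word, 0 otherwise."""
    return 1.0 if len(w) == 0 else 0.0

def conv_pow(F, m):
    out = epsilon
    for _ in range(m):
        out = convolve(out, F)
    return out

def exp_star(delta):
    """(3.42): the series stops at m = len(w), by the analogue of (3.41)."""
    return lambda w: epsilon(w) + sum(conv_pow(delta, m)(w) / factorial(m)
                                      for m in range(1, len(w) + 1))

def log_star(g):
    """Finite logarithm series in (g - epsilon), exact for the same reason."""
    h = lambda w: g(w) - epsilon(w)
    return lambda w: sum((-1) ** (m - 1) / m * conv_pow(h, m)(w)
                         for m in range(1, len(w) + 1))

# A functional vanishing on the empty word, playing the role of delta.
delta = lambda w: 0.0 if len(w) == 0 else float(len(w))

g = exp_star(delta)
for w in [(), ('a',), ('a', 'b'), ('a', 'b', 'c')]:
    assert abs(log_star(g)(w) - delta(w)) < 1e-12
```

Convolution powers of a functional that vanishes on the empty word annihilate all words shorter than the power, which is what makes the truncation at m = len(w) exact rather than approximate.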
4 Shape variations in the loop space

In this chapter we introduce the differential operators which enable the formulation of the equations of motion in the generalized path and loop space, the final goal being to define the so-called Fréchet derivative.
4.1 Path derivatives

The first class of differential operators we wish to introduce acts on both generalized paths and loops; they are referred to as the initial and endpoint derivatives. This class of derivatives depends on a vector field, which for the terminal endpoint derivative is assumed to exist in a vicinity U of the endpoint q = γ(1) of the generalized path γ. Writing V(γ(1)) = v ∈ T_{γ(1)}M for the vector field at q, this local vector field v generates a local integral curve, starting (at s = 0) at q = γ(1), which we symbolically write as η^V_s = Φ_V(s)(q). We will write γ_s = γ ⋅ η^V_s for the new path composed of the original path γ followed by the local integral curve induced by v, and q_s = γ_s(1) for the varying endpoint of the combined path; this is graphically represented in the left panel of Figure 4.1. The right panel shows that extending the original path, γ → γ_s, returns a different path, i.e., a different point in the path space 𝒫ℳ, from which it is clear that the endpoint derivatives are actually directional derivatives in 𝒫ℳ. The
Figure 4.1: γs = γ ⋅ ηVs and qs = γs (1).
direction of this directional derivative is determined by the local vector V. We implicitly assumed a reparameterization such that the parameter that describes the curve is in the interval [0, 1]. Here we identified the generalized paths and loops with Chen integrals, where reparameterization invariance is naturally included. One could also introduce the invariance explicitly by dividing out the equivalence relation for paths that only differ by reparameterization. In a quantum setting, using the path-integral formalism, this results in integrating over all reparameterizations, which gives rise to a constant factor. This factor then divides out if one divides by the vacuum diagrams in the calculation of an expectation value in quantum field theory.¹ With these notations and parameterizations we can now give the definition of the terminal covariant endpoint derivative.²

Definition 4.1 (Terminal covariant endpoint derivative). Consider a functional U_γ defined on a path in 𝒫ℳ, which returns elements of ℝ. The terminal covariant endpoint derivative ∇^T_V(q_s)U_γ of U_γ, at γ, in the direction of V, is defined by

∇^T_V(q_s)U_γ = lim_{Δ→0} (U_{γ_{s+Δ}} − U_{γ_s})/Δ.  (4.1)

Replacing γ_s by γ_s = (η^V_s)^{−1} ⋅ γ yields the initial covariant endpoint derivative ∇^I_V(q_s)U_γ.
1 Notice that this reparameterization invariance is not assumed in all path or loop spaces described in the literature.
2 Instead of ℝ one can consider other sets, such as ℂ or GL(n, ℂ); the definitions do not change.
Clearly this is only well-defined in a vicinity of the endpoint q_s = γ_s(1), and moreover it depends on the vector field V, which gives it its directional-derivative-like behavior. In the special case that s = 0, we can define the terminal endpoint derivative.

Definition 4.2 (Terminal endpoint derivative). Consider a functional U_γ defined on a path in 𝒫ℳ, which returns elements of ℝ. The terminal endpoint derivative ∂^T_v U_γ of U_γ, in the direction of v ∈ T_{γ(1)}M, is defined by

∂^T_v U_γ = lim_{Δ→0} (U_{γ_Δ} − U_γ)/Δ,  (4.2)

given that this limit exists independently of the choice of the vector field V ∈ 𝒳ℳ, with V(γ(1)) = v.
To demonstrate how this class of derivatives works in practice, consider the following example.

Example 4.3. Suppose we have a smooth function F ∈ C^∞M and a path functional U_F which reads U_F[γ] = F[γ(1)]. Applying the endpoint derivatives yields

∇^T_V(q_s)U_F[γ] = V ⋅ F[q_s] = dF[V_{q_s}],  (4.3)

and

∂^T_v U_F[γ] = v ⋅ F[γ(1)] = dF[v],  (4.4)

depending only on the vector v, and not on the specific extension V.
The above example is not only useful to demonstrate the operation of the endpoint derivatives, but also to introduce the concept of a marked path functional, where "marked" refers to the fact that it is determined by the evaluation of some function at a certain point along the path. This is where it might make a difference whether one assumes reparameterization invariance or not.

Definition 4.4 (Marked path functional). Consider a path-dependent functional U_γ and F ∈ C^∞M. We define the marked path functional F ⊙ U_γ by

(F ⊙ U_γ)[γ] = F[γ(1)] U_γ.  (4.5)
Similar to the regular derivatives, the endpoint derivatives obey the Leibniz rule:

Lemma 4.5 (Leibniz rule). Suppose that the limit in equation (4.1) exists for a path functional U_γ which satisfies the continuity condition for s ≥ 0,

lim_{Δ→0} U_{γ_{s+Δ}} = U_{γ_s}.

The covariant endpoint derivative then obeys the Leibniz rule

∇^T_V(q_s)(F ⊙ U_γ)[γ] = V ⋅ F[q_s] U_{γ_s} + F[q_s] ∇^T_V(q_s)U_γ = ∇^T_V(q_s)F[γ] U_{γ_s} + F[γ_s] ∇^T_V(q_s)U_γ,  (4.6)

with q_s = γ_s(1). In particular, if ∂^T_v U_γ exists in the sense of Definition 4.2, then at the endpoint q = γ(1) we have

∂^T_v(F ⊙ U_γ)[γ] = ∂^T_v F[γ] U_γ + F[γ] ∂^T_v U_γ,  (4.7)

which depends only on the vector v, and not on the particular extension V. We can thus safely state that the object defined in Definition 4.1 is really a derivative.
Exercise 4.6. What does the endpoint derivative of the path functionals X ω1 ⋅⋅⋅ωn look like?
To be able to answer this question we need the following lemma.

Lemma 4.7. Suppose we have η_Δ = η^V_Δ. Then

lim_{Δ→0} (1/Δ) ∫_{η_Δ} ω = ω(v),  (4.8)

and, for n ≥ 2,

lim_{Δ→0} (1/Δ) ∫_{η_Δ} ω_1 ⋯ ω_n = 0.  (4.9)
The proof of this lemma is straightforward after the introduction of local coordinates, in which we can write ω = A_μ(x) dx^μ.

Proof.

lim_{Δ→0} (1/Δ) ∫_{η_Δ} ω = lim_{Δ→0} (1/Δ) ∫_{η_Δ} A_μ(x) dx^μ
  = lim_{Δ→0} (1/Δ) ∫_0^Δ A_μ[x(t)] (dx^μ/dt) dt
  = lim_{Δ→0} (1/Δ) ∫_0^Δ A_μ[x(t)] v^μ(t) dt
  = A_μ[x(0)] v^μ(0) = ω(v),  (4.10)
which is valid under the assumption that there are no divergences in the kernel of the integral. Thus, Lemma 4.7 and equation (2.49), with γ_1 ⋅ γ_2 = γ ⋅ η_s (where γ is a path from γ(0) to γ(s)), allow for the calculation of the endpoint path derivative for n ≥ 1:

∇^T_V(q_s) X^{ω_1⋯ω_n}(γ) = X^{ω_1⋯ω_{n−1}}(γ_s) ⋅ ω_n(V_{q_s}),  (4.11)

and for ∂^T_v this reduces to

∂^T_v X^{ω_1⋯ω_n}(γ) = X^{ω_1⋯ω_{n−1}}(γ) ⋅ ω_n(v).  (4.12)
The dependence on the vector v is worth noticing here.

Exercise 4.8. Derive equation (4.11).
Equivalent expressions for the initial endpoint derivative can be derived for n ≥ 1:

∇^I_V(q_s) X^{ω_1⋯ω_n}(γ) = −ω_1(V_{q_s}) ⋅ X^{ω_2⋯ω_n}(γ_s),  (4.13)

and

∂^I_v X^{ω_1⋯ω_n}(γ) = −ω_1(v) ⋅ X^{ω_2⋯ω_n}(γ),  (4.14)

where ω_1, …, ω_n ∈ ⋀M or ω_1, …, ω_n ∈ ⋀M ⊗ GL(n, ℂ). Keeping quantum field theory in mind, we can consider the commutator of two endpoint derivatives. Applying the commutator to X^{ω_1⋯ω_n} returns the following result:

[∂^T_{u_1}, ∂^T_{u_2}] X^{ω_1⋯ω_n}(γ) = X^{ω_1⋯ω_{n−2}}(γ) ⋅ (ω_{n−1} ∧ ω_n)(u_1 ∧ u_2).  (4.15)
The above results clearly show that ∇^T_V(q_s) X^{ω_1⋯ω_n}(γ), given by equation (4.11), is a marked path functional as defined in Definition 4.4, with F = ω_n(V). It then follows that when we consider two vector fields U_1, U_2, locally defined around q = γ(1), we can apply the Leibniz rule (Lemma 4.5) and

dω(U_1, U_2) = U_1 ⋅ ω(U_2) − U_2 ⋅ ω(U_1) − ω([U_1, U_2]),

to derive that at q we get

[∇^T_{U_1}(q), ∇^T_{U_2}(q)] X^{ω_1⋯ω_n}(γ) = X^{ω_1⋯ω_{n−1}}(γ) ⋅ dω_n(u_1 ∧ u_2) + X^{ω_1⋯ω_{n−2}}(γ) ⋅ (ω_{n−1} ∧ ω_n)(u_1 ∧ u_2).  (4.16)
This only depends on the local vectors u_1, u_2, and not on the particular extensions U_1, U_2, allowing for the notation [∇^T_{u_1}, ∇^T_{u_2}] X^{ω_1⋯ω_n}(γ). Here we are specifically interested in the result of applying the endpoint derivatives to the parallel transporter. Consider the parallel transport path functional U_γ : 𝒫ℳ → GL(p), which was introduced in equation (2.125). Applying the terminal endpoint derivative results in

∂^T_{u_2} U_γ = U_γ ⋅ ω(u_2),  (4.17)

and the initial endpoint derivative returns

∂^I_{u_2} U_γ = −ω(u_2) ⋅ U_γ.  (4.18)

The relevance of applying the commutator to the gauge link will become clear in the next section, when we demonstrate its relation to the area derivative. For the moment let us just state the result:

[∇^T_{u_1}, ∇^T_{u_2}] U_γ = U_γ ⋅ (dω + ω ∧ ω)(u_1 ∧ u_2) = U_γ ⋅ Ω(u_1 ∧ u_2),  (4.19)

where Ω is the curvature of the connection one-form ω. In a more familiar quantum field theory notation this becomes

[∇^T_{u_1}, ∇^T_{u_2}] U_γ = U_γ ⋅ F_{μν}(u_1^μ ∧ u_2^ν),  (4.20)

where now F_{μν} is the usual field strength tensor.³ In the above exposition we sometimes referred to the endpoint derivatives as being covariant, without explicitly explaining why. The following example demonstrates where the name covariant endpoint derivative has its origin.

3 Note the type of the different tensors: F is a dual covariant tensor and the vectors u_1, u_2 are indeed two contravariant tensors. This makes the contractions well-defined.
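The curvature content of equation (4.19) can be checked numerically in a simple setting: for a connection with constant matrix-valued components A_1, A_2 the derivative terms of the field strength vanish, so F_{12} reduces to the commutator [A_1, A_2], and the holonomy around a small coordinate square of side d deviates from the identity by d² [A_1, A_2] + O(d³). The sketch below (with illustrative matrices and a sign convention chosen for simplicity, not from the text) verifies this using a truncated matrix exponential:

```python
import numpy as np

def expm(M, terms=20):
    """Truncated power series for the matrix exponential (fine for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Constant connection matrices: F_12 = [A1, A2] since the derivative terms vanish.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
F12 = A1 @ A2 - A2 @ A1

d = 1e-2  # side of the small square
# Holonomy around the square: along x1, along x2, back along x1, back along x2.
P = expm(d * A1) @ expm(d * A2) @ expm(-d * A1) @ expm(-d * A2)

# (P - 1)/d^2 should approach the curvature F_12 as d -> 0.
assert np.max(np.abs((P - np.eye(2)) / d**2 - F12)) < 0.05
```

The residual error is O(d), coming from the cubic terms of the Baker–Campbell–Hausdorff expansion.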
Example 4.9. Consider a path λ ∈ 𝒫ℳ_p and a function F ∈ C^∞M. A marked path functional reads

Z^{ω_1⋯ω_n}_{(i)}(λ; F) ≡ X^{ω_1⋯ω_i}(λ) F[λ(1)] X^{ω_{i+1}⋯ω_n}(λ^{−1}),  (4.21)

where ω_1, …, ω_n ∈ ⋀M. Applying the Leibniz rule returns

∇^T_v Z^{ω_1⋯ω_n}_{(i)}(λ; F) = X^{ω_1⋯ω_i}(λ) ⋅ dF_q(v) X^{ω_{i+1}⋯ω_n}(λ^{−1})
  + X^{ω_1⋯ω_{i−1}}(λ) ⋅ ω_i(v) ⋅ F(q) ⋅ X^{ω_{i+1}⋯ω_n}(λ^{−1})
  − X^{ω_1⋯ω_i}(λ) ⋅ F(q) ⋅ ω_{i+1}(v) ⋅ X^{ω_{i+2}⋯ω_n}(λ^{−1}),  (4.22)

where q = λ(1). Consider now a connection one-form ω. A marked path functional Ψ can be defined as

Ψ(λ; F) ≡ U_λ ⋅ F(q) ⋅ U_λ^{−1},  (4.23)

where q = λ(1), U is the parallel transport operator of the connection ω, and F ∈ C^∞M ⊗ GL(n, ℂ). By means of equation (4.22), one finds

∇^T_v Ψ(λ; F) = U_λ ⋅ (dF_q(v) + [ω, F](v)) ⋅ U_λ^{−1} = U_λ ⋅ D^ω_q F(v) ⋅ U_λ^{−1},  (4.24)

where D^ω_q F(v) ≡ dF_q(v) + [ω, F](v) stands for the usual covariant derivative of F. This explains the name of the operator ∇^T_v as the terminal covariant endpoint derivative.
4.2 Area derivative

In the previous section we introduced the path endpoint derivatives. Now we focus on the area derivative, Figure 4.2. We start with the definition of an area extension.
Figure 4.2: Δ_{λ;u_1∧u_2}(q) X^{ω_1⋯ω_n}(γ).
Definition 4.10 (Area extension). Consider a loop γ ∈ ℒℳp , a point q ∈ M, and a path λ ∈ 𝒫ℳp , going from p to q = λ(1). Given an ordered pair (u1 , u2 ) of tangent vectors u1 , u2 ∈ Tq M, we extend them by two commuting vector fields U1 , U2 ∈ 𝒳 𝒰 , defined in a vicinity 𝒰 of q = λ(1).
We introduce the infinitesimal loop ◻^Δ_{(U_1,U_2)} based at q, which is defined by the local flows Φ:

◻^Δ_{(U_1,U_2)} = Φ_{U_2}(−Δ) Φ_{U_1}(−Δ) Φ_{U_2}(Δ) Φ_{U_1}(Δ)(q),  (4.25)

where Φ_{U_1}, Φ_{U_2} are the local flows of U_1, U_2.
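The Δ² scaling of the failure of the square (4.25) to close — the scaling that the area derivative below divides out — can be seen concretely for linear vector fields U_i(x) = M_i x, whose flows are matrix exponentials Φ_{U_i}(s)(x) = exp(sM_i) x. A minimal numerical sketch (illustrative names and matrices, not from the text):

```python
import numpy as np

def expm(M, terms=25):
    """Truncated power series for the matrix exponential (fine for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Linear vector fields U_i(x) = M_i x; their flows are Phi_i(s)(x) = exp(s M_i) x.
M1 = np.array([[0.0, 1.0], [0.0, 0.0]])
M2 = np.array([[0.0, 0.0], [1.0, 0.0]])
q = np.array([1.0, 2.0])

d = 1e-3
# The infinitesimal loop (4.25): the four flows applied right-to-left to q.
box = expm(-d * M2) @ expm(-d * M1) @ expm(d * M2) @ expm(d * M1) @ q

# The square fails to close only at second order: box - q ~ d^2 [U1, U2](q),
# where the Lie bracket of the linear fields is [U1, U2](x) = (M2 M1 - M1 M2) x.
bracket = (M2 @ M1 - M1 @ M2) @ q
assert np.max(np.abs((box - q) / d**2 - bracket)) < 1e-2
```

The first-order terms of the four exponentials cancel pairwise, leaving the commutator at order d² — the infinitesimal statement behind the 1/Δ² normalization of the area derivative.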
Use the notation λ_Δ for the (Δ-dependent) loop

λ_Δ = λ ⋅ ◻^Δ_{(U_1,U_2)} ⋅ λ^{−1},

see Figure 4.2, where in the right panel we now have a curve of loops in ℒℳ, for which, due to the path reduction property (2.74), we obtain

lim_{Δ→0} λ_Δ = ϵ,

where ϵ is the unity in the group Lℳ_p (of equivalence classes) of loops at p ∈ M. In the classical case one can write

lim_{Δ→0} λ_Δ(X^u) = lim_{Δ→0} X^u(λ_Δ) = ϵ(X^u).  (4.26)
Given that λ_Δ ⋅ γ represents an infinitesimal deformation of the loop γ in the topology of Lℳ_p, the area derivative can be defined as follows:

Definition 4.11 (Area derivative). Given a loop functional U_γ on Lℳ_p, with values in ℝ, its area derivative Δ_{λ;(u_1,u_2)}(q) ⋅ U_γ is defined by the limit (if it exists independently of the choice of the vector fields U_1, U_2 ∈ 𝒳𝒰)

Δ_{λ;(u_1,u_2)}(q) U_γ = lim_{Δ→0} (1/Δ²) [U_{λ_Δ⋅γ} − U_γ].  (4.27)
With the goal of applying the area derivative to Wilson loop variables (for SU(N) gauge theory),

𝒲_γ = (1/N) ⟨0| Tr 𝒫 exp[ig ∮_γ A_μ(x) dx^μ] |0⟩,  (4.28)

we investigate the application of this derivative to the Chen iterated integrals X^{ω_1⋯ω_n} : Lℳ_p → ℝ. The fact that the area derivative of the functionals X^{ω_1⋯ω_n} is well-defined stems from the following lemma.

Lemma 4.12. Write ◻_Δ for ◻^Δ_{(U_1,U_2)} as before. Then

lim_{Δ→0} (1/Δ²) ∫_{◻_Δ} ω = dω(u_1 ∧ u_2),  (4.29)

lim_{Δ→0} (1/Δ²) ∫_{◻_Δ} ω_1 ω_2 = (ω_1 ∧ ω_2)(u_1 ∧ u_2),  (4.30)

lim_{Δ→0} (1/Δ²) ∫_{◻_Δ} ω_1 ⋯ ω_n = 0  (n ≥ 3),  (4.31)
where ω, ω_1, …, ω_n ∈ ⋀M.

Just like in the case of the endpoint derivatives, this lemma can be proved by introducing local coordinates. Again these integrals are well-defined by the Stokes theorem, but one does need to make a remark with respect to the goal of applying the area derivative to the Wilson loop variables, equation (4.28). To show the intricacies, rewrite the integral in (4.30) in a more familiar gauge theory notation:

∫_{◻_Δ} ω_1 ω_2 = ∫_{◻_Δ} A_μ A_ν,  (4.32)
where A_μ, A_ν are the gauge connection one-forms. This integral is well-defined in the classical case, but it becomes problematic in a field theory setting when taking vacuum expectation values: even when both one-forms are considered locally constant, the vacuum expectation value will give rise to a tadpole, which can be taken care of by a convenient regularization scheme. In the general case, however, more specifically for Wilson loops lying entirely on the light cone, one will not be able to resolve this issue. The question is then whether one can interchange the integrals and the vacuum expectation values. For the remainder of this section we will assume that the integrals in Lemma 4.12 are well-defined and use the values shown there. Notice that the area derivative defined above introduces extra cusps along the contour, which is the main cause of divergences of the integrals in Lemma 4.12 in a quantum field theory setting.

To investigate the properties of the area derivative applied to the functionals X^{ω_1⋯ω_n} further, we define the following derivation.

Definition 4.13. For u_1 ∧ u_2 ∈ ⋀²T_qM, define a derivation D_{u_1∧u_2}(q) in the algebra of iterated integrals by

D_{u_1∧u_2}(q) X^{ω_1⋯ω_n} = X^{ω_1⋯ω_{n−1}} ⋅ dω_n(u_1 ∧ u_2).  (4.33)
From the (algebraic) commutator [∂^T_{u_1}, ∂^T_{u_2}] of two terminal endpoint derivatives at q, we also define the derivation 𝒟_{u_1∧u_2}(q) according to

𝒟_{u_1∧u_2}(q) = D_{u_1∧u_2}(q) + [∂^T_{u_1}, ∂^T_{u_2}],  (4.34)

for which we formulate the lemma below, establishing its relationship to the area derivative.

Lemma 4.14. Let Δ_{λ;(u_1,u_2)}(q) be as introduced in Definition 4.11. Then

Δ_{λ;(u_1,u_2)}(q) X^{ω_1⋯ω_n}(ϵ) = ∑_{i=1}^{n} (𝒟_{u_1∧u_2}(q) X^{ω_1⋯ω_i}(λ)) (X^{ω_{i+1}⋯ω_n}(λ^{−1})),  (4.35)

where ω_1, …, ω_n ∈ ⋀M.
Since the derivative depends only on the wedge product u_1 ∧ u_2 of the local vectors, we introduce the notation Δ_{(λ;u_1∧u_2)}(q) X^{ω_1⋯ω_n}(ϵ). The lemma can be proved using the properties of the Chen iterated integrals for products of loops in Definition 4.11 and comparing the resulting expression with the results of applying the derivative, equation (4.13), to the functionals X^{ω_1⋯ω_n}.

Example 4.15.

Δ_{(λ;u_1∧u_2)}(q) X^ω(ϵ) = dω(u_1 ∧ u_2),
Δ_{(λ;u_1∧u_2)}(q) X^{ω_1ω_2}(ϵ) = dω_1(u_1 ∧ u_2) X^{ω_2}(λ^{−1}) + X^{ω_1}(λ) ⋅ dω_2(u_1 ∧ u_2) + (ω_1 ∧ ω_2)(u_1 ∧ u_2),  (4.36)

and, more generally,

Δ_{(λ;u_1∧u_2)}(q) X^{ω_1⋯ω_n}(ϵ) = ∑_{i=1}^{n} X^{ω_1⋯ω_{i−1}}(λ) ⋅ dω_i(u_1 ∧ u_2) X^{ω_{i+1}⋯ω_n}(λ^{−1})
  + ∑_{i=2}^{n} X^{ω_1⋯ω_{i−2}}(λ) ⋅ (ω_{i−1} ∧ ω_i)(u_1 ∧ u_2) ⋅ X^{ω_{i+1}⋯ω_n}(λ^{−1}).  (4.37)
Keeping in mind that lim_{Δ→0} λ_Δ = ϵ, and still assuming that the integrals in Lemma 4.12 are well-defined, one can demonstrate by introducing local coordinates that

lim_{Δ→0} (λ_Δ − ϵ)/Δ = 0.

At the same time we find that

lim_{Δ→0} (λ_Δ − ϵ)/Δ²

exists and is actually the area derivative. We now write δ_{(λ;u_1∧u_2)} for the operator in the algebra of iterated integrals 𝒜_p, defined through the derivations from equations (4.33) and (4.34):

δ_{(λ;u_1∧u_2)} X^u ≡ Δ_{(λ;u_1∧u_2)}(q) X^u(ϵ) = (λ ⊗ λ)((𝒟_{u_1∧u_2}(q) ⊗ J) ∘ Δ) X^u,  (4.38)
for u ∈ Sh, where in the last line J is the antipode and Δ the comultiplication of the Hopf algebra structure on 𝒜_p. The last equality can be understood by combining the definitions of the operators written in the last line of equation (4.38) with the left action of a loop on the space of generalized loops. Since the latter is a topological group, as we have seen in the previous section, this action is well-defined.

Exercise 4.16. Demonstrate that the last line in equation (4.38) is indeed equivalent to equation (4.37).
In a similar way it is possible to show that δ_{(λ;u_1∧u_2)} is a pointed derivation at ϵ:

δ_{(λ;u_1∧u_2)}(X^{u_1} X^{u_2}) = δ_{(λ;u_1∧u_2)}(X^{u_1}) ϵ(X^{u_2}) + ϵ(X^{u_1}) δ_{(λ;u_1∧u_2)}(X^{u_2}),  (4.39)

for all u_1, u_2 ∈ Sh(Ω).
Note that equation (4.38) indicates that δ_{(λ;u_1∧u_2)} : 𝒜_p → K is a linear map. In the discussion of the Lie algebra of the generalized loop space we derived that the tangent space T_ϵ Lℳ_p to the group Lℳ_p at ϵ is a K-linear subspace of 𝒜^∗_p. We will now demonstrate that this space is generated by all the δ_{(λ;u_1∧u_2)}. Suppose we have a loop γ ∈ Lℳ_p and a path λ ∈ 𝒫ℳ_p, and evaluate the area derivative, Figure 4.3, △_{(λ;u_1∧u_2)}(q) X^{ω_1⋯ω_n}[γ]. We have the following lemma:
Figure 4.3: △_{λ;u_1∧u_2}(q) X^{ω_1⋯ω_n}(γ).
Lemma 4.17. For u_1 ∧ u_2 ∈ ⋀²T_{λ(1)}M one has

△_{(λ;u_1∧u_2)}(q) X^{ω_1⋯ω_n}(γ) = ∑_{i=1}^{n} △_{(λ;u_1∧u_2)}(q) X^{ω_1⋯ω_i}(ϵ) X^{ω_{i+1}⋯ω_n}(γ) = γ ∘ (δ_{(λ;u_1∧u_2)} ⊗ 1) ∘ Δ(X^{ω_1⋯ω_n}),  (4.40)

with (δ_{(λ;u_1∧u_2)} ⊗ 1) ∘ Δ the right invariant derivation on the algebra 𝒜_p which is associated to the tangent vector δ_{(λ;u_1∧u_2)}. This lemma motivates the notation △^R_{(λ;u_1∧u_2)} : Lℳ_p → 𝒜^∗_p,
given by

△^R_{(λ;u_1∧u_2)}(γ) ≡ γ ∘ (δ_{(λ;u_1∧u_2)} ⊗ 1) ∘ Δ,  (4.41)

and its designation as the right invariant 'vector field' on Lℳ_p determined by δ_{(λ;u_1∧u_2)}. If λ = ϵ, then △_{(ϵ;u_1∧u_2)}(p) is said to be the initial endpoint area derivative, denoted by △^I_{(ϵ;u_1∧u_2)}(p), as visualized in Figure 4.4.
Figure 4.4: △I(ϵ;u1 ∧u2 ) (p).
In this case the area derivative reduces to

△^I_{(ϵ;u_1∧u_2)}(p) X^{ω_1⋯ω_n}(γ) = dω_1(u_1 ∧ u_2) ⋅ X^{ω_2⋯ω_n}(γ) + (ω_1 ∧ ω_2)(u_1 ∧ u_2) ⋅ X^{ω_3⋯ω_n}(γ).  (4.42)
Another possibility is to consider λ = γ ⋅ η, with γ ∈ Lℳ_p, η ∈ 𝒫ℳ_p,
Figure 4.5: △E(η;u1 ∧u2 ) (q).
and u_1 ∧ u_2 ∈ ⋀²T_{η(1)}M. In the latter case, Figure 4.5,

λ_Δ ⋅ γ ≡ (λ ⋅ ◻^Δ_{(U_1,U_2)} ⋅ λ^{−1}) ⋅ γ = γ ⋅ η ⋅ ◻^Δ_{(U_1,U_2)} ⋅ η^{−1} ⋅ γ^{−1} ⋅ γ = γ ⋅ (η ⋅ ◻^Δ_{(U_1,U_2)} ⋅ η^{−1}) ≡ γ ⋅ η_Δ.  (4.43)
The area derivative in this situation is referred to as the terminal endpoint area derivative and is denoted by △^E_{(η;u_1∧u_2)}(q). Similarly to the case of the initial endpoint area derivative, one derives

△^E_{(η;u_1∧u_2)}(q) X^{ω_1⋯ω_n}(γ) = ∑_{i=1}^{n} X^{ω_1⋯ω_i}(γ) △_{(η;u_1∧u_2)}(q) X^{ω_{i+1}⋯ω_n}(ϵ) = γ ∘ (1 ⊗ δ_{(η;u_1∧u_2)}) ∘ Δ(X^{ω_1⋯ω_n}).  (4.44)
Analogous to the right invariant derivations, we can define the left invariant derivation (1 ⊗ δ_{(η;u_1∧u_2)}) ∘ Δ associated to δ_{(η;u_1∧u_2)}. Naturally, △^L_{(η;u_1∧u_2)} : Lℳ_p → 𝒜^∗_p, given by

△^L_{(η;u_1∧u_2)}(γ) ≡ γ ∘ (1 ⊗ δ_{(η;u_1∧u_2)}) ∘ Δ,  (4.45)

is now referred to as the left invariant 'vector field' on Lℳ_p determined by δ_{(η;u_1∧u_2)}. If η = ϵ (see Figure 4.6), the above formula reduces to

△^E_{(ϵ;u_1∧u_2)}(p) X^{ω_1⋯ω_n}(γ) = 𝒟_{u_1∧u_2}(p) X^{ω_1⋯ω_n}(γ) = X^{ω_1⋯ω_{n−1}}(γ) ⋅ dω_n(u_1 ∧ u_2) + X^{ω_1⋯ω_{n−2}}(γ) ⋅ (ω_{n−1} ∧ ω_n)(u_1 ∧ u_2).  (4.46)
Particularly interesting in this last case is that we can relate the area derivative to the Lie bracket of terminal endpoint path derivations, equation (4.16):

△_{(ϵ;u_1∧u_2)}(q) X^{ω_1⋯ω_n}(γ) = [∇^T_{u_1}, ∇^T_{u_2}] X^{ω_1⋯ω_n}(γ).  (4.47)
Figure 4.6: △^E_{(ϵ;u_1∧u_2)}(p).
If, in this specific case, we consider not the functionals X^{ω_1⋯ω_n} but the holonomy U_γ instead, we obtain

△^E_{(ϵ;u_1∧u_2)}(p) U_γ = U_γ ⋅ (dω + ω ∧ ω)(u_1 ∧ u_2) = U_γ ⋅ Ω(u_1 ∧ u_2),  (4.48)

where again Ω is the curvature of the connection ω. The fact that 𝒜_p is a nuclear algebra means that we can apply this derivation to the Wilson loop W:

△^E_{(ϵ;u_1∧u_2)}(p) W(γ) = Tr((dω + ω ∧ ω)(u_1 ∧ u_2) ⋅ U_γ) = Tr(Ω(u_1 ∧ u_2) ⋅ U_γ),  (4.49)
which are also referred to as the Mandelstam formulas. Since we are dealing with Lie algebras, it is not a surprise that we also have a Bianchi identity.

Theorem 4.18 (Bianchi identity).

∑_{cycl{u_1,u_2,u_3}} ∇^T_{u_1}(λ(1)) δ_{(λ;u_2∧u_3)} = 0,  (4.50)

where ∑_{cycl{u_1,u_2,u_3}} stands for the sum over the cyclic permutations of the vectors u_1, u_2, u_3.

Just like with the endpoint derivatives, we can consider the commutator of two area derivatives which, as elements of the Lie algebra, will allow the determination of the structure constants:

[δ_{(λ;a_1∧a_2)}, δ_{(η;u_1∧u_2)}] = δ_{(λ;a_1∧a_2)} ⋆ δ_{(η;u_1∧u_2)} − δ_{(η;u_1∧u_2)} ⋆ δ_{(λ;a_1∧a_2)}.  (4.51)
Using the definition of the area derivative, this can be written as

[δ_{(λ;a_1∧a_2)}, δ_{(η;u_1∧u_2)}] X^{ω_1⋯ω_n} = ∑_{i=0}^{n} ∑_{k=0}^{i} (𝒟_{a_1∧a_2}(λ(1)) X^{ω_1⋯ω_k}(λ)) (X^{ω_{k+1}⋯ω_i}(λ^{−1})) δ_{(η;u_1∧u_2)}(X^{ω_{i+1}⋯ω_n})
  − ∑_{i=0}^{n} ∑_{k=0}^{i} (𝒟_{u_1∧u_2}(η(1)) X^{ω_1⋯ω_k}(η)) (X^{ω_{k+1}⋯ω_i}(η^{−1})) δ_{(λ;a_1∧a_2)}(X^{ω_{i+1}⋯ω_n}),  (4.52)
from which one is formally able to extract the structure constants. With this we end our introduction of the area derivative and move on to the variational derivative.
4.3 Variational calculus

In the previous section we introduced the area derivative, which depends on the local flows of two independent local vector fields. A crucial problem with this derivative in the context of quantum field theory, when calculating perturbative matrix elements, vacuum expectation values, etc., is that it may introduce extra cusps (angle-like obstructions) in the loops and, consequently, may generate extra singularities in the perturbative expansion. To deal with this problem we shall define a different area derivative that, under certain assumptions, does not introduce extra singularities. Within this scheme the variations of the shape of a contour are generated by diffeomorphisms, which can be related to the Fréchet derivative, a differential operator situated, in a sense, between the path and the area derivative.

To introduce this derivative, consider Diff(M), the diffeomorphism group of M. Let φ ∈ Diff(M) be a diffeomorphism of M and γ ∈ 𝒫ℳ be a path in M. Then we write φ ⋅ γ for the image of the path γ under the diffeomorphism φ. From elementary manifold theory we conclude that the action of the diffeomorphism φ on the functionals X^{ω_1⋯ω_n} is given by

X^{ω_1⋯ω_n}[φ ⋅ γ] = X^{φ^∗ω_1⋯φ^∗ω_n}[γ],  (4.53)
where, as before, ω_1, …, ω_n ∈ ⋀M, and φ^∗ω_i are the pullbacks of ω_i under the map φ. We focus our attention on the diffeomorphisms that form a one-parameter group, infinitesimally generated by a vector field Y ∈ 𝒳M on M. The vector field Y then generates a one-parameter group of active diffeomorphisms ψ^Y_t by the identification

ψ^Y_t(p) := c^Y_p(t),  (4.54)
where t → c_p^Y(t) is the maximal integral curve in M starting at p ∈ M, with tangent vector field Y at each point of the curve. For a composition of diffeomorphisms we have ψ_t^Y ∘ ψ_s^Y = ψ_{s+t}^Y, so that Diff(M) turns into a group. The local flow of a vector field allows us to define a Lie derivative of any tensor field t by the identification

(ℒ_Y(t))(p) := (d/ds|_{s=0} (ψ_s^Y)*(t))(p),   (4.55)

which, being applied to equation (4.53), results in

D_V X^{ω₁⋯ωₙ}(γ) ≡ d/ds|_{s=0} X^{ω₁⋯ωₙ}(φ_s⋅γ) = ∑_{i=1}^{n} X^{ω₁⋯ω_{i−1}(ℒ_Y ω_i)ω_{i+1}⋯ωₙ}(γ),   (4.56)
where ℒ_Y ω refers to the Lie derivative of the one-form ω in the direction of Y. Making use of Cartan's formula,

ℒ_Y = ι_Y d + d ι_Y,   (4.57)

where ι_Y ωⁿ(v₂, …, vₙ) = ωⁿ(Y, v₂, …, vₙ) is the interior product, equation (4.56) reduces to⁴

D_V X^{ω₁⋯ωₙ}(γ) = ∑_{i=1}^{n} X^{ω₁⋯ω_{i−1}·(ι_Y dω_i)·ω_{i+1}⋯ωₙ}[γ] + ∑_{i=2}^{n} X^{ω₁⋯ω_{i−2}·ι_Y(ω_{i−1}∧ω_i)·ω_{i+1}⋯ωₙ}[γ]
  + ωₙ[V(1)] X^{ω₁⋯ω_{n−1}}(γ) − ω₁(V(0)) X^{ω₂⋯ωₙ}[γ].   (4.58)
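Cartan's formula (4.57) can be checked concretely in local coordinates. The following is a minimal sketch in Python with sympy (our own illustration, not part of the text), comparing the component formula (ℒ_Y ω)_i = Y^j ∂_j ω_i + ω_j ∂_i Y^j with ι_Y dω + d(ι_Y ω) for a one-form on ℝ²; the particular fields f, g, u, v are arbitrary choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.sin(x*y), x**2 + y           # components of the one-form w = f dx + g dy
u, v = y, x*y                          # components of the vector field Y = u d/dx + v d/dy

# Lie derivative components: (L_Y w)_i = Y^j d_j w_i + w_j d_i Y^j
L_dx = u*sp.diff(f, x) + v*sp.diff(f, y) + f*sp.diff(u, x) + g*sp.diff(v, x)
L_dy = u*sp.diff(g, x) + v*sp.diff(g, y) + f*sp.diff(u, y) + g*sp.diff(v, y)

# Cartan: i_Y dw + d(i_Y w), with dw = (d_x g - d_y f) dx ^ dy
curl = sp.diff(g, x) - sp.diff(f, y)
iYdw_dx, iYdw_dy = -v*curl, u*curl     # i_Y (dx ^ dy) = u dy - v dx
h = f*u + g*v                          # i_Y w (a function)
C_dx = iYdw_dx + sp.diff(h, x)
C_dy = iYdw_dy + sp.diff(h, y)

assert sp.simplify(L_dx - C_dx) == 0
assert sp.simplify(L_dy - C_dy) == 0
```

The same comparison goes through for any smooth choice of ω and Y, which is what (4.57) asserts.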
From Section 3.2 we know that 𝒜_p is a Fréchet space, such that the Lie derivative D_V X^{ω₁⋯ωₙ}[γ] associated with V now is a Fréchet derivative of X^{ω₁⋯ωₙ} at γ, in the direction of the tangent vector V = Y∘γ ∈ γ*TM, as introduced in Definition 2.121.
Restricting ourselves to the pointed diffeomorphism group Diff_p(M), the diffeomorphisms φ that fix the point p, and keeping in mind that this is also a topological group, we can consider its 'Lie algebra' 𝒳_p(M). This algebra consists of the vector fields Y that vanish at p. The action of the elements of this algebra on the algebra 𝒜_p can be naturally defined by making use of the pullbacks of the one-forms ωᵢ:

(φ, X^{ω₁⋯ωₙ}) → φ⋅X^{ω₁⋯ωₙ} ≡ X^{φ*ω₁⋯φ*ωₙ}.   (4.59)

⁴ Note the different limits of the summations.
Since diffeomorphisms do not change the algebraic structure of the one-forms, the action of φ also preserves the Hopf algebra structure, and as such φ is a Hopf algebra automorphism; written explicitly,

φ⋅(X^u X^v) = (φ⋅X^u)(φ⋅X^v),   Δ∘φ = (φ⊗φ)∘Δ.   (4.60)
As a direct consequence of this fact, φ induces an automorphism of L̃ℳ_p through the identification

(φ⋅α̃)(X^{ω₁⋯ωₙ}) ≡ α̃(φ⋅X^{ω₁⋯ωₙ}) = α̃(X^{φ*ω₁⋯φ*ωₙ}).   (4.61)

φ, as an element of Aut(L̃ℳ_p), has a differential dφ : l̃ℳ_p → l̃ℳ_p, defined as in standard differential geometry by

dφ(δ)(X^{ω₁⋯ωₙ}) ≡ δ(X^{φ*ω₁⋯φ*ωₙ}),   (4.62)

with δ a tangent vector (or derivation) of L̃ℳ_p. This differential allows φ → dφ
to produce a linear representation of the pointed diffeomorphism group Diff_p(M) on l̃ℳ_p. The infinitesimal action of Y ∈ 𝒳_p(ℳ) on δ, written as Y⋅δ, is represented by

(Y⋅δ)(X^{ω₁⋯ωₙ}) = ∑_{i=1}^{n} δ(X^{ω₁⋯ω_{i−1}·(ℒ_Y ω_i)·ω_{i+1}⋯ωₙ}),   (4.63)
where we used Y(0) = 0 = Y(1). Using Cartan's expression (4.57) for the Lie derivative and the expressions that defined the ideal J_p of 𝒜_p, the above result can be reduced to

(Y⋅δ)(X^{ω₁⋯ωₙ}) = ∑_{i=1}^{n} δ(X^{ω₁⋯ω_{i−1}·(ι_Y dω_i)·ω_{i+1}⋯ωₙ}) + ∑_{i=2}^{n} δ(X^{ω₁⋯ω_{i−2}·ι_Y(ω_{i−1}∧ω_i)·ω_{i+1}⋯ωₙ}),   (4.64)
where ιY stands for the interior product.
4.4 Fréchet derivative in a generalized loop space

We end the mathematical introduction to the theory of Wilson lines with a discussion of the connection between the Fréchet derivative and diffeomorphisms. Namely, we shall show how the diffeomorphism-generating vector field V from the previous section becomes a variational vector field. Suppose we have a path γ ∈ 𝒫ℳ_p based at p, with T_γ𝒫ℳ_p the tangent space of 𝒫ℳ_p at γ, as visualized in Figure 4.7. The vector fields along γ are defined through the pullback bundle γ*Tℳ. Notice that these vanish at p, since this point needs to stay fixed. Let us now choose such a vector V ∈ T_γ𝒫ℳ_p.
Figure 4.7: Diffeomorphism of a path.
Defining s → γ_s as a curve of paths in 𝒫ℳ_p, starting at γ at s = 0, with velocity V, we can write:

γ₀ = γ,   (4.65)
V(t) = ∂/∂s γ_s(t)|_{s=0},   (4.66)
V(0) = 0.   (4.67)

The map s → γ_s is the variation of γ = γ₀, with associated variational vector field V. In the special case that the variation γ_s is induced by a diffeomorphism, as in the previous section, γ_s = φ_s∘γ, and the vector field is the diffeomorphism generator V = Y∘γ, we can determine the Fréchet derivative of the path functionals X^{ω₁⋯ωₙ} at γ ∈ 𝒫ℳ_p. This derivative was defined in Definition 2.121 as the linear map

D_· X^{ω₁⋯ωₙ}[γ] : T_γℒℳ → ℝ.
In the previous section we concluded that, in the case we are considering here, it can be written as:

D_V X^{ω₁⋯ωₙ}[γ] ≡ d/ds|_{s=0} X^{ω₁⋯ωₙ}[γ_s].   (4.68)
The derivation of this result requires one more lemma:

Lemma 4.19. Consider a manifold N (I or S¹), an immersion γ : N → M, and a differential form ω on M. Suppose that Γ : N × [0, ϵ] → M is a smooth variation of γ, with variational vector field V. This means that, given γ_s(t) = Γ(t, s) for (t, s) ∈ N × [0, ϵ], one finds γ₀ = γ and

V(t) = ∂/∂s Γ(t, s)|_{s=0} = Γ_{*(t,0)}(∂/∂s|_{(t,0)}),

for t ∈ N. Therefore, we have

d/ds|_{s=0} γ_s*ω = γ*(ι_V dω + d(ι_V ω)) = γ*(ι_V dω) + d(γ*(ι_V ω))   (4.69)

as differential forms on N = N × {0}.
In this lemma ι_{V(t)}ω is the interior product of the form ω(γ(t)) with V(t) ∈ T_{γ(t)}ℳ, and d is the usual differential operator. Consider the case when

γ : I → M

is an immersed path, based at p, and γ_s a variation of this path generated by the variational vector field V. The Fréchet derivative then becomes:

d/ds|_{s=0} (∫_{γ_s} ω) = d/ds|_{s=0} (∫_I γ_s*ω) = ∫_I γ*(ι_V dω + d(ι_V ω))
  = ∫_I γ*(ι_V dω) + ∫_{∂I} γ*(ι_V ω)
  = ∫_I γ*(ι_V dω) + ω[V(1)] − ω[V(0)]
  = ∫_γ ι_V dω + ω[V(1)],   (4.70)

where we used the above lemma, and where in the last equality we have used the identity

∫_γ ι_V dω = ∫_I γ*(ι_V dω)

together with V(0) = 0, since the variations keep the base point p fixed.
Applying this to a loop γ ∈ ℒℳ_p and taking into account that V(0) = 0 = V(1) results in

D_V X^ω[γ] = X^{ι_V dω}[γ] = ∫_γ ι_V dω   (4.71)

for the functional X^ω. Lemma 4.19 combined with an induction procedure results in the following expression for a general functional X^{ω₁⋯ωₙ}:

D_V X^{ω₁⋯ωₙ}[γ] = ∑_{i=1}^{n} ∫_γ ω₁⋯ω_{i−1}·ι_V(dω_i)·ω_{i+1}⋯ωₙ
  + ∑_{i=2}^{n} ∫_γ ω₁⋯ω_{i−2}·ι_V(ω_{i−1}∧ω_i)·ω_{i+1}⋯ωₙ
  + (∫_γ ω₁⋯ω_{n−1})·ωₙ[V(1)],   (4.72)

which is the same as equation (4.58). For an immersed loop γ ∈ ℒℳ_p, we only have to consider variations V that keep the base point p fixed:

𝒱_p ≡ {V ∈ γ*TM : V(0) = 0 = V(1)}.   (4.73)
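Formula (4.71) can be illustrated numerically. A sketch in Python (scipy), under assumptions of our own choosing: γ_s is the circle of radius 1 + s, ω = x dy (so X^ω[γ_s] = π(1+s)², the enclosed area), and V = (x, y) is the radial variational field; for a closed curve the two boundary terms of (4.70) cancel because V(0) = V(1):

```python
import numpy as np
from scipy.integrate import quad

# Loop family: gamma_s = circle of radius (1+s); variational field V = (x, y).
# Functional X^w[gamma] = loop integral of w = x dy, i.e. the enclosed area,
# so X^w[gamma_s] = pi (1+s)^2 and d/ds|_0 X^w[gamma_s] = 2 pi.
lhs = 2*np.pi

# Eq. (4.71): D_V X^w[gamma] = loop integral of i_V dw.
# Here dw = dx ^ dy, and i_V (dx ^ dy) = x dy - y dx for V = (x, y).
def integrand(t):                      # pull-back of x dy - y dx to the unit circle
    x, y = np.cos(t), np.sin(t)
    dx, dy = -np.sin(t), np.cos(t)
    return x*dy - y*dx                 # equals 1 identically on the unit circle

rhs, _ = quad(integrand, 0.0, 2*np.pi)
assert abs(lhs - rhs) < 1e-8
```

The loop integral of x dy − y dx gives twice the enclosed area, matching the derivative of π(1+s)² at s = 0.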
Let us mention that any solution Ψ of the equation

D_γΨ(V) = 0 for all V ∈ 𝒱_p

is called a relative homotopy invariant of the loop γ; such invariants have interesting properties of their own, for instance in Chern–Simons theories or in string theory. Returning to our motivation for introducing the Fréchet derivative, we now see that if we consider smooth diffeomorphisms, the number of cusps is preserved and we still have an area variation. Of special interest for us is the subgroup of diffeomorphisms that also preserve angles, i.e., the locally conformal diffeomorphisms. Despite this striking difference between the area derivative and the Fréchet derivative, the two are still closely related. To make this relation apparent we define an element of l̃ℳ_p by

Θ(γ; V) ≡ ∫₀¹ δ_{(γ_{0t}; V(t)∧γ̇(t))}(γ(t)) dt,   (4.74)
where V ∈ 𝒱_p and γ_{0t} stands for the part of γ from γ(0) to γ(t). Applying (4.74) to the functionals X^u, u ∈ Sh(Ω), results in

Θ(γ; V)(X^u) ≡ ∫₀¹ δ_{(γ_{0t}; V(t)∧γ̇(t))}(γ(t))(X^u) dt,   (4.75)
if, of course, this is well-defined. Tavares showed that

D_V X^u(γ) = ∫₀¹ △_{(γ_{0t}; V∧γ̇)}(γ(t)) X^u dt   (4.76)
  = (γ ∘ (Θ(γ; V) ⊗ 1) ∘ Δ) X^u.   (4.77)
From this we conclude that the Fréchet derivative associated with the variational vector field V can be considered as an integral of area derivatives along the path. If one pictures the area variations induced by the area derivatives as little squares along the path and integrates over them, one obtains a smooth area variation. This is possible because the overlapping sides of the little squares are traversed in opposite directions, so that they disappear by the path reduction property and inverses. This cancellation effectively eliminates the cusps introduced by each square, so that in the end no new cusp has been introduced and the result is a smoothly varied contour. Figure 4.8 represents this idea graphically. Naturally, we are interested in applying this result not only to the functionals X^u, u ∈ Sh(Ω),

Figure 4.8: Variation induced by the Fréchet derivative.
but also to the holonomy, or Wilson loop, U : ℒℳ_p → GL(n, ℂ) of a connection ω. The result for the holonomy can be written as

D_V U_γ = U_γ · (∫₀¹ U_{γ_0^t} Ω_{γ(t)}(V(t)∧γ̇(t)) · U_{γ_0^t}⁻¹ dt),   (4.78)

also referred to as the 'non-Abelian Stokes theorem', which for Wilson loop variables becomes (making use of the nuclear or trace-class property)

δ_{(λ;u∧v)}⟨0| Tr U_λ |0⟩ = ⟨0| Tr{U_λ F_{μν}(λ(1))(u^μ∧v^ν) · U_λ⁻¹} |0⟩.   (4.79)

Similar computations yield

δ_{(λ;u₁∧u₂)} U_λ = U_λ · F_{λ(1)}(u₁∧u₂) · U_λ⁻¹.   (4.80)
Now that we have concluded that the Fréchet derivative induces a smooth variation of the (Wilson) loops, it can be used to derive equations of motion in generalized loop space (Figure 4.9 shows such a variation and its effect on the holonomy and on the spectrum). In the applications of this derivative in the previous chapters of this book we have only considered variations that are angle-preserving. Certainly, we could also consider smooth diffeomorphisms which preserve the number of angles but not the angle sizes. As far as we know, such variations have not yet been investigated, opening the door to extending contour variations to a bigger class of transformations.
Figure 4.9: Loop variation and its effects on the holonomy and on the spectrum.
5 Wilson lines in high-energy QCD

In this chapter we focus on the major use of Wilson lines in field theories, more specifically in high-energy QCD.¹ As we shall show in the first section, a Wilson line on a linear path resums soft and collinear gluon radiation to all orders. This property can then be used to construct parton density functions, both collinear and k⊥-dependent, essentially making them gauge-invariant.
5.1 Eikonal approximation

A highly energetic fermion does not deviate much from its path when radiating a soft gluon; that is, we can limit the effect of radiating soft gluons to a phase factor. This is called the eikonal approximation.

5.1.1 Wilson line on a linear path

Before we investigate the eikonal approximation, let us first derive Feynman rules for Wilson lines on a linear path. We start with a path going from −∞ to a point b^μ along a direction n^μ. In what follows we use the notation

𝒰_{[y;x]} = U_γ[y, x]   (5.1)

for the Wilson lines evaluated along straight linear paths γ, which are determined by vectors n^μ. In this case, the path dependence becomes trivial and reduces to the dependence on these vectors. Such a path can be parameterized as

z^μ = b^μ + n^μλ,   λ = −∞…0.   (5.2)

In the coordinate representation the Wilson line evaluated along this path reads

𝒰 = 𝒫 exp[−i g n_μ ∫_{−∞}^{0} dλ A^μ(nλ + b)].   (5.3)

Its perturbative expansion is given by

𝒰 = 1 − i g n_μ ∫_{−∞}^{0} dλ A^μ(nλ + b) + ⋯   (5.4)

¹ The QCD Lagrangian, Feynman rules, etc. are given in Appendix B.
https://doi.org/10.1515/9783110651690005
The Fourier transform of the gauge fields allows us to rewrite this series in the momentum representation. Then we can expand the Wilson line as:²

𝒰 = ∑_{n=0}^{∞} (−ig/16π⁴)ⁿ ∫ d⁴k₁ ⋯ d⁴kₙ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i b⋅∑ⁿ kⱼ} Iₙ,   (5.5)

Iₙ(k₁, …, kₙ) = ∫_{−∞}^{0} dλₙ ∫_{−∞}^{λₙ} dλ_{n−1} ⋯ ∫_{−∞}^{λ₂} dλ₁ e^{−i n⋅∑ⁿ kⱼλⱼ}.   (5.6)
Remember that the path-ordering is defined such that the field with the highest value of λ is written leftmost. This is the field which will be drawn at the rightmost side of the diagram (assuming the path to be drawn from left to right), implying that we read Wilson lines in a Feynman diagram as we do Dirac lines: from right to left when writing the corresponding formula. The path-ordering is manifest in the integral borders of Iₙ, as they make sure that λ₁ ≤ λ₂ ≤ ⋯ ≤ λₙ. Solving this integral is straightforward. First we calculate the innermost integral:

∫_{−∞}^{λ₂} dλ₁ e^{−i n⋅k₁λ₁} = i/(n⋅k₁ + iϵ) e^{−i n⋅k₁λ₂}.   (5.7)

The effect of the innermost integral is a factor 1/(n⋅k₁) and an extra term n⋅k₁ in front of λ₂. Then the next integral will give a factor 1/(n⋅k₁ + n⋅k₂), and so on. In other words, we simply get:

Iₙ(k₁, …, kₙ) = ∏_{j=1}^{n} i/(n⋅∑_{l=1}^{j} k_l + iϵ),   (5.8)

giving

𝒰_{[b;−∞]} = ∑_{n=0}^{∞} (−ig)ⁿ ∫ ∏_{i=1}^{n} d⁴kᵢ/16π⁴ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i b⋅∑ⁿ kⱼ} ∏_{j=1}^{n} i/(n⋅∑_{l=1}^{j} k_l + iϵ).
(1) Wilson line propagator:
k
(2) External point: (3) Line from infinity: (4) Wilson vertex:
= bμ
−∞ j
k μ, a
i
i n⋅k + i ϵ
=
e−i b⋅k
=
1
=
−i g nμ (t a )ij
(no momentum flow)
2 The symbol n is used both as an index (in the nth order expansion) and provisional as a directional vector. The Brought to you by account difference should be clear from context. Unauthenticated Download Date  1/8/20 7:39 PM
We need to realize that when drawing Wilson lines with these rules, the momenta of the gluons should always be pointing inwards (towards the Wilson line) and be collected at the external point, to ensure the correct momentum summations in the Wilson line propagators. For each momentum ki that happens to point outwards, make the substitution ki → −ki in all other Feynman rules for that Wilson line. The resulting nth order diagram is drawn in Figure 5.1.
Figure 5.1: n-gluon radiation for a Wilson line going from −∞ to b^μ.
The logical next step is to investigate a path that starts at a point a^μ and now goes up to +∞, which we parameterize as

z^μ = a^μ + n^μλ,   λ = 0…+∞.   (5.9)
In this case, it is easier to reverse the integration variables and borders as follows:

Iₙ = ∫_{0}^{+∞} dλ₁ ∫_{λ₁}^{+∞} dλ₂ ⋯ ∫_{λ_{n−1}}^{+∞} dλₙ e^{−i n⋅∑ⁿ kⱼλⱼ}.   (5.10)
This keeps the same path-ordering (in other words, λ₁ ≤ ⋯ ≤ λₙ remains valid). The calculation goes as before, giving:

Iₙ(k₁, …, kₙ) = ∏_{j=1}^{n} (−i)/(n⋅∑_{l=1}^{j} k_{n−l+1} − iϵ),

𝒰_{[+∞;a]} = ∑_{n=0}^{∞} (−ig)ⁿ ∫ ∏_{i=1}^{n} d⁴kᵢ/16π⁴ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i a⋅∑ⁿ kⱼ} ∏_{j=1}^{n} (−i)/(n⋅∑_{l=1}^{j} k_{n−l+1} − iϵ).
The Feynman rules derived before remain valid if we make the substitution k → −k in the Wilson line propagators (but not in the external point). Then we can draw the nth order diagram for a Wilson line going from aμ to +∞, as demonstrated in Figure 5.2. The path still flows from left to right, but now the momentum is opposite to the path flow. Let us now investigate what changes when we reverse the path of a Wilson line from aμ to bμ . First, the integration borders are of course interchanged, because the path flows from bμ to aμ . This is the same as keeping the integration borders as they are, and flipping the sign in the exponent. But the most important thing is that the
Figure 5.2: n-gluon radiation for a Wilson line going from a^μ to +∞.
order of the fields is reversed, because the field first on the path will be encountered last when following the reversed path flow. This is the idea of anti-path-ordering 𝒫̄, defined such that the field with the highest value of λ is written rightmost. The reversed Wilson line is thus given by³

𝒰_{[a;b]} = 𝒫̄ e^{i g ∫_a^b dz⋅A}.   (5.11)
But this is exactly the same as the Hermitian conjugate. The latter also reverses the order of the fields, as (Aₙ ⋯ A₁)† = A₁† ⋯ Aₙ†. By using the fact that A(k)† = A(−k), i.e., that A is a Hermitian function,⁴ and making the substitution k → −k, the relation to the reversed path becomes apparent. We thus have (see e.g. [30])

𝒰_{[a;b]} = 𝒰_{[b;a]}†.   (5.12)
But of course it would be desirable to express the Hermitian conjugate line in terms of normally path-ordered fields, such that we can use the same Feynman rules as before. Let us see for instance how a Wilson line from −∞ to b^μ behaves when Hermitian conjugated:

𝒰_{[b;−∞]}† = [∑_{n=0}^{∞} (−ig/16π⁴)ⁿ ∫ ∏_{i} d⁴kᵢ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i b⋅∑ⁿ kⱼ} ∏_{j=1}^{n} i/(n⋅∑_{l=1}^{j} k_l + iϵ)]†
  = ∑_{n=0}^{∞} (ig/16π⁴)ⁿ ∫ ∏_{i} d⁴kᵢ n⋅A(−k₁) ⋯ n⋅A(−kₙ) e^{i b⋅∑ⁿ kⱼ} ∏_{j=1}^{n} (−i)/(n⋅∑_{l=1}^{j} k_l − iϵ)
  = ∑_{n=0}^{∞} (ig/16π⁴)ⁿ ∫ ∏_{i} d⁴kᵢ n⋅A(k₁) ⋯ n⋅A(kₙ) e^{−i b⋅∑ⁿ kⱼ} ∏_{j=1}^{n} i/(n⋅∑_{l=1}^{j} k_l + iϵ),

where we used the fact that A(k)† = A(−k) and made the substitution k → −k in the integral. We can relabel the fields by doing 1 → n, 2 → n−1, …, n → 1,

³ Remember that the notation 𝒰_{[a;b]} represents a Wilson line from b to a.
⁴ Because A(x) is real.
which gives

𝒰_{[b;−∞]}† = ∑_{n=0}^{∞} (ig/16π⁴)ⁿ ∫ ∏_{i} d⁴kᵢ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i b⋅∑ⁿ kⱼ} ∏_{j=1}^{n} i/(n⋅∑_{l=1}^{j} k_{n−l+1} + iϵ).

This is the expansion of a Wilson line from b^μ to +∞, but with reversed path direction, i.e.,

𝒰_{[b;−∞]}† = 𝒰_{[+∞;b]}|_{n̂→−n̂}.   (5.13)

Watch the change in the sign of ∞. This relation is illustrated schematically in Figure 5.3.
Figure 5.3: Taking the Hermitian conjugate of a Wilson line literally mirrors it: the sign of ∞ is flipped and the path direction reversed.
It is now useful to extend the Feynman rules to include rules for reversed paths. First, for the Wilson line propagator we see that it gets complex conjugated when the momentum flow is opposed to the path direction:

(momentum along the path direction)  i/(n⋅k + iϵ),   (momentum against the path direction)  −i/(n⋅k − iϵ),   (5.14a)
(reversed path, momentum against the path direction)  −i/(n⋅k − iϵ),   (reversed path, momentum along the path direction)  i/(n⋅k + iϵ).   (5.14b)

Note that n^μ is always defined in the positive direction. The vertex coefficient changes as well:

(path direction)  −i g n^μ (t^a)_{ij},   (reversed path)  i g n^μ (t^a)_{ij}.   (5.15)
On the other hand, the sign in the exponent for an external point does not depend on the direction of the path, but only on the direction of the momentum flow with respect to the external point (with the momentum arriving in the point or departing from it):

(momentum flowing towards the external point r^μ)  e^{−i r⋅k},   (5.16a)
(momentum flowing away from the external point r^μ)  e^{i r⋅k}.   (5.16b)
Most of the time, we will drop the arrow indicating the path direction on the Wilson line, as it obscures readability. We will assume the path flows from left to right, unless specified otherwise. Another possible configuration is an infinite Wilson line, going from −∞ to +∞ along a direction n^μ, while passing through a point r^μ. This we parameterize as

z^μ = r^μ + n^μλ,   λ = −∞…+∞.   (5.17)

In this case we can calculate the n−1 innermost integrals as before, while the outermost integral gives a δ-function:

Iₙ = ∫_{−∞}^{+∞} dλₙ e^{−i n⋅kₙλₙ} ∫_{−∞}^{λₙ} dλ_{n−1} ⋯ ∫_{−∞}^{λ₂} dλ₁ e^{−i n⋅∑₁^{n−1} kⱼλⱼ}
  = (∏_{j=1}^{n−1} i/(n⋅∑_{l=1}^{j} k_l + iϵ)) ∫_{−∞}^{+∞} dλₙ e^{−i(n⋅∑ⁿ kⱼ)λₙ}
  = (∏_{j=1}^{n−1} i/(n⋅∑_{l=1}^{j} k_l + iϵ)) 2π δ(n⋅∑ⁿ kⱼ + iϵ).   (5.18)
There are some technical difficulties with the validity of the integral representation for the δ-function (as written here it is divergent because of the convergence terms iϵ). But after a suitable regularization of the path, it can be shown that a δ-function with a complex argument is well-defined if used with the sifting property

∫ dk δ(k ± iϵ) f(k ± iϵ) = f(0),

but the integral representation remains divergent. This implies that writing δ(k ± iϵ) = ∫ dx e^{ix(k±iϵ)} should be avoided. Returning to the infinite Wilson line, we can reverse the integration variables and borders, as we did before in equation (5.10), to get an equivalent definition:

Iₙ = ∫_{−∞}^{+∞} dλ₁ e^{−i n⋅k₁λ₁} ∫_{λ₁}^{+∞} dλ₂ ⋯ ∫_{λ_{n−1}}^{+∞} dλₙ e^{−i n⋅∑₂ⁿ kⱼλⱼ}
  = (∏_{j=1}^{n−1} (−i)/(n⋅∑_{l=1}^{j} k_{n−l+1} − iϵ)) 2π δ(n⋅∑ⁿ kⱼ − iϵ).   (5.19)

We add the following Feynman rules:
n
∑ kj
−∞
−∞
∑ kj
j=1
rμ
k1
k1 k1
kn−1 + kn
⋅⋅⋅
j=2
k2
kn−2
⋅⋅⋅
⋅⋅⋅ k3
k2
⋅⋅⋅
kn
kn−1
+∞
n
n−1
k1 + k2
kn
∑ kj
∑ kj
j=1
j=1
kn−1
kn
rμ
+∞
Figure 5.4: Two possible diagrams for ngluon radiation from a Wilson line going from −∞ to +∞. The upper diagram corresponds to equation (5.19) and the lower one to (5.18).
The last possible configuration is a finite Wilson line, going from a point a^μ to a point b^μ (where now the direction is defined by n^μ = (b^μ − a^μ)/‖b − a‖). We parameterize this as:

z^μ = a^μ + n^μλ,   λ = 0…‖b − a‖,   (5.20)

and expand the Wilson line as

𝒰 = ∑_{n=0}^{∞} (−ig/16π⁴)ⁿ ∫ d⁴k₁ ⋯ d⁴kₙ n⋅A(kₙ) ⋯ n⋅A(k₁) e^{−i a⋅∑ⁿ kⱼ} Iₙ,   (5.21)

Iₙ(k₁, …, kₙ) = ∫_{0}^{‖b−a‖} dλₙ ∫_{0}^{λₙ} dλ_{n−1} ⋯ ∫_{0}^{λ₂} dλ₁ e^{−i n⋅∑ⁿ kⱼλⱼ}.   (5.22)

The innermost integral can be easily calculated:

∫_{0}^{λ₂} dλ₁ e^{−i n⋅k₁λ₁} = i/(n⋅k₁) (e^{−i n⋅k₁λ₂} − 1).   (5.23)
For higher orders we find a recursion relation:

Iₙ(k₁, …, kₙ) = i/(n⋅k₁) (I_{n−1}(k₁ + k₂, k₃, …, kₙ) − I_{n−1}(k₂, …, kₙ)),   (5.24)
which we can solve exactly by careful inspection:⁵

Iₙ = ∑_{m=0}^{n−1} (e^{−i(b−a)⋅∑_{m+1}^{n} kⱼ} − 1) ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l).   (5.25)
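The recursion (5.24) can be verified symbolically at the first nontrivial order, n = 2, by comparing it with the ordered integral (5.22) computed directly. A sketch in Python with sympy (our own check; we abbreviate n⋅kⱼ to kⱼ and ‖b − a‖ to L):

```python
import sympy as sp

k1, k2, L = sp.symbols('k1 k2 L', positive=True)
l1, l2 = sp.symbols('lambda1 lambda2', positive=True)

def I1(k):
    # I_1(k) = int_0^L dl e^{-i k l} = (i/k)(e^{-i k L} - 1), cf. eq. (5.23)
    l = sp.Symbol('l')
    return sp.integrate(sp.exp(-sp.I*k*l), (l, 0, L))

# Direct ordered double integral of eq. (5.22): 0 <= l1 <= l2 <= L
I2_direct = sp.integrate(sp.exp(-sp.I*(k1*l1 + k2*l2)),
                         (l1, 0, l2), (l2, 0, L))

# Right-hand side of the recursion (5.24): I_2 = (i/k1)(I_1(k1+k2) - I_1(k2))
I2_recursion = (sp.I/k1)*(I1(k1 + k2) - I1(k2))

# Compare at random rational kinematics (robust against simplification quirks)
vals = {k1: sp.Rational(3, 7), k2: sp.Rational(2, 5), L: 4}
diff = (I2_direct - I2_recursion).subs(vals)
assert abs(complex(diff.evalf())) < 1e-12
```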
We will rewrite this result in a more symmetrical way. First note that

∑_{m=0}^{n} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l) = 0,
meaning we can rewrite the integral in the following way:

Iₙ = ∑_{m=0}^{n−1} e^{−i(b−a)⋅∑_{m+1}^{n} kⱼ} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l)
  − ∑_{m=0}^{n−1} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l)
  = ∑_{m=0}^{n−1} e^{−i(b−a)⋅∑_{m+1}^{n} kⱼ} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l) + ∏_{j=1}^{n} (−i)/(n⋅∑_{l=1}^{j} k_{n−l+1})
  = ∑_{m=0}^{n} e^{−i(b−a)⋅∑_{m+1}^{n} kⱼ} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l).   (5.26)
Combining this with the exponent in equation (5.21) we finally get

e^{−i a⋅∑ⁿ kⱼ} Iₙ = ∑_{m=0}^{n} e^{−i a⋅∑₁^{m} kⱼ} e^{−i b⋅∑_{m+1}^{n} kⱼ} ∏_{j=1}^{m} (−i)/(n⋅∑_{l=1}^{j} k_{m−l+1}) ∏_{j=m+1}^{n} i/(n⋅∑_{l=m+1}^{j} k_l).   (5.27)
Using the fact that the product of two infinite sums can in general be written as a chained sum,

(∑_{i=0}^{∞} Aᵢ)(∑_{j=0}^{∞} Bⱼ) = ∑_{n=0}^{∞} ∑_{m=0}^{n} A_m B_{n−m},

we can transform equation (5.27) into a product of two Wilson lines:

𝒰_{[b;a]} = 𝒰_{[+∞;b]}† 𝒰_{[+∞;a]}.   (5.28)
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
5.1 Eikonal approximation
 145
(7) A finite Wilson line from a^μ to b^μ can be calculated by cutting the path in two at +∞ or −∞, where the second part is a Hermitian conjugate line. This is illustrated schematically in the diagram in Figure 5.5.

Figure 5.5: The diagram for n-gluon radiation from a finite Wilson line can be understood as a sum of products of two half-infinite Wilson lines.
Let us recapitulate all the Feynman rules we have derived for a general linear Wilson line:

Feynman rules for linear Wilson lines
(1) Wilson line propagator: i/(n⋅k + iϵ) (momentum in the direction of the path).
(2) External point r^μ: e^{−i r⋅k} (momentum flowing towards the external point).
(3) Infinite point: 1 (k = 0).
(4) Wilson vertex: −i g n^μ (t^a)_{ij}.
(5) An infinite Wilson line is parameterized as passing through a point r^μ. This point is connected to ±∞ on one side, and to a cut line on the other side. All gluons are radiated from the part between this cut line and ∓∞.
(6) Wilson cut propagator: 2π δ(k + iϵ).
(7) A finite Wilson line from a^μ to b^μ can be calculated by cutting the path in two at +∞ or −∞, where the second part is a Hermitian conjugate line.
It is important to realize that different conventions exist in the literature. We follow the same convention as Peskin and Schroeder [60], where the inverse Fourier transform has a minus sign in the exponent. If one uses the convention with a plus sign in the
exponent, as in Collins's book [30], one has to draw a Wilson line diagram with gluon momenta pointing outwards instead of inwards (essentially making the flip k → −k). Also, the sign in the Wilson line exponential is defined by the sign choice of the coupling constant g. As is clear from equation (5.5), we use a negative sign. If one were to use a positive sign, one would get the complex conjugate of rule (4). Of course, the use of the right convention only matters for intermediate calculations; the final result concerning observables is invariant.

5.1.2 Wilson line as an eikonal line

Now we return to the original goal of this section: the investigation of the eikonal approximation. In the eikonal approximation we assume a quark with momentum large enough to neglect the change in momentum due to the emission or absorption of a soft gluon. We take an incoming quark with momentum p that absorbs a soft gluon with momentum q. This is illustrated in Figure 5.6 (where the blob represents all possible diagrams connected to the quark propagator). This diagram is equal to
F (i(p̸ + q̸))/((p + q)² + iϵ) (−i g t^a γ^μ) u(p).   (5.29)
Making the soft approximation is the same as neglecting q̸ with respect to p̸, and q² with respect to p⋅q, giving

F (i p_ν γ^ν γ^μ)/(2 p⋅q + iϵ) (−i g t^a) u(p).

Because of the Dirac equation (B.16), p̸ u(p) = 0, we can add a term i p_ν γ^μ γ^ν to the numerator of the fraction:

F (i p_ν {γ^ν, γ^μ})/(2 p⋅q + iϵ) (−i g t^a) u(p).   (5.30)
Last, we use the anticommutation rule (B.14) and write the momentum as p^μ = p n^μ, with n^μ a normalized directional vector, in order to get

F (i n^μ)/(n⋅q + iϵ) (−i g t^a) u(p).   (5.31)

Figure 5.6: A quark radiating a soft gluon.
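The step from (5.30) to (5.31) rests on the Clifford algebra relation p_ν{γ^ν, γ^μ} = 2p^μ. A quick numerical check in Python with an explicit Dirac representation of the γ-matrices and metric signature (+, −, −, −) (our own illustration):

```python
import numpy as np

# Dirac representation of the gamma matrices
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), -np.eye(2)]])
gam = [g0] + [np.block([[np.zeros((2, 2)), si], [-si, np.zeros((2, 2))]]) for si in s]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1
for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(anti, 2*eta[mu, nu]*np.eye(4))

# Hence p_nu {gamma^nu, gamma^mu} = 2 p^mu (the numerator of eq. (5.30))
p = np.array([2.0, 0.3, -0.5, 1.1])      # arbitrary four-momentum p^mu
p_low = eta @ p                           # lower the index: p_nu
for mu in range(4):
    acc = sum(p_low[nu]*(gam[nu] @ gam[mu] + gam[mu] @ gam[nu]) for nu in range(4))
    assert np.allclose(acc, 2*p[mu]*np.eye(4))
```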
What we see is that the Dirac propagator has been replaced by a Wilson line propagator, and the Dirac-gluon coupling by a Wilson-gluon coupling. By using the eikonal approximation, we have literally factorized out the gluon contribution from the Dirac part. Of course this remains valid when radiating more than one gluon. In the latter case, the resulting formula is straightforward:

(−ig)ⁿ F t^{aₙ} ⋯ t^{a₁} u(p) (i n^{μₙ})/(n⋅∑ⁿ qᵢ + iϵ) ⋯ (i n^{μ₂})/(n⋅(q₁+q₂) + iϵ) (i n^{μ₁})/(n⋅q₁ + iϵ).

This is exactly the result for an incoming bare quark connected to the blob, multiplied with a Wilson line going from −∞ to 0:

F 𝒰_{(0;−∞)} u(p).   (5.32)
This is illustrated in the diagram in Figure 5.7. Note that the symbol ⊗̃ does not denote a convolution, but is used to remind ourselves that the relation does not give a bare multiplication either, because the t^{aᵢ} are placed between the u(p) of the external quark and the blob. Writing out the Dirac and Lie indices makes this clear:

(F)_β^j δ^{βα} (t^{aₙ} ⋯ t^{a₁})_{ji} (u(p))_α ⊗̃ (i g n^{μₙ})/(n⋅∑ⁿ qᵢ + iϵ) ⋯ (i g n^{μ₁})/(n⋅q₁ + iϵ).

Figure 5.7: A quark radiating n soft gluons can be represented as a bare quark multiplied with a Wilson line going from −∞ to 0.
From this result, we introduce the concept of an eikonal quark. This is a quark that interacts only softly with the gauge field, and thus does not deviate from its straight path:

An eikonal quark can be understood as a bare quark multiplied with a Wilson line to all orders:

|ψ^i_{eik.}⟩ = 𝒰^{ij}_{(0;−∞)} |ψ^j⟩.   (5.33)

In other words, the net effect of multiple soft gluon interactions on an eikonal quark is just a color rotation (nothing but a phase).
It is common to denote an eikonal quark with a double line, but this gives rise to ambiguities: the double line was already used to denote a Wilson line propagator. These are, although related, not the same. The eikonal line represents a quark (carrying spinor indices) resummed with soft gluon radiation to all orders, while the Wilson line propagator represents gluon radiation at a specified order (not necessarily soft), still to be multiplied with the quark (and carrying no spinor indices itself). In short, Wilson line propagators are used in the calculation of an eikonal line. To appreciate the difference, have a look at equation (5.32): the eikonal quark is the combination 𝒰_{(0;−∞)} u(p), while the Wilson line propagators are components of 𝒰_{(0;−∞)}. To avoid confusion, we will no longer use the arrowhead on a Wilson line to represent its path flow, but reserve this arrowhead to represent an eikonal line instead (where it now represents the eikonal quark's momentum flow):
An eikonal line, i.e., |ψ^i_{eik.}(p)⟩_α,   (5.34a)
A Wilson line propagator, i.e., i/(n⋅q + iϵ).   (5.34b)
But keep in mind that these are commonly interchanged in the literature. Using our notation for the eikonal line, we can write down the eikonal approximation diagrammatically as in Figure 5.8.

Figure 5.8: In the soft limit, a bare quark can be represented as an eikonal quark.
A final remark: in the derivation of the eikonal approximation, more specifically equation (5.30), we used the fact that the quark in question is external, by adding a term γ^μ p̸ u(p) = 0. This is a crucial step, without which we would not have been able to resum all gluons into a Wilson line; i.e., Wilson lines as a resummation of gluon radiation can only appear next to quarks that are on-shell. It is possible to resum gluon radiation into a Wilson line even if it is not soft. For example, in the collinear approximation we allow for large radiated momenta q which are collinear to p, i.e., if p^μ = p n^μ then q^μ = q n^μ in the same direction. The Dirac equation tells us that p̸ u(p) = 0 and thus n̸ u(p) = 0, which implies we can add a term γ^μ q̸ u(p) to equation (5.29). If we keep the quasi on-shell constraint, q² ≈ 0 as compared to p⋅q, this again leads to a Wilson line, but this time with possibly big q momentum components (as long as they are collinear to p).
5.2 Deep inelastic scattering

Now that we have identified the physical interpretation of a Wilson line, being a resummation of soft gluons, we continue by exploring how this translates into real-world examples. By far the most used application of Wilson lines in QCD is inside the definition of a Parton Density Function, or PDF for short, which is a function containing all information on the proton content. We start with the easiest setup, which is Deep Inelastic Scattering, or DIS for short. Here an electron is collided with a proton, but in the final state only the electron is measured, while all other final states are integrated out. This means we only need one PDF, and we can integrate out the transversal component of the momentum of the struck quark, leaving only longitudinal dependence in the PDF. In Section 5.3 we go one step further by identifying a final-state hadron, implying the need for two PDFs concurrently, and the preservation of transversal momentum dependence.
5.2.1 Kinematics

Deep inelastic scattering is the most straightforward process to probe the insides of a hadron. An electron is collided head-on with a proton (or whatever hadron), destroying it. The kinematic diagram is shown in Figure 5.9. We will always neglect electron masses. The center-of-mass energy squared s is then given by

s = (P + l)² = m_p² + 2 P⋅l,  (5.35)

and q is the momentum transferred by the photon:

q^μ = l^μ − l′^μ.  (5.36)

Because q² = 2 E_e E_e′ (cos θ_ee′ − 1) ≤ 0, we define Q² = −q² ≥ 0. The invariant mass of the final state X is then given by

m_X² = (P + q)² = m_p² + 2 P⋅q − Q².  (5.37)

Figure 5.9: Kinematics of deep inelastic electron–proton scattering.
In order for the photon to probe the contents of the proton, it should have a wavelength λ ≪ r_p, with λ ∼ 1/Q and r_p the radius of the proton. The latter is fully destroyed if we have deep (Q² ≫ m_p²) and inelastic (m_X² ≫ m_p²) scattering. The two Lorentz invariants of interest in the process are Q² and P⋅q, but it is convenient to use the variables Q² and x_B instead, where

x_B = Q²/(2 P⋅q)  (5.38)

is called the Bjorken-x. Unless necessary to avoid confusion, we will always drop the index 'B'; just remember that x always denotes the Bjorken-x (and thus not a general momentum fraction, see further). Its kinematics constrain x to lie between Q²/(s + Q²) (neglecting terms of 𝒪(m_p/Q)) and 1 (the elastic limit). Another useful variable is

y = P⋅q / P⋅l  (5.39a)
  = Q² / (x (s − m_p²)),  (5.39b)

the fractional energy loss of the lepton: in the rest frame it equals y = (E − E′)/E. It is not an independent variable, because

Q² = x y (s − m_p²).  (5.40)
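To make the kinematics concrete, here is a small numerical sketch (our own illustration; the beam energy, scattered energy and angle are made-up values, and numpy is assumed) verifying these relations in the target rest frame:

```python
import numpy as np

metric = np.diag([1.0, -1.0, -1.0, -1.0])
def dot(a, b): return a @ metric @ b

mp = 0.938                            # proton mass [GeV]
P  = np.array([mp, 0.0, 0.0, 0.0])    # proton at rest (target rest frame)
E, Ep, th = 27.5, 12.0, 0.25          # made-up beam energy, scattered energy, angle
l  = np.array([E, 0.0, 0.0, E])                         # massless incoming electron
lp = np.array([Ep, Ep*np.sin(th), 0.0, Ep*np.cos(th)])  # massless outgoing electron

q  = l - lp                           # eq. (5.36)
Q2 = -dot(q, q)
s  = dot(P + l, P + l)
x  = Q2 / (2 * dot(P, q))             # Bjorken-x, eq. (5.38)
y  = dot(P, q) / dot(P, l)            # eq. (5.39a)

assert np.isclose(Q2, 2 * E * Ep * (1 - np.cos(th)))  # Q^2 = -q^2 >= 0
assert np.isclose(s, mp**2 + 2 * dot(P, l))           # eq. (5.35)
assert np.isclose(y, (E - Ep) / E)                    # fractional energy loss
assert np.isclose(y, Q2 / (x * (s - mp**2)))          # eq. (5.39b)
assert np.isclose(Q2, x * y * (s - mp**2))            # eq. (5.40)
print(f"Q2 = {Q2:.2f} GeV2, x = {x:.3f}, y = {y:.3f}")
```

All five assertions are exact consequences of the definitions, independent of the chosen numbers.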
Let us finish this subsection on kinematics with two trivial relations:

2x P⋅l = Q²/y,  (5.41a)
l⋅q = −l′⋅q = −Q²/2.  (5.41b)

The latter can be demonstrated by calculating (l − q)² = l′² = 0.

5.2.2 Invitation: the free parton model

A 'parton' is a term used to denote any pointlike constituent of the proton, be it a quark, an antiquark or a gluon. The parton model (or PM for short) describes the proton as a box containing an undetermined number of such partons. The mutual interactions of these partons have large timescales compared to the interaction with the photon, allowing us to separate the latter from the former. For instance, inside the proton a gluon could fluctuate into a quark–antiquark pair. The photon would enter the proton and kick out one of the quarks, much faster than the pair can recombine. The pair looks 'frozen' to the photon: because of the much larger timescale of the parton interactions, all dynamics are hidden from the photon. The PM thus describes DIS without the strong interaction participating, i. e., we set g_s = 0, because all strong interactions are hidden inside the proton.
It is convenient to call the short-distance process, the interaction with the photon, the hard part, which we will often denote with a hat, e. g., ŝ is the hard c. o. m. energy squared. In contrast to this stands the soft part, which, as we will see in the later subsections, contains all interactions at large distances. For now, we can make an intuitive distinction: everything inside the proton is soft, everything outside the proton (the photon and the struck quark) is hard. Later on we will give a more rigorous formulation of this distinction. Before we really delve into the PM, we try to get a general idea by investigating an extreme case: the Free Parton Model (FPM). In this toy model the proton has no dynamic structure, but merely consists of exactly three quarks, totally unaware of each other's existence. From the point of view of the photon it does not matter how the proton structure looks, be it in the FPM or the standard PM; it just hits a parton like it would hit any electromagnetically charged particle, ignoring all other particles in the proton. The hard part of the PM is therefore genuine electron–quark scattering, which we can describe similarly to electron–muon scattering.6 This is illustrated schematically in Figure 5.10. The differential cross section for (unpolarized) e⁻μ⁺ scattering can be calculated by basic QED techniques and equals

dσ/dy (e⁻μ⁺ → e⁻μ⁺) = (4πα²s/Q⁴) (1 − y + y²/2),  (5.42)

where α ≈ 1/137 is the electromagnetic fine-structure constant. The only difference between the cross section for e⁻μ⁺ scattering and that for e⁻q± scattering is the charge of the quark:

dσ̂/dy (e⁻q± → e⁻q±) = e_q² (4πα²ŝ/Q⁴) (1 − y + y²/2),

but now ŝ = (l + k)², the centre-of-mass energy squared of the electron and the quark. In order to relate the hard cross section to the full cross section, we define the quark
Figure 5.10: Deep inelastic scattering in the free parton model. The virtual photon strikes one of the quarks, while the other two quarks are left unharmed and do not influence the process in any way.
6 Note that we deliberately chose e− μ+ scattering over e− e+ scattering, because the latter also contains a diagram where the two electrons annihilate into a virtual photon, which has no correspondence with e− q scattering.
momentum as a fraction of the proton momentum:

k = ξP,  0 < ξ < 1,

such that ŝ = ξ s and ŷ = y, because we ignore the quark and electron masses. For the outgoing quark to be on-shell, we have the requirement

(k + q)² ≈ 2ξ P⋅q − Q² ≡ 0  ⇒  ξ ≡ x.

In this case the momentum fraction equals the Bjorken variable, but this is certainly not a general result. The electron–quark cross section is then given by

d³σ̂_q/(dx dy dξ) = (4πα²s/Q⁴) (1 − y + y²/2) e_q² ξ δ(x − ξ).  (5.43)

We have made a distinction between x and ξ from a physical point of view. The Bjorken fraction x is related to, and kinematically constrained by, the type of experiment (in the case of DIS it is given by (5.38)), while ξ is related to the proton only, by representing the momentum fraction the quark carries in a specific event. Going to the electron–proton cross section is obvious in the FPM: we simply integrate over the quark fraction ξ and make a weighted sum over the three quarks:

d²σ^FPM/(dx dy) = (1/3) Σ_q ∫dξ d³σ̂_q/(dx dy dξ)
                = (4πα²s/Q⁴) (1 − y + y²/2) (x/3) Σ_q e_q².  (5.44)
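As a quick illustration of this result, the sketch below (our own illustration, not from the text; it takes the proton to consist of exactly the three valence quarks u, u, d, and uses made-up kinematics) evaluates the FPM cross section numerically:

```python
import numpy as np

alpha = 1 / 137.036                  # electromagnetic fine-structure constant
eq2 = {"u": (2/3)**2, "d": (1/3)**2} # squared quark charges

def d2sigma_FPM(x, y, s):
    """Free-parton-model e-p cross section d^2 sigma/dx dy, eq. (5.44), in GeV^-2.
    The proton is taken as exactly the three valence quarks u, u, d."""
    Q2 = x * y * s                   # eq. (5.40), neglecting the proton mass
    charges = eq2["u"] + eq2["u"] + eq2["d"]  # sum over the three quarks
    return 4 * np.pi * alpha**2 * s / Q2**2 * (1 - y + y**2 / 2) * x / 3 * charges

# GeV^-2 -> picobarn conversion: (hbar c)^2 = 0.3894 mb GeV^2 = 0.3894e9 pb GeV^2
GEV2_TO_PB = 0.3894e9
val = d2sigma_FPM(x=0.3, y=0.5, s=300.0**2) * GEV2_TO_PB
print(f"d2sigma/dxdy = {val:.3g} pb at x=0.3, y=0.5, sqrt(s)=300 GeV")
```

Note that at fixed x and y the result scales like 1/s (since Q⁴ = (xys)²), a behaviour one can use as a quick consistency check of the implementation.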
5.2.3 A more formal approach

Let us redo our intuitive derivation from the previous subsection in a more formal way. We will treat the proton as a 'black box' (contrary to the FPM representation, where it is a packet of three partons), which we deeply probe with a highly virtual photon. This is depicted in Figure 5.11. What we keep from the parton model is the assumption that the photon interacts with only one constituent of the proton (a quark, an antiquark, or at higher orders possibly a gluon), on a timescale sufficiently small to allow the struck
Figure 5.11: Deep inelastic scattering to all orders: a photon hitting a proton and breaking it.
parton to be considered temporarily 'free'. To motivate this quantitatively, we write the components of the proton momentum P and the parton momentum k in lightcone coordinates (see Appendix B.3):

P^μ = (P⁺, m_p²/(2P⁺), 0_⊥),  k^μ = (k⁺, k⁻, k_⊥).

In the rest frame of the proton, the distribution of its constituents is isotropic, i. e., all components of k^μ are of the order ≲ m_p. In the limit P⁺ → ∞, the so-called infinite-momentum frame, the only remaining component of the proton momentum is its plus-component. The parton naturally follows the proton in the boost. The 4-momenta then become:

P^μ_IMF = (P⁺, 0⁻, 0_⊥),  k^μ_IMF ≈ (k⁺, 0⁻, 0_⊥).

The parton's transverse component k_⊥ ∼ m_p can be trivially neglected when compared to k⁺ → ∞. The ratio of the plus-momenta is boost invariant, so that we can write

ξ = k⁺/P⁺  ⇒  k^μ_IMF = ξ P^μ_IMF.
As long as we can boost to a frame where P⁺ is the only remaining large component of the proton momentum, the parton is fully collinear to the parent proton and can thus be considered to be 'free'. From now on we will always parameterize the proton momentum and the struck quark momentum based on the dominantly large P⁺:

P^μ = (P⁺, m_p²/(2P⁺), 0_⊥),  k^μ = (ξP⁺, (k² + k_⊥²)/(2ξP⁺), k_⊥),  (5.45)

where we can safely assume k², k_⊥² ≪ 2ξP⁺ and m_p² ≪ 2P⁺, reproducing the IMF limit. Note that ξ now no longer represents the fraction of the proton's momentum, but the fraction of the proton's momentum's lightcone plus-component. We can always choose a frame such that

q^μ = (0⁺, Q²/(2xP⁺), q_⊥),  (5.46)

where q_⊥² = Q².
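The lightcone parameterizations (5.45) and (5.46) can be checked with a few lines of code. The sketch below (our own illustration; it assumes the convention a^± = (a⁰ ± a³)/√2 for lightcone components, cf. Appendix B.3, with made-up numbers for the momenta) verifies the on-shell conditions:

```python
import numpy as np

def to_lc(a):
    """Minkowski (a0, a1, a2, a3) -> lightcone (a+, a-, a_perp),
    with a+- = (a0 +- a3)/sqrt(2) (the convention assumed here)."""
    a0, a1, a2, a3 = a
    return (a0 + a3) / np.sqrt(2), (a0 - a3) / np.sqrt(2), np.array([a1, a2])

def lc_dot(a, b):
    """a.b = a+ b- + a- b+ - a_perp . b_perp"""
    return a[0] * b[1] + a[1] * b[0] - a[2] @ b[2]

# Sanity check of the coordinate change against the Minkowski product:
v = np.array([1.3, 0.2, -0.4, 0.7])
assert np.isclose(lc_dot(to_lc(v), to_lc(v)), v @ np.diag([1.0, -1.0, -1.0, -1.0]) @ v)

mp, Q2, x, xi = 0.938, 10.0, 0.25, 0.25
Pplus = 50.0                                     # large boost: P+ >> mp
kperp = np.array([0.3, -0.1]); k2 = -0.2         # made-up parton virtuality k^2

# Eq. (5.45): proton and struck-quark momenta; eq. (5.46): photon momentum.
P = (Pplus, mp**2 / (2 * Pplus), np.zeros(2))
k = (xi * Pplus, (k2 + kperp @ kperp) / (2 * xi * Pplus), kperp)
q = (0.0, Q2 / (2 * x * Pplus), np.array([np.sqrt(Q2), 0.0]))

assert np.isclose(lc_dot(P, P), mp**2)           # proton on-shell
assert np.isclose(lc_dot(k, k), k2)              # parton virtuality reproduced
assert np.isclose(lc_dot(q, q), -Q2)             # q^2 = -Q^2
assert np.isclose(Q2 / (2 * lc_dot(P, q)), x)    # Bjorken-x, eq. (5.38)
print("lightcone parameterizations (5.45)-(5.46) are consistent")
```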
Returning our attention to the mechanics behind the process, we write the matrix element for a given final state X as a function of the leptonic and hadronic states:

ℳ_X = ⟨l′| J^μ_leptonic |l⟩ D_μν(q) ⟨X| J^ν_hadronic |P⟩,  (5.47)

with D_μν the photon propagator. The differential cross section is then given by

dσ = (1/(4P⋅l)) Σ_X ∫ d³l′/((2π)³ 2E_l′) d³p_X/((2π)³ 2E_X) (2π)⁴ δ⁽⁴⁾(P + l − p_X − l′) |ℳ|²,

E_l′ d³σ/d³l′ = (2/(s − m_p²)) (α²/Q⁴) L_μν W^μν.  (5.48)
It will be convenient to define a set of Cartesian basis vectors before we continue. We start by choosing a spacelike normal vector in the direction of q^μ. We thus define the normal vector q̂^μ as

q̂^μ = q^μ/Q,  (5.49)

which indeed has q̂² = −1, the necessary condition to make it spacelike. Next we construct the timelike basis vector from the proton momentum P^μ by subtracting from it its projection on q̂,7 and dividing by its total length:

t̂^μ = (1/√(m_p² + (P⋅q̂)²)) (P^μ + P⋅q̂ q̂^μ)
    = (1/κ) (1/Q) (2x P^μ + q^μ),  (5.50)

where x is the Bjorken-x and κ = √(4x² m_p²/Q² + 1) → 1 in the limit Q² ≫ m_p². The next basis vector is then constructed by subtracting from it its projection on q̂^μ and t̂^μ. This is the same as contracting it with

g⊥^{μν} = g^{μν} + q̂^μ q̂^ν − t̂^μ t̂^ν,  (5.51)

with the following useful properties:

q̂_μ g⊥^{μν} = g⊥^{μν} q̂_ν = 0,  (5.52a)
t̂_μ g⊥^{μν} = g⊥^{μν} t̂_ν = 0,  (5.52b)

7 The projection of P^μ on q̂^μ has to be normalised by the length of q̂^μ, giving rise to an extra minus sign: (P⋅q̂ / q̂⋅q̂) q̂^μ = −P⋅q̂ q̂^μ.
g⊥^{μν} g⊥νρ = δ^μ_ρ + q̂^μ q̂_ρ − t̂^μ t̂_ρ,  (5.52c)
g⊥^{μν} g⊥μν = 2.  (5.52d)
Note that this definition of g⊥^{μν} is compatible with the definition in (B.40). We can construct a third orthonormal (spacelike) vector from, say, l^μ:

l̂^μ = (1/√(−l_ρ g⊥^{ρσ} g⊥σλ l^λ)) g⊥^{μν} l_ν
    = (1/√(1 − y + ((1−κ²)/4) y²)) (κ (y/Q) l^μ − κ (y/2) q̂^μ − ((2−y)/2) t̂^μ),

where we used the relations in (5.41). It is again a spacelike orthonormal vector:

l̂² = −1.  (5.53)
Now normally we would proceed with the construction of the last orthonormal basis vector, but we do not have any independent vectors left in our process. But we can still define an antisymmetric projection tensor as follows:

ε⊥^{μν} = ε^{μνρσ} t̂_ρ q̂_σ.  (5.54)

As before, this definition of ε⊥^{μν} is compatible with the definition in (B.42). It is easy to show that

ε⊥^{μν} t̂_ν = 0,  (5.55a)
ε⊥^{μν} q̂_ν = 0,  (5.55b)
ε⊥^{μν} g⊥νρ = ε⊥^μ_ρ,  (5.55c)
ε⊥^{μν} g⊥μν = 0,  (5.55d)

by use of the antisymmetry of ε^{μνρσ}. Note that ε⊥^μ_ν has the same components as ε⊥^{μν} but with opposite signs. Furthermore, because in general

ε^{μνρσ} ε_{μντυ} = −2 (δ^ρ_τ δ^σ_υ − δ^ρ_υ δ^σ_τ),  (5.56)

we have

ε⊥^{μν} ε⊥μν = 2.  (5.57)
Let us summarize our new basis.

Orthonormal basis vectors:

q̂^μ = q^μ/Q,  (5.58a)
t̂^μ = (1/(κQ)) (2x P^μ + q^μ),  (5.58b)
l̂^μ = (1/√(1 − y + ((1−κ²)/4) y²)) (κ (y/Q) l^μ − κ (y/2) q̂^μ − ((2−y)/2) t̂^μ).  (5.58c)

Transversal tensors:

g⊥^{μν} = g^{μν} + q̂^μ q̂^ν − t̂^μ t̂^ν,  (5.59a)
ε⊥^{μν} = ε^{μνρσ} t̂_ρ q̂_σ.  (5.59b)
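The orthonormality of this basis, and the properties (5.52), are easy to verify numerically. The following sketch (our own illustration; the electron kinematics are made-up values, and numpy is assumed) constructs q̂, t̂, l̂ and g⊥ explicitly:

```python
import numpy as np

metric = np.diag([1.0, -1.0, -1.0, -1.0])
def dot(a, b): return a @ metric @ b

mp = 0.938
P  = np.array([mp, 0.0, 0.0, 0.0])      # proton at rest
E, Ep, th = 27.5, 12.0, 0.25            # made-up electron kinematics
l  = np.array([E, 0.0, 0.0, E])
lp = np.array([Ep, Ep*np.sin(th), 0.0, Ep*np.cos(th)])
q  = l - lp
Q2 = -dot(q, q); Q = np.sqrt(Q2)
x  = Q2 / (2 * dot(P, q))
y  = dot(P, q) / dot(P, l)
kappa = np.sqrt(4 * x**2 * mp**2 / Q2 + 1)

qhat = q / Q                                     # eq. (5.58a)
that = (2 * x * P + q) / (kappa * Q)             # eq. (5.58b)
N = 1 - y + (1 - kappa**2) / 4 * y**2
lhat = (kappa * y / Q * l - kappa * y / 2 * qhat
        - (2 - y) / 2 * that) / np.sqrt(N)       # eq. (5.58c)

assert np.isclose(dot(qhat, qhat), -1)           # spacelike unit vector
assert np.isclose(dot(that, that), +1)           # timelike unit vector
assert np.isclose(dot(lhat, lhat), -1)           # eq. (5.53)
for a, b in [(qhat, that), (qhat, lhat), (that, lhat)]:
    assert np.isclose(dot(a, b), 0)              # mutual orthogonality

# Transversal projector, eq. (5.59a), and its properties (5.52):
g_perp = metric + np.outer(qhat, qhat) - np.outer(that, that)
assert np.allclose((metric @ qhat) @ g_perp, 0)  # (5.52a)
assert np.allclose((metric @ that) @ g_perp, 0)  # (5.52b)
g_perp_low = metric @ g_perp @ metric
assert np.isclose(np.sum(g_perp * g_perp_low), 2.0)  # (5.52d)
print("basis (5.58)-(5.59) verified")
```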
Now we can express all these momenta in our new basis using (5.45) (remember that the projections on q̂^μ and l̂^μ give an extra minus sign, because q̂² = l̂² = −1):

q^μ = Q q̂^μ,  (5.60a)
P^μ = κ (Q/2x) t̂^μ − (Q/2x) q̂^μ,  (5.60b)
k^μ ≈ κ (ξQ/2x) t̂^μ − (ξQ/2x) q̂^μ,  (5.60c)
l^μ = ((2−y)/(2κy)) Q t̂^μ + (Q/2) q̂^μ + (Q/(κy)) √(1 − y + ((1−κ²)/4) y²) l̂^μ,  (5.60d)
l′^μ = ((2−y)/(2κy)) Q t̂^μ − (Q/2) q̂^μ + (Q/(κy)) √(1 − y + ((1−κ²)/4) y²) l̂^μ.  (5.60e)
It is easy to verify that these formulae indeed reproduce the correct definitions; for instance, one can quickly check the on-shell conditions q² = −Q², k² ≈ ξ² m_p², l² = l′² = 0.

Let us return to equation (5.48), and specify the lepton and hadron tensor in our new basis. We consider the electron beam to be polarized, say longitudinally, but we do not measure the polarization of the outgoing electron, implying we have to sum over outgoing polarization states using (B.22). Then the lepton tensor L^{μν} is given by

L^{μν} = Σ_λ′ (ū_λ(l) γ^μ u_λ′(l′)) (ū_λ′(l′) γ^ν u_λ(l))
       = −Q² g^{μν} + 4 l^{(μ} l′^{ν)} + 2iλ ε^{μνρσ} l_ρ l′_σ.  (5.61)
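As a cross-check, one can evaluate the spin-summed lepton tensor numerically via the standard trace technique: averaging (5.61) over the beam helicity λ removes the antisymmetric part and summing over both values of λ doubles the symmetric one, giving Tr[l̸ γ^μ l̸′ γ^ν]. A minimal sketch (our own illustration; Dirac representation and made-up kinematics assumed):

```python
import numpy as np

I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])
def slash(p): return sum(metric[m, m] * p[m] * gamma[m] for m in range(4))

# Massless incoming and outgoing electron momenta (made-up values).
E, Ep, th = 27.5, 12.0, 0.25
l  = np.array([E, 0.0, 0.0, E])
lp = np.array([Ep, Ep * np.sin(th), 0.0, Ep * np.cos(th)])
Q2 = -((l - lp) @ metric @ (l - lp))

# Spin-summed lepton tensor via the trace technique:
#   sum over both spins of (ubar γ^μ u')(ubar' γ^ν u) = Tr[ lslash γ^μ l'slash γ^ν ]
L = np.array([[np.trace(slash(l) @ gamma[m] @ slash(lp) @ gamma[n])
               for n in range(4)] for m in range(4)])

# = 4 (l^μ l'^ν + l^ν l'^μ) - 2 Q^2 g^{μν}, using l·l' = Q^2/2,
# i.e. twice the symmetric (λ-independent) part of (5.61).
expected = 4 * (np.outer(l, lp) + np.outer(lp, l)) - 2 * Q2 * metric
assert np.allclose(L, expected)
print("spin-summed lepton tensor matches the trace identity")
```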
Writing it in our new basis gives

L^{μν} = (Q²/y²) [ −y² g⊥^{μν} + (4/κ²)(1 − y + ((1−κ²)/4) y²)(t̂^μ t̂^ν + l̂^μ l̂^ν) − iλ (y/κ)(2−y) ε⊥^{μν}
       + (4/κ²)(2−y) √(1 − y + ((1−κ²)/4) y²) t̂^{(μ} l̂^{ν)} + 2iλ (y/κ) √(1 − y + ((1−κ²)/4) y²) ε^{μνρσ} q̂_ρ l̂_σ ].  (5.62)
On the other hand, from equation (5.48) we see that the hadronic tensor is defined as

W^μν = 4π³ Σ_X ∫ d³p_X/((2π)³ 2E_X) δ⁽⁴⁾(P + q − p_X) ⟨P| J^{†μ}(0) |X⟩ ⟨X| J^ν(0) |P⟩
     = (1/4π) ∫d⁴z e^{i q⋅z} ⟨P| J^{†μ}(z) J^ν(0) |P⟩,  (5.63)

where we used the translation operator,

⟨P| J^{†μ}(0) |X⟩ e^{i (P−p_X)⋅z} = ⟨P| J^{†μ}(z) |X⟩,  (5.64)

and integrated out a complete set of states by use of the completeness relation:

Σ_X ∫ d³p_X/((2π)³ 2E_X) |X⟩⟨X| = 1.  (5.65)
Figure 5.12 shows the common convention to draw the hadronic tensor. It is a squared amplitude for a proton absorbing a photon going to any final state X, while summing over all possible final states. The vertical line, a socalled ‘finalstate cut’, acts both as a separator (everything to the left is the amplitude ℳ, everything to the right is the complex conjugate ℳ∗ ) and as a symbol representing the completeness relation (reminding us that we have to sum over all final states and integrate out their momenta). It is straightforward to use the finalstate cut in perturbative calculations: every particle crossing it is a real particle and thus has to be onshell. This can be incorporated by adding a δ(p2 − m2 ), matching the particle’s momentum squared to its mass squared.
Figure 5.12: The hadronic tensor is a squared amplitude defined with a sum over all possible external states. This sum, and the separation between the amplitude and its conjugate, is represented by the vertical finalstate cut line.
We have no information about the contents of the hadronic tensor, as it sits in the highly nonperturbative region of QCD; the proton constituents are strongly confined. But we can parameterize the hadronic tensor based on its mathematical structure. In this book we will only work with unpolarized hadron tensors, as polarization brings with it some technicalities which would distract us too much from our main topic of interest. The general course of the calculation remains, however, the same for polarized hadrons.
For an unpolarized proton, W^μν will only exist in the vector space spanned by the orthonormal vectors we derived before. But as the electron momentum l^μ does not have any physical significance inside the hadron tensor, we will use q̂^μ, t̂^μ, and their crossings (remember that the transversal plane can be described by g⊥^{μν} and ε⊥^{μν}, being combinations of q̂^μ and t̂^ν). Thus we can expand it as

W^μν = A g^{μν} + B q̂^μ q̂^ν + C q̂^μ t̂^ν + D t̂^μ q̂^ν + E t̂^μ t̂^ν + iF ε^{μνρσ} t̂_ρ q̂_σ,

where the scalar functions A, …, F only depend on m_p², Q² and x (because there are no other invariants in the proton system). In the case of polarized hadrons, the spin vector S^μ and its combinations should be added to the basis. Next we impose current conservation, which requires 𝜕_μ J^μ = 0. Applying this to equation (5.63), we find q̂_μ W^μν = W^μν q̂_ν = 0. This condition gives

A ≡ B,  C ≡ D ≡ 0.
W^μν should also be Hermitian and time-reversal invariant, and for the electromagnetic and the strong force it should be parity invariant as well. By using the transformation matrix

Λ^μ_ν = ( 1  0  0  0
          0 −1  0  0
          0  0 −1  0
          0  0  0 −1 ),

we can write out these conditions (adding spin dependence for future reference):

Hermiticity:      W*_{μν}(q, P, S) ≡ W_{νμ}(q, P, S),  (5.66a)
parity-reversal:  Λ_μ^ρ Λ_ν^σ W_{ρσ}(q, P, S) ≡ W_{μν}(q̃, P̃, −S̃),  (5.66b)
time-reversal:    Λ_μ^ρ Λ_ν^σ W*_{ρσ}(q, P, S) ≡ W_{μν}(q̃, P̃, S̃),  (5.66c)

where q̃^μ = δ^0_μ q⁰ − δ^i_μ q^i. The effect of these conditions is that A, …, F should be real functions, and the parity-reversal requirement sets F = 0. But parity is not conserved in weak interactions; in that case F is allowed. We can rewrite W^μν as

W^μν = −(1/2x) [ g⊥^{μν} F_T(x, Q²) − t̂^μ t̂^ν F_L(x, Q²) − i ε⊥^{μν} F_A(x, Q²) ],  (5.67)

where

F_T = −2x A,  F_L = 2x (A + E),  F_A = 2x F.
These are called the transversal, longitudinal, and axial structure functions of the proton, respectively. They are nonperturbative (and thus noncalculable) objects, which have to be extracted from experiment. In parallel to these, a different notation is also used in the literature:

F1 = (1/2x) F_T,        F_T = 2x F1,  (5.68a)
F2 = (1/κ²)(F_L + F_T),  F_L = κ² F2 − 2x F1,  (5.68b)
F3 = (1/(x κ²)) F_A,     F_A = x κ² F3.  (5.68c)

We can express the hadron tensor as a function of these as well:

W^μν = −g⊥^{μν} F1 + t̂^μ t̂^ν ((κ²/2x) F2 − F1) + i ε⊥^{μν} (κ²/2) F3.  (5.69)
The difference between F_T, F_L, F_A and F1, F2, F3 is just a matter of historic convention. However, there exist different conventions for the normalization of the structure functions, often differing by a factor of 2 or 2x. We follow the same convention as, e. g., in [56], as we believe it to be the most commonly accepted one. The structure functions can be extracted from the hadronic tensor by projecting with appropriate tensors:

F1 = −(1/2) g⊥^{μν} W_μν,                  F_T = −x g⊥^{μν} W_μν,  (5.70a)
F2 = (x/κ²) (2 t̂^μ t̂^ν − g⊥^{μν}) W_μν,    F_L = 2x t̂^μ t̂^ν W_μν,  (5.70b)
F3 = −(i/κ²) ε⊥^{μν} W_μν,                 F_A = −x i ε⊥^{μν} W_μν.  (5.70c)
For the rest of the book we will ignore weak interactions, dropping F_A from the hadronic tensor. Combining the results from the leptonic and hadronic tensors, we get

L_μν W^μν = (2Q²/(κ² x y²)) [ (1 − y + ((1+κ²)/4) y²) F_T(x, Q²) + (1 − y + ((1−κ²)/4) y²) F_L(x, Q²) ].

Plugging this result in equation (5.48) gives us the final expression for the unpolarized cross section for electron–proton deep inelastic scattering (neglecting terms of order m_p²/Q², hence taking the limit κ → 1):

d²σ/(dx dy) = (4πα²s/Q⁴) [ (1 − y + y²/2) F_T(x, Q²) + (1 − y) F_L(x, Q²) ].  (5.71)
If we compare this with the result in equation (5.44), we find the following structure functions for the free parton model:

F_T^FPM(x, Q²) = (x/3) Σ_q e_q²,  (5.72a)
F_L^FPM(x, Q²) = 0.  (5.72b)
5.2.4 Parton distribution functions

In Section 5.2.2 we succeeded in deriving a lowest-order result for the cross section, starting from a static proton. On the other hand, in Section 5.2.3 we followed a more formal approach, without any assumptions about the proton structure but one: that we can separate the hard interaction from the proton contents. This is the concept of factorization: in any process containing hadrons we try to separate the perturbative hard part (the scattering Feynman diagram) from the nonperturbative part (the hadron contents). The latter is not calculable, and consequently it has to be described by a probability density function (or parton distribution function, PDF for short) that gives the probability to find a parton with momentum fraction x in the parent hadron. However, one has to proceed with caution, because factorization has not been proven except for a small number of processes, including e⁺e⁻ annihilation, DIS, SIDIS and Drell–Yan. The PDF is literally the object that describes the proton as a black box: you give it a fraction x and it returns the probability to hit a parton carrying this momentum fraction when you bombard the proton with a photon. It is commonly written as f_q(ξ), where q is the type of parton for which the PDF is defined. There are thus 7 PDFs: one for each quark and antiquark, and one for the gluon. A parton distribution function is not calculable; it has to be extracted from experiment. However, as we will see in Section 5.2.7, we can calculate its evolution equations, such that we can evolve an extracted PDF from a given kinematic region to a new kinematic region. It is a probability density, but it is also a distribution in momentum space; by plotting the PDF as a function of x one gets a clear view of the distribution of the partons in the proton. Furthermore, we assume that the PDF only depends on x, and not, e. g., on the parton's transverse momentum.
This does not mean that we automatically neglect the struck parton’s transverse momentum component! But because we do not identify any hadron in the final state, and because we have to sum over all final states and integrate out their momenta (the finalstate cut), any transverse momentum dependence in the PDF or the hard part is integrated out. Factorization in DIS, also called collinear factorization because of the collinearity of the quark to the proton, is a factorization over
x (and an energy scale). We can write this formally as

dσ/dx ∼ f_q(x, μ_F²) ⊗ Ĥ(x, μ_F²),

which is just a schematic. We will treat the technical details soon, in Section 5.2.7. Whenever information on the transverse momentum is needed, e. g., when identifying a final hadron as in semi-inclusive DIS, collinear factorization will not do, and k_⊥-factorization is needed instead, where a transverse momentum dependent PDF, or TMD for short, is convoluted with the hard part:

dσ/dξ ∼ f_q(ξ, k_⊥, μ_F²) ⊗ Ĥ(ξ, k_⊥, μ_F²).

Formally, a PDF and a TMD should be related by integrating out the transverse momentum dependence:

f_q(ξ) = ∫d²k_⊥ f_q(ξ, k_⊥);

however, after QCD corrections this equality is no longer valid. In the parton model, the concept of (collinear) factorization can be painlessly implemented:

dσ^PM ≡ Σ_q ∫dξ f_q(ξ) dσ̂_q(x/ξ)  (5.73a)
      = f_q ⊗ dσ̂_q.  (5.73b)
Note that this is not a standard convolution the way you might know it, like ∫dτ f(τ) g(t − τ): the latter is a convolution as defined in Fourier space. In QCD, a lot of theoretical progress has been made by the use of Mellin moments. These form an advanced mathematical tool, which would take us too long to delve into; just know that the type of convolution in (5.73) is a convolution in Mellin space. If we now plug equations (5.71) and (5.43) in (5.73), we get

F_T^PM(x, Q²) = Σ_q ∫_x^1 dξ f_q(ξ) F̂_T^q(x/ξ)  (5.74a)
             ≡ Σ_q e_q² x f_q(x),  (5.74b)
F_L^PM(x, Q²) = 0,  (5.74c)

where

F̂_T^q(x) = x e_q² δ(1 − x)  (5.75)

is the structure function of the quark.
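The defining property of a Mellin convolution — its moments factorize, M[f ⊗ g](N) = M[f](N) · M[g](N) — can be illustrated numerically. In the sketch below (our own illustration; the densities f and g are arbitrary toy functions, and scipy is assumed) we use the standard convolution with measure dξ/ξ; the extra factors appearing in (5.73) can be absorbed into the definitions of f and the hard part:

```python
import numpy as np
from scipy.integrate import quad

# Toy densities on (0, 1); any integrable functions will do.
f = lambda xi: 6 * xi * (1 - xi)
g = lambda z: z**2 * (1 - z)

def mellin(h, N):
    """N-th Mellin moment, M[h](N) = integral_0^1 dx x^{N-1} h(x)."""
    return quad(lambda x: x**(N - 1) * h(x), 0, 1)[0]

def conv(x):
    """Mellin convolution (f (x) g)(x) = integral_x^1 dxi/xi f(xi) g(x/xi)."""
    return quad(lambda xi: f(xi) * g(x / xi) / xi, x, 1)[0]

for N in (2, 3, 4.5):
    lhs = quad(lambda x: x**(N - 1) * conv(x), 0, 1)[0]   # M[f conv g](N)
    rhs = mellin(f, N) * mellin(g, N)                     # M[f](N) * M[g](N)
    assert np.isclose(lhs, rhs, rtol=1e-6)
    print(f"N={N}: M[conv]={lhs:.6f}  M[f]*M[g]={rhs:.6f}")
```

This factorization of moments is exactly what makes Mellin space the natural home for the evolution equations mentioned above.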
Note that F_T^PM does not depend on Q²! This is called the 'Bjorken scaling' prediction: the structure functions scale with x, independently of Q². Because this prediction is a direct result of the parton model, it should be clearly visible at leading order (up to first-order QCD corrections, where Bjorken scaling is broken). This is indeed confirmed by experiment. Also note that, by comparing (5.74) to (5.72), we can easily find the quark PDFs in the free parton model:

f_q^FPM(x) = 1/3,  (5.76)

which is exactly the initial assumption of the FPM: the proton equals exactly three quarks, thus the probability of finding a quark is always one third, regardless of the value of x.

A note on the difference between structure functions and PDFs. A structure function emerges in the parameterization of the hadronic tensor, the latter being process dependent. If we have a look at its definition for DIS in equation (5.63), we see that the hadronic tensor contains information both on the proton content and on the photon hitting it. This is illustrated in Figure 5.12, where the blob represents the hadronic tensor, describing the process of a photon hitting a (black box) proton. As a structure function is just a parameterization of the hadronic tensor, the same applies to it. If we change the process to, say, deep inelastic neutrino scattering, our structure functions change as well, because now they describe the process of a W± or Z⁰ boson hitting a proton. But the main idea behind factorization is that, inside the structure functions, we can somehow factorize out the proton content (which is process independent) from the process-dependent part. This is shown in Figure 5.13, where the smaller blob now represents a quark PDF. The factorization of structure functions in the parton model is demonstrated in equations (5.74). The initial factorization ansatz, equation (5.73), is required to be valid for any cross section, given a unique set of PDFs, i. e., the PDFs are universal. We can extract these PDFs in one type of experiment, like electron DIS, and reuse them in another experiment like neutrino DIS. In contrast with the structure functions, PDFs emerge in the parameterization of the quark correlator, as we will see in the next subsection, which is universal by definition.
Figure 5.13: Difference between structure functions (the hadronic tensor W^μν) and PDFs (the inner blob f_q).
5.2.5 Operator definition for PDFs

As we have shown before, we can assume that the photon scatters off a quark with mass m inside the proton, if Q² is sufficiently large. The final state can therefore be split into a quark with momentum p and the full remaining state with momentum p_X. Constructing the (unpolarized) hadronic tensor for this setup is straightforward. First we remark that pulling a quark out of the proton at a spacetime point (0⁺, 0⁻, 0_⊥) is simply ψ_α(0)|P⟩. Then we construct the diagram for the hadronic tensor, the so-called 'handbag diagram', step by step:
– the quark pulled out of the proton:  ⟨X| ψ_α(0) |P⟩;
– adding the photon vertex:  ū^λ_β(p) (γ^ν)_βα ⟨X| ψ_α(0) |P⟩;
– squaring and summing over the quark spin:  ∼ [γ^μ (p̸ + m) γ^ν]_βα ⟨P| ψ̄_β(0) |X⟩ ⟨X| ψ_α(0) |P⟩,

where we omitted the prefactor, the sums and integrations over X and p, and the δ-function. Then the full hadronic tensor is given by

W^μν = (1/4π) Σ_q e_q² Σ̃_X ∫ d³p/((2π)³ 2p⁰) ∫d⁴z e^{i (P+q−p_X−p)⋅z} [γ^μ (p̸ + m) γ^ν]_βα ⟨P| ψ̄_β(0) |X⟩ ⟨X| ψ_α(0) |P⟩,  (5.77)

where we used the shorthand notation

Σ̃_X = Σ_X ∫ d³p_X/((2π)³ 2E_X).  (5.78)
Next we add an on-shell condition to the integral over p:

∫ d³p/((2π)³ 2p⁰)  →  ∫ d⁴p/(2π)⁴ 2π δ⁺(p² − m²),

where δ⁺ is defined in (B.47). We introduce the momentum k = p − q, giving

W^μν = (1/4π) Σ_q e_q² Σ̃_X ∫ d⁴k/(2π)³ δ⁺((k + q)² − m²) ∫d⁴z e^{i (P−k−p_X)⋅z} [γ^μ (k̸ + q̸ + m) γ^ν]_βα ⟨P| ψ̄_β(0) |X⟩ ⟨X| ψ_α(0) |P⟩.

Now the next steps are the same as in equation (5.63), using the translation operator and the completeness relation:

W^μν = (1/2) Σ_q e_q² ∫d⁴k δ⁺((k + q)²) Tr(Φ^q γ^μ (k̸ + q̸) γ^ν),  (5.79a)
Φ^q_αβ = ∫ d⁴z/(2π)⁴ e^{−i k⋅z} ⟨P| ψ̄_β(z) ψ_α(0) |P⟩.  (5.79b)
Φ is the quark correlator, which will be used as a basic building brick to construct PDFs. Note that its Dirac indices are defined in a reversed way; this is deliberate, to set the trace right. This is quite a general result, valid for a range of processes. Using equation (5.45) and neglecting terms of 𝒪(1/Q), we can approximate the δ-function in (5.79a) as

δ((k + q)²) ≈ (1/(2P⋅q)) δ(ξ − x),

which again sets ξ ≡ x as in the free parton model. This then gives

W^μν ≈ (1/4) (P⁺/(P⋅q)) Σ_q e_q² Tr(Φ^q(x) γ^μ (k̸ + q̸) γ^ν),  (5.80)

where the integrated quark correlator is defined as

Φ(x) = ∫dk⁻ d²k_⊥ Φ(x, k⁻, k_⊥) = (1/2π) ∫dz⁻ e^{−i x P⁺ z⁻} ⟨P| ψ̄_β(0⁺, z⁻, 0_⊥) ψ_α(0) |P⟩.  (5.81)
A last simplification that we can make is to assume that the outgoing quark moves largely in the minus direction: k^μ + q^μ ≈ (0⁺, k⁻ + q⁻, 0_⊥). This is easily understood in the infinite-momentum frame, where the quark ricochets back after being struck head-on by the photon. However, it is a valid simplification in any frame, which can be shown by making a 1/Q expansion of W^μν. With this assumption we get

(P⁺/(P⋅q)) (k̸ + q̸) ≈ (P⁺/(P⋅q)) ((k² + k_⊥²)/(2ξP⁺) + q⁻) γ⁺ ≈ γ⁺,

giving the final result for the unpolarized hadron tensor in DIS at leading twist (this means up to 𝒪(1/Q)):
5.2 Deep inelastic scattering
W μν ≈
 165
1 ∑ e2 Tr(Φq (x) γ μ γ + γ ν ) . 4 q q
(5.82)
Now let us investigate the unintegrated quark correlator (5.79b) a bit deeper. Since it is a Dirac matrix, we can expand it in terms of Lorentz vectors, pseudovectors and Dirac matrices. The variables on which it depends are p^μ, P^μ and S^μ (the latter is a pseudovector in the case of fermionic hadrons). Our basis is then (see (B.24)) spanned by

p^μ, P^μ, S^μ  and  1, γ⁵, γ^μ, γ^μγ⁵, γ^{μν},

where γ^{μν} = γ^{[μ}γ^{ν]}. The next steps go completely analogously to our derivation of the structure functions from the hadron tensor in Section 5.2.3. The conditions to satisfy are

Hermiticity:  Φ(p, P, S) ≡ γ⁰ Φ†(p, P, S) γ⁰,  (5.83a)
Parity:       Φ(p, P, S) ≡ γ⁰ Φ(p̃, P̃, −S̃) γ⁰.  (5.83b)
For instance, the integrated quark correlator can be expanded up to leading twist as

Φ(x, P, S) = (1/2) ( f1(x) γ⁻ + g1L(x) S_L γ⁵γ⁻ + h1(x) (1/2)[S̸_T, γ⁻] γ⁵ ),  (5.84)

where the three integrated PDFs f1, g1L and h1 are the unpolarized, helicity, and transversity distributions, respectively. They can be recovered from the quark correlator by projecting on the correct gamma matrix:

f1 = (1/2) Tr(Φ γ⁺),  (5.85a)
g1L = (1/2) Tr(Φ γ⁺γ⁵),  (5.85b)
h1 = (1/2) Tr(Φ γ^{+i} γ⁵).  (5.85c)
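These projections are straightforward to check with explicit gamma matrices. The sketch below (our own illustration; it assumes the Dirac representation, the lightcone convention γ^± = (γ⁰ ± γ³)/√2, and made-up values for f1 and g1L) builds a toy correlator from (5.84) with S_T = 0, S_L = 1 and recovers the distributions:

```python
import numpy as np

I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1_, g2_, g3_ = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
g5 = 1j * g0 @ g1_ @ g2_ @ g3_
gp = (g0 + g3_) / np.sqrt(2)   # gamma+  (convention gamma± = (gamma0 ± gamma3)/sqrt(2))
gm = (g0 - g3_) / np.sqrt(2)   # gamma-

# Toy leading-twist correlator at fixed x, eq. (5.84) with S_T = 0, S_L = 1:
f1, g1L = 0.65, 0.30
Phi = 0.5 * (f1 * gm + g1L * g5 @ gm)

# Projections (5.85) recover the distributions:
assert np.isclose(0.5 * np.trace(Phi @ gp).real, f1)        # eq. (5.85a)
assert np.isclose(0.5 * np.trace(Phi @ gp @ g5).real, g1L)  # eq. (5.85b)
print("gamma-matrix projections recover f1 and g1L")
```

The check works because Tr(γ⁻γ⁺) = 4 g^{+−} = 4 in this convention, while the cross terms vanish under the trace.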
5.2.6 Gauge invariant operator definition

A general Dirac field transforms under a non-Abelian gauge transformation as

ψ(x) → e^{−i α^a(x) t^a} ψ(x),  (5.86a)
ψ̄(x) → ψ̄(x) e^{i α^a(x) t^a}.  (5.86b)
166  5 Wilson lines in highenergy QCD As a result, the quark correlator is not gaugeinvariant: Φ→∫
a a a a d4 z −i k⋅z e ⟨P ψβ (z) ei α (z)t e−i α (0)t ψα (0) P⟩ . (2π)4
But as we saw in the Introduction, a Wilson line 𝒰[x ; y] from y to x transforms as 𝒰[x ; y] → e
−i αa (x)t a
i αa (y)t a
𝒰[x ; y] e
,
then the following definition for the quark correlator is gaugeinvariant: N
Φ=∫
d4 z −i k⋅z e ⟨P ψβ (z) 𝒰[z ; 0] ψα (0) P⟩ . (2π)4
(5.87)
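The statement that a Wilson line transforms only at its endpoints, so that $\bar\psi(z)\,\mathcal{U}_{[z;\,0]}\,\psi(0)$ is invariant, is easy to verify numerically for a discretized line. The sketch below is our own toy illustration (not from the text): it uses SU(2) instead of SU(3) color, builds the line as a path-ordered product of link matrices, and applies a random gauge rotation at every site.

```python
import numpy as np

rng = np.random.default_rng(0)

PAULI = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def exp_i(alpha):
    """exp(i alpha.sigma): a toy SU(2) 'color' rotation from 3 real numbers."""
    H = np.tensordot(alpha, PAULI, axes=1)          # Hermitian generator
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T

# A discretized Wilson line from 0 to z: path-ordered product of N links,
# each link U_n playing the role of exp(i g A(x_n) dx).
N = 20
links = [exp_i(0.1 * rng.standard_normal(3)) for _ in range(N)]

def line(ls):
    U = np.eye(2, dtype=complex)
    for Un in ls:                                   # later links to the left
        U = Un @ U
    return U

# Independent gauge rotation V(x_n) at every lattice site; a single link
# transforms with its own endpoints: U_n -> V(x_{n+1}) U_n V(x_n)^dagger.
V = [exp_i(rng.standard_normal(3)) for _ in range(N + 1)]
tlinks = [V[n + 1] @ links[n] @ V[n].conj().T for n in range(N)]

U, U_t = line(links), line(tlinks)
# All interior V's cancel pairwise: the line feels only its endpoints,
# U -> V(z) U V(0)^dagger, exactly the transformation law quoted above.
assert np.allclose(U_t, V[N] @ U @ V[0].conj().T)

# Hence psibar(z) U psi(0) is invariant when psi(x) -> V(x) psi(x):
psi0 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psiz = rng.standard_normal(2) + 1j * rng.standard_normal(2)
inv = psiz.conj() @ U @ psi0
inv_t = (V[N] @ psiz).conj() @ U_t @ (V[0] @ psi0)
assert np.isclose(inv, inv_t)
print("gauge invariance of psibar(z) U[z;0] psi(0) verified")
```

The pairwise cancellation of the interior rotations is the discrete analogue of the endpoint-only transformation law used in (5.87).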
Note that the gauge transformation of $\mathcal{U}$ only depends on its endpoints. Although the latter are fully fixed by the quark correlator, there is still freedom in the choice of the path, which influences the result. The gauge-invariant correlator is thus path dependent! This will play a big role when working with the $k_\perp$-dependent correlator, which we will investigate further in Section 5.3. The integrated quark correlator on the other hand, see equation (5.81), has its quark fields separated along the $z^-$ direction only. This simplifies the Wilson line⁸ considerably:

$$\Phi(x) = \frac{1}{2\pi}\int dz^-\; e^{-i\,xP^+z^-}\, \langle P|\, \bar\psi_\beta(0^+, z^-, 0_\perp)\, \mathcal{U}^-_{[z^-;\,0]}\, \psi_\alpha(0)\, |P\rangle,$$

$$\mathcal{U}^-_{[z^-;\,0]} = \mathcal{P}\,\exp\left(-i g \int_0^{z^-} d\lambda\;\; n^-\!\cdot A(0^+, \lambda, 0_\perp)\right).$$
In the lightcone gauge we have $A^+ = 0$ and thus $\mathcal{U}^- = 1$, reducing the quark correlator to the definition in (5.81). As long as one stays in the $A^+ = 0$ gauge, it is valid to neglect the Wilson line inside the PDFs. Using the trick in equation (5.28), we can write the Wilson line as

$$\mathcal{U}^-_{[z^-;\,0]} = \left[\mathcal{U}^-_{[+\infty;\,z]}\right]^\dagger \mathcal{U}^-_{[+\infty;\,0]}. \qquad (5.88)$$

We can associate the part of the Wilson line going to $+\infty$ with an on-shell line, because we can throw a complete set of final states between the two Wilson lines. We thus extend the final-state cut through the Wilson line as well. This is illustrated in Figure 5.14. Remember from equation (5.33) that a quark dressed with a Wilson line can be considered an eikonal quark, essentially a quark with soft and collinear gluon resummation. The physical interpretation for the quark correlator is no different: it represents all soft and collinear interactions between the struck quark and the proton.

⁸ In the context of PDFs, Wilson lines are commonly called gauge links. We do not use this terminology in our book.
Figure 5.14: (a) The gaugeinvariant quark correlator function, with a cut Wilson line. (b) The Wilson lines inside the definition of the correlator account for the resummation of soft gluons.
Figure 5.15: A first order correction to the PDF.
We inserted the Wilson line somewhat ad hoc: we were looking for an object with the correct transformation properties to make the quark correlator gauge-invariant, and the Wilson line happens to be such an object. It is not so difficult to prove this in a more formal way, using the eikonal approximation. Consider the diagram in Figure 5.15, where one soft gluon before the cut connects the struck quark with the blob. The hadronic tensor is then (see also (5.82)):

$$W^{\mu\nu} \sim \frac{1}{2}\sum_q e_q^2\; \mathrm{Tr}\!\left(\Phi_{A\,\rho}(k, k-l)\; \gamma^\mu\gamma^+\gamma^\rho\; \frac{\slashed{p} - \slashed{l} + m}{(p-l)^2 - m^2 + i\epsilon}\; \gamma^\nu\right),$$

where the quark-quark-gluon correlator is given by

$$\Phi_{A\,\rho}(k, k-l) = \frac{1}{2}\int \frac{d^4 z}{(2\pi)^4}\, \frac{d^4 u}{(2\pi)^4}\; e^{-i\,k\cdot z}\, e^{-i\,l\cdot(u-z)}\, \langle P|\, \bar\psi_\beta(z)\; g A_\rho(u)\; \psi_\alpha(0)\, |P\rangle.$$
Remember that we have an on-shell quark, so that we can use the eikonal approximation. The $\gamma^+$ is what is left of the real quark after summing over polarization states:

$$\sum_s u_s(p)\,\bar u_s(p) = \slashed{p} + m \;\to\; p^-\,\gamma^+, \qquad (5.89)$$

so we can use $\gamma^+$ as though it were a $u(p)$ on which to perform the eikonal approximation (as in equation (5.30)). Then we can make the approximation

$$\gamma^+\gamma^\rho\; \frac{\slashed{p} - \slashed{l} + m}{(p-l)^2 - m^2 + i\epsilon} \;\approx\; \gamma^+\; \frac{-n^\rho}{n\cdot l - i\epsilon}. \qquad (5.90)$$
This is indeed a Wilson line propagator for a line from $z$ to $\infty$. An important remark: the definition of $\mathcal{U}^\dagger_{[+\infty;\,z]}$ also incorporates an exponential coming from the Feynman rule for the external point. This exponential has been extracted from $\mathcal{U}$ (it is $e^{-i\,xP^+z^-}$), but this remains valid by momentum conservation. The choice to extract the exponential from the Wilson line is a historic convention. It is straightforward to generalize this to any number of gluons, where gluons on the left of the cut will be associated with a line from $z$ to $\infty$, and gluons on the right of the cut with a Hermitian conjugate line. In other words:

$$W^{\mu\nu} \sim \frac{1}{2}\sum_q e_q^2\; \mathrm{Tr}\!\left(\Phi(x)\; \gamma^\mu\gamma^+\gamma^\nu\right),$$

where now the quark-quark-gluon correlator is resummed to all orders:

$$\Phi = \frac{1}{2\pi}\int dz^-\; e^{-i\,xP^+z^-}\, \langle P|\, \bar\psi_\beta(0^+, z^-, 0_\perp)\; \mathcal{U}^{-\dagger}_{[+\infty;\,z]}\; \mathcal{U}^-_{[+\infty;\,0]}\; \psi_\alpha(0)\, |P\rangle. \qquad (5.91)$$
This is indeed the anticipated result. Using equation (5.85a), we can give a gauge-invariant formulation of the unpolarized integrated quark parton density function:

$$f_{q/p}(x) = \frac{1}{4\pi}\int dz^-\; e^{-i\,xP^+z^-}\, \langle P|\, \bar\psi(z^-)\; \mathcal{U}^{-\dagger}_{[+\infty;\,z]}\; \gamma^+\; \mathcal{U}^-_{[+\infty;\,0]}\; \psi(0)\, |P\rangle, \qquad (5.92)$$

where the subscript in $f_{q/p}$ is a common convention to denote "the integrated PDF for a quark with flavor $q$ inside a proton". But what about the gluon PDF? Until now we totally ignored the possibility of the photon hitting a gluon inside the proton, because it is a higher order interaction. But as we move towards a more realistic treatment of QCD, we cannot ignore gluon densities any longer. A photon can hit a gluon by exchanging a quark. This process is called "boson-gluon fusion" and is illustrated in Figure 5.16. To construct the integrated gluon PDF, we start in the lightcone gauge $A^+ = 0$, such that we can ignore Wilson lines for now. There is a constraint equation on $A^-$ relating it to the transverse gauge fields, implying that the latter are the only independent fields. Following the same derivation as in Section 5.2.5, we find

$$f_{g/p}(\xi) = \frac{1}{2\pi}\int dz^-\; \xi P^+\, e^{-i\,\xi P^+ z^-}\, \langle P|\, A^i_a(z^-)\, A^i_a(0)\, |P\rangle.$$

Figure 5.16: Boson-gluon fusion in DIS.
The factor $\xi P^+$ is typical for fields with even-valued spin. To make this gauge-invariant, we cannot simply insert a Wilson line as before, because the gauge fields transform with an extra derivative term. However, the field strength tensor $F^{\mu\nu}$ transforms without such a derivative. We can easily relate the two:

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu,$$
$$F^a_{+i} = \partial_+ A^a_i \quad\Rightarrow\quad A^a_i = \frac{1}{\partial_+}\, F^a_{+i} \qquad (A^+ = 0), \qquad (5.93)$$

which we can use to redefine the gluon PDF. Inserting a Wilson line (in the adjoint representation, as it has to couple to gluons) then gives our final result for the integrated gluon PDF:

$$f_{g/p}(\xi) = \frac{1}{2\pi}\int \frac{dz^-}{\xi P^+}\; e^{-i\,\xi P^+ z^-}\, \langle P|\, F^{+i}_b(z^-)\; \mathcal{U}^{-\,ba}_{A\,[z;\,0]}\; F^{+i}_a(0)\, |P\rangle. \qquad (5.94)$$
5.2.7 Collinear factorization and evolution of PDFs

We introduced the idea of factorization in the parton model (see equation (5.73)). The integrated PDF $f_q(x)$ can be defined operator-wise by constructing it from the integrated quark correlator (see equation (5.85a)). Demanding gauge invariance, we modified the quark-quark correlator by injecting Wilson lines, leading to a resummation of soft gluons inside the PDF (see equation (5.92)). Figure 5.17 shows the factorization in DIS as we have seen so far.

Figure 5.17: Factorization in DIS.

To get a better understanding of the PDFs and of factorization in general, we investigate the process at first order in $\alpha_s$, and see how that changes our factorization rules:

$$F_T(x, Q^2) = \sum_q e_q^2\; x f_q(x) + \mathcal{O}(\alpha_s),$$
$$F_L(x, Q^2) = 0 + \mathcal{O}(\alpha_s).$$
In what follows we will use $F_2 = F_T + F_L$, to be in accordance with the common literature. The correct approach is as follows: we calculate $\hat W^{\mu\nu}$ for a single quark up to first order in $\alpha_s$, then we extract $\hat F_2$ for a single quark using (5.70a). We compare the result with (5.75), plug it into (5.74a) and see how it changes the PDF. There are three types of real gluon exchanges at first order, where the exchanged gluon is on-shell, and three types of virtual gluon exchanges, shown in Figure 5.18. We will calculate the real contributions, and label the momenta as shown in Figure 5.19. The corresponding amplitude for the initial state gluon radiation (Figure 5.19a) is (see also (B.52) for the QCD Feynman rules):

$$\mathcal{M}_i^{a,\lambda,\lambda'} = \bar u_{\lambda'}(p)\; (i e_q \gamma^\mu)\; \frac{i\,\slashed{l}}{l^2 + i\epsilon}\; (-i g\,\slashed{\varepsilon}\, t^a)\; u_\lambda(k).$$

We average over color and incoming spin states, and sum over final spin (see (B.22)) and gluon polarization states,

$$\overline{|\mathcal{M}_i|^2} = \frac{1}{N}\sum_{a,b}\; \frac{1}{2}\sum_\lambda\; \sum_{\lambda'}\; \sum_{\text{pol}}\; \mathcal{M}_i^{a,\lambda,\lambda'}\; \mathcal{M}_i^{*\,b,\lambda,\lambda'}, \qquad (5.95)$$
Figure 5.18: All types of first order corrections to the hard part. Real corrections are on the upper line; virtual on the lower line.
Figure 5.19: (a) Initial state gluon radiation. (b) Final state gluon radiation.
we get for the squared amplitude

$$\overline{|\mathcal{M}|^2} = \frac{1}{2}\; C_F\, e_q^2\, g^2\; \frac{1}{l^4} \sum_{\text{pol}} \mathrm{tr}\!\left(\slashed{\varepsilon}^*\, \slashed{k}\, \slashed{\varepsilon}\, \slashed{l}\, \gamma^\mu\, \slashed{p}\, \gamma^\nu\, \slashed{l}\right), \qquad (5.96)$$
where we used equation (C.4) to simplify the color generators. We can sum over the gluon polarization states by using (B.53b); this simplifies the trace into

$$\mathrm{tr}(\ldots) = -\,\mathrm{tr}\!\left(\gamma_\rho\, \slashed{k}\, \gamma^\rho\, \slashed{l}\, \gamma^\mu\, \slashed{p}\, \gamma^\nu\, \slashed{l}\right) + \frac{1}{k^+ - l^+}\,\mathrm{tr}\!\left(\gamma^+\, \slashed{k}\,(\slashed{k} - \slashed{l})\, \slashed{l}\, \gamma^\mu\, \slashed{p}\, \gamma^\nu\, \slashed{l}\right) + \frac{1}{k^+ - l^+}\,\mathrm{tr}\!\left((\slashed{k} - \slashed{l})\, \slashed{k}\, \gamma^+\, \slashed{l}\, \gamma^\mu\, \slashed{p}\, \gamma^\nu\, \slashed{l}\right).$$

The first term can be simplified using (B.26b), while the other two can be simplified by using (B.39):

$$(\slashed{k} - \slashed{l})\, \slashed{k}\, \gamma^+ = 2k^+ (\slashed{k} - \slashed{l}) - (\slashed{k} - \slashed{l})\, \gamma^+ \slashed{k}$$
$$= 2k^+ (\slashed{k} - \slashed{l}) - 2(k^+ - l^+)\, \slashed{k} + \gamma^+ (\slashed{k} - \slashed{l})\, \slashed{k}$$
$$= 2l^+ \slashed{k} - 2k^+ \slashed{l} - 2\, l\cdot k\, \gamma^+ - \gamma^+ \slashed{k}\, (\slashed{k} - \slashed{l}).$$
Next we move to a frame where the quark lies dominantly in the plus direction while having some transversal momentum $k_\perp$, and where $l$ carries a fraction $z$ of the plus-momentum of the quark, while its transversal momentum $l_\perp$ is zero (all transversal momentum is carried away by the radiated gluon):

$$k = \left(k^+,\; \frac{k_\perp^2}{2k^+},\; k_\perp\right), \qquad l = \left(zk^+,\; \frac{l^2}{2zk^+},\; 0_\perp\right). \qquad (5.97)$$

Combining these gives us

$$\mathrm{tr}(\ldots) = \frac{2}{z}\left(l^2 + z^2 k_\perp^2\right)\; \mathrm{tr}\!\left(\slashed{l}\, \gamma^\mu\, \slashed{p}\, \gamma^\nu\right).$$

The next steps are straightforward but tedious; we will just give the result. After integrating over $k^\mu$ and projecting out $\hat F_2$ using (5.70a), we find the divergent correction term:

$$\hat F_2^{\text{div}} = e_q^2\; \frac{\alpha_s}{2\pi}\; x\, P_{qq}(x)\, \ln\frac{Q^2}{\mu_0^2}. \qquad (5.98)$$
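The logarithm in (5.98) is nothing but the integral of $dk_\perp^2/k_\perp^2$ between the infrared cutoff and the hard scale. A quick numerical check of this, with toy scale values of our own choosing:

```python
import math

mu0_sq, Q_sq = 0.04, 100.0     # toy cutoff and hard scale in GeV^2
n = 1_000_000
h = (Q_sq - mu0_sq) / n
# midpoint rule for the transverse-momentum integral  int_{mu0^2}^{Q^2} dk^2 / k^2
total = sum(h / (mu0_sq + (i + 0.5) * h) for i in range(n))
print(total, math.log(Q_sq / mu0_sq))   # both approximately 7.824
```

Shrinking `mu0_sq` toward zero makes the sum grow without bound, which is the collinear divergence discussed below in its rawest numerical form.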
We did not list the finite terms, as they are easily calculable. The integration over $k_\perp$ led to an infrared divergence (the integral becomes infinite for $k_\perp \to 0$), which we regulated with a lower cutoff $\mu_0^2$. $P_{qq}(x)$ is the so-called splitting function:

$$P_{qq}(z) = C_F\, \frac{1 + z^2}{1 - z}. \qquad (5.99)$$
This function is specific for the diagram in Figure 5.19(a). We use the notation $P_{ij}(z)$ to denote "the probability to get a parton of type $i$ with a momentum fraction $z$ from a parent parton of type $j$". In this case, $P_{qq}(z)$ represents the probability for a quark to split into a quark carrying a fraction $z$ of its momentum and a gluon carrying a fraction $1-z$ of its momentum. The other real diagrams do not add any divergences, only finite, calculable parts. Neither do the virtual diagrams, which can be easily calculated using standard loop-integral methods, as all ultraviolet divergences which appear in individual loop diagrams cancel out. So we can write the full result for $\hat F_2$ at leading order in $\alpha_s$:

$$\hat F_2 = e_q^2\; x\left[\delta(1-x) + \frac{\alpha_s}{2\pi}\left(P_{qq}(x)\, \ln\frac{Q^2}{\mu_0^2} + C(x)\right)\right], \qquad (5.100)$$
where $C(x)$ contains all finite parts. Bjorken scaling is, as expected, violated: $\hat F_2$ now depends on $Q^2$. The singularity which is regulated by $\mu_0^2$ appears when the gluon is emitted collinear to the quark ($k_\perp = 0$), hence it is called a collinear divergence. Physically the limit $k_\perp \to 0$ corresponds to a long-range (soft) interaction, where QCD can no longer be treated perturbatively. To extend our result to the proton structure, we convolute $\hat F_2$ with a PDF, as in equation (5.74a):

$$F_2 = \sum_q e_q^2\; x \int_x^1 \frac{d\xi}{\xi}\; f^q(\xi)\left[\delta\!\left(1 - \frac{x}{\xi}\right) + \frac{\alpha_s}{2\pi}\left(P_{qq}\!\left(\frac{x}{\xi}\right)\ln\frac{Q^2}{\mu_0^2} + C\!\left(\frac{x}{\xi}\right)\right)\right].$$
However, care has to be taken, as $f^q$ is the bare, unrenormalized PDF; this is exactly the same situation as for the renormalization of the coupling constant. From now on we will write it as $f_0^q(\xi)$ to make the distinction clear. We want to absorb the collinear divergence into the PDF and renormalize it up to an arbitrary scale. We choose such a scale $\mu_F$, with $\mu_0^2 < \mu_F^2 < Q^2$, and we use it to split the logarithm:

$$\ln\frac{Q^2}{\mu_0^2} = \ln\frac{Q^2}{\mu_F^2} + \ln\frac{\mu_F^2}{\mu_0^2}, \qquad (5.101)$$

and define a renormalized PDF as:

$$f^q(x, \mu_F^2) = \int_x^1 \frac{d\xi}{\xi}\; f_0^q(\xi)\left[\delta\!\left(1 - \frac{x}{\xi}\right) + \frac{\alpha_s}{2\pi}\left(P\!\left(\frac{x}{\xi}\right)\ln\frac{\mu_F^2}{\mu_0^2} + \bar C\!\left(\frac{x}{\xi}\right)\right)\right]. \qquad (5.102)$$
Then we can rewrite the factorization formula in terms of the renormalized PDF and the factorization scale:

$$F_2 = \sum_q e_q^2\; x \int_x^1 \frac{d\xi}{\xi}\; f^q(\xi, \mu_F^2)\; \hat H\!\left(\frac{x}{\xi}, Q^2, \mu_F^2\right), \qquad (5.103a)$$

$$\hat H\!\left(\frac{x}{\xi}\right) = \delta\!\left(1 - \frac{x}{\xi}\right) + \frac{\alpha_s}{2\pi}\left(P_{qq}\!\left(\frac{x}{\xi}\right)\ln\frac{Q^2}{\mu_F^2} + \tilde C\!\left(\frac{x}{\xi}\right)\right). \qquad (5.103b)$$
In other words, we can retrieve the structure function by convoluting the PDF $f^q$ with the partonic hard part $\hat H$. Note that we have divided the finite part into two pieces:

$$C(x) = \tilde C(x) + \bar C(x). \qquad (5.104)$$

$\bar C$ is subtracted from the hard part and gets absorbed by the PDF, while $\tilde C$ is what remains in the factorization formula. The exact choice of how to do this is a matter of convention, and is called a factorization scheme. Two common schemes are the DIS scheme, where $\tilde C = 0$, i.e., everything is subtracted into the PDF, and the more common $\overline{\text{MS}}$ scheme, where $\bar C = \ln 4\pi - \gamma_E$ only.

It is very important to have a clear understanding of what is happening here. In the calculation of the correction to the hard part, we integrated out all $k_\perp^2$ dependence between $\mu_0^2$ and $Q^2$. The kinematics of the system make sure that $k_\perp \le Q$ always, i.e., the upper border of the integration is justified. In the infrared region, however, there is no such kinematic restriction. By cutting the lower border of the integration at $\mu_0^2$ we discarded gluon radiation with $k_\perp < \mu_0$ from the hard part. In order to avoid dropping these gluons entirely, we have to absorb them in the PDF, which we subsequently renormalize up to an arbitrary scale $\mu_F$. By doing this, we hide the divergence from the process inside an object that was not perturbative to begin with. The physical interpretation goes as follows: we choose an arbitrary energy scale $\mu_F$ that separates the process in two parts, namely a hard part with $k_\perp$ larger than this scale, and a nonperturbative part (the PDF) with $k_\perp$ smaller than this scale. This interpretation is illustrated in Figure 5.20, and is literally factorization as we have seen
it before, but now emerging in a natural way. For this reason we will call $\mu_F$ the factorization scale.

Figure 5.20: (a) The transverse momentum of the gluon is smaller than the factorization scale, so we absorb it in the PDF. (b) The transverse momentum of the gluon is larger than the factorization scale, so we add it to the hard part.

Since $F_2$ is a physical observable, it cannot depend on the factorization scale (which is merely an unphysical leftover of a mathematical tool). This implies

$$\frac{\partial F_2}{\partial \ln \mu_F^2} \equiv 0, \qquad (5.105)$$
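The step from this consistency condition to the evolution equation that follows can be made explicit at leading order (a sketch, with the finite scheme terms suppressed). Differentiating the factorization formula (5.103a), and using that only $f^q$ and the logarithm in (5.103b) depend on $\mu_F$:

```latex
0 \equiv \frac{\partial F_2}{\partial \ln \mu_F^2}
  = \sum_q e_q^2\, x \left[
      \frac{\partial f^q}{\partial \ln \mu_F^2} \otimes \hat H
      \;+\; f^q \otimes \frac{\partial \hat H}{\partial \ln \mu_F^2}
    \right],
\qquad
\frac{\partial \hat H}{\partial \ln \mu_F^2}
  = -\,\frac{\alpha_s}{2\pi}\, P_{qq} + \mathcal{O}(\alpha_s^2).
```

Since $\hat H = \delta(1-z) + \mathcal{O}(\alpha_s)$, the first term is $\partial f^q/\partial\ln\mu_F^2$ up to higher orders, and solving for it reproduces the DGLAP equation (5.106) below.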
from which we can derive an evolution equation for $f$, the so-called DGLAP evolution equation:

$$\frac{\partial}{\partial \ln \mu_F^2}\, f^q(x, \mu_F^2) = \frac{\alpha_s(\mu_F^2)}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; P_{qq}\!\left(\frac{x}{\xi}, \alpha_s(\mu_F^2)\right)\, f^q(\xi, \mu_F^2), \qquad (5.106)$$
where we already incorporated the effect of the running coupling $\alpha_s(\mu_F^2)$. Note that $P_{qq}$ depends on the coupling because this is an all-order equation; corrections from higher order calculations will manifest themselves inside the splitting function.

Everything we have derived so far was for quarks only. Adding gluons, we can now calculate the leading-order contribution (in $\alpha_s$) to $\hat F_2$ from the boson-gluon fusion diagram in Figure 5.16, and convolute this with the gluon PDF. We find for the partonic structure function:

$$\hat F_2^g = \sum_q e_q^2\; x\; \frac{\alpha_s}{2\pi}\left(P_{qg}(x)\, \ln\frac{Q^2}{\mu_0^2} + C^q(x)\right). \qquad (5.107)$$
This is quite similar to equation (5.100); in particular, there is again a singularity from the integration over $k_\perp^2$. As we already knew, there is no gluon contribution to $\hat F_2$ when $\alpha_s = 0$. The splitting function is given by

$$P_{qg}(z) = \frac{1}{2}\left(z^2 + (1-z)^2\right), \qquad (5.108)$$
where $P_{qg}$ is the probability to find a quark in a gluon. Note that in $\hat F_2^g$ we sum over quark flavor. We have to renormalize as we did before, but this new singularity is also absorbed in the quark PDF:

$$f^q(x, \mu_F^2) = f_0^q(x) + \frac{\alpha_s}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; f_0^q(\xi)\left(P_{qq}\!\left(\frac{x}{\xi}\right)\ln\frac{\mu_F^2}{\mu_0^2} + \bar C^q\!\left(\frac{x}{\xi}\right)\right) + \frac{\alpha_s}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; f_0^g(\xi)\left(P_{qg}\!\left(\frac{x}{\xi}\right)\ln\frac{\mu_F^2}{\mu_0^2} + \bar C^g\!\left(\frac{x}{\xi}\right)\right).$$

On the other hand, higher-order calculations show that the renormalization of the gluon PDF is given by:

$$f^g(x, \mu_F^2) = f_0^g(x) + \frac{\alpha_s}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; f_0^g(\xi)\left(P_{gg}\!\left(\frac{x}{\xi}\right)\ln\frac{\mu_F^2}{\mu_0^2} + \bar C^q\!\left(\frac{x}{\xi}\right)\right) + \frac{\alpha_s}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; f_0^q(\xi)\left(P_{gq}\!\left(\frac{x}{\xi}\right)\ln\frac{\mu_F^2}{\mu_0^2} + \bar C^g\!\left(\frac{x}{\xi}\right)\right).$$
With these renormalization definitions, we can write the factorization formulae as:

$$F_2 = \sum_q e_q^2\; x\left(f^q \otimes \hat H^q + f^g \otimes \hat H^g\right), \qquad (5.109a)$$
$$\hat H^q(z) = \delta(1-z) + \frac{\alpha_s}{2\pi}\left(P_{qq}(z)\, \ln\frac{Q^2}{\mu_F^2} + \tilde C^q(z)\right), \qquad (5.109b)$$
$$\hat H^g(z) = \frac{\alpha_s}{2\pi}\left(P_{qg}(z)\, \ln\frac{Q^2}{\mu_F^2} + \tilde C^g(z)\right), \qquad (5.109c)$$
$$(f \otimes H)(x) = \int_x^1 \frac{d\xi}{\xi}\; f(\xi)\; H\!\left(\frac{x}{\xi}\right). \qquad (5.109d)$$
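The convolution (5.109d) is straightforward to evaluate numerically. The sketch below, with toy functions of our own choosing rather than physical PDFs, checks the quadrature against a case with a known closed form: $f(\xi) = 1$ and $H(z) = z$ give $(f \otimes H)(x) = 1 - x$.

```python
def mellin_conv(f, H, x, n=4000):
    """Midpoint-rule evaluation of (f (x) H)(x) = int_x^1 dxi/xi f(xi) H(x/xi)."""
    h = (1.0 - x) / n
    s = 0.0
    for i in range(n):
        xi = x + (i + 0.5) * h
        s += f(xi) * H(x / xi) / xi
    return s * h

# Known closed form: f = 1, H(z) = z  gives  (f (x) H)(x) = 1 - x
x = 0.3
val = mellin_conv(lambda xi: 1.0, lambda z: z, x)
print(val, 1.0 - x)    # both approximately 0.7
```

In (5.109b) the $\delta(1-z)$ piece is of course handled analytically, where it simply contributes $f(x)$, with a quadrature like the above used only for the regular remainder.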
Of course, in order to fully validate collinear factorization, one needs to derive factorization formulae for $F_1$ as well and verify that they agree with those for $F_2$. This has been done quite thoroughly, such that we can accept collinear factorization as a valid framework. Then finally, the full DGLAP evolution equations can be expressed as a matrix equation:

$$\frac{\partial}{\partial \ln \mu^2} \begin{pmatrix} q_i(x, \mu^2) \\ g(x, \mu^2) \end{pmatrix} = \frac{\alpha_s}{2\pi} \int_x^1 \frac{d\xi}{\xi}\; \begin{pmatrix} P_{q_i q_j} & P_{q_i g} \\ P_{g q_j} & P_{gg} \end{pmatrix} \cdot \begin{pmatrix} q_j\!\left(\frac{x}{\xi}, \mu^2\right) \\ g\!\left(\frac{x}{\xi}, \mu^2\right) \end{pmatrix}. \qquad (5.110)$$
For the sake of completeness, we list all splitting functions at leading order:

$$P_{qq}(z) = C_F\, \frac{1 + z^2}{1 - z}, \qquad (5.111a)$$
$$P_{qg}(z) = \frac{1}{2}\left(z^2 + (1-z)^2\right), \qquad (5.111b)$$
$$P_{gq}(z) = C_F\, \frac{1 + (1-z)^2}{z}, \qquad (5.111c)$$
$$P_{gg}(z) = 2\, C_A\left(\frac{z}{1-z} + \frac{1-z}{z} + z(1-z)\right). \qquad (5.111d)$$
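These functions are simple enough to tabulate directly. The sketch below (our own check, not from the text) encodes (5.111) and verifies two consequences of the probabilistic interpretation: $P_{qg}$ and $P_{gg}$ are symmetric under $z \leftrightarrow 1-z$ because the two daughters are interchangeable, and $P_{gq}(z) = P_{qq}(1-z)$ because both describe the same $q \to qg$ splitting read off at different legs.

```python
CF, CA = 4/3, 3          # SU(3) color factors

def P_qq(z): return CF * (1 + z**2) / (1 - z)
def P_qg(z): return 0.5 * (z**2 + (1 - z)**2)
def P_gq(z): return CF * (1 + (1 - z)**2) / z
def P_gg(z): return 2 * CA * (z/(1 - z) + (1 - z)/z + z*(1 - z))

for z in (0.1, 0.37, 0.62, 0.9):
    # same q -> q g splitting, viewed from the quark or the gluon leg
    assert abs(P_gq(z) - P_qq(1 - z)) < 1e-12
    # g -> q qbar and g -> g g cannot tell the two daughters apart
    assert abs(P_qg(z) - P_qg(1 - z)) < 1e-12
    assert abs(P_gg(z) - P_gg(1 - z)) < 1e-12
print("splitting-function symmetries hold")
```

Note that $P_{qq}$ and $P_{gg}$ as listed blow up at the endpoint $z \to 1$; in an actual evolution code this soft divergence is tamed by the plus-prescription and virtual terms, which we have not included here.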
This leads us to the end of this section on deep inelastic scattering. In the next section, we will investigate what changes when we can no longer integrate over transverse momentum.
5.3 Semi-inclusive deep inelastic scattering

Collinear factorization is a well-explored and experimentally verified framework, but it only works when integrating out all final states. Keeping these final states, i.e., fully exclusive DIS, would maximally break factorization. In this section we investigate an intermediate solution, where we identify exactly one hadron in the final state and integrate out all other states. This is called semi-inclusive DIS (or SIDIS for short). Because there is no restriction on the momentum of the final hadron, it can acquire a transversal part. To put it more formally: in DIS we were able to describe our process in a plane, because it only has two independent directions, viz. the direction of the incoming proton (which is parallel to the incoming electron) and the direction of the outgoing electron. We have chosen a frame where the plus and minus components of the momenta span this plane, such that the transversal components are zero. In SIDIS a third direction emerges from the momentum of the identified hadron, which does not necessarily lie in the plane spanned by the incoming and outgoing electron. In this frame, the final hadron will have a nonzero transverse momentum component. As we will discover in this section, the breaking of collinear factorization is not insurmountable; we can adapt our factorization framework to allow for $k_\perp$ dependence, such that the convolution between the hard part and the PDF (now also dependent on $k_\perp$, and thus from now on called a transverse momentum dependent PDF, or TMD for short) is a convolution over $k_\perp$. In this book we will not delve into the technicalities of $k_\perp$ factorization, as they are quite intricate and would lead us too far.
5.3.1 Conventions and kinematics

Different conventions exist in the literature concerning the naming of the different TMDs and azimuthal angles. We will follow the "Trento conventions", as defined in [7]. Furthermore, concerning the labeling of momenta, we will follow the same convention as used in [13]. In an SIDIS process, we have an electron with momentum $l$ that collides with a proton with momentum $P$. The mediated photon has momentum $q$, and hits a parton with momentum $k$, which has a momentum $p$ after scattering (i.e., $p = k + q$). The struck parton then fragments into a hadron with momentum $P_h$. This is shown in Figure 5.21. Note that we now have two density functions: one that represents the probability to find a parton in the proton (the TMD), and one that represents the probability for a parton to fragment into a specific hadron (the fragmentation function or FF). We will assume the final hadron to be a spin-0 hadron, like a pion. We will use $x$ and $y$ as defined in Section 5.2.1, and we will define a new Lorentz invariant $z$:

$$z = \frac{P\cdot P_h}{P\cdot q}. \qquad (5.112)$$
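As a minimal numeric illustration of (5.112), with toy four-momenta of our own choosing (a real analysis would build these from the measured beam and detector kinematics), using the metric $(+,-,-,-)$:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric (+,-,-,-)
dot = lambda a, b: a @ g @ b

P  = np.array([1.0, 0.0, 0.0, 1.0])       # fast proton along +z (mass neglected)
q  = np.array([0.0, 0.0, 0.0, -1.0])      # spacelike photon hitting it head-on
Ph = np.array([0.3, 0.0, 0.0, -0.3])      # detected hadron in the current direction

z = dot(P, Ph) / dot(P, q)                # eq. (5.112)
print(z)                                  # -> 0.6
```

With these momenta the detected hadron carries 60% of the current-direction momentum, matching the interpretation of $z$ given below.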
Figure 5.21: Kinematics of semiinclusive deep inelastic electronproton scattering.
The value of $z$ can be measured in experiment; it approximates the fractional momentum of the detected hadron relative to its parent parton, in the same way $x$ approximates the fractional momentum of the struck quark relative to the parent proton. Intuitively, we can add a fragmentation function $D^q(z)$ to (5.74), giving a parton model collinear estimate for $F_2$ in SIDIS:

$$F_2^{\text{PM}} = \sum_q e_q^2\; x\, f^q(x)\, D^q(z), \qquad (5.113)$$

which gives us, using (5.71), a first estimate for the SIDIS cross section:

$$\frac{d^3\sigma}{dx\, dy\, dz} \approx \frac{4\pi\alpha^2 s}{Q^4}\left(1 - y + \frac{y^2}{2}\right) \sum_q e_q^2\; x\, f^q(x)\, D^q(z). \qquad (5.114)$$
Another important variable is the azimuthal angle $\phi_h$, which is defined as

$$\cos\phi_h = -\,\frac{\hat l \cdot P_{h\perp}}{|P_{h\perp}|},$$

where $|P_{h\perp}|$ is the length of the transversal component of the momentum of the outgoing hadron:

$$|P_{h\perp}| = \sqrt{-\,g_{\perp\mu\nu}\, P_h^\mu\, P_h^\nu}.$$

The geometrical construction of the azimuthal angle is shown in Figure 5.22. It is straightforward to show that the cross section is given by

$$\frac{d^6\sigma}{dx\, dy\, dz\, d\phi_h\, dP^2_{h\perp}} = \frac{\alpha^2}{2\, z\, x\, s\, Q^2}\; L_{\mu\nu} W^{\mu\nu}, \qquad (5.115)$$

where we approximated

$$d^3 P_h \approx \frac{E_h}{z}\; dz\; d^2 P_{h\perp}.$$
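In the proton rest frame the construction of Figure 5.22 reduces to ordinary 3-vector algebra. The helper below is our own sketch of that construction (the exact sign conventions of the Trento definition are not reproduced from the text): it projects $P_h$ onto the plane perpendicular to $q$ and measures the angle against the lepton plane.

```python
import numpy as np

def phi_h(l, q, Ph):
    """cos and sin of the azimuthal angle of Ph around the photon axis,
    measured from the lepton plane (3-vectors, proton rest frame).
    Sign conventions here are illustrative, not the book's exact ones."""
    qhat = q / np.linalg.norm(q)
    perp = lambda v: v - (v @ qhat) * qhat          # component perp. to q
    lT, PhT = perp(l), perp(Ph)
    norm = np.linalg.norm(lT) * np.linalg.norm(PhT)
    cos = -(lT @ PhT) / norm
    sin = -np.cross(lT, PhT) @ qhat / norm
    return cos, sin

# hadron emitted opposite to the lepton's transverse direction -> phi_h = 0
c, s = phi_h(np.array([1.0, 0.0, 1.0]),
             np.array([0.0, 0.0, -1.0]),
             np.array([-0.5, 0.0, -2.0]))
assert abs(c - 1) < 1e-12 and abs(s) < 1e-12   # cos = 1, sin = 0

# for any kinematics the pair lies on the unit circle
c, s = phi_h(np.array([1.0, 0.2, 1.0]),
             np.array([0.1, 0.0, -1.0]),
             np.array([-0.4, 0.7, -2.0]))
assert abs(c**2 + s**2 - 1) < 1e-12
```

Computing both the cosine (dot product) and the sine (cross product with the photon axis) is what fixes $\phi_h$ uniquely on $[0, 2\pi)$, as remarked below around (5.120).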
Figure 5.22: In the rest frame of the proton, Ph⊥ is the projection of Ph onto the plane perpendicular to the photon momentum. The azimuthal angle ϕh is the angle between Ph⊥ and the lepton plane.
5.3.2 Structure functions

The hadronic tensor is defined as (compare it to (5.63)):

$$W^{\mu\nu} = 4\pi^3\; \widetilde{\sum_X}\; \delta^{(4)}(P + q - p_X - P_h)\; \langle P|\, J^{\dagger\mu}(0)\, |X, P_h\rangle\, \langle X, P_h|\, J^\nu(0)\, |P\rangle$$
$$= \frac{1}{4\pi} \int d^4 r\;\; e^{i\,q\cdot r}\; \langle P|\, J^{\dagger\mu}(r)\, |P_h\rangle\, \langle P_h|\, J^\nu(0)\, |P\rangle. \qquad (5.116)$$

As we will see in Section 5.3.3, this is a bit simplistic, as we cannot integrate out the $X$ states without affecting $P_h$, but the general idea is correct. Note that because we do not integrate over $P_h$ (we measure it in the final state), we cannot drop the state $|P_h\rangle \langle P_h|$. This leads to an important difference as compared to the hadronic tensor in DIS, viz. that we cannot naively impose the same constraints as in (5.66a), because time-reversal invariance is not automatically satisfied. We can restore this invariance by changing it slightly, namely we require invariance under the simultaneous reversal of time and of initial and final states. For the parameterization of the hadronic tensor, we use the same orthonormal basis as before, viz. equations (5.58), but now we have an additional physical vector at our disposal, which we can use to construct the fourth basis vector:
$$\hat h^\mu = \frac{g_\perp^{\mu\nu}\, P_{h\nu}}{|P_{h\perp}|}. \qquad (5.117)$$

$\hat h^\mu$ is a spacelike unit vector:

$$\hat h^\mu \hat h_\mu = -1. \qquad (5.118)$$

Watch out: although we normalized this vector, it is not fully orthogonal! We have, as expected,

$$\hat h \cdot \hat t = 0, \qquad \hat h \cdot \hat q = 0,$$

but it is not orthogonal to $\hat l^\mu$:

$$\hat h \cdot \hat l = \cos\phi_h. \qquad (5.119)$$

This is a deliberate choice, because now we have the azimuthal dependence hardcoded inside our new basis. Note that

$$-\hat l_\perp^{\,\mu}\; \epsilon_{\perp\,\mu\nu}\; \hat h^\nu = \sin\phi_h, \qquad (5.120)$$

which implies that $\phi_h$ is fully defined on the interval $[0, 2\pi)$. We can parameterize $W^{\mu\nu}$ in the same way as we did in (5.67), now with $\hat h$ added. This gives (for the unpolarized case):⁹

$$W^{\mu\nu} = \frac{z}{x}\left[-g_\perp^{\mu\nu}\, F_{UU,T} + \hat t^\mu \hat t^\nu\, F_{UU,L} + 2\,\hat t^{(\mu} \hat h^{\nu)}\, F_{UU}^{\cos\phi_h} + \left(2\,\hat h^\mu \hat h^\nu + g_\perp^{\mu\nu}\right) F_{UU}^{\cos 2\phi_h} - 2i\,\hat t^{[\mu} \hat h^{\nu]}\, F_{LU}^{\sin\phi_h}\right]. \qquad (5.121)$$
The subscript $UU$ denotes a structure function for an unpolarized beam on an unpolarized target, while the labeling in terms of $\phi_h$ is motivated by contracting with the lepton tensor (5.62):

$$L_{\mu\nu} W^{\mu\nu} = \frac{4zs}{y}\left[\left(1 - y + \frac{y^2}{2}\right) F_{UU,T} + (1 - y)\, F_{UU,L} + \sqrt{1 - y}\,(2 - y)\cos\phi_h\; F_{UU}^{\cos\phi_h} + (1 - y)\cos 2\phi_h\; F_{UU}^{\cos 2\phi_h} + \lambda\, y\sqrt{1 - y}\,\sin\phi_h\; F_{LU}^{\sin\phi_h}\right].$$

⁹ The lepton sector did not change when going from DIS to SIDIS, implying we can use the same lepton tensor again.
As anticipated, $F_{UU}^{\cos\phi_h}$ has a factor $\cos\phi_h$ in front, and so on. Note that $F_{LU}^{\sin\phi_h}$ is the structure function for a longitudinally polarized lepton beam (on an unpolarized proton target), which is confirmed by the factor $\lambda$ in front (originating from the last term in the lepton tensor (5.62)). The cross section is then given by (5.115):

$$\frac{d^6\sigma}{dx\, dy\, dz\, d\phi_h\, dP^2_{h\perp}} = \frac{2\alpha^2}{x\, y\, Q^2}\left[\left(1 - y + \frac{y^2}{2}\right) F_{UU,T} + (1 - y)\, F_{UU,L} + \sqrt{1 - y}\,(2 - y)\cos\phi_h\; F_{UU}^{\cos\phi_h} + (1 - y)\cos 2\phi_h\; F_{UU}^{\cos 2\phi_h} + \lambda\, y\sqrt{1 - y}\,\sin\phi_h\; F_{LU}^{\sin\phi_h}\right], \qquad (5.122a)$$

$$\frac{d^3\sigma}{dx\, dy\, dz} = \frac{4\pi\alpha^2}{x\, y\, Q^2}\left[\left(1 - y + \frac{y^2}{2}\right) \tilde F_{UU,T} + (1 - y)\, \tilde F_{UU,L}\right], \qquad (5.122b)$$
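The azimuthal labeling is what makes (5.122a) experimentally useful: each structure function can be projected out of the measured $\phi_h$ distribution by a Fourier moment, because the modulations are orthogonal over a full turn. A toy numerical version of this (the structure-function values and the flat weighting are made up; overall $y$-dependent factors are dropped):

```python
import numpy as np

# made-up values standing in for the (y-weighted) structure functions
F_T, F_cos, F_cos2, F_sin = 2.0, 0.3, -0.1, 0.05

n = 4096
phi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
dphi = 2*np.pi / n
sigma = F_T + F_cos*np.cos(phi) + F_cos2*np.cos(2*phi) + F_sin*np.sin(phi)

# Fourier projections recover each coefficient separately
const = sigma.sum() * dphi / (2*np.pi)
cos1  = (sigma*np.cos(phi)).sum() * dphi / np.pi
cos2  = (sigma*np.cos(2*phi)).sum() * dphi / np.pi
sin1  = (sigma*np.sin(phi)).sum() * dphi / np.pi
print(const, cos1, cos2, sin1)   # -> 2.0, 0.3, -0.1, 0.05
```

The rectangle rule on a uniform periodic grid integrates these low harmonics exactly, so the inputs are recovered to machine precision.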
where we integrated over $P_{h\perp}$ in the last step, which got rid of the $\phi_h$ dependence. The tilde structure functions are the integrated versions:

$$\tilde F_{UU,T}(x, z, Q^2) = \int d^2 P_{h\perp}\;\; F_{UU,T}(x, z, Q^2, P_{h\perp}), \qquad (5.123)$$

and similarly for $\tilde F_{UU,L}$. From the logical demand

$$\sum_h \int dz\; z\; \frac{d^3\sigma_{\text{SIDIS}}}{dx\, dy\, dz} \equiv \frac{d^2\sigma_{\text{DIS}}}{dx\, dy},$$

we can relate the SIDIS structure functions to the DIS structure functions:

$$\sum_h \int dz\; z\; \tilde F_{UU,T}(x, z, Q^2) \equiv F_T(x, Q^2), \qquad (5.124a)$$
$$\sum_h \int dz\; z\; \tilde F_{UU,L}(x, z, Q^2) \equiv F_L(x, Q^2). \qquad (5.124b)$$
5.3.3 Transverse momentum dependent PDFs

We can construct the diagram for the hadronic tensor following the same step-by-step procedure we used in DIS (Section 5.2.5), this time adding a fragmentation function, as is illustrated in Figure 5.23. Remember that the amplitude for extracting a quark from a proton with momentum $P$ is $\psi_\alpha(0)\,|P\rangle$. Then the amplitude for a quark fragmenting into a hadron with momentum $P_h$ is of course $\langle P_h|\,\bar\psi_\alpha(0)$.
Figure 5.23: Leading order diagram for the hadronic tensor in SIDIS.

So we simply have, attaching the fragmentation amplitude to the DIS cut vertex $(\gamma^\nu)\,\langle X|\,\psi_\alpha(0)\,|P\rangle$:

$$\langle Y, P_h|\; \bar\psi_\beta(0)\; |0\rangle\;\; (\gamma^\nu)\;\; \langle X|\; \psi_\alpha(0)\; |P\rangle.$$
The QED vertex adds a $\delta$-function, and making the final-state cut adds two final-state sums (using the notation defined in equation (5.78)) and two $\delta$-functions:

$$W^{\mu\nu} = \frac{1}{2}\sum_q e_q^2 \int d^4k\, d^4p\;\; \widetilde{\sum_X}\, \widetilde{\sum_Y}\;\; \delta^{(4)}(P - k - p_X)\; \delta^{(4)}(k + q - p)\; \delta^{(4)}(P_h + p_Y - p)\; \times\; \langle P|\,\bar\psi\,|X\rangle\; \gamma^\mu\; \langle 0|\,\psi\,|Y, P_h\rangle\, \langle Y, P_h|\,\bar\psi\,|0\rangle\; \gamma^\nu\; \langle X|\,\psi\,|P\rangle.$$

Next we will separate the proton content from the fragmenting hadron content, applying to each the same steps as before (expressing the $\delta$-function as an exponential, using the translation operator and the completeness relation). Then we get the general leading order result:

$$W^{\mu\nu} = \frac{1}{2}\sum_q e_q^2 \int d^4k\, d^4p\;\; \delta^{(4)}(k + q - p)\;\; \mathrm{tr}\!\left(\Phi(k, P)\, \gamma^\mu\, \Delta(p, P_h)\, \gamma^\nu\right), \qquad (5.125a)$$
$$\Phi_{\alpha\beta}(k, P) = \int \frac{d^4 r}{(2\pi)^4}\;\; e^{-i\,k\cdot r}\; \langle P|\, \bar\psi_\beta(r)\, \psi_\alpha(0)\, |P\rangle, \qquad (5.125b)$$
$$\Delta_{\alpha\beta}(p, P_h) = \int \frac{d^4 r}{(2\pi)^4}\;\; e^{-i\,p\cdot r}\; \langle 0|\, \psi_\alpha(0)\, |P_h\rangle\, \langle P_h|\, \bar\psi_\beta(r)\, |0\rangle. \qquad (5.125c)$$
Next we choose a frame where the parton in the TMD carries a fraction $\xi$ of the proton's plus momentum, and where the final hadron carries a fraction $\zeta$ of the fragmenting parton's minus momentum, i.e.,

$$k^\mu = \left(\xi P^+,\; \frac{k^2 + k_\perp^2}{2\,\xi P^+},\; k_\perp\right), \qquad p^\mu = \left(\zeta\,\frac{p^2 + p_\perp^2}{2\, P_h^-},\; \frac{P_h^-}{\zeta},\; p_\perp\right), \qquad (5.126)$$

such that we can write (neglecting terms that are $1/Q$ suppressed):

$$\delta^{(4)}(k + q - p) \approx \delta(k^+ + q^+)\; \delta(q^- - p^-)\; \delta^{(2)}(k_\perp + q_\perp - p_\perp) \approx \frac{1}{P^+ P_h^-}\;\; \delta(\xi - x)\;\; \delta\!\left(\frac{1}{\zeta} - \frac{1}{z}\right)\;\; \delta^{(2)}(k_\perp + q_\perp - p_\perp),$$

and we transform the integral measures as

$$d^4 k = P^+\, d\xi\, dk^-\, d^2 k_\perp, \qquad d^4 p = \frac{P_h^-}{\zeta^2}\, dp^+\, d\zeta\, d^2 p_\perp.$$
Then we can rewrite the hadronic tensor as:

$$W^{\mu\nu} = \sum_q e_q^2 \int d^2 k_\perp\;\; z\;\; \mathrm{tr}\!\left(\Phi(x, k_\perp, P)\; \gamma^\mu\; \Delta(z, k_\perp + q_\perp, P_h)\; \gamma^\nu\right), \qquad (5.127)$$

where we defined the $k_\perp$-dependent correlators as:

$$\Phi(\xi, k_\perp, P) = \int \frac{d^3 r}{(2\pi)^3}\;\; e^{-i\,\xi P^+ r^- + i\,k_\perp\cdot r_\perp}\; \langle P|\, \bar\psi(0^+, r^-, r_\perp)\, \psi(0)\, |P\rangle, \qquad (5.128a)$$
$$\Delta(z, p_\perp, P_h) = \frac{1}{2z} \int \frac{d^3 r}{(2\pi)^3}\;\; e^{-i\,\frac{P_h^-}{z} r^+ + i\,p_\perp\cdot r_\perp}\; \langle 0|\, \psi(0)\, |P_h\rangle\, \langle P_h|\, \bar\psi(r^+, 0^-, r_\perp)\, |0\rangle. \qquad (5.128b)$$
We can parameterize the quark correlator and the fragmentation correlator in terms of TMDs and FFs, precisely as we did with the quark correlator in the case of DIS. Keeping only the contributions at leading twist, we obtain the following unpolarized TMDs and FFs:

$$\Phi(\xi, k_\perp) = \frac{1}{2}\, f_1(\xi, k_\perp)\, \gamma^- + \frac{i}{2}\, h_1^\perp(\xi, k_\perp)\, \frac{\slashed{k}_\perp}{m_p}\, \gamma^-, \qquad (5.129a)$$
$$\Delta(\zeta, k_\perp) = \frac{1}{2}\, D_1(\zeta, k_\perp)\, \gamma^+ + \frac{i}{2}\, H_1^\perp(\zeta, k_\perp)\, \frac{\slashed{k}_\perp}{m_h}\, \gamma^+. \qquad (5.129b)$$
If we plug this result into (5.115) and (5.122b), and use the approximation

$$q_\perp \approx -\frac{P_{h\perp}}{z}, \qquad (5.130)$$

we get the factorization formula for the unpolarized transversal structure function in SIDIS:

$$F_{UU,T} = \sum_q e_q^2\;\; x\;\; f_1^q \otimes D_1^q, \qquad (5.131)$$

where we defined the convolution over transverse momentum as

$$f_1^q \otimes D_1^q = \int d^2 k_\perp\, d^2 p_\perp\;\; \delta^{(2)}\!\left(k_\perp - p_\perp - \frac{P_{h\perp}}{z}\right)\, f_1^q(x, k_\perp)\;\; D_1^q(z, p_\perp). \qquad (5.132)$$
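For the Gaussian ansatz often used in phenomenology, the transverse convolution (5.132) can be done in closed form: the Gaussian widths simply add. This makes a good numerical sanity check. The sketch below (toy widths and unit normalizations of our own choosing) integrates (5.132) on a grid and compares with the analytic result:

```python
import numpy as np

kT2, pT2, z = 0.25, 0.20, 0.5            # toy Gaussian widths (GeV^2) and z
f1 = lambda k2: np.exp(-k2 / kT2) / (np.pi * kT2)   # unit-normalized in d^2k
D1 = lambda p2: np.exp(-p2 / pT2) / (np.pi * pT2)

PhT = np.array([0.3, 0.0])               # measured hadron transverse momentum

# The delta in (5.132) fixes p_T = k_T - P_hT/z; integrate the rest over k_T.
grid = np.linspace(-3.0, 3.0, 601)
kx, ky = np.meshgrid(grid, grid)
h = grid[1] - grid[0]
num = np.sum(f1(kx**2 + ky**2)
             * D1((kx - PhT[0] / z)**2 + (ky - PhT[1] / z)**2)) * h**2

# Gaussian convoluted with Gaussian: the widths add, evaluated at P_hT/z.
q2 = (PhT[0] / z)**2 + (PhT[1] / z)**2
ana = np.exp(-q2 / (kT2 + pT2)) / (np.pi * (kT2 + pT2))
print(num, ana)                           # agree to high accuracy
```

The broadening of the $P_{h\perp}$ spectrum relative to either input width is exactly the physical content of (5.131): the observed transverse momentum mixes the intrinsic motion in the TMD with that generated in fragmentation.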
5.3.4 Gauge-invariant definition for TMDs

Just as was the case in the previous section for DIS, our TMDs and FFs defined so far (equations (5.128)) are not gauge-invariant, and are only valid in the lightcone gauge $A^+ = 0$. Gauge invariance can be restored by inserting a Wilson line:

$$\Phi(\xi, k_\perp, P) = \int \frac{d^3 r}{(2\pi)^3}\;\; e^{-i\,\xi P^+ r^- + i\,k_\perp\cdot r_\perp}\; \langle P|\, \bar\psi(r)\; \mathcal{U}_{[r;\,0]}\; \psi(0)\, |P\rangle, \qquad (5.133)$$

where now the spacetime separation no longer lies on the lightcone, i.e., the Wilson line has to connect the point $(0^+, 0^-, 0_\perp)$ with the point $(0^+, r^-, r_\perp)$. But as we have seen in the Introduction, the Wilson line is path dependent, meaning different choices for the Wilson path give different results. How do we choose a path, or at least motivate our choice? In the collinear case we could interpret the Wilson line as a color rotation on the quark, making it an eikonal quark. We split the Wilson line into two parts at infinity using equation (5.28). This splitting had two advantages: we could associate a line with the quarks on each side of the cut diagram separately, and we could use easy Feynman rules (all Feynman rules we derived in Section 5.1.1 are in terms of Wilson lines from a point to $\pm\infty$). In the TMD definition, we would like to do something analogous. We add a lightlike line to each quark:

$$\mathcal{U}^-_{[+\infty^-,\,0_\perp;\;\, 0^-,\,0_\perp]}\;\; \psi(0^+, 0^-, 0_\perp), \qquad (5.134a)$$
$$\bar\psi(0^+, r^-, r_\perp)\;\; \mathcal{U}^{-\dagger}_{[+\infty^-,\,r_\perp;\;\, r^-,\,r_\perp]}. \qquad (5.134b)$$
But because of the transverse separation we now have

$$\mathcal{U}_{[r;\,0]} \neq \mathcal{U}^{-\dagger}_{[+\infty^-,\,r_\perp;\;\, r^-,\,r_\perp]}\;\; \mathcal{U}^-_{[+\infty^-,\,0_\perp;\;\, 0^-,\,0_\perp]}.$$
So we need a Wilson line to connect the transverse 'gap', i.e.,

$$\mathcal{U}_{[r;\,0]} = \mathcal{U}^{-\dagger}_{[+\infty^-;\;\, r^-]}\;\; \mathcal{U}^\perp_{[r_\perp;\;\, 0_\perp]}\;\; \mathcal{U}^-_{[+\infty^-;\;\, 0^-]}.$$
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
We will split this line at +∞_⊥ for the same reasons as before. Adding this to equations (5.134) gives

\mathcal{U}^{\perp}_{[+\infty^-, +\infty_\perp;\; +\infty^-, 0_\perp]} \, \mathcal{U}^{-}_{[+\infty^-, 0_\perp;\; 0^-, 0_\perp]} \, \psi(0^+, 0^-, 0_\perp) ,   (5.135a)
\bar{\psi}(0^+, r^-, r_\perp) \, \mathcal{U}^{-\dagger}_{[+\infty^-, r_\perp;\; r^-, r_\perp]} \, \mathcal{U}^{\perp\dagger}_{[+\infty^-, +\infty_\perp;\; +\infty^-, r_\perp]} ,   (5.135b)
leading to the final definition for the gauge-invariant TMD correlator (see Figure 5.24):

\Phi = \int \frac{d^3 r}{8\pi^3} \, e^{-i x P^+ r^- + i k_\perp \cdot r_\perp} \, \langle P| \, \bar{\psi}(r) \, \tilde{\mathcal{U}}^{\dagger}_{[+\infty;r]} \, \tilde{\mathcal{U}}_{[+\infty;0]} \, \psi(0) \, |P\rangle ,   (5.136a)
\tilde{\mathcal{U}}_{[+\infty;0]} = \mathcal{U}^{\perp}_{[+\infty^-, +\infty_\perp;\; +\infty^-, 0_\perp]} \, \mathcal{U}^{-}_{[+\infty^-, 0_\perp;\; 0^-, 0_\perp]} ,   (5.136b)
\tilde{\mathcal{U}}^{\dagger}_{[+\infty;r]} = \mathcal{U}^{-\dagger}_{[+\infty^-, r_\perp;\; r^-, r_\perp]} \, \mathcal{U}^{\perp\dagger}_{[+\infty^-, +\infty_\perp;\; +\infty^-, r_\perp]} .   (5.136c)
Figure 5.24: Structure of the Wilson lines in the TMD definition.
What about the physical interpretation? Consider again the one-gluon exchange as depicted in Figure 5.15. We saw in equation (5.90) that the net contribution for a soft or collinear gluon is a factor

g \int \frac{d^4 l}{16\pi^4} \, \slashed{A} \, \frac{\slashed{p} - \slashed{l}}{(p-l)^2 + i\varepsilon} \approx -g \int \frac{d^4 l}{16\pi^4} \, \frac{n \cdot A}{n \cdot l - i\varepsilon} ,   (5.137)

where l^\mu is the momentum of the exchanged gluon and n^\mu \propto p^\mu is the direction of the outgoing quark. We were able to make this simplification because in the correlator this correction stands to the right of a factor \bar{u}(p), such that we can make use of the fact \bar{u}(p)\slashed{p} = 0:

\bar{u}(p) \, \slashed{A} \, \slashed{p} = \bar{u}(p) \, \bigl(\slashed{A}\,\slashed{p} + \slashed{p}\,\slashed{A}\bigr) = 2 \, \bar{u}(p) \, p \cdot A .

As we saw before, this contribution calculated to all orders leads to the lightlike Wilson line. In the collinear case this was the end of the story. But now that we are
in the TMD case, we cannot simply take the exchanged gluon to be collinear; instead, we need to add a term to equation (5.137):

g \int \frac{d^4 l}{16\pi^4} \, \frac{\slashed{A} \, \slashed{l}}{2 p \cdot l + l_\perp^2 - i\varepsilon} \approx g \int \frac{d^2 l_\perp}{4\pi^2} \, \frac{\gamma^\mu \, \slashed{l}_\perp}{l_\perp^2 - i\varepsilon} \, A_\mu^\perp(0^+, \infty^-, l_\perp) .

It is not so straightforward to prove (see, e. g., [11]), but these parts sum up to a transversal Wilson line. So in the end, inside the TMD we have both a resummation of (soft) collinear gluons, coming from the line parts \mathcal{U}^- and \mathcal{U}^{-\dagger}, and a resummation of soft transversal gluons, coming from the \mathcal{U}^\perp and \mathcal{U}^{\perp\dagger} parts. Note, however, that by choosing an appropriate gauge it is possible to cancel the contributions of one type of these lines; e. g., in the LC gauge only the transversal parts remain. Of course, the same reasoning can be repeated for the fragmentation function, but then the lightlike Wilson lines will lie in the plus direction. This is illustrated in Figure 5.25a.
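The eikonal simplification above rests on the Dirac-algebra identity ū(p) A̸ p̸ = 2 ū(p) p·A for a massless on-shell spinor with ū(p) p̸ = 0. A quick numerical sanity check of this step can be sketched as follows (the numerical values of p and A are arbitrary, hypothetical choices):

```python
import numpy as np

# Dirac matrices in the Dirac representation
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
zero = np.zeros((2, 2), complex)
g0 = np.diag([1, 1, -1, -1]).astype(complex)
gamma = [g0] + [np.block([[zero, s], [-s, zero]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(v):
    # v_slash = gamma^mu v_mu = v^0 gamma^0 - v^i gamma^i
    return sum(eta[m, m] * v[m] * gamma[m] for m in range(4))

# massless on-shell momentum p (p² = 0) and an arbitrary field vector A
p = np.array([2.0, 0.0, 0.0, 2.0])
A = np.array([0.3, -1.2, 0.7, 0.5])

# ubar(p): a left null vector of p_slash, i.e. ubar p_slash = 0,
# extracted from the SVD (u† M = σ v†, so σ = 0 gives a left null vector)
U, s, Vh = np.linalg.svd(slash(p))
ubar = U[:, -1].conj()

lhs = ubar @ slash(A) @ slash(p)
rhs = 2 * (p @ eta @ A) * ubar   # 2 (p·A) ubar
print(np.allclose(lhs, rhs))     # True
```

The identity follows from the anticommutator {γ^μ, γ^ν} = 2η^{μν}, which is what the code verifies numerically.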
Figure 5.25: (a) In SIDIS, the longitudinal Wilson line inside the fragmentation function represents a resummation of soft and collinear gluons connected to the incoming quark. (b) In Drell–Yan, the longitudinal Wilson line inside one TMD represents a resummation of soft and collinear gluons connected to the parton extracted from the other TMD.
To end this chapter, we give an example of the use of Wilson lines in the Drell–Yan process. In this setup, two protons (or a proton and an antiproton) are collided and create a photon or weak boson by quark-antiquark annihilation. We thus need two TMDs, both in the initial state. The longitudinal part of the Wilson line used to make the TMD gauge-invariant represents a resummation of gluons connected to the parton struck from the other TMD. This is illustrated in Figure 5.25b. Because the Wilson line now represents initial-state radiation, the line structure will be different. More specifically, the path will flow towards −∞ before returning, as shown in Figure 5.26. This has an important consequence: two out of the eight (unpolarized and polarized) TMDs are T-odd and will have a sign change with this line structure as compared to SIDIS. This would imply that TMDs are process-dependent, and not universal as they ought to be. However, so far it has not been experimentally verified
Figure 5.26: Structure of the Wilson lines in the Drell–Yan TMD definition.
whether these T-odd TMDs have a nonzero value. These days, much effort is aimed at finding or excluding them, for the sake of TMD universality.
A Mathematical vocabulary

A.1 General topology

Definition A.1 (Topological space). Let X be a set and 𝒰 a collection of subsets of X. Then X is called a topological space if:
1. the empty set and X itself belong to 𝒰;
2. 𝒰 is closed with respect to finite intersections: U₁, ..., U_N ∈ 𝒰, N ∈ ℕ ⇒ ⋂_{k=1}^{N} U_k ∈ 𝒰;
3. 𝒰 is closed with respect to arbitrary (also uncountably infinite) unions: U_α ∈ 𝒰, α ∈ A ⇒ ⋃_{α∈A} U_α ∈ 𝒰.

The sets U ∈ 𝒰 are called open; their complements X − U are closed in X.
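For a finite set the axioms of Definition A.1 can be checked mechanically; closure under pairwise intersections and unions suffices, since finite and arbitrary unions coincide for a finite collection. A small illustrative sketch (the example collections are our own):

```python
from itertools import combinations

def is_topology(X, T):
    """Check the axioms of Definition A.1 for a finite set X and a
    candidate collection T of subsets of X."""
    X = frozenset(X)
    T = {frozenset(U) for U in T}
    if frozenset() not in T or X not in T:
        return False          # axiom 1: the empty set and X must belong to T
    for U, V in combinations(T, 2):
        # axioms 2 and 3: closure under intersections and unions
        if U & V not in T or U | V not in T:
            return False
    return True

X = {1, 2, 3}
T_good = [set(), {1}, {1, 2}, {1, 2, 3}]
T_bad = [set(), {1}, {2}, {1, 2, 3}]   # {1} ∪ {2} = {1, 2} is missing
print(is_topology(X, T_good), is_topology(X, T_bad))  # True False
```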
To define a topology means to say which subsets of X are called open. From a topology we derive the concept of a neighborhood of a point x ∈ X.
Definition A.2 (Neighborhood). Let x∈X be a point and U be an open set which contains it. Then U is called a neighborhood of x in X .
If different topologies are defined on X, they can be compared with each other. A topology 𝒰₁ is called stronger (finer) than a topology 𝒰₂, which is then weaker (coarser), if 𝒰₂ ⊂ 𝒰₁ as collections of subsets. A topology on a space X naturally induces a topology on each of its subsets, referred to as the induced topology.
Definition A.3 (Induced topology). Let (X₁, 𝒰₁), (X₂, 𝒰₂) be topological spaces such that X₂ ⊂ X₁. The relative or subspace topology 𝒰₁|_{X₂} induced on X₂ is obtained by declaring the sets U₁ ∩ X₂, U₁ ∈ 𝒰₁, to be open. A topological inclusion X₂ → X₁ is then given if the intrinsic topology 𝒰₂ is stronger than the relative one (𝒰₁|_{X₂} ⊂ 𝒰₂).
One of the properties of topologies most relevant for our purposes is the so-called Hausdorff property:

Definition A.4 (Hausdorff). A topological space X is said to be Hausdorff if and only if for any two distinct points x₁ ≠ x₂ there exist disjoint neighborhoods U₁, U₂ of x₁ and x₂, respectively.
This property allows one to separate points in a given topological space. It becomes highly relevant when considering limits.
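On a finite space the Hausdorff condition can be tested by brute force over all pairs of open sets; for instance, the two-point Sierpiński space fails it while the discrete topology passes. The toy examples below are our own:

```python
from itertools import product

def is_hausdorff(X, T):
    """Definition A.4 checked by brute force on a finite space: every
    pair of distinct points needs disjoint open neighborhoods."""
    T = [frozenset(U) for U in T]
    return all(
        any(x in U and y in V and not (U & V) for U, V in product(T, T))
        for x in X for y in X if x != y
    )

X = {0, 1}
sierpinski = [set(), {0}, {0, 1}]      # the only open set containing 1 is X
discrete = [set(), {0}, {1}, {0, 1}]   # every singleton is open
print(is_hausdorff(X, sierpinski), is_hausdorff(X, discrete))  # False True
```

This also illustrates a general fact: a finite topological space is Hausdorff exactly when its topology is discrete.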
A.2 Topology and basis

Given a set X and a collection 𝒰 of its subsets, we wish to be able to define operations like, e. g., differentiation. To this end, we have to introduce a number of properties, which are generated by choosing a topology on X. We start with the following lemma:

Lemma A.5. Let X be a set and (𝒰_β)_{β∈B} be a collection of topologies on X. Then

𝒯 := ⋂_{β∈B} 𝒰_β

is again a topology on X.
This topology can now be optimized by making use of the following proposition:

Proposition A.6. Let X be a set and 𝒟 ⊆ 𝒫(X) be a collection of subsets of X. Then there exists a weakest topology on X such that all subsets U ∈ 𝒟 are open. That is to say, there exists a topology 𝒯 such that:
1. every U ∈ 𝒟 is open in 𝒯;
2. if 𝒰 is a topology on X such that each U ∈ 𝒟 is open in 𝒰, then 𝒰 is finer than 𝒯.
Here 𝒫(X) represents the power set of X, i. e., the set of all subsets of X. However, Proposition A.6 does not provide us with an explicit method to determine the topology 𝒯. Let us first consider a simpler case, where the collection of sets 𝒟 has an extra property.

Definition A.7 (Topology basis). Let X be a set. A basis for a topology on X is a collection ℬ of subsets of X with the following properties:
1. For every x ∈ X there exists a B ∈ ℬ such that x ∈ B.
2. If B₁, B₂ ∈ ℬ and x ∈ B₁ ∩ B₂, then there exists a B₃ ∈ ℬ with x ∈ B₃ and B₃ ⊆ (B₁ ∩ B₂).

If 𝒯 is the weakest topology on X such that all B ∈ ℬ are open in 𝒯, then we call ℬ a basis for 𝒯, or we call 𝒯 the topology generated by ℬ.
Proposition A.8. Let ℬ be a basis for a topology on a set X and let 𝒯 be the topology generated by this basis. If U ⊆ X, then the following properties are equivalent:
1. U is open in 𝒯;
2. for each x ∈ U there exists a B ∈ ℬ such that x ∈ B and B ⊆ U;
3. U can be represented as a union of sets B_α from the collection ℬ.
It might happen that one needs to consider spaces which are equipped with a metric. A metric can be used to construct a topology for which the open balls form a basis. Let us first give the definition of a metric on a set.
Definition A.9 (Metric on a set). Let X be a set. A metric on X is a function d : X × X → ℝ≥0 with the following properties:
1. d(x₁, x₂) = 0 if and only if x₁ = x₂;
2. symmetry: d(x₁, x₂) = d(x₂, x₁), ∀x₁, x₂ ∈ X;
3. triangle inequality: d(x₁, x₃) ≤ d(x₁, x₂) + d(x₂, x₃), ∀x₁, x₂, x₃ ∈ X.
A set with a metric is called a metric space. A metric is called an ultrametric if it satisfies the stronger version of the triangle inequality, in which points can never fall between other points:

d(x₁, x₃) ≤ max(d(x₁, x₂), d(x₂, x₃)), ∀x₁, x₂, x₃ ∈ X.

A metric d on X is called intrinsic if any two points x₁, x₂ ∈ X can be joined by a curve with length arbitrarily close to d(x₁, x₂). For sets on which the operation of addition '+' is defined, d is called a translation-invariant metric if

d(x₁, x₂) = d(x₁ + a, x₂ + a), ∀x₁, x₂, a ∈ X.

Let us now explicitly construct the topology induced by a metric. Define an open ball B for x₁ ∈ X and a real number R ≥ 0,

B(x₁, R) := {x₂ ∈ X | d(x₁, x₂) < R},   (A.1)

and a collection

ℬ := {B(x₁, R) | x₁ ∈ X}.   (A.2)
It is easy to see that the balls B obey the conditions of Definition A.7 and thus form the basis of a topology. A topological space (X, 𝒯) is called metrizable if there exists a metric on X that induces the topology 𝒯. Such a space is Hausdorff.
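A concrete example of the ultrametric of Definition A.9 is the 2-adic metric on the integers, d(a, b) = 2^{−v}, where v counts the factors of 2 in a − b. The strong triangle inequality can be verified exhaustively on a small range:

```python
from itertools import product

def v2(n):
    # 2-adic valuation: the exponent of 2 in the nonzero integer n
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def d2(a, b):
    # the 2-adic metric on the integers: closer means divisible by a
    # higher power of 2
    return 0.0 if a == b else 2.0 ** (-v2(a - b))

# exhaustive check of the strong triangle inequality on a small range
ok = all(
    d2(x, z) <= max(d2(x, y), d2(y, z))
    for x, y, z in product(range(1, 25), repeat=3)
)
print(ok)  # True
```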
In order to be able to construct a topology starting from a given collection of subsets which do not necessarily obey the properties of a basis, we need to introduce the concept of a subbasis.

Definition A.10 (Subbasis of a topology). Let X be a set. A subbasis of a topology on X is a collection 𝒮 of subsets of X such that

⋃_{S∈𝒮} S = X.
Subbases can be used to construct a basis:

Proposition A.11. Let 𝒮 be a subbasis for a topology on X. Define the collection ℬ of subsets B ⊆ X that can be presented as the intersection of a finite number of sets in the collection 𝒮. That is to say, B ∈ ℬ if and only if there exist S₁, S₂, ..., S_n ∈ 𝒮 such that B = S₁ ∩ S₂ ∩ ⋯ ∩ S_n. Then ℬ is a basis for a topology on X, and the topology generated by ℬ is the weakest topology on X in which each S ∈ 𝒮 is open.

From Proposition A.11 it is now easy to construct a topology from a given collection of subsets. One just adds the set X to this given collection, so that the new collection becomes a subbasis for a topology on X. Proposition A.11 then shows how to construct a basis and the weakest topology for which the original collection of sets is open.
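On a finite set, the construction of Proposition A.11 can be carried out literally: take all finite intersections of subbasis sets (together with X) to obtain a basis, then all unions of basis sets to obtain the topology. A sketch with a made-up subbasis:

```python
from itertools import combinations

def topology_from_subbasis(X, S):
    """Proposition A.11 on a finite set: finite intersections of the
    subbasis give a basis; unions of basis sets give the topology."""
    X = frozenset(X)
    S = [frozenset(s) for s in S] + [X]   # adding X makes S a subbasis
    basis = set()
    for r in range(1, len(S) + 1):
        for combo in combinations(S, r):
            B = X
            for s in combo:
                B &= s
            basis.add(B)
    topology = {frozenset()}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            U = frozenset()
            for b in combo:
                U |= b
            topology.add(U)
    return topology

T = topology_from_subbasis({1, 2, 3}, [{1, 2}, {2, 3}])
print(sorted(sorted(U) for U in T))
# [[], [1, 2], [1, 2, 3], [2], [2, 3]]
```

Note that {2} appears as the intersection of the two subbasis sets, exactly as the proposition prescribes.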
A.3 Continuity

Definition A.12 (Continuity). A function F : X₁ → X₂ between topological spaces X₁, X₂ is continuous if the preimage F⁻¹(V) of any set V ⊂ X₂ which is open in X₂ is also open in X₁.
The preimage is defined by

F⁻¹(V) = {x ∈ X₁ : F(x) ∈ V}   (A.3)

and does not require F to be either an injection or a surjection.

Definition A.13 (Homeomorphism). If F is a continuous bijection and F⁻¹ is also continuous, then F is a homeomorphism or a topological isomorphism.
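Definition A.12 is directly checkable on finite spaces: list the open sets of both topologies and test that every preimage of an open set is open. The maps and topologies below are our own toy examples:

```python
def preimage(F, V):
    # F is given as a dict x -> F(x); V is a subset of the codomain
    return frozenset(x for x in F if F[x] in V)

def is_continuous(F, T1, T2):
    """Definition A.12: F is continuous iff preimages of open sets are open."""
    return all(preimage(F, V) in T1 for V in T2)

T1 = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'})}
T2 = {frozenset(), frozenset({0}), frozenset({0, 1})}
F = {'a': 0, 'b': 1}   # continuous
G = {'a': 1, 'b': 0}   # preimage of {0} is {'b'}, which is not open in T1
print(is_continuous(F, T1, T2), is_continuous(G, T1, T2))  # True False
```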
One can think of a homeomorphism as an isomorphism between topological spaces. Note that Definition A.12 of continuity is consistent with the usual definition of continuity in real calculus in the following way:

Corollary A.14. Let (X₁, d_{X₁}) and (X₂, d_{X₂}) be metric spaces, and let F : X₁ → X₂ be a map. Then F is continuous with respect to the metric topologies on X₁ and X₂ if and only if

∀x ∈ X₁, ∀ϵ > 0, ∃δ > 0 : F(B(x, δ)) ⊆ B(F(x), ϵ).

In other words, F is continuous if and only if

∀x ∈ X₁, ∀ϵ > 0, ∃δ > 0, ∀ξ ∈ X₁ : d_{X₁}(x, ξ) < δ ⇒ d_{X₂}(F(x), F(ξ)) < ϵ.
The property of continuity can be used to define a topology on products of sets.

Definition A.15 (Product topology). Let {X_α}_{α∈A} be a collection of topological spaces. Consider the product set

Y := ∏_{α∈A} X_α.

The projection on the factor X_α reads pr_α : Y → X_α. The product topology on Y is then the weakest topology on Y in which each of the projections pr_α is continuous.
Imposing extra conditions on a map allows us to strengthen continuity to homeomorphism; putting even more restrictions, one can go further and define open and closed maps.

Definition A.16 (Open and closed map). Let F : X₁ → X₂ be a map between two topological spaces. F is an open map if for every open U ⊆ X₁ its image F(U) is open in X₂. On the other hand, F is a closed map if for every closed U ⊆ X₁ its image F(U) is closed in X₂.
When discussing manifolds, one needs to define a specific map referred to as an embedding. Definition A.17 (Embedding). A continuous map G : X1 → X2 is an embedding if G is injective and is a homeomorphism from X1 to its image G(X1 ) ⊂ X2 , where G(X1 ) is supplied with the subspace (induced) topology.
An embedding possesses the following three properties:
1. G is continuous;
2. G is injective;
3. for every open U ⊆ X₁ there exists an open V ⊆ X₂ with U = G⁻¹(V).
A.4 Connectedness

Definition A.18 (Connected). A topological space X is called disconnected if there exist nonempty open subsets U₁, U₂ ⊂ X such that U₁ ∩ U₂ = ∅ and U₁ ∪ U₂ = X. A topological space is connected if it is not disconnected.
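Disconnectedness of a finite topological space can be tested by searching for a splitting into two disjoint nonempty open sets, following Definition A.18 word for word (the example topologies below are our own):

```python
from itertools import combinations

def is_connected(X, T):
    """Definition A.18: X is disconnected iff two disjoint nonempty open
    sets cover it; connected means no such pair exists."""
    X = frozenset(X)
    opens = [frozenset(U) for U in T if U]
    for U, V in combinations(opens, 2):
        if not (U & V) and (U | V) == X:
            return False
    return True

X = {1, 2, 3, 4}
T_split = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]   # {1,2} and {3,4} split X
T_whole = [set(), {1}, {1, 2, 3, 4}]              # no splitting pair exists
print(is_connected(X, T_split), is_connected(X, T_whole))  # False True
```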
Notice that, according to this definition, the empty set is connected. Considering maps between topological spaces and restricting to continuous maps, we get a generalization of the intermediate value theorem.

Proposition A.19 (Continuous image of a connected space is connected). Let F : X₁ → X₂ be a continuous map. If X₁ is connected, then F(X₁) ⊆ X₂ is also connected.

The link with the usual intermediate value theorem can be made visible by considering a continuous map F : X → ℝ. From Proposition A.19 we obtain that F(X) ⊆ ℝ is connected. In other words, we find that if x₁, x₂ ∈ X are such that F(x₁) < F(x₂), and c is a real number such that F(x₁) < c < F(x₂),
then there exists an x₃ ∈ X such that F(x₃) = c.
Sometimes one needs to parameterize a space by its connected components, which can be considered as the equivalence classes induced by connectedness.

Definition A.20 (Connected components). Let X be a topological space. The equivalence classes for the equivalence relation introduced by connectedness are called the connected components of X.
One can now see that X is the disjoint union of its connected components. To introduce path-connectedness, we first need to define a path in a topological space.

Definition A.21 (Path and loop in a topological space). Let X be a topological space. A path in X is a continuous map γ : [0, 1] → X, with γ(0) being the initial point and γ(1) being the terminal point or endpoint of the path. If γ(0) = γ(1), then γ is said to be a loop.
This definition can be used to define a path-connected topological space and a new equivalence relation introducing path-connected components. This captures the intuitive idea that every two points in a path-connected component can be connected by a path which lies completely within this component.

Definition A.22 (Path-connected components). Consider a topological space. The equivalence classes introduced by the above-mentioned equivalence relation are called its path-connected components.

Definition A.23 (Path-connected). A topological space X is path-connected if every two points of X are equivalent with respect to this equivalence relation.
Naturally, we have the following relation between path-connectedness and connectedness:

Corollary A.24. A path-connected space is connected.

The concept of connectedness can easily be extended to more complicated spaces by means of the product topology (Definition A.15).
Definition A.25 (Connected products). Consider the topological spaces X₁ and X₂, and let X₁ × X₂ possess the product topology. Then:
1. if X₁ and X₂ are connected, then X₁ × X₂ is also connected;
2. if X₁ and X₂ are path-connected, then X₁ × X₂ is also path-connected.

Corollary A.26. Consider the topological spaces X₁, ..., X_n. Then:
1. if each X_i is connected, then X₁ × ⋯ × X_n is also connected;
2. if each X_i is path-connected, then X₁ × ⋯ × X_n is also path-connected.

We also need the concept of a topological group.
Definition A.27 (Topological group). A topological group is a group G with a topology such that the maps

G × G → G, (g₁, g₂) ↦ g₁g₂ (multiplication),   (A.4)
G → G, g ↦ g⁻¹ (inverse)   (A.5)

are continuous. The group elements are interpreted as points of the topological space.
It is now easy to prove the following statements:
1. Left translation on G, defined by t_a : G → G, t_a(g) = ag, ∀a ∈ G, is a homeomorphism of G.
2. A topological group G is Hausdorff if and only if the unit element e ∈ G is a closed point.
3. Let G₀ ⊂ G be the connected component of G containing the unit element e. Then G₀ is a subgroup of G.
4. If G′ ⊂ G is a subgroup that is open, then G′ is also closed.

Example A.28. Define the group
\mathrm{GL}_2(\mathbb{R}) := \left\{ \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix} \;\middle|\; a_1, a_2, a_3, a_4 \in \mathbb{R},\; a_1 a_4 - a_2 a_3 \neq 0 \right\} .   (A.6)

We can consider GL₂(ℝ) as an open subset of ℝ⁴ with the Euclidean topology, which then induces a topology on GL₂(ℝ). This is indeed a topological group. Moreover, it possesses the structure of a C^∞ manifold, making it an example of a Lie group. This group cannot be connected. To see this, consider the determinant map GL₂(ℝ) → ℝ*, which is continuous and surjective, while ℝ* = ℝ \ {0} is not connected. By contraposition (using Proposition A.19) we find that the group cannot be connected.
A.5 Local connectedness and local path-connectedness

Definition A.29 (Local connectedness). A topological space X is locally connected if for each x ∈ X and each open neighborhood U₁ of x there exists a connected open neighborhood U₂ of x such that U₂ ⊆ U₁.

Definition A.30 (Local path-connectedness). A topological space X is locally path-connected if for each x ∈ X and each open neighborhood U₁ of x there exists a path-connected open neighborhood U₂ of x such that U₂ ⊆ U₁.
The link between connectedness and path-connectedness gets stronger in the local versions, as stated in the following proposition:

Proposition A.31. If X is locally path-connected, then X is also locally connected.

It is worth noticing that a locally (path-)connected space is not necessarily (path-)connected, and the converse is also not always true.
A.6 Compactness

Consider a topological space X and define an open cover of X as a collection 𝒰 = {U_α}_{α∈A} of open subsets of X such that

X = ⋃_{α∈A} U_α.

Given the cover 𝒰 and a subset A′ ⊂ A, one defines an open subcover 𝒰′ = {U_α}_{α∈A′}, which itself is an open cover of X. Such covers allow us to determine whether a topological space is compact or not.

Definition A.32 (Compactness). A topological space X is called compact if each open cover 𝒰 of X (a collection of open sets of X whose union is all of X) has a finite subcover.
Example A.33. A closed interval [a, b] ⊂ ℝ is compact in the Euclidean topology.

Alternative definitions of compactness, continuity, and closedness can be given by using nets. Nets will help us to introduce the Tychonoff topology and Tychonoff's theorem.

Definition A.34 (Partially ordered set). A (nonstrict) partial order is a binary relation ≤ over a set P which is reflexive, antisymmetric, and transitive, i. e., which, for all a₁, a₂, a₃ ∈ P, possesses the properties:
1. reflexivity: a₁ ≤ a₁;
2. antisymmetry: if a₁ ≤ a₂ and a₂ ≤ a₁, then a₁ = a₂;
3. transitivity: if a₁ ≤ a₂ and a₂ ≤ a₃, then a₁ ≤ a₃.
Definition A.35 (Directed set). A directed set (also a directed preorder or a filtered set) is a nonempty set A together with a reflexive and transitive binary relation ≤ having also the property that every pair of elements has an upper bound:

∀a₁, a₂ ∈ A, ∃a₃ ∈ A : a₁ ≤ a₃, a₂ ≤ a₃.
Definition A.36 (Net).
1. A net (x_α) in a topological space X is a map α ↦ x_α from a partially ordered and directed index set A (with relation ≥) to X.
2. A net (x_α) converges to x, denoted by lim_α x_α = x, if for every open neighborhood U ⊂ X of x there exists α(U) ∈ A such that x_α ∈ U for each α ≥ α(U). It is then said that (x_α) is eventually in U.
3. A subnet (x_{α₁(α₂)}) of a net (x_{α₁}) is defined by means of a map

   A₂ → A₁, α₂ ↦ α₁(α₂)   (A.7)

   between partially ordered and directed index sets, such that for each α₀ ∈ A₁ there exists α₂(α₀) ∈ A₂ with α₁(α₂) ≥ α₀ for any α₂ ≥ α₂(α₀) (one says that A₂ is cofinal for A₁).
4. A net (x_α) in a topological space X₁ is called universal if for any subset X₂ ⊆ X₁ the net (x_α) is eventually either only in X₂ or only in X₁ − X₂.
The notions of closedness, continuity, and compactness can be reformulated in terms of nets. One uses nets instead of sequences because Lemma A.37 does not hold for A = ℕ unless we are dealing with metric spaces.

Lemma A.37 (Closedness, continuity and compactness using nets).
1. A subset X₂ of a topological space X₁ is closed if for all convergent nets (x_α) in X₁ with x_α ∈ X₂ for all α, the limit belongs to X₂.
2. A function F : X₁ → X₂ between topological spaces is continuous if for all convergent nets (x_α) in X₁, the net (F(x_α)) is convergent in X₂.
3. A topological space X is compact if every net has a convergent subnet. The limit point of the convergent subnet is then called a cluster (accumulation) point of the original net.
From the above it is seen that if a net converges in some topology, then it also converges in any weaker topology. Before continuing with the Tychonoff topology, we shall first make some useful statements. Proposition A.38. If F : X1 → X2 is a continuous map and X1 is compact, then the image F(X1 ) ⊆ X2 is also compact. Proposition A.39. If X1 is compact and X3 ⊆ X1 is closed in X1 , then X3 is compact.
Proposition A.40. If X₁ is Hausdorff and X₃ ⊆ X₁ is compact, then X₃ is closed in X₁.

Definition A.41 (Tychonoff topology). Let X_l be topological spaces and ℒ be an index set. The Tychonoff topology on the direct product

X_∞ = ∏_{l∈ℒ} X_l

is the weakest topology such that all the projections

p_l : X_∞ → X_l, (x_l)_{l∈ℒ} ↦ x_l   (A.8)

are continuous. In other words, a net x^α = (x_l^α)_{l∈ℒ} converges to x = (x_l)_{l∈ℒ} if and only if x_l^α → x_l for all l ∈ ℒ, i. e., pointwise (not necessarily uniformly) in ℒ. Equivalently, the sets

p_l⁻¹(U_l) = [∏_{l′≠l} X_{l′}] × U_l, with U_l open in X_l,

are open and form a subbasis for the topology of X_∞; that is, any open set can be obtained from these sets by finite intersections and arbitrary unions.
The definition of this topology is motivated by the following theorem.

Theorem A.1 (Tychonoff). Let ℒ be an index set of arbitrary cardinality and suppose that for each l ∈ ℒ a compact topological space X_l is given. Then the direct product space

X_∞ = ∏_{l∈ℒ} X_l

is a compact topological space in the Tychonoff topology.

As a consequence we observe:

Corollary A.42. A subset X ⊆ ℝⁿ is compact if and only if X is closed and bounded.
A.7 Countability axioms and Baire theorem

In order to study the separation properties of topological spaces, we need to define the notion of countability.

Definition A.43 (Neighborhood basis). Let X be a topological space, x ∈ X, and let 𝒰 = {U_α}_{α∈A} be a collection of open neighborhoods of x. Then 𝒰 is a neighborhood basis of x if for each open neighborhood V of x there exists an α such that U_α ⊆ V.
The first countability axiom, defining A1-spaces, then reads:

Definition A.44 (A1). A topological space X obeys the first countability axiom if every x ∈ X has a countable neighborhood basis. Such a topological space is called A1.

Note that every metric space is A1. This can be seen by considering open balls of radius 1/N, N ∈ ℕ.
A stronger version of the above axiom is called the second countability axiom. Definition A.45 (A2). A topological space X obeys the second countability axiom if there exists a countable basis for the topology on X . Then X is said to be A2.
Proposition A.46. Let X be an A2 topological space.
1. Each open cover of X has a countable subcover. A space with this property is called a Lindelöf space.
2. There exists a countable subset of X which is dense (see Definition A.47 below) in X.

Notice that

A2 ⇒ A1.   (A.9)
Definition A.47 (Denseness). Let A be a subset of a topological space X. It is said to be dense in X if every point x ∈ X either belongs to A or is a limit point of A (see Definition A.51).
Definition A.48 (Meagre subset, first Baire category). Let X be a topological space. We say that a subset U⊆X is nowhere dense if the interior of the closure of U is empty. We call U meagre if U is a countable union of nowhere dense subsets.
Meagre subsets have several important properties.

Proposition A.49. Let X be a topological space.
1. A subset U ⊆ X is nowhere dense if and only if the interior of the complement X \ U is dense in X.
2. A finite union of nowhere dense subsets is again nowhere dense.
3. A countable union of meagre subsets is again meagre.
Lemma A.50. The following properties of a topological space X are equivalent:
1. Every countable intersection of dense open sets is again dense in X.
2. If C₁, C₂, ... are closed subsets of X with empty interiors, then their union ⋃_{l=1}^∞ C_l also has an empty interior.
3. If U₁ ⊆ X is a nonempty open subset, then U₁ is not meagre.
4. If U₂ ⊆ X is a meagre subset, then the complement X \ U₂ is dense in X.

By the interior of a set C we mean the points x ∈ C that have an open neighborhood x ∈ U such that U ⊂ C. Spaces with the above properties are called Baire spaces, and with them comes a theorem:
Theorem A.2 (Baire category theorem). Each compact Hausdorff space is a Baire space.

Baire referred to a meagre subset as a subset of the first Baire category and to a nonmeagre subset as being of the second Baire category. A Baire space is then a space in which all nonempty open sets belong to the second category.¹
A.8 Convergence

We shall now define the property of convergence of sequences in a topological space. This allows us to give meaning to limits of sequences.

Definition A.51 (Convergence and accumulation point). Let (x_n)_{n∈ℕ} be a sequence of elements in a topological space X and let ξ belong to X.
1. The sequence (x_n) converges to ξ, or ξ is the limit of the sequence (x_n), if for each open neighborhood U of ξ there exists an index N(U) such that x_n ∈ U for all n ≥ N(U). In other words, a sequence is called convergent if it has a limit.
2. We call ξ an accumulation point of the sequence (x_n) if for each open neighborhood U of ξ there exist infinitely many indices n such that x_n ∈ U.
One might expect that for a sequence to have a unique limit it suffices to prove that it converges, but in general this is not true. A space should also be Hausdorff for a sequence to have at most one limit.

Definition A.52 (Countable compactness). A topological space X is countably compact if each countable open cover

X = ⋃_{α∈A} U_α

has a finite subcover.
Notice the difference with plain compactness, where every cover, not only the countable ones, needs a finite subcover. Moreover, we see that if X is compact, then it is also countably compact, but the converse is not always true.

1 Note that here the notion of category has nothing to do with category theory; we mention this old terminology since it still occurs in the literature.
Proposition A.53. A topological space X is countably compact if and only if each sequence (x_n), n ∈ ℕ, possesses an accumulation point.

Definition A.54 (Sequential compactness). A topological space X is sequentially compact if each sequence in X contains a convergent subsequence.
Lemma A.55. Let X be a topological space which is A1, ξ ∈ X, and let (x_n), n ∈ ℕ, be a sequence in X. The following two statements are equivalent:
1. the sequence (x_n) has a subsequence that converges to ξ;
2. ξ is an accumulation point of the sequence (x_n).

Theorem A.3. Let X be a topological space.
1. If X is sequentially compact, then X is countably compact.
2. If X is countably compact and A1, then X is also sequentially compact.
3. If X is countably compact and A2, then X is also compact.

Graphically this can be represented as follows:

compact ⇒ countably compact ⇐ sequentially compact,

where the reverse implications hold under the additional assumptions A2 (countably compact ⇒ compact) and A1 (countably compact ⇒ sequentially compact).
Definition A.56 (Cauchy sequence and complete space). Let (X, d) be a metric space.
1. A sequence (x_n), n ∈ ℕ, in X is called Cauchy if for all ϵ > 0 there exists an index N such that d(x_m, x_n) < ϵ for all m, n ≥ N.
2. The metric space is called complete if each Cauchy sequence in X converges.
It follows from the above that if a Cauchy sequence has a convergent subsequence, then the former sequence is convergent. We also see that every metric space can be completed.
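As an illustration of incompleteness, the Newton iteration x ↦ (x + 2/x)/2 produces a sequence of rationals that is Cauchy in the usual metric but whose limit, √2, lies outside ℚ; the completion ℝ is what supplies the missing limit. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Newton iteration for sqrt(2): every iterate stays inside Q
x = Fraction(2)
seq = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    seq.append(x)

# the sequence is Cauchy: successive gaps shrink (in fact quadratically)
gaps = [abs(float(seq[i + 1] - seq[i])) for i in range(len(seq) - 1)]
print(all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:])))  # True

# the limit squares to 2, but no rational does: the limit is outside Q
print(abs(float(seq[-1]) ** 2 - 2) < 1e-12)  # True
```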
Definition A.57 (Totally bounded). A metric space (X , d) is called totally bounded if for all ϵ > 0 there exists a finite cover of X with open balls of radius ϵ.
Lemma A.58. A totally bounded metric space (X, d) is A2.

Theorem A.4. If X is a metric space, then X is compact in the metric-induced topology if and only if X is complete and totally bounded.

Definition A.59 (Uniform continuity). Let (X₁, d_{X₁}) and (X₂, d_{X₂}) be metric spaces. A map F : X₁ → X₂ is uniformly continuous if for all ϵ > 0 there exists δ > 0 such that for all x, x′ ∈ X₁ with d_{X₁}(x, x′) < δ it holds that d_{X₂}(F(x), F(x′)) < ϵ.
Theorem A.5. Let (X1 , dX1 ) and (X2 , dX2 ) be metric spaces. If X1 is compact, then every continuous map F : X1 → X2 is uniformly continuous.
A.9 Separation properties

We begin with the separation axioms. Roughly speaking, they determine which basic objects can be separated² in a topological space.

Definition A.60 (Separation axioms). Let X be a topological space. It is said that X is:
1. T₁: if all one-point sets {x} are closed in X;
2. T₂: if X is Hausdorff;
3. T₃: if for each point x ∈ X and each closed subset X′ ⊂ X with x ∉ X′, there exist open neighborhoods U of x and U′ of X′ such that U ∩ U′ = ∅;

2 'T' below refers to the German Trennung, i. e., separation.
4. T4: if for each pair of closed sets Y1, Y2 ⊂ X with Y1 ∩ Y2 = ∅, there exist open neighborhoods U1 of Y1 and U2 of Y2, such that U1 ∩ U2 = ∅.
Definition A.61 (Regular and normal). A topological space is called regular if it is T1 and T3. It is called normal if it is T1 and T4.
It is clear that some of the axioms induce others.

Lemma A.62. Normal implies regular, regular implies Hausdorff, Hausdorff implies T1: (T4 + T1) ⇒ (T3 + T1) ⇒ T2 ⇒ T1.

Proposition A.63. 1. Each metric space is normal: T1 + T3 + A2 ⇒ T1 + T4. (A.10) 2. If a topological space is compact and Hausdorff, then it is normal.
Lemma A.64 (Urysohn). Let X be normal. If A1 and A2 are disjoint closed subsets of X, then there exists a continuous map F : X → ℝ, such that F(a1) = 0 for all a1 ∈ A1 and F(a2) = 1 for all a2 ∈ A2.

Theorem A.6 (Tietze). Let X be a normal space and X′ a closed subset of X. Suppose that there exists a continuous function FX′ : X′ → ℝ.
Then there exists a continuous function F : X → ℝ, such that F|X′ = FX′.

Theorem A.7 (Urysohn's metrizability theorem). If a regular space X is A2, then it is metrizable. In other words, one can define a metric on X, such that it induces the topology of X.
A.10 Local compactness and compactification

The following statements will be important when considering Wilson lines which are allowed to go to infinity in the spacetime manifold. Because infinity, strictly speaking, is not a part of the spacetime manifold, one needs to introduce its compactification.

Definition A.65 (Neighborhood of sets). Let A1 be a subset of a topological space X. Then a subset A2 ⊆ X is a neighborhood of A1 if A1 is contained in the interior of A2.
Definition A.66. A topological space X is called locally compact if for all x ∈ X there is a compact neighborhood.
For a locally compact Hausdorff space the following compactification is most convenient:

Theorem A.8 (Alexandroff compactification). Let X be a locally compact Hausdorff space. Then there exists a compact Hausdorff space X∗ and a point p ∈ X∗, such that X is homeomorphic with X∗ \ {p}. Moreover, the pair (X∗, p) is unique up to homeomorphism in the following sense: suppose that compact Hausdorff spaces X1∗ and X2∗ are given, together with points xi ∈ Xi∗ and homeomorphisms F1 : X → X1∗ \ {x1},
and F2 : X → X2∗ \ {x2}. Then there is a unique homeomorphism F3 : X1∗ → X2∗, such that F3(x1) = x2 and F3 ∘ F1 = F2.

This compactification method is sometimes called the one-point compactification, since one adds one point at infinity.3 An alternative compactification approach is given by the Stone–Čech compactification, which we do not discuss here.
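The one-point compactification of the real line is the circle, a fact that can be probed numerically. In the Python sketch below (our own illustration; all names are ours), stereographic projection identifies ℝ with the unit circle minus its "north pole" (0, 1), which plays the role of the added point at infinity:

```python
# Stereographic projection: R is homeomorphic to S^1 \ {(0, 1)}; the
# deleted north pole is the point added by the Alexandroff construction.
import math

def to_circle(t):
    """Inverse stereographic projection R -> S^1 \\ {(0, 1)}."""
    d = 1.0 + t * t
    return (2.0 * t / d, (t * t - 1.0) / d)

def to_line(x, y):
    """Stereographic projection S^1 \\ {(0, 1)} -> R."""
    return x / (1.0 - y)

for t in [-10.0, -1.0, 0.0, 0.5, 3.0, 100.0]:
    x, y = to_circle(t)
    assert abs(x * x + y * y - 1.0) < 1e-12   # image lies on the circle
    assert abs(to_line(x, y) - t) < 1e-9      # the two maps are inverse

# As |t| grows, the image approaches the deleted north pole (0, 1):
x, y = to_circle(1e8)
assert math.hypot(x, y - 1.0) < 1e-7
```

The last assertion makes the footnote's point concrete: sequences escaping to infinity on the line all converge to the single added point on the circle.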
A.11 Quotient topology

When we introduce an equivalence relation on a topological space, the question arises whether the set of equivalence classes has a topological structure. Now we shall see that this set indeed has such a structure, the quotient topology. We start by investigating the connection between the following concepts: 1. equivalence relations on a set X; 2. partitions of a set X; 3. surjective maps X1 → X2. If there is an equivalence relation "∼" on X, then the equivalence classes form a partition of X. On the other hand, if there is a partition of X, then an equivalence relation is introduced by stating that two elements x1, x2 ∈ X are equivalent if they belong to the same subset of the partition. We conclude that there is a bijection between equivalence relations on X and partitions of X. In order to see the relationship with surjective maps, suppose that there is an equivalence relation on X and denote the set of equivalence classes by X/∼. Then we get a map Q : X → X/∼,

3 Note that this also adds an extra symmetry to the space. This is best seen in two dimensions. Adding one point at infinity turns the 'plane' into a Riemann sphere, a projective space that has conformal symmetry. This simple example demonstrates clearly that one has to be very careful when applying compactifications, otherwise an extra structure can be introduced, which is not necessarily wanted.
which maps an element x ∈ X to its equivalence class. We also refer to this map Q as dividing out the equivalence relation. Inversely, we get an equivalence relation on X1 from a surjective map F : X1 → X2 by calling two elements x1, x2 ∈ X1 equivalent if F(x1) = F(x2).

Definition A.67 (Quotient topology). Let X be a topological space and let ∼ be an equivalence relation on X. Then the quotient topology on X/∼ is defined as the finest topology for which the map Q : X → X/∼ is continuous.
Definition A.68 (Quotient map). 1. Let X1, X2 be topological spaces and P : X1 → X2 a surjection. The map P is a quotient map if and only if, for every V ⊂ X2, the set V is open in X2 exactly when P−1(V) is open in X1. 2. If X1 is a topological space, X2 a set and P : X1 → X2 a surjection, then there exists a unique topology on X2 with respect to which P is a quotient map. 3. Let X be a topological space and let [X] be a partition of X. Let [x], x ∈ X, be the subset of X in the partition of X which contains x. Let us supply [X] with the quotient topology induced by the map [·] : X → [X], x ↦ [x]. Then [X] is called the quotient space of X.
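For a finite toy model the quotient topology can be computed exhaustively; the Python sketch below (our own illustration, not a construction from the text) keeps exactly those sets of classes whose preimage under the canonical map Q is open:

```python
# Quotient topology on a finite space: a set of equivalence classes is
# open iff its preimage under Q (the union of its classes) is open in X.
from itertools import combinations

X = {0, 1, 2, 3}
# A topology on X, given as a collection of open sets.
topology = {frozenset(), frozenset({0, 1}), frozenset({2, 3}), frozenset(X)}

def quotient_topology(topology, classes):
    """All subsets V of the set of classes with Q^{-1}(V) open in X."""
    classes = list(classes)
    opens = set()
    for r in range(len(classes) + 1):
        for V in combinations(classes, r):
            preimage = frozenset().union(*V) if V else frozenset()
            if preimage in topology:
                opens.add(frozenset(V))
    return opens

# The relation 0 ~ 1, 2 ~ 3 gives two classes.
classes = {frozenset({0, 1}), frozenset({2, 3})}
qt = quotient_topology(topology, classes)
# Four open sets: empty, {[0]}, {[2]}, and the whole quotient.
assert len(qt) == 4
```

By construction this is the finest topology making Q continuous: any additional open set of classes would have a non-open preimage.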
It should be noticed that the requirement for P to be a quotient map is stronger than just being continuous. The latter would only require that P−1(V) is open in X1 whenever V is open in X2 (but not the other way round). Quotient spaces naturally arise if a group action λ : G × X → X, (g, x) ↦ λg(x) := λ(g, x), is given on a topological space X and we define [x] := {λg(x), g ∈ G} to be the orbit of x. The orbits clearly define a partition of X.

Lemma A.69. Let X1 be a compact topological space, X2 a set and P : X1 → X2 a surjection. Then X2 is compact in the quotient topology.

Lemma A.70 (Hausdorff in quotient topology). Let X be a Hausdorff space and λ : G × X → X a continuous group action on X. Then the quotient space X/G := {[x], x ∈ X} defined by the orbits [x] = {λg(x), g ∈ G} is Hausdorff in the quotient topology.

Theorem A.9 (Equivariance). Let X1, X2 be topological spaces and let G be a group acting (not necessarily continuously) on them as λ, λ′ respectively. If F : X1 → X2 is a homeomorphism, such that the actions λ, λ′ are equivariant (that is, commuting with the group action), then F extends as a homeomorphism to the quotient spaces X1/G, X2/G in their quotient topologies.
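The statement that orbits partition the space is easy to illustrate on a finite example; in the Python sketch below (ours; the group and action are an arbitrary choice) the subgroup G = {0, 4, 8} of ℤ12 acts on X = ℤ12 by addition:

```python
# Orbits of a group action partition the space: here G = {0, 4, 8}
# (a cyclic subgroup of Z_12) acts on X = Z_12 by lambda_g(x) = g + x mod 12.
G = [0, 4, 8]
X = range(12)

def orbit(x):
    """[x] = {lambda_g(x) : g in G}."""
    return frozenset((g + x) % 12 for g in G)

orbits = {orbit(x) for x in X}
# The orbits are pairwise disjoint and cover X: they form a partition.
assert sorted(y for o in orbits for y in o) == list(X)
assert len(orbits) == 4
```

The quotient X/G is then the four-element set of these orbits, topologized as in Definition A.67.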
A.12 Fundamental group

Now we shall introduce the notion of the fundamental group, which is very relevant in the discussion of loops in a manifold. In what follows, we define I = [0, 1] unless stated otherwise.
Definition A.71 (Homotopy). Let F0, F1 : X1 → X2 be two continuous maps between topological spaces. A homotopy from F0 to F1 is a continuous map F : X1 × I → X2, such that F(x, 0) = F0(x) and F(x, 1) = F1(x). If such a map exists, then F0 and F1 are said to be homotopy equivalent, F0 ≃ F1.
Hence, there is a continuous deformation between the two maps.4

Lemma A.72. Homotopy is an equivalence relation on the set C(X1, X2) of continuous maps X1 → X2.

We can extend the restrictions on homotopies by introducing extra conditions.

Definition A.73 (Relative homotopy). Let X1, X2 be topological spaces and A ⊆ X1. Consider two continuous maps F0, F1 : X1 → X2, such that F0|A = F1|A. Then a homotopy from F0 to F1 relative to A is a continuous map F : X1 × I → X2, such that F(x, 0) = F0(x) and F(x, 1) = F1(x) for all x ∈ X1, provided that F(a, t) = F0(a) for all a ∈ A, t ∈ I.
4 Note that homotopy also provides a set of intermediate functions, which occur in the study of renormalization-group flows and in relating vacuum expectation values or vacua in different frames or with different Hamiltonians.
Applying the definition of relative homotopy to loops with a fixed base point (the set A is then a single point) in a manifold, we find that two loops γ0, γ1 : I → X are homotopy equivalent if there exists a homotopy F : I × I → X relative to {0, 1}. This leads to the definition of a fundamental group.

Definition A.74 (Fundamental group). Let X be a topological space and x0 ∈ X a base point. Then we define π1(X, x0) as the set of homotopy classes of loops in X with base point x0. We call it the fundamental group of X with base point x0.
The group operation is defined as the composition of loops and can be proven to induce a group structure on the homotopy equivalence classes. Moreover, it can be shown that

Proposition A.75 (The fundamental group is independent of the base point). Let X be a path-connected space and let x0, x0′ ∈ X. Then the groups π1(X, x0) and π1(X, x0′) are isomorphic.

Let us now investigate what happens to the fundamental groups if we consider maps between topological spaces. Suppose we have two topological spaces X, Y with base points x0, y0 and a continuous map F : X → Y, such that y0 = F(x0). If now γ is a loop in X at x0, then F ∘ γ : I → Y is a loop in Y at y0. Assuming that γ is homotopy equivalent to γ′, we find that F ∘ γ is homotopy equivalent to F ∘ γ′.
Thus we have a map F∗ between fundamental groups:

F∗ : π1(X, x0) → π1(Y, y0), [γ] ↦ [F ∘ γ].  (A.11)
Lemma A.76. Let X be a topological space with x0 ∈ X as a base point. Consider the continuous maps F1 : X → Y and F2 : Y → Z, such that y0 := F1(x0) and z0 := F2(y0).
Then (F2 ∘ F1)∗ = F2∗ ∘ F1∗ : π1(X, x0) → π1(Z, z0).

Corollary A.77. Let X and Y be homeomorphic path-connected spaces. Then for each choice of base points x0 and y0: π1(X, x0) ≅ π1(Y, y0).

The following theorem supports Corollary A.77:

Theorem A.10. Let X be a topological space having x0 ∈ X as a base point. Let F1, F2 : X → Y be homotopic continuous maps and y0 := F1(x0), y1 := F2(x0). Take a homotopy F : X × I → Y from F1 to F2 and consider the path α in Y from y0 to y1 which reads α(t) := F(x0, t). Then, if a : π1(Y, y1) → π1(Y, y0) is the isomorphism [γ] ↦ [α−1γα], the homomorphisms a ∘ F2∗ : π1(X, x0) → π1(Y, y0) and F1∗ : π1(X, x0) → π1(Y, y0) are equal.
Definition A.78 (Homotopy equivalence of spaces). A continuous map F : X1 → X2 is a homotopy equivalence if there exists a continuous map F′ : X2 → X1, such that F′ ∘ F is homotopic to idX1 and F ∘ F′ to idX2.
It should be mentioned that homotopy equivalence is weaker than homeomorphism, and is closer to the intuitive notion of topological spaces as rubber objects.

Corollary A.79. Let X, Y be two path-connected spaces. If they are also homotopy equivalent, then the fundamental groups are isomorphic for every choice of the base points x0 and y0: π1(X, x0) ≅ π1(Y, y0).

Definition A.80 (Simple connectedness). A path-connected space X is called simply connected if π1(X, x0) = 0.
Definition A.81 (Contractibility). A topological space X is called contractible if it is homotopy equivalent to a point.
A.13 Manifolds

Definition A.82 (Manifolds). 1. A topological space M is an m-dimensional C k manifold if there is a family of pairs (UI , xI), I ∈ ℐ, consisting of an open cover of M and homeomorphisms xI : UI → xI(UI) ⊂ ℝm, such that for all I, J ∈ ℐ with UI ∩ UJ ≠ ∅
the map ϕIJ := xJ ∘ xI−1 : xI(UI ∩ UJ) → xJ(UI ∩ UJ) is a C k map between open subsets of ℝm. The sets UI are called charts, the functions xI coordinates; the family of charts and coordinates forms an atlas. 2. Two atlases (UI , xI) and (VJ , yJ) for a topological space M are compatible if their union is again an atlas. Compatibility yields an equivalence relation on atlases, with an equivalence class being a differentiable C k structure. 3. A topological space M is called a manifold with a boundary 𝜕M if each of the UI is homeomorphic to an open subset of the negative half-space H− = {x ∈ ℝm ; x1 ≤ 0}. The smoothness condition now demands that the ϕIJ are C k on open subsets of ℝm including xI(UI ∩ UJ). The boundary points belong to 𝜕H− = {x ∈ ℝm ; x1 = 0}.
Definition A.83 (Diffeomorphism). A map between two C k manifolds ψ : M1 → M2 is called C k if for all pairs of charts UI , VJ of atlases for M1, M2, for which ψ(UI) ∩ VJ ≠ ∅, the maps ψIJ := xJ ∘ ψ ∘ xI−1 : xI(UI) → xJ(VJ) are C k maps between open subsets of ℝm, ℝn respectively. If all the ψIJ are invertible and the inverses are C k, then ψ is a C k diffeomorphism. The diffeomorphisms of a manifold form a group, denoted by Diff(M).
Definition A.84 (Paracompactness). One calls an atlas (UI , xI ) locally finite if all x∈M have open neighborhoods intersecting only a finite number of the charts. A manifold M is said to be paracompact if each atlas (UI , xI ) allows a locally finite refinement (VJ , yJ ) where every VJ is included in some UI .
Definition A.85 (Submanifold). Let N be a subset of an m-dimensional manifold M. Let us equip N with a manifold structure by making use of the induced topology and an induced (subspace) differentiable structure, given, for an atlas (UI , xI) of M, by the atlas (VI = N ∩ UI , yI = (xI)|VI) for N. We thus obtain a differentiable structure, provided that the maps ϕIJ = yJ ∘ yI−1 for VI ∩ VJ ≠ ∅ have constant rank n.
Definition A.86 (Immersion and embedding). Let M′ be an n-dimensional manifold and let ψ : M′ → M be C k. Then ψ is called a local immersion if every x ∈ M′ possesses an open neighborhood V, such that V → ψ(V) is an injection. If ψ is a global immersion, i.e., M′ → ψ(M′) is an injection, then ψ is called an embedding. If, moreover, for every V open in M′ the set ψ(V) is open in the subset topology induced from M, then ψ is said to be a regular embedding. In the latter case one says that M′ is an embedded submanifold of M. An embedded submanifold of dimension n = m − 1 is a hypersurface.
Definition A.87 (Orientability). A manifold M is orientable if there exists an atlas such that for all y ∈ [UI ∩ UJ]

det[𝜕xJ(y)/𝜕xI(y)] > 0.  (A.12)

If M has a boundary, then M generates an orientation on 𝜕M.
Definition A.88 (Smoothness and real analyticity). – A manifold is smooth if it is C ∞ . – A manifold is real analytic or C ω if the maps ϕIJ are real analytic. – A manifold of real dimension 2m is complex analytic or a holomorphic manifold of complex dimension m if the maps ϕIJ = zJ ∘ zI−1 : ℂm → ℂm satisfy the Cauchy–Riemann equations and (xI , yI ) → zI = xI + iyI is the standard isomorphism between ℝ2m and ℂm .
A.14 Differential calculus

Several differential objects can be defined on a manifold.

Definition A.89 (Smooth function). A smooth function on a manifold M is a map F : M → ℂ, such that F ∘ xI−1 is smooth on xI(UI) ⊂ ℝm.
Definition A.90 (Vector field). A smooth vector field on M is a derivation on C∞(M). It corresponds to a linear map v : C∞(M) → C∞(M), F ↦ v(F), which obeys the Leibniz rule v(F1F2) = v(F1)F2 + F1v(F2) and annihilates constants. Given an atlas (UI , xI), we can define special vector fields 𝜕μI on UI as obeying the condition 𝜕μI(xIν) = δμν for p ∈ UI , where xI(p) = (x1(p), . . . , xm(p)) ∈ ℝm. This allows us to represent a vector field v in the form v(p) = vIμ[xI(p)] 𝜕μI(p), where the summation over the repeated index μ is assumed. The Leibniz rule induces the chain rule, so that

vIμ[xI(p)] 𝜕μI(p) = vJμ[xJ(p)] 𝜕μJ(p),  (A.13)

if p ∈ [UI ∩ UJ], xJ(p) = ϕIJ(xI(p)).
With the aid of this definition we can investigate the action of a vector field on a smooth function:

v(F) = vIμ 𝜕μ[FI(x)],  x = xI(p),  (A.14)

where FI = F ∘ xI−1. In what follows, the space of smooth vector fields on M is denoted by T 1(M).

Definition A.91 (Contravariant vector). A tangent or contravariant vector X allocates to each coordinate patch p ∈ (U, x) an n-tuple of real numbers

(XUi) = (XU1, . . . , XUn),  (A.15)
such that if p ∈ [U1 ∩ U2], then the coefficients of the contravariant vector transform as

XU2i = ∑j (𝜕xU2i/𝜕xU1j)(p) XU1j,  (A.16)

or, in shorthand,

XU2 = CU2U1 XU1,  (A.17)

where CU2U1 is called the transition function. In a local coordinate system tangent vectors can be defined as first-order differential operators

Xp = ∑j X j (𝜕/𝜕x j)|p.  (A.18)
Definition A.92 (Linear functional, covector). A real linear functional α on a vector space E is a real-valued map α : E → ℝ from E to the one-dimensional vector space ℝ. The condition of linearity holds for all real numbers a1, a2 and vectors v1, v2:

α(a1v1 + a2v2) = a1α(v1) + a2α(v2).  (A.19)

These linear functionals are also called covectors, covariant vectors, or one-forms. With the aid of local coordinates and a basis one gets

α = ∑j aj(x) dxj,  (A.20)
transforming as

dxU2i = ∑j (𝜕xU2i/𝜕xU1j) dxU1j,  (A.21)

so that the coefficients transform as

aiU2 = ∑j ajU1 (𝜕xU1j/𝜕xU2i).  (A.22)
Notice the difference between this transformation rule and the transformation rule for contravariant vectors. This difference extends also to tensors, so that the transformation rule can be used to determine their contravariant and covariant rank.

Definition A.93 (Dual space). The collection of all linear functionals α on a vector space E forms another vector space E∗, which is the dual space to E:

(α1 + α2)(v1) := α1(v1) + α2(v1),  (A.23)
(cα1)(v1) := cα1(v1),  (A.24)

where α1,2 ∈ E∗, v1 ∈ E, c ∈ ℝ.
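These transformation rules can be checked numerically for a linear change of coordinates. In the Python sketch below (our own illustration, with a constant Jacobian chosen arbitrarily), contravariant components pick up the Jacobian, covariant components its inverse, and the pairing ∑j aj Xj stays invariant:

```python
# Contravariant components transform with the Jacobian J, covariant
# components with its inverse, so alpha(X) = sum_j a_j X^j is chart-independent.

# Jacobian of the coordinate change x_{U2} = J x_{U1} (constant here).
J = [[2.0, 1.0],
     [1.0, 1.0]]          # det J = 1, so J is invertible
Jinv = [[1.0, -1.0],
        [-1.0, 2.0]]      # the inverse of J

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def vecmat(v, M):
    # row vector times matrix: covariant components contract the other way
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

X_U1 = [3.0, -2.0]         # contravariant components in the first chart
a_U1 = [1.0, 4.0]          # covariant components in the first chart

X_U2 = matvec(J, X_U1)     # rule (A.16): X picks up dx_{U2}/dx_{U1}
a_U2 = vecmat(a_U1, Jinv)  # rule (A.22): a picks up dx_{U1}/dx_{U2}

pairing = lambda a, X: sum(ai * Xi for ai, Xi in zip(a, X))
assert abs(pairing(a_U1, X_U1) - pairing(a_U2, X_U2)) < 1e-12
```

The opposite placement of the Jacobian and its inverse is exactly what makes the contraction of a covariant with a contravariant index a scalar.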
Definition A.94 (Tangent bundle). The tangent bundle TM to a differentiable manifold M is defined as the collection of all tangent vectors at all points of M.
Now we are in a position to introduce the important concepts of interior and exterior products.

Definition A.95 (Interior product). Let v be a vector and α a p-form. Their interior product, the (p − 1)-form ivα, is defined as

ivα := 0 if α is a 0-form,
ivα := α(v) if α is a 1-form,
iv1α(v2, . . . , vp) := α(v1, v2, . . . , vp) if α is a p-form.  (A.25)
It is evident that iv1 +v2 = iv1 + iv2 and iav = aiv .
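A minimal functional sketch of the interior product in Python (our own illustration; forms are represented simply as antisymmetric multilinear functions of their vector arguments):

```python
# Interior product as "fill the first slot": (i_v alpha)(v_2, ..., v_p)
# = alpha(v, v_2, ..., v_p), checked on the 2-form dx ^ dy in R^3.

def wedge_dx_dy(u, v):
    """The 2-form dx ^ dy on R^3: (dx ^ dy)(u, v) = u_x v_y - u_y v_x."""
    return u[0] * v[1] - u[1] * v[0]

def interior(v, form):
    """Contract the first slot of a p-form with the vector v."""
    return lambda *rest: form(v, *rest)

v, w, u = (1.0, 2.0, 5.0), (0.0, -3.0, 1.0), (4.0, 1.0, -2.0)

iv = interior(v, wedge_dx_dy)      # a 1-form
iw = interior(w, wedge_dx_dy)
ivw = interior(tuple(a + b for a, b in zip(v, w)), wedge_dx_dy)

# Linearity i_{v+w} = i_v + i_w, evaluated on an arbitrary vector:
assert abs(ivw(u) - (iv(u) + iw(u))) < 1e-12
# i_v i_v alpha = 0, since alpha is antisymmetric:
assert interior(v, iv)() == 0.0
```

The second assertion is the form-side counterpart of v ∧ v = 0 in the exterior algebra defined next.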
Definition A.96 (Exterior or wedge product and exterior algebra). The exterior algebra ⋀(V) over a vector space V over a field K is defined as the quotient algebra of the tensor algebra T(V) by the two-sided ideal I generated by all elements of the form x ⊗ x, such that x ∈ V: ⋀(V) := T(V)/I.
The exterior product ∧ of two elements of ⋀(V) is defined by α1 ∧ α2 = α1 ⊗ α2 mod I, or, for one-forms, α1 ∧ α2 = α1 ⊗ α2 − α2 ⊗ α1. The algebra associated with this product is the exterior algebra ⋀M on M. It is constructed from the vector space of one-forms on M.
Theorem A.97 (Interior product is an antiderivation). The map iv : ⋀p → ⋀p−1 is an antiderivation. That is to say,

iv(α1^p1 ∧ α2^p2) = [iv α1^p1] ∧ α2^p2 + (−1)^p1 α1^p1 ∧ [iv α2^p2].  (A.26)
Definition A.98 (Differential of a map). Let ϕ : M → M′ be a smooth map and ϕ(x) = x′. The differential ϕ∗ is defined as the map between the tangent spaces ϕ∗ : TxM → Tx′M′, such that ϕ∗(vx) = vx′, where vx ∈ TxM and vx′ ∈ Tx′M′ are elements of the tangent spaces at x and x′.

Definition A.99 (Pullback). Let ϕ : M → M′
be a smooth map and ϕ(x) = x′. Let ϕ∗ : TxM → Tx′M′ be the differential of ϕ. The pullback ϕ∗ is the linear transformation turning covectors at x′ into covectors at x: ϕ∗ : M′∗(x′) → M∗(x), so that ϕ∗(β)(v) := β(ϕ∗(v)) for all covectors β at x′ and vectors v at x.
Definition A.100 (Pushforward). Let ϕ : M → M′ be a smooth map and let X be a vector field on M. A section of ϕ∗TM′ over M is called a vector field along ϕ. Then, applying the differential (Definition A.98) pointwise to X yields the pushforward ϕ∗X, which is a vector field along ϕ, i.e., a section of ϕ∗TM′ over M.

Any vector field X′ on M′ defines a pullback section of ϕ∗TM′ with (ϕ∗X′)x = X′ϕ(x). A vector field X on M and a vector field X′ on M′ are ϕ-related if ϕ∗X = ϕ∗X′ as vector fields along ϕ. In other words, for all x ∈ M, dϕx(X) = X′ϕ(x).
Using this language we can give an alternative definition of an immersion.

Definition A.101 (Immersion). A smooth map of manifolds ϕ : M → M′ is an immersion, and ϕ(M) an immersed submanifold, if ϕ∗ : TxM → Tϕ(x)M′ is one-to-one, that is, for all x ∈ M, ker ϕ∗ = 0.
Definition A.102 (Support of a function). Suppose that F : M → ℝ is a real-valued continuous function. From the definition of continuity it follows that the inverse image of each open set of ℝ is open in M. The set of nonzero real numbers forms an open subset of ℝ, so that the subset of M where F ≠ 0 is an open subset of M: F−1(ℝ − {0}). The closure of this set is called the support of F.
Definition A.103 (Bump function). Let p be a point in an n-dimensional manifold M. One can construct an n-form with support contained in an open ϵ-ball around p:

ωn := f(‖x‖) dx1 ∧ ⋅⋅⋅ ∧ dxn inside the ball,  ωn := 0 outside of the ball.

If n = 0, this n-form is called a bump function.
Definition A.104 (Partition of unity). Let M be an n-dimensional manifold that can be covered by a finite number of coordinate patches {Uα}. Then a partition of unity subordinate to this covering yields real-valued differentiable functions Fα : M → ℝ such that for all x, α: 1. Fα ≥ 0; 2. ∑α Fα(x) = 1; 3. the support of Fα is a closed subset of the patch Uα. One can show that such a partition always exists.
It should be noticed that if a manifold is compact, then for each cover of this manifold there exists a finite subcover permitting a partition of unity.
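In one dimension a partition of unity is easy to exhibit explicitly. The Python sketch below (our own illustration; the cover U1 = (−∞, 0.7), U2 = (0.3, ∞) of ℝ is an arbitrary choice) builds two smooth functions with the three listed properties:

```python
# A partition of unity subordinate to the cover U1 = (-inf, 0.7),
# U2 = (0.3, inf) of the real line, built from a C^infinity step on the
# overlap (0.4, 0.6).
import math

def step(t):
    """Smooth step: 0 for t <= 0, 1 for t >= 1, C^infinity in between."""
    def g(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return g(t) / (g(t) + g(1.0 - t))

def F2(x):
    return step((x - 0.4) / 0.2)   # supp F2 = [0.4, inf) sits inside U2

def F1(x):
    return 1.0 - F2(x)             # supp F1 = (-inf, 0.6] sits inside U1

for x in [0.0, 0.45, 0.5, 0.55, 1.0]:
    assert F1(x) >= 0 and F2(x) >= 0          # property 1
    assert abs(F1(x) + F2(x) - 1.0) < 1e-12   # property 2
assert F2(0.3) == 0.0 and F1(0.8) == 0.0      # property 3 (supports)
```

The auxiliary function g is the classic smooth-but-not-analytic building block also used for the bump forms of Definition A.103.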
A.15 Stokes' theorem

A lot of derivations in loop space heavily depend on the use of Stokes' theorem.

Theorem A.105 (Stokes' theorem). Let X be an oriented n-dimensional C2 manifold. Let ω be a C1 (n − 1)-form on X. Suppose that ω possesses compact support. Then

∫X dω = ∫𝜕X ω.
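As a numerical sanity check (our own sketch, using the two-dimensional case of Stokes' theorem, i.e., Green's theorem): for the 1-form ω = x dy on the closed unit disk X, dω = dx ∧ dy, so ∫X dω is the area π of the disk, which must equal the line integral of ω over the boundary circle:

```python
# Green's theorem check: the line integral of omega = x dy over the unit
# circle equals the area integral of d omega = dx ^ dy over the disk (= pi).
import math

def boundary_integral(n=10_000):
    """Midpoint-rule integral of omega = x dy over x = cos t, y = sin t."""
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.cos(t) * math.cos(t) * h   # x(t) * y'(t) dt
    return total

assert abs(boundary_integral() - math.pi) < 1e-9
```

For a smooth periodic integrand the midpoint rule is extremely accurate, so the agreement with π is essentially at machine precision.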
Note that this theorem can be generalized to ω which have almost compact support. More relevant for our purpose is Stokes' theorem for loops with derivative discontinuities (such as angles, intersections, etc.). Under certain circumstances Stokes' theorem is still valid. In order to proceed, we need the concept of negligible subsets.

Definition A.106 (Negligible subset). Let S be a closed subset of ℝn. We call S negligible for X if there exists an open neighborhood U of S in ℝn, a fundamental (Cauchy) sequence of open neighborhoods {Uk} of S in U, with the closures Ūk ⊂ U, and a sequence of C1 functions {gk}, such that 1. 0 ≤ gk ≤ 1, gk = 0 for x in some open neighborhood of S, and gk = 1 for x ∉ Uk; 2. if ω is an (n − 1)-form of class C1 on U, and μk is the measure associated with dgk ∧ ω on X ∩ U, then μk is finite for large k, and limk→∞ μk(U ∩ X) = 0.
We can now formulate Stokes' theorem with singularities.

Theorem A.107 (A version of Stokes' theorem). Let X be an oriented n-dimensional C3 submanifold without boundary in ℝn. Let ω be a C1 (n − 1)-form on an open neighborhood of X in ℝn, with compact support. Suppose that: 1. if S is the set of singular points in the boundary X̄ − X, then S ∩ supp ω is negligible for X; 2. the measures associated with dω on X, and ω on 𝜕X, are finite. Then

∫X dω = ∫𝜕X ω.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
226  A Mathematical vocabulary
1.
closure: a1 ⋅ a2 ∈ M;
2.
associativity: (a1 ⋅ a2 ) ⋅ a3 = a1 ⋅ (a2 ⋅ a3 );
3.
identity element: there exists an element e ∈ M, such that (a1 ⋅ e) = (e ⋅ a1 ) = a1 .
Definition A.109 (Ring). A ring is defined as a set R with two binary operations ‘+’ and ‘⋅’, which are called addition and multiplication, which map every pair of elements of R to a unique element of R. These operations satisfy the following properties for all a1 , a2 , a3 ∈ R: – Addition is Abelian: 1. associativity: (a1 + a2 ) + a3 = a1 + (a2 + a3 ); 2.
existence of zero 0 ∈ R: 0 + a1 = a1 ;
3.
commutativity: a1 + a2 = a2 + a1 ;
4.
existence of inverse element: ∃ − a1 ∈ R  a1 + (−a1 ) = (−a1 ) + a1 = 0.
–
Multiplication is associative: 1. (a1 ⋅ a2 ) ⋅ a3 = a1 ⋅ (a2 ⋅ a3 ).
–
Multiplication distributes over addition: 1. a1 ⋅ (a2 + a3 ) = (a1 ⋅ a2 ) + (a1 ⋅ a3 ), 2. (a1 + a2 ) ⋅ a3 = (a1 ⋅ a3 ) + (a2 ⋅ a3 ).
Definition A.110 (Field). A field F consists of a set and two composition operations: F ×F →F
a1 , a2 → a1 + a2 ,
(A.27)
F ×F →F
a1 , a2 → a1 a2 ,
(A.28)
+
×
called addition and multiplication. They have the following properties: 1. (F , +) is an Abelian group; 2. (F , ×) is associative and commutative, turning F \ {0} into a group. The identity element is denoted by 1;
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
A.16 Algebra: Rings and modules  227
3.
distributivity: (a1 + a2 )a3 = a1 a3 + a2 a3 .
Definition A.111 (Vector space). A vector space V over a field F consists of a set and two composition operations: V ×V →V
v1 , v2 → v1 + v2
(A.29)
F ×V →V
c, v1 → cv1
(A.30)
+
×
called addition and scalar multiplication. They have the following properties for all a1 , a2 ∈ F , v, v1 , v2 ∈ V : 1. (V , +) is an Abelian group; 2. scalar multiplication is associative with multiplication in F : (a1 a2 )v = a1 (a2 v); 3.
(A.31)
the element 1 is an identity 1v = v; .
4.
double distributivity: (a1 + a2 )v1 = a1 v1 + a2 v1 and a1 (v1 + v2 ) = a1 v1 + a1 v2 .
Sometimes the notion of a vector space does not suffice. A useful extension is provided by introducing the concept of modules. A module over a ring generalizes the notion of vector space over a field, with the scalars being now the elements of an arbitrary ring instead of a field. Modules generalize as well the concept of Abelian groups, which are modules over the ring of integers. Definition A.112 (Module). A left Kmodule M over a ring K is defined to contain an Abelian group (M, +) and an operation K × M → M, such that for all k, k ∈ K and x, x ∈ M 1. k(x + x ) = kx + kx ; 2. (k + k )x = kx + k x; 3. (kk )x = k(k x); 4. 1K x = x. A right module can be defined analogously.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
228  A Mathematical vocabulary
A.17 Algebra: Ideals Now that we have introduced the concept of a ring, we can define the ideal of a ring, which becomes relevant when considering homomorphisms and algebra morphisms. The definition of an ideal is given in the main text of the book. The following statements take place: Theorem A.113 (Kernel is a subring). Suppose that we have a ring homomorphism ϕ : K1 → K2 . Then the kernel of ϕ is a subring of K1 . Theorem A.114 (Kernel is an ideal). Suppose that we have a ring homomorphism ϕ : K1 → K2 . Then the kernel of ϕ is an ideal of K1 . Definition A.115 (Cokernel). Suppose that we have a Kmodule homomorphism F : A1 → A2 . The cokernel is then defined as the quotient group A2 /Im(F ). Hence, F is injective if and only if its kernel is 0, and surjective if and only if its cokernel is 0.
Definition A.116 (Prime ideal [58]). Let K be a ring. An ideal p of K is called prime if p ≠ K and a1 a2 ∈ p ⇒ a1 ∈ p
or a2 ∈ p.
Definition A.117 (Maximal ideal). An ideal pm in K is called maximal if it is maximal among the ideals which are strictly smaller than the ring itself (proper ideals). Hence, pm is maximal if and only if A/pm is nonzero and has no proper nonzero ideals. pm is a field.
Definition A.118 (Zero divisor). A zero divisor a1 ∈ K is an element of the ring which is left and right zero divisor: ∃ a2 ≠ 0 ∈ K : a1 ⋅ a2 = 0
Left zero divisor,
∃ a3 ≠ 0 ∈ K : a3 ⋅ a1 = 0
Right zero divisor.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
A.18 Algebras  229
Definition A.119 (Nonzero divisor). An element a1 ∈ K is a nonzero divisor if for all a2 ≠ 0 a1 ⋅ a2 ≠ 0. a1 is a unit if there exists another element a2 , such that a1 ⋅ a2 = 1.
Definition A.120 (Domain). A nonzero ring is called a domain if each nonzero element is a nonzero divisor. It is a field if every nonzero element is a unit. Obviously, a field is a domain.
Definition A.121 (Integral domain). An integral domain is a commutative ring with no zero divisors.
Definition A.122 (Local ring). A ring K is local if it contains exactly one maximal ideal pm .
A.18 Algebras Definition A.123 (Ring algebra). An algebra over a commutative ring is a extension of an algebra over a field, such that the base field is replaced by a commutative ring K. A Kalgebra is a pair consisting of a Kmodule M and a binary operation called the Mmultiplication [⋅, ⋅]: [⋅, ⋅] : M × M → M such that ∀a1 , a2 ∈ K, ∀x1 , x2 , x3 ∈ M [a1 x1 + a2 x2 , x3 ] = a1 [x1 , x3 ] + a2 [x2 , x3 ] and [x3 , a1 x1 + a2 x2 ] = a1 [x3 , x1 ] + a2 [x3 , x2 ].
The following definition is useful: Definition A.124 (Unital algebra). Let (AR , m) be an algebra over the ring R. Then (AR , m) is a unitary (unital) algebra if it contains an identity element 1A called a unit of algebra for m, such that for all a ∈ AR : m(a, 1A ) = m(1A , a) = a. Usually 1 stands for the unity.
Then an alternative definition of a ring algebra is possible that reads:
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
230  A Mathematical vocabulary
Definition A.125 (Kalgebra). A Kalgebra is a compound of a Kvector space A and two linear maps m : A ⊗K A → A
and u : K → A,
such that the maps are unital (Definition A.124) and associative, with the extra conditions that both diagrams in Figure A.1 are commutative (s stands for the scalar multiplication) and the unit element in A is given by 1A = u(1K ).
Figure A.1: Kalgebra commutative diagrams.
Definition A.126 (Graded ring). A graded ring K is a ring that has a decomposition into (Abelian) additive groups: K = ⨁ Kn = K0 ⊕ K1 ⊕ K2 ⊕ ⋅ ⋅ ⋅ n
such that the ring multiplication satisfies 1. x1 ∈ Ks1 , x2 ∈ Ks2 ⇒ x1 x2 ∈ Ks1 +s2 ; 2. Ks1 Ks2 ⊆ Ks1 +s2 . Elements of any term Kn of the decomposition are called homogeneous elements of degree n. A subset k is said to be homogeneous if each element k∈k is the sum of homogeneous elements that belong to k. For a given k the homogeneous elements are uniquely defined. If I is a homogeneous ideal in K, then K/I is also a graded ring, allowing decomposition K/I = ⨁(Ki + I)/I. i
Each nongraded ring K can be made graded by setting K_0 = K and K_i = 0 for positive i.
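As a minimal illustration (ours, not the book's), the polynomial ring K[x] is graded by degree: K_n consists of the scalar multiples of x^n, and the product of homogeneous elements of degrees s1 and s2 is homogeneous of degree s1 + s2.

```python
# Illustrative sketch (not from the book): K[x] graded by degree.
# A polynomial is represented as a dict {exponent: coefficient};
# a homogeneous element of degree n has a single nonzero exponent n.

def poly_mul(p, q):
    """Multiply two polynomials given as {exponent: coefficient} dicts."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

def homogeneous_degree(p):
    """Return n if p lies in the homogeneous component K_n, else None."""
    exps = [e for e, c in p.items() if c != 0]
    return exps[0] if len(exps) == 1 else None

# x^2 and 3x^5 are homogeneous of degrees 2 and 5; their product has
# degree 7, illustrating K_{s1} K_{s2} ⊆ K_{s1+s2}.
p, q = {2: 1}, {5: 3}
prod = poly_mul(p, q)
assert homogeneous_degree(prod) == homogeneous_degree(p) + homogeneous_degree(q)
```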
Definition A.127 (Graded module). A graded module is a left module M over a graded ring K, such that 1. M = ⨁i Mi ; 2. Ki Mj ⊆ Mi+j .
Definition A.128 (Graded algebra). An algebra A over a ring K is said to be a graded algebra if it is graded as a ring.
Definition A.129 (K-algebra homomorphism). Let A1, A2 be some K-algebras. A K-algebra homomorphism is defined as a K-linear map F : A1 → A2 such that for all x1, x2 ∈ A1: F(x1 x2) = F(x1)F(x2). The space formed by all K-algebra homomorphisms is denoted by Hom_K(A1, A2).
Let U1 and U2 be two commutative unitary K-algebras. We denote by Alg(U1, U2) = Alg_K(U1, U2) the totality of K-algebra morphisms from U1 to U2 which map the unit element of one algebra into the unit element of the other one.

Definition A.130 (Associative coalgebra). A K-coalgebra is a compound of a K-vector space V and two linear maps
Δ : V → V ⊗ V and ϵ : V → K,
called the comultiplication and the counit, respectively. The following axioms, stating coassociativity and the counit property, yield the commutative diagrams visualized in Figure A.2:
1. (1 ⊗ Δ) ∘ Δ = (Δ ⊗ 1) ∘ Δ;
2. (1 ⊗ ϵ) ∘ Δ = (ϵ ⊗ 1) ∘ Δ.
A.19 Hopf algebra

In order to introduce the notion of a Hopf algebra, we first merge an algebra and a coalgebra into a bialgebra.
Figure A.2: K-coalgebra commutative diagrams.
Definition A.131 (Bialgebra). A bialgebra A is a K-vector space A = (A, m, u, Δ, ϵ), where (A, m, u) is an algebra and (A, Δ, ϵ) is a coalgebra, such that:
1. m and u are coalgebra homomorphisms;
2. Δ and ϵ are algebra homomorphisms.
Let A1, A2 be two K-algebras. Then A1 ⊗ A2 is also a K-algebra with the composition
(a1 ⊗ a2)(a3 ⊗ a4) = a1 a3 ⊗ a2 a4,
i.e., the multiplication is the composite
A1 ⊗ A2 ⊗ A1 ⊗ A2 → A1 ⊗ A1 ⊗ A2 ⊗ A2 → A1 ⊗ A2
of 1_{A1} ⊗ τ ⊗ 1_{A2} followed by m_{A1} ⊗ m_{A2}, where τ : A2 ⊗ A1 → A1 ⊗ A2 is called a flipping operation. The unit of A1 ⊗ A2 is obtained by the rules
u_{A1⊗A2} : K ≅ K ⊗ K → A1 ⊗ A2 (via u_{A1} ⊗ u_{A2}),
u_{A1⊗A2}(1_K) = u_{A1⊗A2}(1_K ⊗ 1_K) = 1_{A1} ⊗ 1_{A2} = 1_{A1⊗A2}.
Similarly, one gets a coalgebra: the comultiplication is the composite
B1 ⊗ B2 → B1 ⊗ B1 ⊗ B2 ⊗ B2 → B1 ⊗ B2 ⊗ B1 ⊗ B2
of Δ_{B1} ⊗ Δ_{B2} followed by 1_{B1} ⊗ τ ⊗ 1_{B2}, and the counit is given by
B1 ⊗ B2 → K ⊗ K ≅ K (via ϵ_{B1} ⊗ ϵ_{B2}),
(ϵ_{B1} ⊗ ϵ_{B2})(1_{B1} ⊗ 1_{B2}) = ϵ(1_{B1}) ⊗ ϵ(1_{B2}) = 1_K ⊗ 1_K = 1_K.
A bialgebra morphism is simultaneously an algebra and a coalgebra homomorphism.

Definition A.132 (Bi-ideal). Let F : A1 → A2 be a bialgebra homomorphism. Then ker F is called a bi-ideal, meaning that ker F is both an ideal and a coideal. Here I ⊂ A1 is a coideal if ϵ(I) = 0 and Δ(I) ⊆ A1 ⊗ I + I ⊗ A1.
Definition A.133 (Hopf algebra). Let K be a commutative ring. A K-algebra H is said to be a Hopf algebra if it possesses extra structure provided by K-algebra homomorphisms:
1. a comultiplication Δ : H → H ⊗_K H;
2. a counit ϵ : H → K;
3. an antipode (a K-module homomorphism) λ : H → H,
which obey the conditions of:
1. coassociativity: (I ⊗ Δ)Δ = (Δ ⊗ I)Δ : H → H ⊗ H ⊗ H;
2. counitarity: m(I ⊗ ϵ)Δ = I = m(ϵ ⊗ I)Δ;
3. antipode: m(I ⊗ λ)Δ = 𝚤ϵ = m(λ ⊗ I)Δ,
where I is the identity map on H, m : H ⊗ H → H is the multiplication in H, and 𝚤 : K → H is the K-algebra structure map for H, called the unit map.
Here ⊗_K shows that the product is K-equivariant: for k ∈ K and h1, h2 ∈ H, Δ(k h1, h2) = k Δ(h1, h2).

Lemma A.134. Let I be a bi-ideal of a bialgebra A. The operations on A induce the structure of a bialgebra on A/I, such that the bialgebraic structure does not change under the projection of A on A/I.

As we have seen in the construction of generalized loop space, a specific ideal plays a prominent role there. One of the reasons why this ideal turns out to be so important can be clarified by an observation on universal enveloping algebras of Lie algebras and their representations. Recall that a representation R assigns to each element x_i of a Lie algebra a linear operator R(x_i). Because of this linearity, the operators not only form a Lie algebra, but also an associative algebra that allows one to define the products R(x1)R(x2). The result of this product depends, strictly speaking, on the chosen representation. Some of its properties, however, can be shown to hold for all representations. Once the universal enveloping algebra is defined, one becomes able to single out these universal properties. It should now be clear that U(g) = T(g)/I(g) is a bialgebra for every Lie algebra g, with Δ(x) = 1 ⊗ x + x ⊗ 1 and ϵ(x) = 0 for all x ∈ g, since T(g) is a bialgebra.

Definition A.135 (Opposite algebra and coalgebra). The opposite algebra A^op to a K-algebra A is defined to be the same vector space as A, but now with the multiplication operation m^op defined for all elements in A as m^op(a1, a2) := m(a2, a1). Similarly, for a coalgebra B one defines the opposite B^op, such that Δ_{B^op} := τ ∘ Δ_B, where τ is again the flipping operation.
Definition A.136 (Cocommutativity). A coalgebra or bialgebra is cocommutative if its opposite is equal to itself.
For the sake of simplicity of notation, we introduce the so-called Sweedler notation. Let B be a coalgebra. For its elements b we have
Δ(b) = ∑ b1 ⊗ b2.
Using coassociativity one obtains
(1 ⊗ Δ) ∘ Δ(b) = (1 ⊗ Δ)(∑ b1 ⊗ b2) = ∑ b1 ⊗ b21 ⊗ b22 = ∑ b11 ⊗ b12 ⊗ b2,
which will be denoted as ∑ b1 ⊗ b2 ⊗ b3. In general, Δ^{n−1} : B → B^{⊗n}. Then, with the aid of the right diagram in Figure A.2, one concludes that
b = ∑ ϵ(b1) b2 = ∑ b1 ϵ(b2),
and that B is cocommutative if and only if for all b ∈ B
Δ(b) = ∑ b2 ⊗ b1.

Definition A.137 (Antipode). Suppose we have the bialgebra A = (A, m, u, Δ, ϵ). A linear endomorphism S : A → A is called an antipode if the diagram in Figure A.3 commutes. In terms of the Sweedler notation,
ϵ(a) 1_A = ∑ a1 S(a2) = ∑ S(a1) a2.
(A.32)
A Hopf algebra is, therefore, a bialgebra with an antipode, and Hopf algebra morphisms are antipode-preserving bialgebra morphisms.
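A concrete toy example (ours, not the book's) is the group algebra K[Z_3] of the cyclic group of order three, which is a Hopf algebra with Δ(g) = g ⊗ g, ϵ(g) = 1 and antipode S(g) = g^{-1}; the sketch below checks the antipode axiom m(S ⊗ 1)Δ = u ∘ ϵ on an arbitrary element.

```python
# Toy Hopf algebra check (our illustration): the group algebra K[Z_3].
# Elements are dicts {group element: coefficient}, group elements 0, 1, 2
# under addition mod 3.

N = 3

def mult(x, y):            # the multiplication m, extended bilinearly
    out = {}
    for g, a in x.items():
        for h, b in y.items():
            k = (g + h) % N
            out[k] = out.get(k, 0) + a * b
    return out

def counit(x):             # ϵ(g) = 1, extended linearly
    return sum(x.values())

def antipode(x):           # S(g) = g^{-1}, extended linearly
    return {(-g) % N: a for g, a in x.items()}

def m_S_id_Delta(x):       # m ∘ (S ⊗ id) ∘ Δ, using Δ(g) = g ⊗ g
    out = {}
    for g, a in x.items():
        # (S ⊗ id)(g ⊗ g) = g^{-1} ⊗ g; multiplying gives the identity e = 0
        out[0] = out.get(0, 0) + a
    return out

x = {0: 2, 1: -1, 2: 5}            # an arbitrary element of K[Z_3]
unit = {0: counit(x)}              # u(ϵ(x)) = ϵ(x)·e
assert m_S_id_Delta(x) == unit     # the antipode axiom m(S ⊗ 1)Δ = u ∘ ϵ
```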
Figure A.3: Hopf commutative diagram.
Definition A.138 (Convolution product). Let A = (A, m, u) be an algebra and C = (C, Δ, ϵ) a coalgebra. We define the convolution product ∗ on Hom_K(C, A) for elements f1, f2 ∈ Hom_K(C, A) and c ∈ C, with Δ(c) = ∑ c1 ⊗ c2, as
(f1 ∗ f2)(c) = ∑ f1(c1) f2(c2).

Proposition A.139. (Hom_K(C, A), ∗, u ∘ ϵ) is an algebra: the convolution ∗ serves as its multiplication and u ∘ ϵ ∈ Hom_K(C, A) serves as its unit. Setting C = A, one obtains the algebra (End_K(A), ∗, u ∘ ϵ). We derive, therefore, that the antipode S for A is an inverse of the identity map 1_A in (End_K(A), ∗, u ∘ ϵ), which is uniquely determined due to the uniqueness of inverses.

Corollary A.140. Let C = (C, Δ, ϵ) be any K-coalgebra. C∗ = Hom_K(C, K) is an algebra with (f1 ∗ f2)(c) = ∑ f1(c1) f2(c2), so that C∗ is commutative if and only if C is cocommutative (Definition A.136).
Theorem A.141. Let H = (H, m, Δ, u, ϵ, S) be a Hopf algebra. Then S is a bialgebra homomorphism H → H^{op cop}; that is, for all x1, x2 ∈ H:
1. S(x1 x2) = S(x2) S(x1) and S(1) = 1;
2. Δ ∘ S = (S ⊗ S) ∘ Δ^{op} and ϵ ∘ S = ϵ, i.e., Δ(S(x)) = ∑ S(x2) ⊗ S(x1).
Definition A.142 (Antihomomorphism). An antihomomorphism of rings is a map θ : K1 → K2 where K1 , K2 are rings, such that θ(k1 k2 ) = θ(k2 )θ(k1 ).
Example A.143 (Hopf algebra). Let g be a Lie algebra. Given that U(g) := T(g)/I(g), we find for all x1, x2 ∈ g
S([x1, x2] − x1 x2 + x2 x1) = −[x1, x2] − (−x2)(−x1) + (−x1)(−x2) = −([x1, x2] − x1 x2 + x2 x1) ∈ I(g).
(A.33)
S is an antiautomorphism of U(g) that makes it a Hopf algebra.

Definition A.144 (Restricted dual of a K-algebra). For a K-algebra A the restricted dual is given by the set
A∘ = {F ∈ A∗ : F(I) = 0 for some I ⊲ A with dim_K(A/I) < ∞},
(A.34)
where I ⊲ A indicates that I is an ideal of A.
We can extend the concept of a module over a ring to a module over a K-algebra and a K-coalgebra, which also generates the concept of a comodule.

Definition A.145 (Module over a K-algebra). A left module M over a K-algebra A consists of a K-vector space M with a K-linear map λ : A ⊗ M → M, such that the diagrams shown in Figure A.4 commute.
Figure A.4: Module over a K-algebra.
Definition A.146 (Comodule). A right comodule M over a coalgebra C is a K-vector space M with a K-linear map
ρ : M → M ⊗ C,
(A.35)
such that the diagrams in Figure A.5 commute.
Figure A.5: Comodule over a K-coalgebra.
Proposition A.147 (Duality).
1. Let M be a right comodule for the coalgebra C. Then M is a left module for C∗ = Hom(C, K).
2. Let A be an algebra and M a left A-module. Then M is a right A∘-comodule if and only if dim_K(Am) < ∞ for all m ∈ M.
Definition A.148 (Rational module). An A-module M with dim_K(Am) < ∞ for all m ∈ M is called rational.
We end this set of definitions with some comments on tensor products of modules and comodules, as well as on homomorphisms between modules.
– Tensor products of modules: Let A be a bialgebra, and let V1 and V2 be left A-modules. Then V1 ⊗ V2 is a left A-module with a ⋅ (v1 ⊗ v2) = ∑ a1 v1 ⊗ a2 v2. If we now consider a third A-module V3, then coassociativity assures that (V1 ⊗ V2) ⊗ V3 ≅ V1 ⊗ (V2 ⊗ V3). In the specific case of the trivial left A-module K, where a ⋅ v = ϵ(a)v for a ∈ A and v ∈ K, it is clear that V1 ⊗ K ≅ V1 ≅ K ⊗ V1 as left modules. If A is cocommutative, then we also have V1 ⊗ V2 ≅ V2 ⊗ V1 as left modules, with the isomorphism given by the flip function τ : v1 ⊗ v2 ↦ v2 ⊗ v1.
– Tensor products of comodules: If B is a bialgebra and V1, V2 are right B-comodules, then V1 ⊗ V2 is a right comodule with v1 ⊗ v2 ↦ ∑ v1(0) ⊗ v2(0) ⊗ v1(1) v2(1).
– Homomorphisms of modules: Let H be a Hopf algebra and V1, V2 be left H-modules. Then Hom_K(V1, V2) is a left H-module with the action (h ⋅ F)(v1) = ∑ h1 F((S h2) v1) for h ∈ H and F ∈ Hom_K(V1, V2).
Now we shall introduce algebras that also carry topological structures.
A.20 Topological, C∗, and Banach algebras

Definition A.149 (Topological algebra). A topological algebra is an algebra X supplied with a nontrivial topology τ which is compatible with its linear structure, such that the map
X × X → X, (x1, x2) ↦ x1 x2,
is continuous.
Definition A.150 (Normed algebra). An algebra A having a norm is a normed algebra if the norm is submultiplicative, i.e., for all a1, a2 ∈ A
‖a1 a2‖ ≤ ‖a1‖ ‖a2‖.
(A.36)
Lemma A.151 (Continuity in normed algebras). If A is a normed algebra, then all the algebraic operations are continuous in the norm topology on A.

Definition A.152 (Involution on an algebra). An involution on an algebra A is a map ∗ : A → A, a ↦ a∗, such that the following conditions are fulfilled for all a1, a2 ∈ A and z, z′ ∈ ℂ:
1. conjugate linearity: (z a1 + z′ a2)∗ = z̄ a1∗ + z̄′ a2∗;
2. order reversing: (a1 a2)∗ = a2∗ a1∗;
3. involution: (a1∗)∗ = a1.
An algebra with involution is called a ∗-algebra. Since we consider here algebras over the complex numbers, this is called a C∗-algebra.
Definition A.153 (Banach space). A normed space X is said to be a Banach space if for each Cauchy sequence {x_n}_{n=1}^∞ ⊂ X there exists an element x ∈ X such that
lim_{n→∞} x_n = x.
Definition A.154 (Banach algebra). An algebra which is complete in the metric induced by its norm is a Banach algebra.
A.21 Nuclear multiplicative convex Hausdorff algebras and the Gel’fand spectrum

A topological algebra isomorphism is an algebra morphism which is also a homeomorphism. This allows us to build a number of structures on such isomorphisms, of which we shall introduce some and briefly discuss their properties.

Definition A.155 (Filter basis). A nonempty subset F of a partially ordered set (P, ≤) is said to be a filter if the following conditions are fulfilled:
1. for all x1, x2 ∈ F there exists an element x3 ∈ F such that x3 ≤ x1 and x3 ≤ x2;
2. for all x1 ∈ F and x2 ∈ P, the statement x1 ≤ x2 implies that x2 ∈ F;
3. a filter is proper if it is not equal to the whole set P.
Definition A.156 (Bases for compatible topologies). The filter basis ℬ in an algebra A determines a basis at 0 for a compatible topology for A if and only if:
1. ℬ is a neighborhood base at 0 for a topology which is compatible with the linear structure of A;
2. for each B ∈ ℬ there exists a B′ ∈ ℬ such that B′B′ ⊆ B.
The following type of induced topology is important in the topologization of the shuffle algebra.
Definition A.157 (Initial topology). Let X1 be an algebra, X2 a topological algebra with neighborhood filter V(0). Let A : X1 → X2 be a homomorphism. Then the filter A^{-1}(V(0)) defines a topology compatible with the linear structure of X1. The topology determined by this filter is said to be the initial topology induced by A.
Definition A.158 (Final topology). Let X1 be a topological algebra with neighborhood filter V(0), X2 an algebra, and A1 : X1 → X2 a homomorphism. It is easy to see that the collection 𝒜 of subsets U of X2 such that A1^{-1}(U) ∈ V(0) forms a base at 0 for a topology compatible with the linear structure of X2. For each U ∈ 𝒜 one can select A2 ∈ V(0) such that A2A2 ⊆ A1^{-1}(U). Hence
A1(A2) A1(A2) = A1(A2A2) ⊂ A1(A1^{-1}(U)) ⊂ U.
Since A1^{-1}(A1(A2)) ⊃ A2 ∈ V(0), then A1^{-1}(A1(A2)) ∈ V(0), i.e., A1(A2) ∈ 𝒜, so that 𝒜 is a base at 0 for a topology which is compatible with the algebraic structure of X2. The topology generated by 𝒜 is called the final topology for X2 determined by the homomorphism A1.
Suppose V is a vector space over K, a field or a subfield of the complex numbers.

Definition A.159 (Convex space via convex sets). A subset C in V is called:
1. convex: if for each x1, x2 ∈ C and all α ∈ [0, 1], αx1 + (1 − α)x2 ∈ C;
2. circled: if for all x ∈ C, λx ∈ C whenever |λ| = 1;
3. a cone: if for every x ∈ C and 0 ≤ λ ≤ 1, λx ∈ C;
4. balanced: if for all x ∈ C, λx ∈ C whenever |λ| ≤ 1;
5. absorbent: if the union of αC over all α > 0 is all of V, or equivalently, for each x ∈ V there is some α > 0 with αx ∈ C;
6. absolutely convex: if it is balanced and convex.
A locally convex topological vector space is a topological vector space in which the origin has a local base of absolutely convex absorbent sets. Because translation is continuous, all translations are homeomorphisms, so each base for the neighborhoods of the origin can be translated into a base for the neighborhoods of any given vector.

Definition A.160 (Convex space via seminorms). A seminorm on V is a map F : V → ℝ such that:
1. F is positive or positive semidefinite: F(x) ≥ 0;
2. F is positive homogeneous or positive scalable: F(λx) = |λ| F(x) for each scalar λ. In particular, F(0) = 0;
3. F is subadditive: F(x1 + x2) ≤ F(x1) + F(x2).
If F obeys positive definiteness, that is, if F(x) = 0 implies x = 0, then F is a norm. A locally convex space is defined to be a vector space V with a family of seminorms {F_α}, α ∈ A. The space possesses the initial topology of the seminorms; that is to say, it is the weakest topology for which all mappings x ↦ F_α(x − x0), for x0 ∈ V and α ∈ A, are continuous. A base of neighborhoods of x0 for this topology is obtained in the following way: for each finite subset A′ ⊆ A and each ε > 0, we set
U_{A′,ε}(x0) = {x ∈ V : F_α(x − x0) < ε, α ∈ A′}.
Continuity of the vector space operations follows from properties 2 and 3 above. The resulting topological vector space is locally convex because each U_{A′,ε}(0) is absolutely convex and absorbent.
Definition A.161 (Multiplicative convexity (m-convex)). A subset 𝒜 of an algebra A is multiplicative (idempotent) if 𝒜² = 𝒜𝒜 ⊂ 𝒜. It is multiplicatively convex or m-convex if it is convex and multiplicative. It is absolutely m-convex if it is balanced and m-convex.
Definition A.162 (Multiplicative seminorm). A seminorm N on an algebra A is multiplicative if for all x1 , x2 ∈ A N(x1 x2 ) ≤ N(x1 )N(x2 ).
Definition A.163 (Locally m-convex algebras and Fréchet algebras). A topological algebra (X, τ) is a locally m-convex algebra (LMC algebra) if there is a base of m-convex sets for V(0). X is a locally convex algebra if it is a topological algebra with a locally convex linear space structure. If, in addition to being locally m-convex, the topology τ is Hausdorff, we say that X is an LMCH algebra and that τ is LMCH. An LMC algebra which is a complete metrizable topological space is a Fréchet algebra.
Proposition A.164. A topological algebra X is locally m-convex if and only if its topology is generated by a family of multiplicative seminorms.

The following space is a generalization of a Banach space that is locally convex and complete with respect to a translation-invariant metric. However, the metric does not need to arise from a norm (a seminorm suffices).

Definition A.165 (Fréchet space I). A topological vector space X is called a Fréchet space if it possesses the following properties:
1. it is Hausdorff;
2. it is complete with respect to the family of seminorms;
3. the topology on X can be induced by a countable family of seminorms ‖⋅‖_l, l = 0, 1, 2, ..., i.e., a subset U ⊂ X is open if and only if for all elements u ∈ U there exist K ≥ 0 and ϵ > 0 such that {v : ‖v − u‖_l < ϵ for all l ≤ K} is a subset of U.
Definition A.166 (Hilbert space). One speaks of a Hilbert space ℋ if there are:
1. a positive definite inner product on the complex linear space X, ⟨⋅, ⋅⟩ : X × X → ℂ, such that
(a) ⟨x1, x1⟩ ≥ 0, and ⟨x1, x1⟩ = 0 if and only if x1 = 0;
(b) ⟨x1, x2 + x3⟩ = ⟨x1, x2⟩ + ⟨x1, x3⟩;
(c) ⟨x1, λx2⟩ = λ⟨x1, x2⟩;
(d) ⟨x1, x2⟩ = ⟨x2, x1⟩∗, where ∗ denotes complex conjugation.
Under these conditions, ℋ is said to be a pre-Hilbert space;
2. a collection of vectors (x_n) which are orthonormal, given that ⟨x_m, x_n⟩ = δ_mn.
Definition A.167 (Compact operator). An operator ℒ acting in a Hilbert space ℋ, ℒ : ℋ → ℋ, is a compact operator if it can be presented as
ℒ = ∑_{n=1}^{N} ρ_n ⟨f_n, ⋅⟩ g_n, 1 ≤ N ≤ ∞,
where f_1, ..., f_N and g_1, ..., g_N are (not necessarily complete) orthonormal sets. Here, ρ_1, ..., ρ_N are a set of real numbers, the singular values of the operator, obeying ρ_n → 0 if N = ∞. The bracket ⟨⋅, ⋅⟩ is the scalar product on the Hilbert space; the sum on the right-hand side must converge in the norm.
Definition A.168 (Singular values of a compact operator). The singular values, or snumbers of a compact operator T : ℋ1 → ℋ2 acting between Hilbert spaces ℋ1 and ℋ2 , are the square roots of the eigenvalues of the nonnegative selfadjoint operator T ∗ T : ℋ1 → ℋ1 , where T ∗ stands for the adjoint of T .
Definition A.169 (Nuclear operator). A compact operator is nuclear or trace-class if
∑_{n=1}^{∞} ρ_n < ∞.
The most important observation for our purposes is now that for a nuclear operator in a Hilbert space one can define the trace, which is finite and independent of the basis.

Proposition A.170 (Finite trace of a nuclear operator on a Hilbert space). Given an orthonormal basis {ψ_n} for the Hilbert space, one defines the trace as
tr ℒ = ∑_n ⟨ψ_n, ℒψ_n⟩,
(A.37)
where the sum converges absolutely and is independent of the basis. Furthermore, this trace is identical to the sum over the eigenvalues of ℒ.

Proposition A.171 (Trace of a nuclear operator on a Banach space). Let A1, A2 be two Banach spaces, and let A1∗ be the dual of A1, that is, the set of all continuous linear functionals on A1 with the usual norm. Then the operator ℒ : A1 → A2 is said to be nuclear of order q if there exist sequences of vectors {g_n} ∈ A2 with ‖g_n‖ ≤ 1, functionals {F_n∗} ∈ A1∗ with ‖F_n∗‖ ≤ 1, and complex numbers {ρ_n} with
inf{p ≥ 1 : ∑_n |ρ_n|^p < ∞} = q,
such that the operator can be presented as
ℒ = ∑_n ρ_n F_n∗(⋅) g_n,
with the sum converging in the operator norm.
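The basis independence stated in Proposition A.170 is easy to verify in finite dimensions; the sketch below (ours, not the book's) computes Σ_n ⟨ψ_n, ℒψ_n⟩ for a 2×2 operator in the standard basis and in a rotated orthonormal basis.

```python
# Finite-dimensional illustration (ours) of Proposition A.170: the trace
# tr L = Σ_n <ψ_n, L ψ_n> gives the same value in any orthonormal basis.
import math

L = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]           # a 2x2 operator with eigenvalue sum 5

def inner(u, v):                  # <u, v>, conjugating the first slot
    return sum(a.conjugate() * b for a, b in zip(u, v))

def apply(L, v):
    return [sum(L[i][j] * v[j] for j in range(2)) for i in range(2)]

def trace_in_basis(L, basis):
    return sum(inner(psi, apply(L, psi)) for psi in basis)

standard = [[1, 0], [0, 1]]
c, s = math.cos(0.7), math.sin(0.7)     # a rotated orthonormal basis
rotated = [[c, s], [-s, c]]

t1, t2 = trace_in_basis(L, standard), trace_in_basis(L, rotated)
assert abs(t1 - t2) < 1e-12 and abs(t1 - 5) < 1e-12   # tr L = 2 + 3 = 5
```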
Definition A.172 (Weak topology). The collection of all unions of finite intersections of sets F_i^{-1}(O_i), for maps F_i : X1 → X2i, where i ∈ I and O_i is an open set in X2i, is a topology. It is called the weak topology on X1 generated by the (F_i), i ∈ I, and is denoted by σ(X1, (F_i), i ∈ I).
Definition A.173 (Weak∗ topology). The weak∗ topology on X is the topology σ(X , (F ), F ∈ X ∗ ).
Definition A.174 (Vanish at infinity). If X is a locally compact Hausdorff space, then a continuous function F on X vanishes at infinity if {x ∈ X : |F(x)| ≥ ϵ} is compact for all ϵ > 0.
Definition A.175 (Gel’fand space or spectrum). Let A be a commutative Banach algebra. Then we denote the collection of nonzero complex homomorphisms H: A→ℂ by △(A). Elements of the Gel’fand space are called characters.
Theorem A.176. Let A be a commutative unital Banach algebra. Then:
1. △(A) ≠ ∅;
2. J is a maximal ideal in A if and only if J = ker H for some H ∈ △(A);
3. ‖H‖ = 1 for all H ∈ △(A);
4. for all a ∈ A: σ(a) = {H(a) : H ∈ △(A)}.

Lemma A.177. If A is a unital Banach algebra, then each proper ideal is included in a maximal ideal, and each maximal ideal is closed.

Theorem A.178 (Gelfand–Mazur theorem). A unital Banach algebra wherein each nonzero element is invertible is isometrically isomorphic to ℂ.
Definition A.179 (Gel’fand transform). Let A be a commutative Banach algebra with △(A) nonempty. The Gel’fand transform of a ∈ A is the function
â : △(A) → ℂ, H ↦ â(H) := H(a).
The space △(A) is called the spectrum of A.
Definition A.180 (Gel’fand topology). The Gel’fand topology on △(A) is the smallest topology making each â continuous.
Lemma A.181 (Gel’fand topology). The Gel’fand topology on △(A) is the relative topology on △(A) viewed as a subset of A∗ with the weak∗ topology.
B Notations and conventions in quantum field theory

B.1 Vectors and tensors

In general, we use the same conventions as in [60]. We will work in natural units:
ℏ = c = 1.
(B.1)
For the Minkowski metric, we take the common convention
g^{μν} = diag(1, −1, −1, −1),
(B.2)
where Greek indices run over 0, 1, 2, 3 (for t, x, y, z). When we want to denote the spatial components only, we use Roman indices like i, j, etc. We use the Einstein summation convention throughout the whole book, meaning that repeated indices are to be summed over. We typically write a 4-vector with its Minkowski indices, a 3-vector without indices, and a 2-vector (the transversal components) with a subscript ⊥:
p^μ = (p⁰, p¹, p², p³) = (p⁰, p) = (p⁰, p⊥, p³),
(B.3)
while a length is mostly denoted alike, be it the length of a 4-, 3- or 2-vector. The difference should be clear from context, but when needed for clarity, we use |p| and |p⊥|. The scalar product is fully defined by the metric mentioned above:
x⋅p = x⁰p⁰ − x⋅p.
(B.4)
This implies that we can define a vector with a lower index as
p_μ = (p⁰, −p¹, −p², −p³),
(B.5)
such that
x⋅p = x^μ p_μ.
(B.6)
Note that the index moves up or down when placing the coordinate in the denominator, as is the case for the derivative:
∂/∂x^μ = ∂_μ.
(B.7)
The position four-vector combines time and three-position, while the four-momentum combines energy and three-momentum:
x^μ = (t, x), p^μ = (E, p).
(B.8)
https://doi.org/10.1515/9783110651690007
A particle that sits on its mass shell (on-shell for short) has
p² = E² − p² = m².
(B.9)
Last, we define the symmetrization (...) and antisymmetrization [...] of a tensor as
A^{(μν)} = (1/2)(A^{μν} + A^{νμ}),
(B.10a)
A^{[μν]} = (1/2)(A^{μν} − A^{νμ}).
(B.10b)
Symmetrizing an antisymmetric tensor returns zero; this implies
A^{(μν)} B_{[μν]} = 0.
(B.11)
It is straightforward to generalize this definition to tensors of higher rank:
A^{(μ1⋯μn)} = (1/n!)(A^{μ1⋯μn} + all permutations),
(B.12a)
A^{[μ1⋯μn]} = (1/n!)(A^{μ1⋯μn} − all odd permutations + all even permutations).
(B.12b)
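A quick numeric sanity check (our sketch, not from the book) of the contraction identity (B.11): symmetrizing one arbitrary rank-2 tensor, antisymmetrizing another, and contracting component by component gives zero.

```python
# Numeric check (our sketch) of (B.11): the component-wise contraction of a
# symmetrized tensor with an antisymmetrized one vanishes identically.

A = [[i * 7 + j * 3 + 1 for j in range(4)] for i in range(4)]  # arbitrary
B = [[i * 2 - j * 5 + 4 for j in range(4)] for i in range(4)]  # arbitrary

sym  = [[0.5 * (A[m][n] + A[n][m]) for n in range(4)] for m in range(4)]
asym = [[0.5 * (B[m][n] - B[n][m]) for n in range(4)] for m in range(4)]

contraction = sum(sym[m][n] * asym[m][n] for m in range(4) for n in range(4))
assert abs(contraction) < 1e-12
```

Lowering the indices with the (symmetric) metric preserves the (anti)symmetry, so checking the plain component contraction suffices.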
B.2 Spinors and gamma matrices

Any field with half-integer spin, i.e., a Dirac field, anticommutes:
ψ(x)ψ(y) = −ψ(y)ψ(x).
(B.13)
We define gamma matrices by the anticommutation relations {γ μ , γ ν } ≡ 2 g μν 1n ,
(B.14)
with the following additional property:
(γ^μ)† = γ⁰ γ^μ γ⁰.
(B.15)
Then we can define the Dirac equation for a particle field ψ: (i 𝜕/ − m) ψ = 0,
(B.16)
where the slash is a shortcut notation for p/ = γ μ pμ .
(B.17)
We can identify an antiparticle field with ψ̄ if we define
ψ̄ = ψ† γ⁰,
(B.18)
which satisfies a slightly adapted Dirac equation:
i ∂_μ ψ̄ γ^μ + m ψ̄ = 0.
(B.19)
We can expand Dirac fields in a set of plane waves:
ψ(x) = u^s(p) e^{−ip⋅x} (p² = m², p⁰ > 0),
(B.20a)
ψ(x) = v^s(p) e^{+ip⋅x} (p² = m², p⁰ < 0),
(B.20b)
where s is a spin index. If we define
ū = u† γ⁰, v̄ = v† γ⁰,
(B.21)
we can find the completeness relations by summing over spin:
∑_s u^s(p) ū^s(p) = p/ + m,
(B.22a)
∑_s v^s(p) v̄^s(p) = p/ − m.
(B.22b)
We will identify:
– u with an incoming fermion;
– ū with an outgoing fermion;
– v̄ with an incoming antifermion;
– v with an outgoing antifermion.
If we define
γ⁵ = i γ⁰γ¹γ²γ³ = −(i/4!) ε^{μνρσ} γ_μ γ_ν γ_ρ γ_σ,
(B.23a)
γ^{μν} = γ^{[μ} γ^{ν]} = (1/2)(γ^μ γ^ν − γ^ν γ^μ),
(B.23b)
we can construct a complete Dirac basis:
1, γ^μ, γ^{μν}, γ^μ γ⁵, γ⁵.
(B.24)
We will identify:
– 1 with a scalar;
– γ^μ with a vector;
– γ^{μν} with a tensor;
– γ^μ γ⁵ with a pseudovector;
– γ⁵ with a pseudoscalar.
Furthermore, γ⁵ has the following properties:
(γ⁵)† = γ⁵, (γ⁵)² = 1, {γ⁵, γ^μ} = 0.
(B.25)
Let us list some contraction identities for gamma matrices in ω dimensions:
γ^μ γ_μ = ω,
(B.26a)
γ^μ γ^ν γ_μ = (2 − ω) γ^ν,
(B.26b)
γ^μ γ^ν γ^ρ γ_μ = 4 g^{νρ} + (ω − 4) γ^ν γ^ρ,
(B.26c)
γ^μ γ^ν γ^ρ γ^σ γ_μ = −2 γ^σ γ^ρ γ^ν + (4 − ω) γ^ν γ^ρ γ^σ,
(B.26d)
and some trace identities:
tr(1) = ω,
(B.27a)
tr(odd number of γ's) = 0,
(B.27b)
tr(γ^μ γ^ν) = 4 g^{μν},
(B.27c)
tr(γ^μ γ^ν γ^ρ γ^σ) = 4 (g^{μν} g^{ρσ} − g^{μρ} g^{νσ} + g^{μσ} g^{νρ}).
(B.27d)
B.3 Light-cone coordinates

Light-cone coordinates form a useful basis to represent 4-vectors. For an arbitrary vector k^μ, they are defined by
k⁺ = (1/√2)(k⁰ + k³),
(B.28a)
k⁻ = (1/√2)(k⁰ − k³),
(B.28b)
k⊥ = (k¹, k²).
(B.28c)
We will represent the plus-component first, i.e.,
k^μ = (k⁺, k⁻, k⊥).
(B.29)
One often encounters in the literature the notation (k⁻, k⁺, k⊥), but this is merely a matter of convention. The factor 1/√2 normalizes the transformation to unit Jacobian, such that
d⁴k = dk⁺ dk⁻ dk⊥.
(B.30)
It is straightforward to show that the scalar product has the form
k⋅p = k⁺p⁻ + k⁻p⁺ − k⊥⋅p⊥,
(B.31a)
k² = 2k⁺k⁻ − k⊥².
(B.31b)
This implies that the metric becomes off-diagonal:
g_LC^{μν} =
( 0  1  0  0 )
( 1  0  0  0 )
( 0  0 −1  0 )
( 0  0  0 −1 ).
(B.32)
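That the light-cone components (B.28) together with the off-diagonal metric (B.32) reproduce the Minkowski scalar product can be checked numerically; the sketch below is our illustration, not from the book.

```python
# Quick check (our sketch): k·p computed with the light-cone metric (B.32)
# equals the Minkowski scalar product k0 p0 - k·p for arbitrary 4-vectors.
import math

def to_lc(v):                     # (v0, v1, v2, v3) -> (v+, v-, v1, v2)
    s = 1 / math.sqrt(2)
    return (s * (v[0] + v[3]), s * (v[0] - v[3]), v[1], v[2])

def dot_mink(a, b):
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

def dot_lc(a, b):                 # k·p = k+ p- + k- p+ - k⊥·p⊥
    return a[0] * b[1] + a[1] * b[0] - a[2] * b[2] - a[3] * b[3]

k = (5.0, 1.0, -2.0, 3.0)
p = (2.0, -1.0, 0.5, 4.0)
assert abs(dot_mink(k, p) - dot_lc(to_lc(k), to_lc(p))) < 1e-12
assert abs(dot_mink(k, k) - dot_lc(to_lc(k), to_lc(k))) < 1e-12
```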
We will drop the index LC when clear from context. Note that this basis is not orthonormal. Note also that
g^{μν} g_{νρ} = δ^μ_ρ, g^{μν} g_{μν} = 4,
(B.33)
just like the Cartesian metric. We can also define two lightlike basis vectors:
n₊^μ = (1⁺, 0⁻, 0⊥),
(B.34a)
n₋^μ = (0⁺, 1⁻, 0⊥).
(B.34b)
They are lightlike vectors, and maximally nonorthogonal:
n₊² = 0, n₋² = 0, n₊⋅n₋ = 1.
(B.35)
Watch out, as lowering the index switches the lightlike components because of the form of the metric:
n₊_μ = (0⁺, 1⁻, 0⊥),
(B.36a)
n₋_μ = (1⁺, 0⁻, 0⊥),
(B.36b)
such that they project out the other lightlike component of a vector:
k⋅n₊ = k⁻, k⋅n₋ = k⁺.
(B.37)
In other words, we can write
k^μ = (k⋅n₋) n₊^μ + (k⋅n₊) n₋^μ + k⊥^μ.
(B.38)
The switching of plus and minus components is also apparent when using Dirac matrices in LC coordinates, e.g.,
{γ⁺, k/} = 2 g^{+μ} k_μ = 2k₋ = 2k⁺ ⇒ γ⁺ k/ = 2k⁺ − k/ γ⁺.
(B.39)
Note that
(γ⁺)² = (γ⁻)² = 0, (1/2){γ⁺, γ⁻} = 1,
such that equation (B.14) remains valid in light-cone coordinates. We can use the lightlike basis vectors to construct a metric for nothing but the transversal part:
g⊥^{μν} = g^{μν} − 2 n₊^{(μ} n₋^{ν)}
(B.40a)
= diag(0, 0, −1, −1).
(B.40b)
Note that
g⊥^{μν} g⊥_{νρ} = δ^μ_ρ − n₊^μ n₋_ρ − n₋^μ n₊_ρ, g⊥^{μν} g⊥_{μν} = 2.
(B.41)
Last, we can define an antisymmetric metric:
ε⊥^{μν} = ε^{+−μν},
(B.42a)
whose only nonvanishing components are ε⊥^{12} = −ε⊥^{21} = +1,
(B.42b)
where we adopt the convention ε^{0123} = ε^{+−12} = +1.
B.4 Fourier transforms and distributions

The Heaviside step function is defined as
θ(x) = 0 for x < 0, θ(x) = 1 for x > 0,
(B.43)
and is undefined for x = 0. The Dirac δ-function is defined as its derivative:
δ(x) = dθ(x)/dx ⇒ ∫dx δ(x) = 1,
(B.44)
and is zero everywhere, except at x = 0. A generalization to n dimensions is straightforward:
∫dⁿx δⁿ(x) = 1.
(B.45)
The most important use of the Dirac δfunction is the sifting property, which follows straight from (B.45): ∫dn x f (x) δn (x − t) = f (t).
(B.46)
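The sifting property can be illustrated numerically (our sketch, not from the book) by approximating the δ-function with a narrow normalized Gaussian and the integral with a Riemann sum; the width `eps`, step `h`, and window are arbitrary choices of ours.

```python
# Numeric illustration (ours) of the sifting property (B.46), with δ(x)
# approximated by a narrow normalized Gaussian and ∫dx by a Riemann sum.
import math

def gaussian_delta(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def sift(f, t, eps=1e-3, h=1e-4, half_width=0.05):
    n = int(half_width / h)
    xs = [t + i * h for i in range(-n, n + 1)]
    return sum(f(x) * gaussian_delta(x - t, eps) for x in xs) * h

f = lambda x: x ** 3 - 2 * x + 1
t = 0.7
assert abs(sift(f, t) - f(t)) < 1e-3   # ∫ dx f(x) δ(x - t) ≈ f(t)
```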
When dealing with on-shell conditions, we often encounter the combination of a Heaviside θ and a Dirac δ function. To save space, we define the shorthand notation
δ₊(p² − m²) = δ(p² − m²) θ(p⁰).
(B.47)
When dealing with Fourier transforms, we will use the following conventions:
f(x) = ∫ d⁴k/(16π⁴) f̃(k) e^{−ik⋅x},
(B.48a)
f̃(k) = ∫ d⁴x f(x) e^{ik⋅x}.
(B.48b)
The tilde will always be omitted, as the function argument specifies clearly enough whether we are dealing with the coordinate or momentum representation. Note that due to the Minkowski metric, Fourier transforms over spatial components have the signs in their exponents flipped:
f(x) = ∫ d³k/(8π³) f̃(k) e^{ik⋅x},
(B.49a)
f̃(k) = ∫ d³x f(x) e^{−ik⋅x},
(B.49b)
and the same for two-dimensional Fourier transforms. An 'empty' Fourier transform gives a δ-function:
∫ dⁿk/(2π)ⁿ e^{−ik⋅x} = δ⁽ⁿ⁾(x),
(B.50a)
∫ dⁿx e^{ik⋅x} = (2π)ⁿ δ⁽ⁿ⁾(x).
(B.50b)
B.5 Feynman rules for QCD

The full Lagrangian for QCD is given by
ℒ = ψ̄ (i ∂/ − m) ψ − (1/4)(∂_μ A^a_ν − ∂_ν A^a_μ)² − g ψ̄ A/ ψ + g f^{abc} (∂_μ A^a_ν) A^{μb} A^{νc} − (1/2) g² f^{abx} f^{xcd} A^a_μ A^b_ν A^{μc} A^{νd},
(B.51)
where $\slashed{A} = A^a_\mu \gamma^\mu t^a$. This gives rise to the following Feynman rules.

External quark and gluon lines:
$$\text{incoming quark} = u^s_i(p), \tag{B.52a}$$
$$\text{outgoing quark} = \bar u^s_i(p), \tag{B.52b}$$
$$\text{incoming antiquark} = \bar v^s_i(p), \tag{B.52c}$$
$$\text{outgoing antiquark} = v^s_i(p), \tag{B.52d}$$
$$\text{incoming gluon} = \varepsilon^a_\mu(p), \tag{B.52e}$$
$$\text{outgoing gluon} = \varepsilon^{a*}_\mu(p). \tag{B.52f}$$

Quark propagator:
$$i\,\delta_{ij}\,\frac{\slashed{p} + m}{p^2 - m^2 + i\epsilon}. \tag{B.52g}$$

Gluon propagator:
$$\frac{-i\,\delta^{ab}}{k^2 + i\varepsilon}\left[ g^{\mu\nu} - (1 - \xi)\,\frac{k^\mu k^\nu}{k^2} \right] \quad \text{(Lorentz gauge)}, \tag{B.52h}$$
$$\frac{-i\,\delta^{ab}}{k^2 + i\varepsilon}\left[ g^{\mu\nu} - \frac{2 k^{(\mu} n^{\nu)}}{k\cdot n} + \xi\,\frac{k^2\, k^\mu k^\nu}{(k\cdot n)^2} \right] \quad \text{(light-cone gauge)}. \tag{B.52i}$$

Quark–gluon vertex:
$$-i\, g\, \gamma^\mu\, (t^a)_{ji}. \tag{B.52j}$$

Three-gluon vertex, with incoming momenta k, p, q on the legs (μ, a), (ν, b), (ρ, c):
$$-g f^{abc}\left[ g^{\mu\nu}(k - p)^\rho + g^{\nu\rho}(p - q)^\mu + g^{\rho\mu}(q - k)^\nu \right]. \tag{B.52k}$$

Four-gluon vertex, with legs (μ, a), (ν, b), (ρ, c), (σ, d):
$$-i g^2\left[ f^{abx} f^{xcd}\,(g^{\mu\rho} g^{\nu\sigma} - g^{\mu\sigma} g^{\nu\rho}) - f^{acx} f^{xbd}\,(g^{\mu\sigma} g^{\nu\rho} - g^{\mu\nu} g^{\rho\sigma}) + f^{adx} f^{xbc}\,(g^{\mu\nu} g^{\rho\sigma} - g^{\mu\rho} g^{\nu\sigma}) \right]. \tag{B.52l}$$

The sum over gluon polarization states depends on the gauge, and equals
$$\sum_{\text{pol}} \varepsilon_\mu(k)\,\varepsilon^*_\nu(k) = -g_{\mu\nu} \quad \text{(Lorentz)}, \tag{B.53a}$$
$$\sum_{\text{pol}} \varepsilon_\mu(k)\,\varepsilon^*_\nu(k) = -g_{\mu\nu} + \frac{2 k_{(\mu} n_{\nu)}}{n\cdot k} \quad \text{(LC)}, \tag{B.53b}$$
where the light-cone gauge is defined by $n_- \cdot A = A^+ = 0$.
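The light-cone polarization sum (B.53b) is transverse to both k and n when k is on shell. The check below is illustrative (the light-like momentum and gauge vector are arbitrary choices, not from the book):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
k = np.array([1.0, 0.6, 0.8, 0.0])                  # light-like gluon momentum, k^2 = 0
n = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)    # light-like gauge vector n_-

nk = n @ g @ k                                      # n . k
# polarization sum (B.53b) with both indices up: -g + (k n + n k)/(n.k)
pol_sum = -g + (np.outer(k, n) + np.outer(n, k)) / nk

print(np.isclose(k @ g @ k, 0.0))             # on-shell check -> True
print(np.allclose((g @ k) @ pol_sum, 0.0))    # k_mu Sum^{mu nu} = 0 -> True
print(np.allclose((g @ n) @ pol_sum, 0.0))    # n_mu Sum^{mu nu} = 0 -> True
```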
C Color algebra

C.1 Basics

C.1.1 Representations

Let us revise some basic color algebra. As is well known, the group that governs QCD is SU(3), but for the sake of generality we list some basic rules and derive some properties for SU(N). The latter is fully defined by d_A = N² − 1 linearly independent Hermitian generators t^a and their commutation relations
$$[t^a, t^b] = i f^{abc}\, t^c,$$
(C.1)
where the f^{abc} are real and fully antisymmetric constants (the so-called structure constants). In practice we will work with representations of the algebra, where the generators are represented by d_R × d_R Hermitian matrices, with d_R the dimension of the representation. Two representations are of particular interest. The fundamental representation has dimension d_F = N and forms a complete basis for the algebra if complemented with the identity matrix: (1, t^a). The second significant representation is the adjoint representation, which is constructed from the structure constants, (T^a)_{bc} = −i f^{abc}, and has dimension d_A = N² − 1. We will make the distinction in notation by writing the fundamental with a lowercase t and the adjoint with an uppercase T. Note that in the literature several different notations exist (for instance t_F and t_A).

C.1.2 Properties

The generator matrices are traceless in every representation: tr(t^a_R) = 0. The trace of a product of two generators vanishes unless they are equal:
$$\mathrm{tr}(t^a_R\, t^b_R) = D_R\, \delta^{ab}.$$
(C.2)
D_R is a constant depending on the representation; by convention, D_R = 1/2 almost always. Summing all squared generators gives an operator that commutes with all others, the so-called Casimir operator:
$$t^a_R\, t^a_R = C_R\, \mathbf{1}.$$
(C.3)
https://doi.org/10.1515/9783110651690008
Again, C_R is a constant depending on the representation. Both constants can easily be related:
$$t^a_R t^a_R = C_R\,\mathbf{1} \;\Rightarrow\; \mathrm{tr}(t^a_R t^a_R) = C_R\,\mathrm{tr}(\mathbf{1}) = C_R\, d_R,$$
$$\mathrm{tr}(t^a_R t^b_R) = D_R\,\delta^{ab} \;\Rightarrow\; \mathrm{tr}(t^a_R t^a_R) = D_R\,\delta^{aa} = D_R\, d_A, \tag{C.4}$$
$$\Rightarrow\; \frac{C_R}{d_A} = \frac{D_R}{d_R}.$$
Let us list the constants for the fundamental and the adjoint representation:
$$D_F = \frac{1}{2}, \qquad D_A = 2 D_F d_F = N,$$
$$C_F = D_F\,\frac{d_A}{d_F} = \frac{N^2 - 1}{2N}, \qquad C_A = D_A = N,$$
$$d_F = N, \qquad d_A = d_F^2 - 1 = N^2 - 1.$$
These are the only properties that are representation independent. Because the fundamental representation forms a complete basis, we can derive additional properties that are not valid in other representations. First of all, the anticommutator has to be an element of the algebra, and thus a linear combination of the identity and the generators:
$$\{t^a, t^b\} = \frac{1}{N}\,\delta^{ab}\,\mathbf{1} + d^{abc}\, t^c.$$
(C.5)
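The constants listed above can be verified numerically. The helper su_generators below is not from the book; it is a standard construction of an orthonormal Hermitian basis (for N = 3 it spans the same space as the Gell-Mann matrices λ^a/2):

```python
import numpy as np

def su_generators(N):
    """Fundamental SU(N) generators t^a, normalized so that tr(t^a t^b) = delta^{ab}/2."""
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 0.5
            A = np.zeros((N, N), complex); A[j, k] = -0.5j; A[k, j] = 0.5j
            gens += [S, A]
    for l in range(1, N):                      # diagonal (Cartan) generators
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(D / np.sqrt(2 * l * (l + 1)))
    return gens

N = 3
t = su_generators(N)
d_A = len(t)                                   # d_A = N^2 - 1 = 8

# D_F = 1/2 from (C.2)
ok_DF = all(np.isclose(np.trace(t[a] @ t[b]), 0.5 * (a == b))
            for a in range(d_A) for b in range(d_A))

# Casimir (C.3): sum_a t^a t^a = C_F * 1, with C_F = (N^2 - 1)/(2N) = 4/3
casimir = sum(ta @ ta for ta in t)
ok_CF = np.allclose(casimir, (N**2 - 1) / (2 * N) * np.eye(N))

print(d_A, ok_DF, ok_CF)                       # 8 True True
```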
The constant in front of the identity was calculated by taking the trace and comparing to equation (C.2), while d^{abc} can be retrieved, as well as f^{abc}, from
$$f^{abc} = -\frac{i}{D_R}\,\mathrm{tr}\left( [t^a_R, t^b_R]\, t^c_R \right), \tag{C.6a}$$
$$d^{abc} = 2\,\mathrm{tr}\left( \{t^a, t^b\}\, t^c \right). \tag{C.6b}$$
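With the same su_generators helper as above (an illustrative orthonormal basis, not the book's own code), the traces (C.6) give f^{abc} and d^{abc} explicitly, and the claimed symmetry properties can be confirmed:

```python
import numpy as np

def su_generators(N):                  # orthonormal basis with tr(t^a t^b) = delta^{ab}/2
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 0.5
            A = np.zeros((N, N), complex); A[j, k] = -0.5j; A[k, j] = 0.5j
            gens += [S, A]
    for l in range(1, N):
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(D / np.sqrt(2 * l * (l + 1)))
    return gens

N = 3
t = su_generators(N)
dA = len(t)

f = np.zeros((dA, dA, dA))
d = np.zeros((dA, dA, dA))
for a in range(dA):
    for b in range(dA):
        comm = t[a] @ t[b] - t[b] @ t[a]
        anti = t[a] @ t[b] + t[b] @ t[a]
        for c in range(dA):
            f[a, b, c] = (-2j * np.trace(comm @ t[c])).real   # (C.6a) with D_R = 1/2
            d[a, b, c] = (2 * np.trace(anti @ t[c])).real     # (C.6b)

print(np.allclose(f, -np.swapaxes(f, 0, 1)))   # f antisymmetric -> True
print(np.allclose(d, np.swapaxes(d, 0, 1)))    # d symmetric -> True
print(np.allclose(np.einsum('aab->b', d), 0))  # d^{aab} = 0 -> True

# anticommutator decomposition (C.5) for a sample index pair
a, b = 0, 4
lhs = t[a] @ t[b] + t[b] @ t[a]
rhs = (a == b) / N * np.eye(N) + sum(d[a, b, c] * t[c] for c in range(dA))
print(np.allclose(lhs, rhs))                   # True
```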
It is easy to check that the d^{abc} are fully symmetric and that they vanish when contracting any two indices: d^{aab} = d^{baa} = d^{aba} = 0. By combining the commutation rules with the anticommutation rules we can find another useful property:
$$t^a t^b = \frac{\{t^a, t^b\} + [t^a, t^b]}{2} = \frac{1}{2N}\,\delta^{ab}\,\mathbf{1} + \frac{1}{2}\, h^{abc}\, t^c, \tag{C.7}$$
where we defined
$$h^{abc} = d^{abc} + i f^{abc}. \tag{C.8}$$
$h^{abc}$ is Hermitian and cyclic in its indices:
$$h^{abc} = h^{bca} = h^{cab}, \qquad \bar h^{abc} = h^{bac} = h^{cba} = h^{acb}, \qquad h^{aab} = h^{baa} = h^{aba} = 0.$$
A last useful property is the Fierz identity:
$$(t^a)_{\alpha\beta}\,(t^a)_{\gamma\delta} = \frac{1}{2}\,\delta_{\alpha\delta}\,\delta_{\beta\gamma} - \frac{1}{2N}\,\delta_{\alpha\beta}\,\delta_{\gamma\delta}. \tag{C.9}$$
It is straightforward to prove this identity; first we write a general element of the fundamental representation as
$$X = c_0\,\mathbf{1} + i\, c_a\, t^a, \tag{C.10}$$
where c_0 and c_a are easily calculated:
$$c_0 = \frac{1}{N}\,\mathrm{tr}(X), \qquad c_a = -2i\,\mathrm{tr}(X t^a).$$
We then get the requested result by calculating
$$\frac{\partial (X)_{\alpha\beta}}{\partial (X)_{\gamma\delta}} = \delta_{\alpha\gamma}\,\delta_{\beta\delta}.$$
The Fierz identity is especially handy to rearrange traces containing contractions:
$$\mathrm{tr}(A\, t^a B\, t^a C) = \frac{1}{2}\,\mathrm{tr}(AC)\,\mathrm{tr}(B) - \frac{1}{2N}\,\mathrm{tr}(ABC), \tag{C.11a}$$
$$\mathrm{tr}(t^a B\, t^a) = C_F\,\mathrm{tr}(B), \tag{C.11b}$$
$$\mathrm{tr}(A\, t^a B)\,\mathrm{tr}(C\, t^a D) = \frac{1}{2}\,\mathrm{tr}(ADCB) - \frac{1}{2N}\,\mathrm{tr}(AB)\,\mathrm{tr}(CD), \tag{C.11c}$$
where A, B, C, D are expressions built from the t^a_F's.
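Both the Fierz identity (C.9) and the rearrangement (C.11a) can be spot-checked numerically; the sketch below uses the same illustrative su_generators basis as before and random complex matrices for A, B, C:

```python
import numpy as np

def su_generators(N):
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 0.5
            A = np.zeros((N, N), complex); A[j, k] = -0.5j; A[k, j] = 0.5j
            gens += [S, A]
    for l in range(1, N):
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(D / np.sqrt(2 * l * (l + 1)))
    return gens

N = 3
t = np.array(su_generators(N))        # shape (8, 3, 3)
I = np.eye(N)

# Fierz identity (C.9): sum_a (t^a)_{ab}(t^a)_{gd}
lhs = np.einsum('aij,akl->ijkl', t, t)
rhs = 0.5 * np.einsum('il,jk->ijkl', I, I) - np.einsum('ij,kl->ijkl', I, I) / (2 * N)
fierz_ok = np.allclose(lhs, rhs)

# rearrangement (C.11a) for random complex matrices A, B, C
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) for _ in range(3))
lhs2 = sum(np.trace(A @ ta @ B @ ta @ C) for ta in t)
rhs2 = 0.5 * np.trace(A @ C) * np.trace(B) - np.trace(A @ B @ C) / (2 * N)

print(fierz_ok, np.isclose(lhs2, rhs2))   # True True
```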
C.2 Advanced topics

C.2.1 Calculating products of fundamental generators

We would like to find an expression for a general product of fundamental generators like $t^{a_1} t^{a_2} \cdots t^{a_n}$.
Like we did with the Fierz identity in (C.10), we can write this product in terms of the basis for the fundamental representation:
$$t^{a_1} t^{a_2} \cdots t^{a_n} = A^{a_1 a_2 \cdots a_n}\,\mathbf{1} + B^{a_1 a_2 \cdots a_n b}\, t^b,$$
and we can calculate the color factors by tracing:
$$\mathrm{tr}(t^{a_1} t^{a_2} \cdots t^{a_n}) = N A^{a_1 a_2 \cdots a_n}, \qquad \mathrm{tr}(t^{a_1} t^{a_2} \cdots t^{a_n} t^c) = \frac{1}{2}\, B^{a_1 a_2 \cdots a_n c}.$$
But the latter can also be calculated as one order higher, giving
$$\mathrm{tr}(t^{a_1} t^{a_2} \cdots t^{a_n} t^c) = N A^{a_1 a_2 \cdots a_n c} \quad\Rightarrow\quad A^{a_1 a_2 \cdots a_n} \equiv \frac{1}{2N}\, B^{a_1 a_2 \cdots a_n}.$$
Only one of these is linearly independent. We will adopt the notation
$$A^{a_1 a_2 \cdots a_n} = \frac{1}{N}\, C^{a_1 a_2 \cdots a_n}, \tag{C.12}$$
with C standing for 'color factor':
$$t^{a_1} \cdots t^{a_n} = \frac{1}{N}\, C^{a_1 \cdots a_n}\,\mathbf{1} + 2\, C^{a_1 \cdots a_n b}\, t^b, \tag{C.13a}$$
$$C^{a_1 \cdots a_n} = \mathrm{tr}(t^{a_1} \cdots t^{a_n}). \tag{C.13b}$$
The color factor has the same properties as the trace, namely cyclicity and Hermiticity:
$$C^{a_1 a_2 \cdots a_n} = C^{a_2 a_3 \cdots a_n a_1} = \cdots, \tag{C.14a}$$
$$\overline{C^{a_1 a_2 \cdots a_n}} = C^{a_n \cdots a_2 a_1}. \tag{C.14b}$$
The first color factors are straightforward to calculate:
$$C_0 = N, \tag{C.15a}$$
$$C_1^a = 0, \tag{C.15b}$$
$$C_2^{ab} = \frac{1}{2}\,\delta^{ab}, \tag{C.15c}$$
$$C_3^{abc} = \frac{1}{4}\, h^{abc}. \tag{C.15d}$$
To calculate the higher orders, we use equation (C.7) to deduce a recursion formula for traces (and thus color factors) in the fundamental representation:
$$\mathrm{tr}(t^{a_1} \cdots t^{a_n}) = \frac{\delta^{a_{n-1} a_n}}{2N}\,\mathrm{tr}(t^{a_1} \cdots t^{a_{n-2}}) + \frac{h^{a_{n-1} a_n b}}{2}\,\mathrm{tr}(t^{a_1} \cdots t^{a_{n-2}} t^b), \tag{C.16a}$$
$$C^{a_1 \cdots a_n} = \frac{\delta^{a_{n-1} a_n}}{2N}\, C^{a_1 \cdots a_{n-2}} + \frac{h^{a_{n-1} a_n b}}{2}\, C^{a_1 \cdots a_{n-2} b}. \tag{C.16b}$$
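The recursion (C.16a) can be verified numerically. Here h^{abc} is obtained as h^{abc} = 4 tr(t^a t^b t^c), which follows from tracing (C.7) against t^c; su_generators is the same illustrative basis used above:

```python
import numpy as np

def su_generators(N):
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 0.5
            A = np.zeros((N, N), complex); A[j, k] = -0.5j; A[k, j] = 0.5j
            gens += [S, A]
    for l in range(1, N):
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(D / np.sqrt(2 * l * (l + 1)))
    return gens

N = 3
t = su_generators(N)
dA = len(t)

# h^{abc} = 4 tr(t^a t^b t^c), from tracing (C.7) against t^c
h = np.array([[[4 * np.trace(t[a] @ t[b] @ t[c]) for c in range(dA)]
               for b in range(dA)] for a in range(dA)])

a, b, c, d = 0, 3, 5, 7
direct = np.trace(t[a] @ t[b] @ t[c] @ t[d])
# recursion (C.16a): peel off the last pair t^c t^d
recursed = ((c == d) / (2 * N) * np.trace(t[a] @ t[b])
            + 0.5 * sum(h[c, d, e] * np.trace(t[a] @ t[b] @ t[e]) for e in range(dA)))
print(np.isclose(direct, recursed))   # True
```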
This gives, for instance,
$$C_4^{a_1 a_2 a_3 a_4} = \frac{1}{4N}\,\delta^{a_1 a_2}\delta^{a_3 a_4} + \frac{1}{8}\, h^{a_1 a_2 b} h^{b a_3 a_4},$$
$$C_5^{a_1 \cdots a_5} = \frac{1}{8N}\,\left( h^{a_1 a_2 a_3}\delta^{a_4 a_5} + \delta^{a_1 a_2} h^{a_3 a_4 a_5} \right) + \frac{1}{16}\, h^{a_1 a_2 b_1} h^{b_1 a_3 b_2} h^{b_2 a_4 a_5},$$
$$C_6^{a_1 \cdots a_6} = \frac{1}{8N^2}\,\delta^{a_1 a_2}\delta^{a_3 a_4}\delta^{a_5 a_6} + \frac{1}{16N}\,\left( h^{a_1 a_2 b} h^{b a_3 a_4}\delta^{a_5 a_6} + h^{a_1 a_2 a_3} h^{a_4 a_5 a_6} + \delta^{a_1 a_2} h^{a_3 a_4 b} h^{b a_5 a_6} \right) + \frac{1}{32}\, h^{a_1 a_2 b_1} h^{b_1 a_3 b_2} h^{b_2 a_4 b_3} h^{b_3 a_5 a_6}.$$
One extremely useful property is the fact that inner summations only appear between consecutive h's, and never with a δ. This allows us to define the following shorthand notation:
$$\delta = \delta^{a_i a_{i+1}}, \qquad h = h^{a_i a_{i+1} a_{i+2}}, \qquad hh = h^{a_i a_{i+1} b}\, h^{b a_{i+2} a_{i+3}}, \qquad hhh = h^{a_i a_{i+1} b_1}\, h^{b_1 a_{i+2} b_2}\, h^{b_2 a_{i+3} a_{i+4}}, \qquad \ldots$$
which allows us to rewrite the former as (note that the order of the δ's and h's is significant because of the indices):
$$C_2 = \frac{1}{2}\,\delta, \tag{C.17a}$$
$$C_3 = \frac{1}{4}\, h, \tag{C.17b}$$
$$C_4 = \frac{1}{4N}\,\delta\delta + \frac{1}{8}\, hh, \tag{C.17c}$$
$$C_5 = \frac{1}{8N}\,(h\delta + \delta h) + \frac{1}{16}\, hhh, \tag{C.17d}$$
$$C_6 = \frac{1}{8N^2}\,\delta\delta\delta + \frac{1}{16N}\,(hh\delta + h\,h + \delta hh) + \frac{1}{32}\, hhhh. \tag{C.17e}$$
If we generalize this to an nth-order trace, we get from equation (C.16):
$$n \text{ even:} \qquad C^{a_1 \cdots a_n} = \sum_{i=0}^{n/2 - 1} \frac{1}{2^{n/2 + i}\, N^{n/2 - i - 1}} \begin{pmatrix} \text{all allowed } \delta\text{-}h \text{ combinations} \\ \text{built from } 2i\ h\text{'s} \end{pmatrix},$$
$$n \text{ odd:} \qquad C^{a_1 \cdots a_n} = \sum_{i=0}^{(n-3)/2} \frac{1}{2^{(n+1)/2 + i}\, N^{(n-1)/2 - i - 1}} \begin{pmatrix} \text{all allowed } \delta\text{-}h \text{ combinations} \\ \text{built from } 2i + 1\ h\text{'s} \end{pmatrix}. \tag{C.18}$$
where the δ-h combinations need to have n open indices, using
δ : 2 open indices,
h : 3 open indices,
hh : 4 open indices,
hhh : 5 open indices,
⋅⋅⋅
and where it is forbidden to put any δ or h in between the two h's of a contracted pair (so, for instance, a combination of the type hδh with the outer h's contracted with each other is not allowed). With this in mind, we can tackle any trace without the need for recursive calculations. For instance:
$$C^{a_1 \cdots a_{10}} = \frac{1}{32N^4}\,\delta\delta\delta\delta\delta + \frac{1}{64N^3}\,(hh\delta\delta\delta + \delta hh\delta\delta + \delta\delta hh\delta + \delta\delta\delta hh + h\,h\delta\delta + h\delta h\delta + h\delta\delta h + \delta h\,h\delta + \delta h\delta h + \delta\delta h\,h)$$
$$+ \frac{1}{128N^2}\,(hhhh\delta\delta + \delta hhhh\delta + \delta\delta hhhh + hhh\,h\delta + hhh\delta h + \delta hhh\,h + h\,hhh\delta + h\delta hhh + \delta h\,hhh + hh\,hh\delta + hh\delta hh + \delta hh\,hh + hh\,h\,h + h\,hh\,h + h\,h\,hh)$$
$$+ \frac{1}{256N}\,(hhhhhh\delta + \delta hhhhhh + hhhhhh + hhhhhh + hhhh\,hh + hh\,hhhh + hhh\,hhh) + \frac{1}{512}\, hhhhhhhh.$$
+ hδhhh + δh hhh + hh hhδ + hhδhh + δhh hh + hh h h + h hh h 1 (hhhhhhδ + δhhhhhh + hhhhhh + hhhhhh 256N 1 + hhhh hh + hh hhhh + hhh hhh) + hhhhhhhh. 512 + h h hh) +
As a result from (C.16) we can use a trick to double check our result, namely that the total number of terms should equal the (n − 1)th Fibonacci number (counting 0 as the zeroth Fibonacci number). Thus indeed, for the 10th order trace we have 34 terms.
C.2.2 Calculating traces in the adjoint representation It is not so straightforward to calculate general nth order traces in the adjoint representation, because we cannot easily calculate the anticommutation relations, which we need to get a recursion relation as in (C.16). We will not attempt to do so, instead we will relate traces in the adjoint representation to traces in the fundamental using a nifty trick. First, note that in general F ⊗ F ≃ A ⊕ 1,
(C.19)
from which we can derive (UA denotes ‘the group element U expressed in the adjoint representation’): tr(UA ) = tr(UF ) tr(UF ) − 1.
(C.20)
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
C.2 Advanced topics  263
Indeed, if we take U = 1, we get dA = dF2 − 1. To calculate the nth order trace, it is a a sufficient to take U = ∏ni etR αi , expand it and compare terms of the same order in αi . Furthermore we can use UF = UF† = UF−1 , et
a1 αa1 1
⋅ ⋅ ⋅ et
an αan n
= e−t
an an αn
⋅ ⋅ ⋅ e−t
a1 a1 α1
.
For example, the fourth-order trace in the adjoint can be calculated as follows:
$$\mathrm{tr}\left( e^{T^a\alpha_1^a} e^{T^b\alpha_2^b} e^{T^c\alpha_3^c} e^{T^d\alpha_4^d} \right) = \mathrm{tr}\left( e^{t^a\alpha_1^a} e^{t^b\alpha_2^b} e^{t^c\alpha_3^c} e^{t^d\alpha_4^d} \right)\,\mathrm{tr}\left( e^{-t^d\alpha_4^d} e^{-t^c\alpha_3^c} e^{-t^b\alpha_2^b} e^{-t^a\alpha_1^a} \right) - 1.$$
Comparing the coefficients of $\alpha_1^a \alpha_2^b \alpha_3^c \alpha_4^d$ on both sides gives
$$\mathrm{tr}(T^a T^b T^c T^d) = N\,\mathrm{tr}(t^a t^b t^c t^d) + N\,\mathrm{tr}(t^d t^c t^b t^a) + 2\,\mathrm{tr}(t^a t^b)\,\mathrm{tr}(t^c t^d) + 2\,\mathrm{tr}(t^a t^c)\,\mathrm{tr}(t^b t^d) + 2\,\mathrm{tr}(t^a t^d)\,\mathrm{tr}(t^b t^c).$$
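The fourth-order formula can be tested numerically for SU(3) by building the adjoint generators (T^a)_{bc} = −i f^{abc} from the structure constants (again using the illustrative su_generators basis, not the book's code):

```python
import numpy as np

def su_generators(N):
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 0.5
            A = np.zeros((N, N), complex); A[j, k] = -0.5j; A[k, j] = 0.5j
            gens += [S, A]
    for l in range(1, N):
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(D / np.sqrt(2 * l * (l + 1)))
    return gens

N = 3
t = su_generators(N)
dA = len(t)

# structure constants from (C.6a) with D_R = 1/2
f = np.zeros((dA, dA, dA))
for a in range(dA):
    for b in range(dA):
        comm = t[a] @ t[b] - t[b] @ t[a]
        for c in range(dA):
            f[a, b, c] = (-2j * np.trace(comm @ t[c])).real

T = [-1j * f[a] for a in range(dA)]   # adjoint generators (T^a)_{bc} = -i f^{abc}

a, b, c, d = 1, 2, 5, 7               # sample index choice
lhs = np.trace(T[a] @ T[b] @ T[c] @ T[d])
rhs = (N * np.trace(t[a] @ t[b] @ t[c] @ t[d])
       + N * np.trace(t[d] @ t[c] @ t[b] @ t[a])
       + 2 * np.trace(t[a] @ t[b]) * np.trace(t[c] @ t[d])
       + 2 * np.trace(t[a] @ t[c]) * np.trace(t[b] @ t[d])
       + 2 * np.trace(t[a] @ t[d]) * np.trace(t[b] @ t[c]))
print(np.isclose(lhs, rhs))           # True
```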
Using this trick we can calculate any trace in the adjoint representation in function of traces in the fundamental representation. Also note that we can derive equations similar to (C.20) using different representation combinations. For example in SU(3) we have 3 ⊗ 3 ≃ 6 ⊕ 3, implying tr(U2F ) = tr(UF ) tr(UF ) − tr(UF ) . Now back to the adjoint generators. We could generalize their trace as tr(T a1 ⋅ ⋅ ⋅ T
an
a ) = N( tr(t a1 ⋅ ⋅ ⋅ t n ) + (−)n tr(t a1 ⋅ ⋅ ⋅ t an ) ) n−2
+ ∑ (−)n−m m=2
(C.21)
n! a  tr(t (a1 ⋅ ⋅ ⋅ t m ) tr(t am+1 ⋅ ⋅ ⋅ t an )o ) . m! (n − m)!
We introduced two new notations: first we have the ‘conjugated’ trace, which is simply the trace in reversed order: tr(t a1 ⋅ ⋅ ⋅ t an ) = tr(t an ⋅ ⋅ ⋅ t a1 ) . The only thing that changes when reversing a trace of fundamental generators is that every h gets replaced by its complex conjugate h (hence the notation tr). The result can be simplified further using relations like h − h = 2i f , hh + hh = 2 (dd − ff ), etc. The second notation we introduced, (  )o , is an ‘ordered’ symmetrization which for a general tensor M is defined as
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
264  C Color algebra
M(a1 ⋅⋅⋅am  am+1 ⋅⋅⋅an )o =
all permutations for which both the first m m! (n − m)! (Ma1 ⋅⋅⋅an + indices and the last n − m indices are ). (C.22) n! ordered with respect to (a1 ⋅ ⋅ ⋅ an )
For instance, M (ab  N cd)o =
1 (M ab N cd + M ac N bd + M ad N bc + M bc N ad + M bd N ac + M cd N ab ) . 6
One handy property is that when A and B are commutative, we have A(a1 ⋅⋅⋅am  Bam+1 ⋅⋅⋅an )o = B(a1 ⋅⋅⋅an−m  Aan−m+1 ⋅⋅⋅an )o , or (in our shorthand notation) for instance ( δ h)o = ( h δ)o . To conclude, let us list some traces:1 tr(T a1 T a2 ) = Nδ,
(C.23a)
N (h − h) , 4 N 1 tr(T a1 T a2 T a3 T a4 ) = (δδ + 3( δδ) ) + (hh + hh) , 2 8 (  )o 1 a1 a2 a3 a4 a5 tr(T T T T T ) = [(h − h) δ + δ (h − h) + 10 δ (h − h) ] 8 N + (hhh − hhh) , 16 1 1 (δδδ + 15 ( δδδ) ) + [(hh + hh) δ tr(T a1 T a2 T a3 T a4 T a5 T a6 ) = 4N 16 tr(T a1 T a2 T a3 ) =
(

+ (hh + hh) + δ (hh + hh) + 15 δ (hh + hh) (
)o
− 20 h h ] +
N (hhhh + hhhh) . 32
(C.23b) (C.23c)
(C.23d)
)o
(C.23e)
1 Note that ( δ δ)o = ( δδ) , for any number of δ’s.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
D Brief literature guide We do not intend to give any kind of a review of the (huge) existing literature. Our list of references by no means pretends to be complete, it certainly misses a number of important (or even crucial) works on the subject. We list, however, all research papers, reviews and books, whose results we directly used in our exposition. Below we give a very brief guide to these works according to the main issues considered in them. 1. General questions: [15, 16, 27, 30, 34, 36, 45, 57, 60, 64] 2. Gauge theory and the principal fiber bundle approach: [27, 37, 40, 44, 59, 70] 3. Product integrals: [35, 66, 72] 4. Topology: [37, 59, 73] 5. Manifolds: [37, 41, 44, 48, 59, 70, 71] 6. Algebra: [1, 5, 10, 21, 58, 74] 7. Topological algebra: [2, 10, 63] 8. Algebraic paths: [23–26, 69] 9. Loop space: [3, 4, 19, 20, 22, 33, 38, 39, 42, 46, 47, 49–53, 61, 62, 69, 76] 10. Mandelstam constraints: [19, 20, 39, 54, 55] 11. Gauge invariance in particle physics: [6–9, 11–14, 17, 18, 27–32, 43, 56, 65, 67, 68, 75]
https://doi.org/10.1515/9783110651690009
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Bibliography [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13]
[14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27]
E. Abe, Hopf Algebras, Cambridge, Cambridge University Press, 1977. W. Ambrose and I. M Singer, A theorem on holonomy, Trans. Am. Math. Soc. 75 (1953), 428–43. I. Arefeva, NonAbelian Stokes formula, Theor. Math. Phys. 43 (1980), 353. I. Y. Arefeva. Quantum contour field equations, Phys. Lett. B 93 (1980) 347–53. M. Artin, Algebra, New Jersey, Prentice Hall, 1991. A. Bacchetta, Transverse Momentum Distributions, Lecture Notes for the Doctoral Training Programme, Trento, ECT, Trento, 2010. A. Bacchetta, U. D’Alesio, M. Diehl, and C. A. Miller, Singlespin asymmetries: The Trento conventions, Phys. Rev. D 70 (2004), 117504. I. I. Balitsky and V. M. Braun, Evolution equations for QCD string operators, Nucl. Phys. B 311 (1989), 541. V. Barone and E. Predazzi, HighEnergy Particle Diffraction, Berlin, Springer, 2002. E. Beckenstein, L. Narici and C. Suffel, Topological algebras, Notas de matemática 24, New York, Elsevier Science, 1977. A. V. Belitsky, X.D. Ji, and F. Yuan, Final state interactions and gauge invariant parton distributions, Nucl. Phys. B 656 (2003), 165–98. A. V. Belitsky and A. V. Radyushkin, Unraveling hadron structure with generalized parton distributions, Phys. Rep. 418 (2005), 1. D. Boer, M. Diehl, R. Milner, R. Venugopalan, W. Vogelsang, D. Kaplan, H. Montgomery, S. Vigdor, et al., Gluons and the quark sea at high energies: Distributions, polarization, tomography, arXiv:1108.1713 [nuclth]. D. Boer, P. J. Mulders, and F. Pijlman, Universality of Todd effects in single spin and azimuthal asymmetries, Nucl. Phys. B 667 (2003) 201–41. N. N. Bogoliubov, A. A. Logunov, A. I. Oksak, and I. T. Todorov, General Principles of Quantum Field Theory, Dordrecht Boston, Kluwer Academic Publishers, 1990. N. N. Bogolyubov and D. V. Shirkov, Introduction to the theory of quantized fields, Intersci. Monogr. Phys. Astron. 3, 1 (1959). C. J. Bomhof and P. J. Mulders, Nonuniversality of transverse momentum dependent parton distribution functions, Nucl. Phys. 
B 795 (2008), 409–27. N. Brambilla, et al., QCD and strongly coupled gauge theories: challenges and perspectives, arXiv:1404.3723 [hepph]. R. A. Brandt, A. Gocksch, M.A. Sato, and F. Neri, Loop space, Phys. Rev. D 26 (1982), 3611. R. A. Brandt, F. Neri, and M.A. Sato, Renormalization of loop functions for all loops, Phys. Rev. D 24 (1981), 879. K. Brown, Hopf Algebras, Lecture Notes, University of Glasgow. H.M. Chan and S. T. Tsou, Gauge theories in loop space, Acta Phys. Pol. B 17 (1986), 259. K. T. Chen, Iterated integrals and exponential homomorphisms, Proc. Lond. Math. Soc. s34 (1954), 502–12. K. T. Chen, Integration of paths: A faithful representation of paths by noncommutative formal power series, Trans. Am. Math. Soc. 89 (1958), 395–407. K. T. Chen, Algebraic paths, J. Algebra 9 (1968) 8–36. K. T. Chen, Algebras of iterated path integrals and fundamental groups, Trans. Am. Math. Soc. 156 (1971), 359–79. T. P. Cheng and L. F. Li, Gauge Theory of Elementary Particle Physics, Oxford Science Publications, Oxford New York, Clarendon Press, 1984.
https://doi.org/10.1515/9783110651690010
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Bibliography  267
[28] I. O. Cherednikov, A. I. Karanikas, and N. G. Stefanis, Wilson lines in transversemomentum dependent parton distribution functions with spin degrees of freedom, Nucl. Phys. B 840 (2010), 379–404. [29] I. O. Cherednikov and N. G. Stefanis, Wilson lines and transversemomentum dependent parton distribution functions: A renormalizationgroup analysis, Nucl. Phys. B 802 (2008), 146–79. [30] J. Collins, Foundations of Perturbative QCD, Cambridge monographs on particle physics, nuclear physics and cosmology 32, Cambridge, Cambridge University Press, 2011. [31] J. C. Collins, What exactly is a parton density?, Acta Phys. Pol. B 34 (2003), 3103. [32] U. D’Alesio and F. Murgia, Azimuthal and single spin asymmetries in hard scattering processes, Prog. Part. Nucl. Phys. 61 (2008), 394. [33] C. Di Bartolo, R. Gambini, and J. Griego, The extended loop group: An infinite dimensional manifold associated with the loop space, Commun. Math. Phys. 158 (1993), 217–40. [34] L. D. Faddeev and A. A. Slavnov, Gauge Fields: Introduction to Quantum Theory, Westview Press, 1991. [35] R. P. Feynman, An operator calculus having applications in quantum electrodynamics, Phys. Rev. 84 (1951), 108–28. [36] R. C. Field, Applications of Perturbative QCD, Advanced Book Classics, Redwood City, California, AddisonWesley Publishing Company, 1989. [37] T. Frankel, The Geometry of Physics: An Introduction, Cambridge, Cambridge University Press, 2011. [38] R. Gambini and J. Pullin, Loops, Knots, Gauge Theories and Quantum Gravity, Cambridge Monographs on Mathematical Physics, Cambridge, Cambridge University Press, 1996. [39] R. Giles, Reconstruction of gauge potentials from Wilson loops, Phys. Rev. D 24 (1981), 2160–8. [40] A. Guay, Geometrical aspects of local gauge symmetry, http://philsciarchive.pit.edu/id/ eprint/2133, Pittsburg, 2004. [41] R. Hermann, Differential Geometry and the Calculus of Variations, Interdisciplinary mathematics, New York, Math. Sci. Press, 1977. [42] S. V. Ivanov, G. P. 
Korchemsky, and A. V. Radyushkin, Infrared asymptotics of perturbative QCD: Contour gauges, Yad. Fiz. 44 (1986), 230–40. [43] X.D. Ji and F. Yuan, Parton distributions in light cone gauge: Where are the final state interactions?, Phys. Lett. B 543 (2002) 66–72. [44] S. Kobayashi and K. Nomizu, Foundations of Differential Geometry, Interscience Tracts in Pure and Applied Mathematics, Interscience Publishers, 1963. [45] N. P. Konopleva and V. N. Popov, Gauge Fields, Chur, Harwood, 1981. [46] G. P. Korchemsky and A. V. Radyushkin, Loop space formalism and renormalization group for the infrared asymptotics of QCD, Phys. Lett. B 171 (1986) 459–67. [47] G. P. Korchemsky and A. V. Radyushkin. Renormalization of the Wilson loops beyond the leading order, Nucl. Phys. B 283 (1987) 342–64. [48] S. Lang, Introduction to Differentiable Manifolds, Universitext Series, New Haven, Springer, 2002. [49] R. Loll, Loop approaches to gauge field theory, Theor. Math. Phys. 93 (1992), 1415. [50] Y. M. Makeenko, Methods of Contemporary Gauge Theory, Cambridge, Cambridge University Press, 2002. [51] Y. M. Makeenko and A. A. Migdal, Exact equation for the loop average in multicolor QCD, Phys. Lett. B 88 (1979) 135. [52] Y. M. Makeenko and A. A. Migdal, Selfconsistent areas law in QCD, Phys. Lett. B 97 (1980), 253. [53] Y. M. Makeenko and A. A. Migdal, Quantum chromodynamics as dynamics of loops, Nucl. Phys. B 188 (1981), 269. [54] S. Mandelstam, Quantum electrodynamics without potentials, Ann. Phys. 19 (1962), 1–24.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
268  Bibliography
[55] S. Mandelstam, Feynman rules for electromagnetic and Yang–Mills fields from the gaugeindependent fieldtheoretic formalism, Phys. Rev. 175 (1968), 1580–603. [56] A. D. Martin, Proton structure, partons, QCD, DGLAP and beyond, Acta Phys. Pol. B 39 (2008), 2025. [57] M. B. Mensky, Group of Paths: Measurements, Fields, Particles, Moscow, Nauka, 1983 [in Russian]. [58] J. S. Milne. Algebraic Geometry: V5.0, Taiaroa Publishing, 2005. [59] M. Nakahara, Geometry, Topology and Physics, Graduate Student Series in Physics, Oxon, Taylor & Francis, 2003. [60] M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory, Advanced Book Classics, Boulder, Colorado, Westview Press, 1995. [61] A. M. Polyakov, String representations and hidden symmetries for gauge fields, Phys. Lett. B 82 (1979), 247–50. [62] A. M. Polyakov, Gauge fields as rings of glue, Nucl. Phys. B 164 (1980), 171. [63] U. Schreiber, Quantization via Linear homotopy types, arXiv:1402.7041 [mathph]. [64] A. S. Schwarz, Mathematical Foundations of Quantum Field Theory, Moscow, Atomizdat, 1975 [in Russian]. [65] J. S. Schwinger, Gauge invariance and mass, 2, Phys. Rev. 128 (1962), 2425. [66] A. Slavík, Product Integration, Its History and Applications, History of mathematics, Prague, Matfyzpress, 2007. [67] N. G. Stefanis, Gauge invariant quark twopoint green’s function through connector insertion to O(αs ), Nuovo Cimento A 83 (1984), 205. [68] N. G. Stefanis, Worldline techniques and QCD observables, Acta Phys. Pol. Supp. 6 (2013) 71–80. [69] J. N. Tavares, Chen integrals, generalized loops and loop calculus, Int. J. Mod. Phys. A 9 (1994), 4511–48. [70] T. Thiemann, Modern Canonical Quantum General Relativity, Cambridge Monographs on Mathematical Physics, Cambridge, Cambridge University Press, 2007. [71] L. W. Tu, An Introduction to Manifolds, Universitext Series, New York, Springer, 2010. [72] V. Volterra, Sui fondamenti della teoria delle equazioni differenziali lineari, I, Mem. Soc. Ital. Sci. 
6(3), (1877), 1–104. [73] S. Willard, General Topology, AddisonWesley Series in Mathematics, New York, AddisonWesley, 2004. [74] D. P. Williams, Lecture Notes on C ∗ algebras, Department of Mathematics, Dartmouth College, 2011. [75] K. G. Wilson, Confinement of quarks, Phys. Rev. D 10 (1974), 2445. [76] I. P. Zois, On Polyakov’s basic variational formula for loop spaces, Rept. Math. Phys. 42 (1988), 373–84.
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Index accumulation point 201, 205 algebra 229 – ∗ 219 – antipode 235 – Banach 94, 240, 241 – bialgebra 232 – C ∗ 240 – cocommutative 235 – cokernel 228 – comodule 238 – commutative 94 – convex – multiplicative 241 – duality 238 – graded 231 – Hausdorff 241 – homomorphism 231 – Hopf 231, 233 – involution 219, 240 – module 237 – normed 240 – nuclear 96, 241 – opposite 234 – restricted dual 237 – unital 229 algebraic paths 22 Ambrose–Singer theorem 74, 100 anticommutation relations – for gamma matrices 250 antiderivation 222 antihomomorphism 237 antipode 10, 87 antisymmetrization 250 azimuthal angle 177, 178 Baire – first category 204 – second category 205 basis – neighborhood 203 Bjorken scaling 162, 172 Bjorkenx 150, 154 bosongluon fusion 168, 174 bounded – totally 207 bounded linear operator 77, 81
category 13 Cauchy sequence 206 centerofmass energy 149 Chen iterated integrals 26 – intermediate points 42 – multiplication 27 – reparametrization 43 – separation property 47 – without coordinates 26 cluster 201 cofinal 200 comultiplication 10 counit 10 collinear – divergence 172 – Parton 153 color algebra 257–264 – anticommutation relations 258 – Casimir operator 257 – color factors 260 – commutation relations 257 – d abc 258 – f abc 258 – Fierz identity 259 – habc 259 compact – countable 205 compactification – Alexandroff 209 compactness 199 complete 206 completeness relation – for spinors 251 – for states 157 connected 195 – components 196 – locally 198 – path 196 – locally 198 connectedness – simply 216 continuity 193 – uniform 207 contractible 216 convergence 92, 96, 205 convolution 161
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
270  Index
countability 203 – A1 203 – A2 203 cross section – for unpolarized DIS 159 – for unpolarized SIDIS 180 – in the FPM 152 curvature 73 – Cartan’s structure equation 74 – field strength tensor 74 – twoform 73 dconnected 37 ddiscrete point 38 dloop 40, 88 – Shc(d, p) 41 dpath 30, 88 dreduced 40 deep inelastic scattering see DIS δ+ 254 dense 204 derivation 219 derivative – area 108, 116 – parallel transporter 126 – covariant 73 – endpoint – terminal 109 – Fréchet 77, 79, 108, 130 – left/right invariant 101 – Lie 128, 130 – path 108 – Leibniz 111 – Polyakov 108 diffeomorphism 127, 217 differential of a map 222 differentiation – category of pointed differentiations 16 – dclosed 25 – kalgebra 15 – kmodule 15 – splitting pointed 17 – splitting pointed differentiation homomorphisms 19 – surjective shuffle module 18 differentiations – pointed 104 Dirac – basis 251
– δfunction 254 – equation 250, 251 Dirac map 97 DIS 148–175 – orthonormal basis 154–156 eikonal – approximation 137, 146 – quark 147, 166 embedding 194 evolution equations 160 – DGLAP 174, 175 exact sequence 17 factorization 160 – collinear 160, 172, 175 – in PM 161 – scale 174 – scheme 173 – TMD 161, 182 Feynman rules – for QCD see QCD, Feynman rules – for Wilson lines see Wilson line, Feynman rules fiber bundle – connection 53 – Ehresmann connection 56 – horizontal lift 59 – parallel transport 59, 70 – principal 48 field 226 filter – basis 241 finalstate cut 157, 166 flipping operation 11 form – oneform 220 Fourier transform 255 FPM see Parton Model, Free fractional energy loss 150 fragmentation function 176, 177 function – bump function 224 – support 224 functional – linear 220 functor 14 – covariant functor to 𝒮𝒫𝒟 20 – forgetful 15
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Index  271
gamma matrices 250 – contraction identities 252 – trace identities 252 gauge potential 53, 58 – compatibility condition 57 Gel’fand – character 247 – space 95 – spectrum 95, 247 – topology 248 – transform 248 group – fundamental 212 – topological 197 hadronic tensor 157, 158, 164, 178–180, 182 – constraints 158 – expansion 158 hard part 151 – partonic 173 Hausdorff 188, 191, 205, 207, 212 Heaviside step function 254 holonomy 71 homeomorphism 193 homotopy 213 – equivalent spaces 216 – relative 213 Hopf algebra 11 ideal 22, 228 – bi 233 – least δclosed 31 – maximal 228 – prime 228 IMF see infinite momentum frame infinitemomentum frame 153 infrared divergence 171 invariant mass 149 κ 154 μ
l⊥̂ see DIS, orthonormal basis Lμν see lepton tensor Loperator 12 largedistance process 151 leading twist 164 left translation 197 lepton tensor 156
Lie – adjoint action 55 ?p 101 – algebra of Lℳ – Fréchet 85, 101 – generalized 85 – Infinitedimensional 85 – left action 54 – right action 54 lightcone – coordinates 252–254 – gauge 166, 185, 256 Lindelöf 203 loop 196 loops – derivative – Fréchet 130 – generalized 96, 100 – group 94 – Lie algebra 101 – multiplication 97 – topological group 98 – loopgroup Lℳp 94 manifold – algebra – exterior 221 – Banach 77, 80 – boundary 216 – bundle – tangent 221 – dual space 221 – embedding 218 – Fréchet 82, 86, 89 – function – smooth 219 – immersion 218, 223 – orientability 218 – product – exterior 221 – interior 221 – real 216 – smooth 218 – submanifold 218 manifoldreal analytic 218 map – closed 194 – equivariant 212 – open 194 – quotient 211
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
272  Index
massshell condition 163 matrix function – continuity 66 – derivatives 62 – differentiability 61 – Integrals 64 meagre 204 metric 191, 249 – intrinsic 191 – LC coordinates 252 – set 191 – space 191 – translation invariant 191 – transversal 155 μν – ε⊥ 155, 254 μν – g⊥ 154, 253 – ultra 191 module 225, 227 – graded 231 – left 227 – right 227 momentum fraction 152 monoid 225 multiplication – on Alg(Sh(Ω), K) 11 negligible subset 225 net 200 norm – seminorm – multiplicative 244 – submultiplicative 240 normal 208 onshell see massshell condition operator – compact 245 – singular value 245 – nuclear 246 – traceclass 246 Ptrivial 39 paracompact 217 parity 158 partition of unity 224 parton 150 parton distribution function see PDF Parton model 150 – free 151, 152
path 196 path dependence 166 pathordering 69 paths – elementary equivalent 45 – piecewise regular 45 – reduced 45 PDF 160 – full definition 168 – gauge invariant definition 165–169 – in FPM 162 – operator definition 163–165 – renormalized 172, 174 Φq see quark correlator PM see Parton Model powerset 189 preimage 193 probability density function see PDF product integrals 65 projection tensor see metric, transversal pullback 222 pushforward 223 q̂ μ see DIS, orthonormal basis QCD – Feynman rules 255 – gluon polarization sum 256 – Lagrangian 255 quark correlator 164, 184 – expansion 165 regular 208 ring 225 – algebra 229 – antihomomorphism 237 – domain 229 – graded 230 – homogeneous elements 230 – integral domain 229 – local 229 – nonzero divisor 229 – trivial gradation 230 – unit 229 – zero divisor 228 running coupling 174 s see centreofmass energy separation axioms 207
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Index  273
sequence – Cauchy 240 set – absolutely mconvex 244 – directed 200 – mconvex 244 – multiplicative 244 – partially ordered 199 short exact 17 shortdistance process 151 shuffle – ideal 23, 87 – (k, l) 8 – kalgebra 9 – multiplication 8 SIDIS 176–185 – orthonormal basis 178, 179 sifting property 254 σ see cross section soft part 151 space – absolutely convex 242 – absorbent 242 – balanced 242 – Banach 240 – circled 242 – cone 242 – convex 242 – Fréchet 244 – Hilbert 244 – preHilbert 244 splitting function 171, 174, 175 Stokes 224 – nonAbelian 136 structure function 159 – difference with PDF 162 – factorization 170 – for quark 161, 171, 174 – in FPM 160 – in PM 161 – in SIDIS 180 surjective pointed differentiation 16 symmetrization 250 – ordered 263 t μ̂ see DIS, orthonormal basis t see transferred momentum theorem – Gel’fand–Mazur 247
– mean value 195 – Tychonov 202 timereversal 158, 178 TMD 161, 176 topological vector space 78 – locally convex 92 – nuclear 92 topology 187 – basis 189 – final 242 – Gel’fand 248 – induced 188 – initial 242 – neighborhood 187 – product 193 – quotient 210 – subbasis 192 – Tychonov 202 – weak 247 – weak∗ 247 trace 96 transferred momentum 149 transition function 220 translation operator 157 transverse – convolution 183 – momentum 153, 160, 161, 173, 176, 177 – separation 183 transverse momentum dependent PDF see TMD Urysohn 208 vector – contravariant 220 – covariant 220 vector field – fundamental 56 – leftinvariant 54 vector space 227 W μν see hadronic tensor Wilson – loop functional 75 Wilson line – cut propagator 143, 145 – external point 138, 141, 145, 168 – Feynman rules 145 – Hermitian conjugate 141
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
274  Index
– on a linear path – from −∞ to bμ 137–139 – from aμ to +∞ 139 – from aμ to bμ 143–145 – from−∞ to +∞ 142, 143 – propagator 138, 141, 145 – reversed path 142
– transversal 185 – vertex 138, 141, 145 x see Bjorkenx ξ see momentum fraction y see fractional energy loss
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
De Gruyter Studies in Mathematical Physics Volume 53 Vladimir K. Dobrev Invariant Differential Operators: Volume 4: AdS/CFT, (Super)Virasoro, Affine (Super)Algebras, 2019 ISBN 9783110609684, eISBN (PDF) 9783110611403, eISBN (EPUB) 9783110609714 Volume 52 Alexey V. Borisov, Ivan S. Mamaev Rigid Body Dynamics, 2018 ISBN 9783110542790, eISBN (PDF) 9783110544442, eISBN (EPUB) 9783110542974 Volume 51 Peter Galenko, Vladimir Ankudinov, Ilya Starodumov PhaseField Crystals: Fast Interface Dynamics, 2019 ISBN 9783110585971, eISBN (PDF) 9783110588095, eISBN (EPUB) 9783110586534 Volume 50 Sergey Lychev, Konstantin Koifman Geometry of Incompatible Deformations: Differential Geometry in Continuum Mechanics, 2018 ISBN 9783110562019, eISBN (PDF) 9783110563214, eISBN (EPUB) 9783110562279 Volume 49 Vladimir K. Dobrev Invariant Differential Operators: Volume 3: Supersymmetry, 2018 ISBN 9783110526639, eISBN (PDF) 9783110527490, eISBN (EPUB) 9783110526691 Volume 48 Jared Maruskin Dynamical Systems and Geometric Mechanics: An Introduction, 2018 ISBN 9783110597295, eISBN (PDF) 9783110597806, eISBN (EPUB) 9783110598032 www.degruyter.com
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM
Brought to you by  provisional account Unauthenticated Download Date  1/8/20 7:39 PM