Computational Nanoscience: Applications for Molecules, Clusters, and Solids [Illustrated] 1107001706, 9781107001701

Computer simulation is an indispensable research tool in modeling, understanding and predicting nanoscale phenomena. How

428 65 13MB

English Pages 444 Year 2011

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

Computational Nanoscience: Applications for Molecules, Clusters, and Solids [Illustrated]
 1107001706, 9781107001701

Table of contents :
Cover
Contents
Preface
Part I One-dimensional problems
1 Variational solution of the Schrödinger equation
2 Solution of bound state problems using a grid
3 Solution of the Schrödinger equation for scattering states
4 Periodic potentials: band structure in one dimension
5 Solution of time-dependent problems in quantum mechanics
6 Solution of Poisson’s equation
Part II Two- and three-dimensional systems
7 Three-dimensional real-space approach: from quantum dots to Bose–Einstein condensates
8 Variational calculations in two dimensions: quantum dots
9 Variational calculations in three dimensions: atoms and molecules
10 Monte Carlo calculations
11 Molecular dynamics simulations
12 Tight-binding approach to electronic structure calculations
13 Plane wave density functional calculations
14 Density functional calculations with atomic orbitals
15 Real-space density functional calculations
16 Time-dependent density functional calculations
17 Scattering and transport in nanostructures
18 Numerical linear algebra
Appendix Code descriptions
References
Index

Citation preview

Computational Nanoscience Applications for Molecules, Clusters, and Solids Computer simulation is an indispensable research tool in modeling, understanding, and predicting nanoscale phenomena. However, the advanced computer codes used by researchers are sometimes too complex for graduate students wanting to understand computer simulations of physical systems. This book gives students the tools to develop their own codes. Describing advanced algorithms, the book is ideal for students in computational physics, quantum mechanics, atomic and molecular physics, and condensed matter theory. It contains a wide variety of practical examples of varying complexity to help readers at all levels of experience. An algorithm library in Fortran 90, available online at www.cambridge.org/9781107001701, implements the advanced computational approaches described in the text to solve physical problems. Kálmán Varga is an Assistant Professor in the Department of Physics and Astronomy, Vanderbilt University. His main research interest is computational nanoscience, focusing on developing novel computational methods for electronic structure calculations. Joseph A. Driscoll is a Research Assistant in the Department of Physics and Astronomy, Vanderbilt University, where he researches in theoretical and computational physics, mostly in the area of nanoscale phenomena.

Computational Nanoscience Applications for Molecules, Clusters, and Solids K Á L M Á N VAR G A AN D J O S E P H A . D R I S C O L L Vanderbilt University, Tennessee

cambridge university press Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK Published in the United States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9781107001701

© Kálmán Varga and Joseph A. Driscoll 2011 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2011 Printed in the United Kingdom at the University Press, Cambridge A catalog record for this publication is available from the British Library Library of Congress Cataloging-in-Publication Data Varga, Kálmán, 1963Computational nanoscience : applications for molecules, clusters, and solids / Kálmán Varga, Joseph A. Driscoll. p. cm. ISBN 978-1-107-00170-1 (Hardback) 1. Nanostructures–Data processing. 2. Physics–Data processing. 3. Computer algorithms. I. Driscoll, Joseph Andrew, 1974- II. Title. QC176.8.N35V37 2011 530.0285–dc22 2010046409 ISBN 978-1-107-00170-1 hardback Additional resources for this publication at www.cambridge.org/9781107001701 Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To our wives, Timea and Alice

Contents

Preface

page xi

Part I One-dimensional problems

1

1

Variational solution of the Schrödinger equation

3

1.1 Variational principle 1.2 Variational calculations with Gaussian basis functions

3 5

2

3

4

Solution of bound state problems using a grid

10

2.1 2.2 2.3 2.4 2.5

10 11 15 17 21

Discretization in space Finite differences Solution of the Schrödinger equation using three-point finite differences Fourier grid approach: position and momentum representations Lagrange functions

Solution of the Schrödinger equation for scattering states

32

3.1 3.2 3.3 3.4 3.5 3.6 3.7

34 38 39 51 60 79 83

Green’s functions The transfer matrix method The complex-absorbing-potential approach R-matrix approach to scattering Green’s functions Spectral projection Appendix: One-dimensional scattering states

Periodic potentials: band structure in one dimension 4.1 4.2 4.3 4.4 4.5 4.6

Periodic potentials; Bloch’s theorem Finite difference approach Periodic cardinals R-matrix calculation of Bloch states Green’s function of a periodic system Calculation of the Green’s function by continued fractions

85 85 86 87 89 93 103

viii

Contents

5

Solution of time-dependent problems in quantum mechanics

115

5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8

115 117 119 120 121 128 134

6

The Schrödinger, Heisenberg, and interaction pictures Floquet theory Time-dependent variational method Time propagation by numerical integration Time propagation using the evolution operator Examples Photoionization of atoms in intense laser fields Calculation of scattering wave functions by wave packet propagation 5.9 Steady state evolution from a point source 5.10 Calculation of bound states by imaginary time propagation 5.11 Appendix

142 147 151 155

Solution of Poisson’s equation

160

6.1 Finite difference approach 6.2 Fourier transformation

160 167

Part II Two- and three-dimensional systems 7

8

9

171

Three-dimensional real-space approach: from quantum dots to Bose–Einstein condensates

173

7.1 7.2 7.3 7.4 7.5 7.6

173 175 179 184 189 191

Three-dimensional grid Bound state problems on the 3D grid Solution of the Poisson equation Harmonic quantum dots Gross–Pitaevskii equation for Bose–Einstein condensates Time propagation of a Gaussian wave packet

Variational calculations in two dimensions: quantum dots

196

8.1 8.2 8.3 8.4 8.5 8.6

196 197 200 202 203 208

Introduction Formalism Code description Examples Few-electron quantum dots Appendix

Variational calculations in three dimensions: atoms and molecules

214

9.1 Three-dimensional trial functions 9.2 Small atoms and molecules 9.3 Quantum dots

214 216 217

Contents

10

11

12

13

14

15

ix

9.4 Appendix: Matrix elements 9.5 Appendix: Symmetrization

220 223

Monte Carlo calculations

225

10.1 10.2 10.3 10.4 10.5 10.6

225 229 231 239 250 255

Monte Carlo simulations Classical interacting many-particle system Kinetic Monte Carlo Two-dimensional Ising model Variational Monte Carlo Diffusion Monte Carlo

Molecular dynamics simulations

263

11.1 11.2 11.3 11.4 11.5 11.6 11.7

263 264 265 265 266 270 270

Introduction Integration of the equation of motions Lennard–Jones system Molecular dynamics with three-body interactions Thermostats Physical quantities Implementation and examples

Tight-binding approach to electronic structure calculations

274

12.1 12.2 12.3 12.4

274 281 289 291

Tight-binding calculations Electronic structure of carbon nanotubes Tight-binding model with Slater-type orbitals Appendix: Matrix elements of Slater-type orbitals

Plane wave density functional calculations

295

13.1 Density functional theory 13.2 Description of the plane wave code and examples

295 304

Density functional calculations with atomic orbitals

317

14.1 14.2 14.3 14.4

317 319 324 326

Atomic orbitals Matrix elements for numerical atomic orbitals Examples Appendix: Three-center matrix elements

Real-space density functional calculations

332

15.1 Ground state energy and the Kohn–Sham equation 15.2 Real-space approach 15.3 Examples

332 334 337

x

Contents

16

Time-dependent density functional calculations

339

16.1 16.2 16.3 16.4 16.5

340 343 346 347 350

17

18

Linear response Linear optical response Solution of the time-dependent Kohn–Sham equation Simulation of the Coulomb explosion of H2 Calculation of the dielectric function in real time and real space

Scattering and transport in nanostructures

356

17.1 17.2 17.3 17.4 17.5 17.6

358 362 362 372 377 385

Landauer formalism R-matrix approach to scattering in three dimensions Transfer matrix approach Quantum constriction Nonequilibrium Green’s function method Simulation of transport in nanostructures

Numerical linear algebra

390

18.1 18.2 18.3 18.4 18.5

390 392 394 396 398

Conjugate gradient method Conjugate gradient diagonalization The Lanczos algorithm Diagonalization with subspace iteration Solving linear block tridiagonal equations

Appendix Code descriptions References Index

407 409 428

Preface

Computer simulation is an indispensible research tool for modeling, understanding, and predicting nanoscale phenomena. There is a huge gap between the complexity of the programs and algorithms used in computational physics courses and and those used in research for computer simulations of nanoscale systems. The advanced computer codes used by researchers are often too complicated for students who want to develop their own codes, want to understand the essential details of computer simulations, or want to improve existing programs. The aim of this book is to provide a comprehensive program library and description of advanced algorithms to help students and researchers learn novel methods and develop their own approaches. An important contribution of this book is that it is accompanied by an algorithm library in Fortran 90 that implements the computational approaches described in the text. The physical problems are solved at various levels of sophistication using methods based on classical molecular dynamics, tight binding, density functional approaches, or fully correlated wave functions. Various basis functions including finite differences, Lagrange functions, plane waves, and Gaussians are introduced to solve bound state and scattering problems and to describe electronic structure and transport properties of materials. Different methods of solving the same problem are introduced and compared. The book is divided into two parts. In the first part we concentrate on onedimensional problems. The solution of these problems is obviously simpler and this part serves as an introduction to the second, more advanced, part, in which we describe simulations in three-dimensional problems. The first part can be used in undergraduate computational physics education. The second part is more appropriate for graduate and higher-level undergraduate classes. The problems in the first part are sufficiently simple that the essential parts of the codes can be presented and explained in the text. The second part contains more elaborate codes, often requiring hundreds of lines and sets of different algorithms. Here only the main structure of the codes is explained. We do not try to teach computer programming, as there are excellent books available for that purpose. The codes are written to be simple and easy to follow, sacrificing speed and efficiency for clarity. The reader is encouraged to rewrite these codes to tailor them to his or her own needs.

xii

Preface

The computer codes and examples used in this book are available from the book’s website; see www.cambridge.org/9781107001701. The codes are grouped corresponding to the sections of the book where they appear. A short description of how to use the code and example inputs and outputs is provided. We are continuing work on upgrading the codes and refreshing the program library with new examples and novel algorithms. We would like to thank all our friends who contributed and helped us in this project. Special thanks are due to Professor Yasuyuki Suzuki (Niigata, Japan), Professor Daniel Baye (Brussels, Belgium) and Professor Kazuhiro Yabana (Tsukuba, Japan).

Part I

One-dimensional problems

1

Variational solution of the Schrödinger equation

In this chapter we solve the eigenvalue problem for the quantum mechanical Hamiltonian H=−

2 d 2 + V (x) 2m dx2

(1.1)

using the variational method. We are interested in finding the discrete eigenvalues Ei and eigenfunctions i that satisfy Hi (x) = Ei i (x),

(1.2)

which is the time-independent Schrödinger equation (TISE). For now, we restrict ourselves to bound state problems. The bound state wave function is confined to a finite region and is zero on the region’s boundary. Scattering problems are presented in later chapters. Here we will describe variational solutions using analytical basis functions. Numerical grid approaches for the solution of the Schrödinger equation will be presented in the next chapter.

1.1

Variational principle The variational method is one of the most powerful approaches for solving quantum mechanical problems. The basic idea is to guess a “trial” wave function for the problem, which consists of some adjustable parameters called variational parameters. These parameters are adjusted until the energy of the trial wave function is minimized. The resulting wave function and its corresponding energy are then the variational-method approximations to the exact wave function and energy. The variational approach is based on the following theorems [307]: theorem 1.1 Ritz theorem: For an arbitrary function , the expectation value of H is given by E=

|H| ≥ E1 , |

(1.3)

where the equality holds if and only if  is the exact ground state wave function of H with eigenvalue E1 .

4

Variational solution of the Schrödinger equation

This theorem gives us an upper bound for the ground state energy. The theorem can be generalized to excited states: theorem 1.2 Generalized Ritz theorem: The expectation value of the Hamiltonian is stationary in the neighborhood of its discrete eigenvalues. The proofs of these theorems can be found in many quantum mechanics textbooks (see e.g. [218]). Minimizing the energy corresponding to the trial wave function gives an approximation to the true ground state energy, as long as the ground state wave function can be represented by the trial wave function. To use this method for the first excited state we need to choose a trial wave function that is orthogonal to the ground state. For the second excited state we use a trial wave function orthogonal to both the ground state and first excited state wave functions and repeat the minimization. This can be continued for higher states. A procedure such as the Gram–Schmidt algorithm [105] can be used to generate orthogonal wave functions. To use the variational principle one has to choose a suitable trial function. In most cases a linear combination of independent “basis” functions is used. The trial wave function  is expanded into a set of N basis functions as (x) =

N 

ci φi (x)

(1.4)

i=1

where the φi are basis functions and the ci are linear-combination coefficients. According to the Ritz variational principle, the correct expectation value E of the Hamiltonian with this trial function is stationary against infinitesimal changes of the linear combination coefficients ci . This condition leads to the generalized eigenvalue problem HCk = k OCk ,

(1.5)

which in detailed form reads as N  i=1

Hij cki = k

N 

Oij cki

( j = 1, . . . , N)

(1.6)

i=1

where Hij = φi |H|φj 

(1.7)

are the matrix elements of the Hamiltonian, Oij = φi |φj 

(1.8)

1.2 Variational calculations with Gaussian basis functions

5

are the overlap matrix elements of the basis functions, and Ck is a vector of linear combination coefficients for the kth eigenvector: ⎞ ⎛ ck1 ⎟ ⎜ (1.9) Ck = ⎝ ... ⎠ . ckN Solving the eigenproblem Eq. (1.5) gives the variational approximation to the true eigensolutions. The mini-max theorem [307] relates the (exact) eigenvalues Ei of the Hamiltonian and the (approximate) eigenvalues i of Eq. (1.5): theorem 1.3 Mini-max theorem: Let H be a Hermitian Hamiltonian with discrete eigenvalues E1 ≤ E2 ≤ E3 ≤ · · · and let 1 ≤ 2 ≤ 3 ≤ · · · ≤ N be the eigenvalues of Eq. (1.5), then E1 ≤ 1 ,

E2 ≤ 2 ,

···

EN ≤ N .

(1.10)

The variational principle gives an upper bound to the true eigenenergy. To improve this approximation one has to decrease the upper bound and this can be done by increasing the number of basis functions in the expansion in Eq. (1.4), as shown by the following theorem [307]: theorem 1.4 Let 1 ≤ 2 ≤ 3 ≤ · · · ≤ N be the solution of Eq. (1.5) using the basis     functions φi (x)N i=1 and let 1 ≤ 2 ≤ 3 ≤ · · · ≤ N+1 be the solution of Eq. (1.5) using the basis functions φi (x)N i=1 and also φN+1 (x). Then   1 ≤ 1 ≤ 2 ≤ 2 ≤ · · · ≤ N ≤ N ≤ N+1 .

(1.11)

Using these theorems one can calculate the approximate eigenvalues and eigenvectors of the Hamiltonian operator using a suitable set of basis functions.

1.2

Variational calculations with Gaussian basis functions In a variational calculation many different basis functions can be used, but, depending on the nature of the problem, certain basis functions might be more appropriate than others. In this section we will use Gaussian basis functions as an example. The main reason for this choice is that these basis functions are simple and so their matrix elements can be calculated analytically in one, two, and three dimensions. The Gaussian basis functions are defined as ν 1/2 2 i e−νi (x−si ) . (1.12) φi (x) = π This function has two variational parameters: νi , the width of the Gaussian, and si , the center of the Gaussian. For simplicity we vary only one of these parameters at a time and so perform calculations with either fixed widths or fixed centers. The nonorthogonality of these basis functions is another reason to avoid varying

6

Variational solution of the Schrödinger equation

0.8

0.6

0.4

0.2

0

–4

–2

0

2

4

Figure 1.1 Shifted Gaussian basis functions.

8

6

4

2

0

–4

–2

0

2

4

Figure 1.2 Gaussian basis functions with widths chosen as a geometric progression.

both parameters at the same time: the basis functions overlap and their overlap depends on both the relative positions and the widths of the Gaussians. A large overlap between the basis functions can cause linear dependencies in the basis states, leading to spurious solutions of the eigenvalue problem. By varying only one parameter at a time these linear dependencies can be easily avoided. With the first choice (i.e., keeping the width fixed) we can place the Gaussians so that they are centered at different locations within a uniformly spaced grid of points in space. An example of these “shifted Gaussians” is shown in Fig. 1.1. The free parameters are the locations of the curve centers and the common width νi = ν. The second possibility is to keep the center fixed, e.g. at the origin, and vary the width of the Gaussians (see Fig. 1.2). A popular choice is to use a geometric progression a0 b0i−1 to define the widths as 1 νi = 2 , a0 b0i−1

(1.13)

1.2 Variational calculations with Gaussian basis functions

7

where a0 is the starting value of the geometric progression and b0 = 0 is the progression’s common ratio. This results in three free parameters: the location of the common center and the values of a0 and b0 . The matrix elements of the Gaussian basis functions can be easily calculated. The overlap of two basis functions is  



2 νi νj 1/2 νi νj 2 (1.14) exp − (si − sj ) . φi |φj  = νi + νj νi + νj The kinetic energy matrix elements are given by    

  2 d 2  2νi νj 2 2νi νj   2 φ φi − (s − s ) = 1 − φi |φj .  i j  2m dx2  j 2m νi + νj νi + νj

(1.15)

For some potentials, e.g. for a harmonic oscillator, with potential V (x) = 12 mω2 x2

(1.16)

where m is the particle’s mass and ω is the frequency of the oscillator, or for a Gaussian potential V (x) = V0 e−μx

2

(1.17)

where V0 and μ are the parameters of the potential, the matrix elements can be calculated analytically. In other cases one has to use numerical integration, or one can expand the potential as a sum of Gaussian potentials. The matrix elements for a harmonic oscillator potential are   

 1  1 νi si + νj sj 2 1 2 2 2 + φi |φj , φi | 2 mω x |φj = 2 mω νi + νj νi + νj and the matrix elements for a Gaussian potential are   1/2

√ νi νj (si − sj )2 + μνi s2i + μνj s2j   2 νi νj −μx2 φi |e |φj = exp − . νi + νj + μ νi + νj + μ Two simple programs will now be presented to solve the TISE for a onedimensional (1D) harmonic oscillator potential using the two types of Gaussian basis discussed above. For shifted Gaussians the matrix elements are simplified (νi = ν) and can be calculated using the simple code gauss_1d_c.f90 (see Listing 1.1). After diagonalization the eigensolutions are obtained as listed in Table 1.1. The eigenenergies are very accurate not only for the ground state but for the excited states as well. For Gaussians centered at the origin si = 0 but having varying width, a similar code, gauss_1d_w.f90 (see Listing 1.2) can be used. Again the results are shown in Table 1.1. This basis function is even and so we get only the even eigensolutions of the TISE. The accuracy is somewhat less than in the previous case but more careful optimization may improve the results.

8

Variational solution of the Schrödinger equation

Table 1.1 The eigenenergies of the harmonic oscillator potential obtained by using shifted or variable-width Gaussian bases Exact

Shifted

Variable-width

0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5

0.500 000 000 000 1.500 000 000 000 2.500 000 000 000 3.500 000 000 009 4.500 000 000 059 5.500 000 001 890 6.500 000 005 356 7.500 000 161 985 8.500 000 213 739 9.500 007 221 469

0.500 000 000 000 2.499 999 999 954 4.499 999 998 287 6.499 999 967 019 8.499 997 966 523

Listing 1.1 Solution of the Schrödinger equation for a 1D harmonic oscillator potential with a shifted Gaussian basis 1 2 3 4 5 6 7 8

9

PROGRAM gauss_1d_c ! Basis of 1D Gaussians, with varying centers, ! all with same width implicit none integer,parameter :: n=101 real*8,parameter :: nu=1.d0,h2m=0.5d0 integer :: i,j real*8 :: h(n,n),o(n,n),s(n),eigenvalues(n), eigenvectors(n,n) real*8 :: t,p,ss

10 11 12 13 14

! Calculate the centers for the Gaussians do i=1,n s(i)=-25.d0+(i-1)*0.5d0 end do

15 16 17 18 19 20 21 22

23 24 25

! Setup the Hamiltonian do i=1,n do j=1,n ss=(s(i)-s(j))**2 o(i,j)=exp(-0.5d0*nu*ss) t=exp(-0.5d0*nu*ss)*nu*h2m*(1.d0-nu*ss) p=0.5d0*exp(-0.5d0*nu*ss)*0.25d0*(1.d0/nu+(s(i)+s(j)) **2) h(i,j)=t+p end do end do

26 27 28 29

! Diagonalize call diag1(h,o,n,eigenvalues,eigenvectors) END PROGRAM gauss_1d_c

1.2 Variational calculations with Gaussian basis functions

Listing 1.2 Solution of the Schrödinger equation of the 1D harmonic oscillator potential with a variable-width Gaussian basis 1 2 3 4 5 6 7 8 9 10

PROGRAM gauss_1d_w ! Basis of 1D Gaussians, with varying widths, ! all centered at origin implicit none integer,parameter :: n=101 real*8,parameter :: h2m=0.5d0 integer :: i,j real*8 :: h(n,n),o(n,n),nu(n) real*8 :: eigenvalues(n),eigenvectors(n,n) real*8 :: t,p,ss,x0,a0,w

11 12 13 14 15 16 17

! Calculate the widths for the Gaussians x0=1.14d0 a0=0.01d0 do i=1,n nu(i)=1.d0/(a0*x0**(i-1))**2 end do

18 19 20 21 22 23 24 25 26 27 28

! Set up the Hamiltonian do i=1,n do j=1,n o(i,j)=sqrt(2.d0*sqrt(nu(i)*nu(j))/(nu(i)+nu(j))) w=nu(i)*nu(j)/(nu(i)+nu(j)) t=h2m*2.d0*w*o(i,j) p=0.5d0/(2.d0*(nu(i)+nu(j)))*o(i,j) h(i,j)=t+p end do end do

29 30 31 32

! Diagonalize call diag1(h,o,n,eigenvalues,eigenvectors) END PROGRAM gauss_1d_w

9

2

Solution of bound state problems using a grid

In the previous chapter we showed that the one-dimensional time-independent Schrödinger equation (TISE) −

2 d 2  + V (x)(x) = E(x) 2m dx2

(2.1)

can be solved using the variational method with a suitable set of basis functions. In this chapter we will use simple basis functions based on a numerical grid. The advantage of this family of approaches is that one does not have to calculate matrix elements and it is easy to extend the approach to two and three dimensions. The simplest version of the grid-type approaches is called the finite difference method and it can be considered a limiting case of the Gaussian basis, where an infinitesimally small width is used for the Gaussians. Another grid-based family of basis functions is the Lagrange function basis. These functions keep the simplicity of the finite difference approach while enhancing the accuracy of the solution.

2.1

Discretization in space While a mathematical function may be defined at an infinite number of points, a computer can only store a finite number of these values. To perform numerical computations we must therefore find a way to approximate a function, to some desired accuracy, by a finite set of values. A grid consists of a finite set of locations in space and/or time and is the discrete analog of a continuous coordinate system. The various quantities that we will be working with will be defined only at these points. In this way the grid provides a method to obtain a discrete sampling of continuous quantities. Grids can be used for problems with any number of dimensions, but here we introduce concepts using the simple case of one dimension; the extension to more dimensions is straightforward and will be demonstrated later, in Part II. To set up a one-dimensional grid, we need to know the start coordinate a, the ending coordinate b, and the “step size” (i.e., the distance between points) h. Given these values, the total number of grid points N is N =1+

b−a h

(2.2)

2.2 Finite differences

11

Notice that (b − a)/h gives the number of intervals of size h that will fit into the space between a and b. The grid points are defined at the borders of these intervals, and so the number of grid points is one more than the number of these intervals. For example, in the simple case of a single interval there are two grid points (the two ends of the interval). The location of the ith grid point is xi = a + (i − 1)h

(2.3)

where i runs from 1 to N. For some applications it is more convenient to specify the total number of grid points N rather than the step size. In this case, one obtains h as h=

b−a . N−1

(2.4)

Now that we have defined a grid, we can use it to represent various quantities. Potentials are simply sampled at the grid locations, but it is less clear how to handle the derivative in the kinetic energy term in Eq. (2.1). In the next section we show how to represent derivatives on a grid using finite differences.

2.2

Finite differences The finite difference method replaces the derivatives in a differential equation by approximations. These approximations are made up of weighted sums of function values. This results in a large system of equations to be solved in place of the differential equation. Suppose that we want to calculate the first derivative of some function ϕ(x). The obvious choice, using the definition of the first derivative, is ϕ  (x) =

ϕ(x + h) − ϕ(x) h

(2.5)

for some suitably small h. For a given expression, smaller values of h lead to more accurate approximations. This is a “one-sided” and “forward” approximation because ϕ  (x) is calculated only at values that are larger than or equal to x. Another one-sided possibility is the “backward” difference ϕ  (x) =

ϕ(x) − ϕ(x − h) . h

(2.6)

Each of these finite difference formulas gives an approximation to ϕ  (x) that is accurate to first order, meaning that the size of the error is roughly proportional to h itself. Another possibility is to use a centered approximation, for which ϕ  (x) =

ϕ(x + h) − ϕ(x − h) . 2h

(2.7)

12

Solution of bound state problems using a grid

This expression is the average of the two one-sided approximations, and it gives a better approximation to ϕ  (x). The error is proportional to h2 , which (for h < 1) is smaller than for the one-sided case. The one-sided expressions involve two points (counting x itself), while the centered version uses three. For this reason they are called “two-point” and “threepoint” expressions, respectively. By using expressions with even more points one can continue to improve accuracy. Notice that these expressions have the same basic structure: a linear combination of function values from different positions. The Taylor expansion of ϕ(x) gives the coefficients for these linear combinations, as we now show. As an example, suppose that we want to find expressions involving five points. From a fourth-order Taylor expansion we get ϕ(x ± h) = ϕ(x) ± ϕ  (x)h + 12 ϕ  (x)h2 ± 16 ϕ  (x)h3 +

4 1  24 ϕ (x)h

and ϕ(x ± 2h) = ϕ(x) ± ϕ  (x)2h + 12 ϕ  (x)(2h)2 ± 16 ϕ  (x)(2h)3 +

4 1  24 ϕ (x)(2h)

This gives four equations for ϕ and its derivatives. Using these, along with the identity ϕ(x) = ϕ(x), we have five equations. In matrix form this system of equations is Ax = b,

(2.8)

where ⎛ 1 −2 ⎜ ⎜1 −1 ⎜ ⎜ A = ⎜1 0 ⎜ ⎜1 1 ⎝ 1 −2 ⎛

ϕ(x)



⎜ ⎟ ⎜ ϕ(x) ⎟ ⎜ ⎟ ⎜ ⎟ x = ⎜ ϕ(x) ⎟ , ⎜ ⎟ ⎜ ϕ(x) ⎟ ⎝ ⎠ ϕ(x)

2

− 43

1 2

− 16

0

0

1 2

1 6

2

− 43

2 ⎞ 3 1 ⎟ ⎟ 24 ⎟

⎟ 0 ⎟, ⎟ 1 ⎟ 24 ⎠

(2.9)

2 3

⎞ ⎛ ϕ(x − 2h) ⎟ ⎜ ⎜ ϕ(x − h) ⎟ ⎟ ⎜ ⎟ ⎜ b = ⎜ ϕ(x) ⎟ . ⎟ ⎜ ⎜ ϕ(x + h) ⎟ ⎠ ⎝

(2.10)

ϕ(x + 2h)

Equation (2.8) is a system of linear equations that can be solved to find an expression for the derivatives. For the second derivative, for example, we have 1 ϕ  (x) = − 12 [ϕ(x + 2h) + ϕ(x − 2h)] + 43 [ϕ(x + h) + ϕ(x − h)] − 52 ϕ(x)

2.2 Finite differences

13

Table 2.1 Higher-order finite difference coefficients for the first derivative. Note that Cik−n = −Cik+n k

Cik

k Ci+1

k Ci+2

k Ci+3

k Ci+4

k Ci+5

1

0

1 2

2

0

2 3

1 − 12

3

0

3 4

3 − 20

1 60

4

0

4 5

− 15

4 105

1 − 280

5

0

5 6

5 − 21

5 84

5 − 504

1 1260

6

0

6 7

− 15 56

5 63

1 − 56

1 385

k Ci+6

1 − 5544

One can easily generalize the above example to higher orders. The starting point is the general Taylor expansion ϕ(x + nh) =

∞  1 (i) ϕ (x)(nh)i , i!

n = −m, . . . , m,

(2.11)

i=0

where ϕ (i) is the ith derivative of ϕ and we truncate the series at mth order. Using the notation xj = ϕ ( j−1) (x),

bk = ϕ(x + (k − m − 1)h)

( j, k = 1, . . . , 2m + 1),

and Akj =

(k − m − 1)j−1 , ( j − 1)!

(2.12)

we can now construct the terms in Eq. (2.8). By inverting the matrix A one obtains the desired derivatives: ϕ

( j−1)

(x) =

2m+1 

j

Ck ϕ(x + (k − m − 1)h),

(2.13)

k=1

where j

Ck =

1 hj−1

A−1 jk .

(2.14) j

The fortran program fd_coeff.f90 calculates the coefficients Ck for expressions involving an arbitrary number of points. Tables 2.1 and 2.2 list some of these coefficients.

14

Solution of bound state problems using a grid

Table 2.2 Higher-order finite difference coefficients for the second derivative

2.2.1

k

Cik

k Ci±1

k Ci±2

k Ci±3

k Ci±4

k Ci±5

1 2

−2 − 52

1 4 3

1 − 12

3

− 49 18

3 2

3 − 20

1 90

4

− 205 72

8 5

− 15

8 315

1 − 560

5

− 5269 1800

5 3

5 − 21

5 126

5 − 1008

1 3150

6

− 5369 1800

12 7

− 15 56

10 189

1 − 112

2 1925

k Ci±6

− 16 1632

Infinite-order finite differences One can generalize the finite difference approximation to infinite order. The (2N + 1)th-order Lagrangian interpolation for f (x), xk = kh, k = 0, ±1, ±2, . . . , ±N is given by f (x) =

N 

f (xk )

N  x − xi . xk − xi

(2.15)

i=−N i=k

k=−N

Differentiating this twice and evaluating the result at x = 0 we obtain ⎧ ⎫ ⎪ ⎪ ⎪ ⎪ N N N   1  1  i2 ⎬ 1 ⎨  . f (0) = − 2 2f (0) f (xk ) + f (x−k ) 2 − h ⎪ i2 k i 2 − k2 ⎪ ⎪ ⎪ ⎩ ⎭ i=1 i=1 k=1

(2.16)

i=k

For a few particular values of N, we get: for N = 1, for N = 2,

!  1 2f (0) − f (x1 ) + f (x−1 ) ; 2 h   1 " f  (0) = − 2 52 f (0) − 43 f (x1 ) + f (x−1 ) h  # 1 f (x2 ) + f (x−2 ) . + 12

f  (0) = −

(2.17) (2.18)

As N → ∞, N  1 π2 = N→∞ 6 i2

lim

i=1

(2.19)

2.3 Solution of the Schrödinger equation using three-point finite differences

15

and lim

N→∞

N  i=1 i=k

  πx i2 x2 = −2(−1)k , = lim 1 − 2 2 2 sin π x x→k i −k k

and so we have 1 f (0) = − 2 h 

2.3

$

% ∞    2(−1)k π2 f (0) + f (xk ) + f (x−k ) . 3 k2

(2.20)

(2.21)

k=1

Solution of the Schrödinger equation using three-point finite differences Now we are ready to put these expressions together and use them to solve the 1D Schrödinger equation. Using three-point finite differences, the second derivative of a function can be written as φ  (x) =

φ(x + h) + φ(x − h) − 2φ(x) . h2

(2.22)

Defining φ(i) = φ(xi ) = φ(a + i × h),

V (i) = V (xi ),

(2.23)

the Schrödinger equation can be written as −

2 φ(i + 1) + φ(i − 1) − 2φ(i) + V (i)φ(i) = Eφ(i) 2m h2

(2.24)

for i = 0, . . . , N − 1. Notice that two exterior points, x−1 and xN , appear in this equation. For bound states, the boundary conditions for these points are assumed to be φ(−1) = φ(N) = 0.

(2.25)

Equation (2.24) can be rewritten as the matrix eigenvalue problem HC = EC, where

⎧ 2  2 ⎪ ⎪ ⎪ + V (i) ⎪ 2 ⎪ ⎪ ⎨ 2m h Hij = 2 1 ⎪ − ⎪ ⎪ 2m h2 ⎪ ⎪ ⎪ ⎩0

(2.26)

if i = j, if i = j ± 1, otherwise

(2.27)

16

Solution of bound state problems using a grid

Listing 2.1 Solution of the Schrödinger equation for the harmonic oscillator potential using three-point finite differences 1 2 3 4 5

6 7

PROGRAM fd1d implicit none integer,parameter :: n=201 real*8,parameter :: h2m=0.5d0 real*8 :: h,hamiltonian(n,n),x(n),eigenvalues (n),eigenvectors(n,n) real*8 :: t,p,ss,hh,omega,a,b,w integer :: i,j

8 9 10 11 12 13 14 15 16

! Setup lattice a=-10.d0 b=10.d0 h=(b-a)/dfloat(n-1) do i=0,n-1 x(i)=a+i*h end do w=h2m/h**2

17 18 19 20 21 22 23 24 25 26 27

! Setup the Hamiltonian omega=1.d0 hamiltonian=0.d0 do i=1,n hamiltonian(i,i)=2.d0*w+0.5d0*omega*x(i)**2 if(i/=n) then hamiltonian(i,i+1)=-w hamiltonian(i+1,i)=-w endif end do

28 29 30 31

! Diagonalize call diag(hamiltonian,n,eigenvalues,eigenvectors) END PROGRAM fd1d

and ⎛

φ(0)



⎟ ⎜ ⎜ φ(1) ⎟ ⎟ ⎜ ⎟ ⎜ . ⎟. ⎜ . C=⎜ . ⎟ ⎟ ⎜ ⎜φ(N − 2)⎟ ⎠ ⎝

(2.28)

φ(N − 1) The program fd1d.f90 implements this for a harmonic oscillator potential and is shown in Listing 2.1. The results for various grid spacings may be compared

2.4 Fourier grid approach: position and momentum representations

17

Table 2.3 Solution of the Schrödinger equation with the harmonic oscillator potential using three-point finite differences Exact

h = 0.1

h = 0.05

0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5

0.499 687 3 1.498 435 7 2.495 930 6 3.492 169 6 4.487 150 3 5.480 870 3 6.473 327 1 7.464 518 4 8.454 441 7 9.443 094 6

0.499 921 8 1.499 609 2 2.498 983 9 3.498 045 7 4.496 794 5 5.495 230 2 6.493 352 5 7.491 161 4 8.488 656 6 9.485 838 1

with the exact values in Table 2.3. The accuracy can be improved by increasing the number of grid points (i.e. decreasing h) or by using a higher-order finite difference representation of the kinetic energy operator (fd4d.f90).

2.4

Fourier grid approach: position and momentum representations The eigenstates of position and momentum each provide a complete orthonormal basis suitable for expanding functions [210]. The orthogonality and completeness relations in these representations are given in Table 2.4. In the previous section we saw that the potential energy part of the Hamiltonian is easily expressed in the position basis, since it is diagonal in that representation. The kinetic energy part is not diagonal in the position basis, and so we had to use finite differences as an approximation. Another approach to handling the kinetic energy operator is the Fourier grid method [210], which we discuss in this section. The Hamiltonian operator is the sum of kinetic and potential energy operators H = T + V.

(2.29)

The potential is diagonal in the position representation, x |V |x = V (x)δ(x − x ),

(2.30)

while the kinetic energy operator is diagonal in the momentum representation, k |T|k =

2 k 2 δ(k − k ). 2m

(2.31)

18

Solution of bound state problems using a grid

Table 2.4 Position and momentum representations Representation

Operator

Orthogonality

Position

ˆ x|x = x|x

x |x = δ(x − x )

Momentum

pˆ |k = k|k

k |k = δ(k − k )

Completeness &∞ Ix ≡ −∞ |xx|dx &∞ Ik ≡ −∞ |kk|dk

Table 2.5 Discretized position and momentum representations Representation

Orthogonality

Completeness

Position

xxi |xj  = δij

Ix ≡

Momentum

kki |kj  = δij

Ik ≡

'N

i=1 |xi  xxj |

'(N−1)/2

i=−(N−1)/2 |ki  kkj |

The transformation matrix elements between the position and momentum representations are 1 k|x = √ e−ikx . 2π

(2.32)

Together, Eqs. (2.31) and (2.32) allow us to write the kinetic energy in the position representation. Now we can write the matrix elements of the Hamiltonian in the position representation as ( ∞ 2 2  k −ik(x−x ) 1 e dk + V (x)δ(x − x ). (2.33) x |H|x = 2π −∞ 2m A uniform discrete grid in position space is defined as xi = i x

(i = 1, . . . , N),

(2.34)

where x is the grid spacing and we assume that N is odd. In momentum space we have k =

2π , N x

(2.35)

 N−1 N−1 ,..., ; 2 2

(2.36)

kj = j k

j=−

see Table 2.5. The wave function is represented as ψ=

N  i=1

x|xi ψi ,

ψi = xi |ψ.

(2.37)

2.4 Fourier grid approach: position and momentum representations

19

Table 2.6 Exact and calculated eigenvalues for the Morse potential Exact

N = 129

0.009 869 22 0.028 745 35 0.046 471 72 0.063 048 33 0.078 475 18 0.092 752 27 0.105 879 60

0.009 873 39 0.028 757 11 0.046 490 08 0.063 072 31 0.785 037 93 0.927 845 33 0.105 914 53

0.2 0.1 0 – 0.1 – 0.2

0

1

2

3

4

5

Figure 2.1 Morse potential (broken line) and the lowest three calculated eigenfunctions.

The matrix elements of the Hamiltonian in the Fourier grid representation are Hij =

2 N

(N−1)/2  k=1

cos

2π(i − j)k Tk + V (xi )δij N

(2.38)

where 2 Tk = 4 2m

πk N x

2 .

(2.39)

As an application of the Fourier grid approach we will calculate the vibrational states of H2 . These states can be described by the Morse potential (see Fig. 2.1)

2 V (x) = D 1 − e−β(x−xe ) (2.40) where D is the depth of the minimum. The Schrödinger equation with this potential is analytically solvable, and the first few bound state eigenenergies are listed in Table 2.6. Note that the calculation of the eigenvalues for the Morse potential is much more difficult than that for the harmonic oscillator, and a large number

20

Solution of bound state problems using a grid

Listing 2.2 Solution of the Schrödinger equation for the Morse potential using the Fourier grid method 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

PROGRAM fgh1d implicit none integer,parameter real*8,parameter real*8,parameter real*8,parameter real*8,parameter real*8,parameter real*8,parameter real*8,parameter real*8 integer real*8 real*8 real*8

:: :: :: :: :: :: :: :: :: :: :: :: ::

N_grid=401 ! num. grid points grid=10.d0 ! grid size dx=grid/N_grid ! grid spacing redm=917.70493d0 ! reduced mass h2m=0.5d0/redm ! hbar/(2*m) pi=3.1415926535897932384d0 D=0.1744d0,beta=1.02764d0 xe=1.40201d0 pc,tc,x,t i,j,l,np1,np h(N_grid,N_grid),p(N_grid) eigenvectors(N_grid,N_grid) eigenvalues(N_grid)

16 17 18 19 20

np=(N_grid-1)/2 np1=np+1 pc=2.d0*pi/N_grid tc=2.d0/N_grid*4.d0*h2m*(pi/grid)**2

21 22 23 24 25 26 27

! Calculation of the potential at the grid points do l=1,N_grid x=(l-1-np)*dx p(l)=D*(1.-exp(-beta*(x-xe)))**2 write(1,*)x,p(l) end do

28 29 30 31 32 33 34 35 36 37 38 39 40

! Set up the (symmetric) Hamiltonian matrix do i=-np,np do j=-np,np ! Kinetic energy t=0.d0 do l=1,np t=t+cos(pc*l*(i-j))*l**2 end do H(np1+i,np1+j)=t*tc end do H(np1+i,np1+i)=H(np1+i,np1+i)+p(np1+i) end do

41 42 43

! Diagonalize call diag(H,N_grid,eigenvalues,eigenvectors)

44 45

END PROGRAM fgh1d

of basis functions is required to get accurate solutions. The program fgh1d.f90 (see Listing 2.2) calculates the eigensolution for the Morse potential (see Table 2.6). For this calculation, D = 0.1744 a.u. = 4.7457 eV, β = 1.027 64 a.u. = 1.941 96 × 1010 m−1 , and xe = 1.40201 a.u. = 0.74191 × 10−10 m.

2.5 Lagrange functions

2.5

21

Lagrange functions The simple grid used in finite difference calculations has all points equally spaced and equally important (i.e., equally weighted). In general one can choose the location and weighting of the points in a way that enhances numerical accuracy and/or efficiency. The Lagrange functions (LFs) are an example of this and are described in this section. The Lagrange functions Li (x) associated with a grid {xi } (i = 1, . . . , N) are defined as the product of a Lagrange interpolant [310] πi (x) and a weight function w(x), ) Li (x) = λi πi (x) w(x), (2.41) where the Lagrange interpolating polynomial is defined as πi (x) =

N  x − xk xi − xk

(2.42)

k=1 k=i

and the normalization factor λi will be specified below. Lagrange interpolants are often used in numerical calculations because a function f (x) can be simply and accurately interpolated by its values f (xi ) at the grid points xi (i = 1, . . . , N) f (x) =

N 

f (xi )π(x).

(2.43)

i=1

At this point we have complete freedom in choosing the grid points xi and the weight function w(x). The most advantageous choice is to use the Gaussian quadrature points (abscissas) associated with the weight function w(x). This choice not only guarantees that numerical integration over the LFs will be of Gaussian quadrature accuracy but also ensures that the LFs form an orthogonal basis. Gauss originally used continued fractions to find the most suitable abscissas xi and weights wi for calculating the integral (

b a

f (x)w(x)dx ≈

N 

wi f (xi ).

(2.44)

i=1

Christoffel later showed that the Gaussian quadrature points are the roots of the orthogonal polynomials associated with the weight function w(x). These orthogonal polynomials pi are generated by a three-term recurrence relation [310] bi+1 pi+1 (x) = (x − ai+1 )pi (x) − bi pi−1 (x)

(p−1 (x) = 0, p0 (x) = 1),

where ai+1 = (pi |x|pi ),

bi = (pi |x|pi−1 ),

(2.45)

22

Solution of bound state problems using a grid

the scalar product of two functions is defined as ( ( f |g) =

a

b

( f (x)g(x)w(x)dx

b

with a

w(x)dx = 1.

(2.46)

The recurrence relation can be rewritten in a more elegant, matrix, form: ⎛ a1 ⎜ ⎜ ⎜b1 ⎜ ⎜ ⎜0 ⎜ ⎜. ⎜ .. ⎝ 0

···

b1

0

a2

b2

0 .. .

b2

a3

0 ..

···

0 ⎛

.

bN−1

bN−1 p0 (x)

⎞⎛





⎟⎜ ⎟ ⎜ p (x) ⎟ ⎟ ⎟⎜ 1 ⎟ ⎟⎜ ⎟ ⎟⎜ .. ⎟ ⎟⎜ . ⎟ ⎟⎜ ⎟ ⎟⎜ ⎟ ⎝ pN−2 (x)⎟ ⎠ ⎠ pN−1 (x)

aN ⎞

p0 (x)

0



⎜ ⎟ ⎟ ⎜ p1 (x) ⎟ ⎜ 0 ⎟ ⎜ ⎟ ⎜ ⎜ ⎟ ⎜ ⎟ ⎜ .. ⎜ ⎟ ⎜ .. ⎟ = x⎜ ⎟−⎜ . ⎟ . ⎟. ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎜ pN−2 (x)⎟ ⎝ 0 ⎟ ⎠ ⎝ ⎠ pN−1 (x) pN (x)

(2.47)

This rearrangement shows that the most convenient way to find the roots xi of pN (x) is to diagonalize the above matrix, which we will call J [106]. The eigenvalues xi are the desired Gaussian quadrature points, while the corresponding weights wi are given by the squares of the first elements of the eigenvectors of J, defining the normalization of the LFs by 1 λi = √ . wi

(2.48)

By defining both xi and λi the Lagrange functions Li (x) are fully determined (see Eq. (2.41)). Using the polynomials pk one can give an equivalent definition. First set ) 1 ϕk (x) = √ pk (x) w(x), hk

(2.49)

where hk is the norm of pk . Using the Christoffel–Darboux formula [310] one can derive the relation N−1  k=0

* kN−1 ϕk (x)ϕk ( y) = kN

hN ϕN (x)ϕN−1 ( y) − ϕN−1 (x)ϕN ( y) , hN−1 x−y

(2.50)

2.5 Lagrange functions

23

where kn is the coefficient of xn in pn (x). As the grid is defined by the zeros of pN (xi ) = 0, using the above equation the Lagrange functions can be defined as Li (x) =

N−1  ϕN (x) 1 = λ ϕk (x)ϕk (xi ) i  ϕM (xi ) x − xi

(2.51)

k=0

with

* hN 1 .  hN−1 ϕN (xi )ϕN−1 (xi )

kN−1 λi = kN

(2.52)

The most important properties of the Lagrange functions Li (x) are as follows. 1. Orthogonality: (

b a

Li (x)Lj (x)dx = δij .

(2.53)

2. Cardinality: Li (xj ) = δij .

(2.54)

3. A wave function using LFs can be expanded as φ(x) =

N 

φ(xi )Li (x),

(2.55)

i=1

that is, the variational parameters are the values of the wave function at the grid points. 4. The kinetic energy (Laplacian) matrix can be calculated simply by differentiating the LFs:  d 2 Li = Dij Lj (x), (2.56) Li (x) = 2 dx j &b where Dij = a Li (x)Lj (x)dx can be calculated by higher-order Gaussian integration or, in some cases, analytically. 5. The matrix elements of the potential are simply the values of the potential at the grid points: Li |V |Lj  = V (xi )δij .

(2.57)

6. The Hilbert spaces of the LFs and of the polynomials pi is isomorphic: the Christoffel–Darboux relation connects the Lagrange functions to the orthogonal polynomials [309, 26, 27] (see Eq. (2.51)) Li (x) =

N−1 

ck pk (x),

ck = λi pk (xi ).

(2.58)

k=0

This equation shows that the LFs have the same accuracy as the Nth-order polynomials with the same weight function, and it can also be used to define

24

Solution of bound state problems using a grid

the Lagrange functions in terms of orthogonal polynomials. From a practical √ point of view, however, the difference is enormous. In the case of the pi (x) w(x) basis functions, both the kinetic energy and the potential energy matrix elements have to be calculated analytically or numerically and the Hamiltonian is dense. For the LFs the Hamiltonian is sparse and efficient iterative diagonalization techniques can be used. 7. Exponential convergence:

error ≈ O (1/N)N ,

(2.59)

where N is, as before, the number of grid points. The error decreases faster than any finite power of N because the power in the error formula is always increasing. This is called infinite-order or “exponential” convergence [37]. 8. The position operator is diagonal: the J-matrix is the matrix of the position operator x. In this way the Lagrange basis can also be considered to be a representation in which the position operator is diagonal. Next, we show two very important extensions [27] to the formalism presented above. The first is a coordinate transformation or mapping. The function u(x) is an invertible mapping to an auxiliary u-space defining the ui = u(xi ) points. The simplest choice is the identity transformation u(x) = x. In numerical calculations, where the wave function is rapidly varying in some regions and smooth in other regions, u(x) can be used to optimize the distribution of grid points, leading to improved efficiency. The mapping can also be used to transform infinite or semiinfinite intervals into finite intervals and vice versa. With the mapping u(x), the Lagrange function is defined as

ρ(u(x))λi Li (x) = πi (u(x)) du/dx

1/2 (2.60)

where ρ(u) is the transformed weight function in u-space. The second extension is the application of Gauss–Lobatto or Gauss–Kronrod quadrature. The inclusion in the grid of the endpoints of the interval is advantageous in many calculations, and it is especially useful when one needs to enforce particular boundary conditions. Gauss–Lobatto quadrature can be constructed analogously to the previously described Gauss quadrature by adding the endpoints of the interval to the grid points. Gauss–Kronrod quadrature is even more general: an arbitrary number of predefined points can be added to the grid, increasing the resolution at prescribed regions. The program ort_pol.f90 can be used to generate the xi and wi values. In the following example we show that the Lagrange function basis can be used to solve the same sorts of problem as the finite difference approach. In addition the Lagrange basis has the advantage of giving exact solutions for some potentials.

2.5 Lagrange functions

25

Using the Lagrange functions the Schrödinger equation can be written as   N−1  2 Dji + V (xj )δij φ(xi ) = Eφ(xj ) ( j = 0, . . . , N − 1). (2.61) 2m i=0

2.5.1

Special cases In the following we will show simple examples of Lagrange functions. For the definition of Lagrange functions we use Eq. (2.51).

Fixed-node basis functions As a first example we introduce fixed-node Lagrange functions [229]. These functions are defined on equidistant grid points and are zero at the boundary. They are constructed by using ) and u(x) = cos x (2.62) w(u) = 1 − u2 as the weight function and the mapping. This generates orthogonal functions + 2 pk (x) = sin kx (2.63) π and, using Eq. (2.58), 2  sin nx sin nxi , N+1 N

Li (x) =

0 ≤ x ≤ π,

(2.64)

n=1

with equally spaced grid points xi =

iπ . N+1

(2.65)

As is clear from their definition and as can be seen in Fig. 2.2, the basis functions pk are zero at the starting and ending points of the interval [0, π ]. For an interval [a, b] the grid is given by xi = a +

b−a i N+1

(i = 1, . . . , N),

(2.66)

and the Lagrange basis functions are kπ(xi − a) 2  kπ(x − a) sin . sin N+1 b−a b−a N

Li (x) =

(2.67)

k=1

Note that, for b − a → ∞ and N → ∞, x = and

b−a N

  sin π(x − xi )/ x , Li (x) = π(x − xi )

(2.68)

(2.69)

26

Solution of bound state problems using a grid

1.0 0.5 0 0

1

2

3

0

1

2

3

1.0 0.5 0

Figure 2.2 Lagrange functions from Eq. (2.64) for N = 10. The upper panel shows the basis functions for i = 1 (solid line) and i = 5 (broken line). The lower panel shows the complete set of functions.

which equals 1/ x for x = xi and zero otherwise and behaves like δ(x − xi ) for x → 0. For these Lagrange functions the kinetic energy matrix element is     N  2 d 2  2    L Lk (xi )Lk (xj ), (2.70) = − Tij = Li −  j  2m dx2  2m k=1

where Lk (xj )

=k

2

π b−a

2

2 Lk (xj ). N+1

(2.71)

This matrix element can be calculated analytically. The sum can be rewritten as N 

k2 sin

k=1

=

kjπ kiπ sin N+1 N+1

 N  k(i − j)π k(i + j)π k2 cos − cos 2 N+1 N+1 k=1



⎛ ⎡  N   1 ⎣ d2  = eikx  − 2 Re ⎝  2 dx k=1

x=(i−j)π/(N+1)

⎛  N   d2 ikx  ⎝ + 2 Re e   dx k=1

⎠ ⎞⎤ ⎠⎦ . x=(i+j)π/(N+1)

(2.72)

2.5 Lagrange functions

The geometric series above can be calculated as follows:

N  1 sin N +  2 x 1 1 Re eikx = − + . 2 2 sin 12 x

27

(2.73)

k=1

One then obtains for Tij , i = j, 0

−1 2 π2 2 sin π(i − j)/2(N + 1) (−1)i−j 2m 2(b − a)2

−1 1 − sin2 π(i + j)/2(N + 1) and, for Tii ,

0

−1 1 π2 2 2 2 2(N + 1) + 1/3 − sin π i/N + 1 2m 2(b − a)2

For the interval (−∞, ∞), a → −∞, b → ∞, and the grid spacing x = (b − a)/N remains finite as N → ∞. The grid is now specified as xi → i x with i = 0, ±1, ±2, . . . The kinetic energy becomes ⎧ 2 ⎪ ⎪ for i = j, ⎨ 2 i−j  (−1) (i − j)2 (2.74) Tij = 2 2m x2 ⎪ ⎪ ⎩π for i = j. 3 In the case of a radial grid, one uses n ∈ (0, ∞), a = 0, b → ∞, and N → ∞. The radial grid is ri = i r, i = 1, . . ., and ⎧ 2 2 ⎪ ⎪ i = j, ⎨ (i − j)2 − (i + j)2 , 2 i−j (2.75) (−1) Tij = 3 ⎪ 2m r2 ⎪ ⎩π − 1 , i = j. 3 2i2

Periodic functions Another set of Lagrange functions is generated by choosing w(u) = )

1 1 − u2

and

u(x) = cos x

as the weight function and the mapping. This generates the orthogonal functions 1 pn (x) = √ eikn x , 2π

kn =

2n − N − 1 , 2

(2.76)

with n = 1, . . . , N where N is an odd integer. Using Eq. (2.58), Li (x) =

N 1  cos kn (x − xi ), N n=1

−π ≤ x ≤ π ,

(2.77)

28

Solution of bound state problems using a grid

1.0 0.5 0 –2

0

2

–2

0

2

1.0 0.5 0

Figure 2.3 Lagrange functions from Eq. (4.13) for N = 10. The upper panel shows the basis functions for i = 1 (solid line) and i = 5 (broken line). The lower panel shows the complete set of functions.

with equally spaced grid points (2i − N − 1)π . (2.78) N The Lagrange Li functions defined in this way are periodic (see Fig. 2.3) and are also known as “periodic cardinals.” xi =

Hermite–Lagrange basis Using the weight function w(x) = e−x

2

(2.79)

on the interval (−∞, ∞) one can generate the Hermite polynomials Hn and define the Lagrange functions using Eq. (2.51). The kinetic energy matrix elements can be calculated as ⎧ 1 2 ⎪ ⎪ i = j, ⎨ 6 4N − 1 − 2xi , (2.80) Tij =

 ⎪ 2 1 ⎪ ⎩(−1)i−j , i = j. − 2 (xi − xj )2 The grid points are the zeros of Hn (x).

2.5.2

Examples: Lagrange-basis calculations The example of the anharmonic oscillator V (x) = 12 x2 + x4

(2.81)

can be used to show the accuracy of the Lagrange basis. Two different Lagrange bases, the fixed node basis (1dsch.f90) and the Hermite–Lagrange basis (sch-mesh.f90) are used in these calculations. The code for the fixed-node basis is

2.5 Lagrange functions

29

Table 2.7 Energies of the anharmonic oscillator in atomic units. The finite difference calculations used nine points. The computational interval is [−10, 10] N

Finite difference

Fixed-node

Hermite

20

0.696 053 382 936 2.322 214 687 752 4.310 370 040 933 5.411 413 158 511 6.494 237 068 252

0.696 176 234 602 2.324 411 468 629 4.327 482 826 974 6.577 662 444 153 9.028 638 035 174

0.696 175 820 745 2.324 406 351 957 4.327 524 984 580 6.578 402 014 652 9.028 778 628 005

100

0.696 175 714 151 2.324 404 376 708 4.327 507 547 656 6.578 306 925 285 9.028 405 197 299

0.696 175 820 765 2.324 406 352 106 4.327 524 978 879 6.578 401 949 024 9.028 778 718 151

0.696 175 820 765 2.324 406 352 106 4.327 524 978 879 6.578 401 949 024 9.028 778 718 151

Ln(error)

0

–10

–20 10

20 30 40 Number of basis functions

50

Figure 2.4 Exponential convergence of the Lagrange function basis. For each of the three

lowest eigenvalues, the natural logarithm of the error is plotted vs. the number of basis functions. The solid, broken, and dotted lines correspond to the first, second, and third eigenvalues, respectively.

shown in Listings 2.3–2.5. Table 2.7 shows for comparison the results obtained using the Lagrange basis and those using finite differences. The Lagrange basis results are more accurate than the finite difference results and also are better when the basis is smaller. The Hermite–Lagrange basis is more accurate than the fixednode basis for the anharmonic potential because the Hermite–Lagrange basis is constructed from harmonic oscillator functions, that is, these basis functions would be eigenfunctions if the x4 term were not present in the potential. Figure 2.4 shows the natural logarithm of the error versus the number of basis functions for the lowest three eigenvalues. By plotting the natural logarithm of the error, the exponential decrease in the error is evident.

30

Solution of bound state problems using a grid

Listing 2.3 Solution of the Schrödinger equation for the anharmonic oscillator potential using fixed-node basis functions 1 2 3 4 5 6 7

PROGRAM sch_1d implicit none real*8,parameter integer real*8 real*8,dimension(:),allocatable real*8,dimension(:,:),allocatable

:: :: :: :: ::

h2m=0.5d0 i,ndim a,b xr,wr,e tm,hm,vm

8 9 10

a=-10.d0; b=10.d0 ! left and right boundaries Ndim=40 ! number of mesh points

11 12 13 14 15

allocate(xr(Ndim),wr(Ndim),tm(Ndim,Ndim)) allocate(hm(Ndim,Ndim),vm(Ndim,Ndim),e(Ndim)) call sin_cardinal(Ndim,a,b,xr,wr,tm) hm=-h2m*tm ! kinetic energy part of Hamiltonian

16 17 18 19 20 21

do i=1,Ndim ! potential energy part hm(i,i)=hm(i,i)+0.50d0*(xr(i)**2+xr(i)**4) end do call diag(hm,ndim,e,vm) END PROGRAM sch_1d

Listing 2.4 Solution of the Schrödinger equation for the anharmonic oscillator potential using fixed-node basis functions 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

subroutine sin_cardinal(N,a,b,xr,wr,t) implicit none integer :: i,j,N real*8 :: xr(N),wr(N),t(N,N),x,a,b,wt do i=1,N x=a+i*(b-a)/(1.d0*(N+1)) xr(i)=x wr(i)=dfloat(N+1)/(b-a) end do do i=1,N x=a+i*(b-a)/(1.d0*(N+1)) do j=1,N wt=sinc(x,j,N,a,b,2) t(i,j)=wt end do end do end subroutine sin_cardinal

2.5 Lagrange functions

Listing 2.5 Solution of the Schrödinger equation for the anharmonic oscillator potential using fixed-node basis functions 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

function sinc(x,i,N,a,b,id) implicit none real*8,parameter :: pi=3.141592653589793d0 integer :: k,N,i,id real*8 :: x,su,wi,wj,d,a,b,sinc su=0.d0 do k=1,N d=k*pi/(1.d0*(N+1)) select case(id) case(0) wi=sin(pi*k*(x-a)/(b-a))*sqrt(2.d0/(b-a)) case(1) wi=cos(pi*k*(x-a)/(b-a))*sqrt(2.d0/(b-a)) & *pi*k/(b-a) case(2) wi=-sin(pi*k*(x-a)/(b-a))*sqrt(2.d0/(b-a)) & *(pi*k/(b-a))**2 end select wj=sin(i*d)*sqrt(2.d0/(b-a)) su=su+wi*wj end do sinc=su*sqrt((b-a)/(1.d0*(N+1)))/(sqrt((N+1)/(b-a))) end function sinc

31

3

Solution of the Schrödinger equation for scattering states

The goal of this chapter is to show how to solve the time-independent Schrödinger equation (TISE) −

2 d 2 (x) + V (x)(x) = E(x) 2m dx2

(3.1)

for the eigenstates (x) with definite energy E representing the continuous part of the spectrum. In typical physical applications the potential is complicated only in a finite region around the origin (see Fig. 3.1) and we are interested in the scattering wave function which describes the reflection and transmission probability of a particle incident on that potential. For the simplest potentials V (x) (e.g., a square well, a potential step, etc.), these continuum eigenstates can be found exactly. Beyond the few simple analytically solvable problems the determination of continuum states is complicated. The major source of difficulty is that the wave function is nonzero in the entire space (−∞ < x < ∞) and basis function or finite difference expansions (see below) lead to infinite-dimensional representations. A typical scattering potential is shown in Fig. 3.1. In the left-hand asymptotic region (−∞ < x < a) and in the right-hand asymptotic region (b < x < ∞) the potential is constant (with values VL and VR , respectively). In these regions the Schrödinger equation has two linearly independent solutions, eikL x and e−ikL x in the left-hand region and eikR x and e−ikR x in the right-hand region, with + + 2m(E − VL ) 2m(E − VR ) kL = , kR = . (3.2) 2 2 Any linear combination of the two solutions is a solution of the Schrödinger equation in the corresponding asymptotic region, and so the general solution can be written as ⎧ ⎨AL (E)eikL x + BL (E)e−ikL x if −∞ < x ≤ a, (3.3) (x) = ⎩A (E)eikR x + B (E)e−ikR x if b ≤ x < ∞. R R Thus, in general there is a left incident (incoming) wave eikL x and a right incident wave e−ikR x with amplitudes AL and BR , respectively. These states are scattered by the potential and the scattered (outgoing) waves BL e−ikL x and AR eikR x each include the transmitted and reflected waves in the corresponding region. Two of the

Solution of the Schrödinger equation for scattering states

33

Figure 3.1 A one-dimensional scattering potential showing the central scattering region

(a ≤ x ≤ b) and asymptotic regions (x < a and x > b) where the potential is constant.

four coefficients are fixed by the boundary conditions. One can, for example, specify that there is an incoming wave from the left and no incoming wave from the right by choosing AL = 1 and BR = 0. Since the flux of the particles is conserved we have kL |AL (E)|2 + kR |BR (E)|2 = kR |AR (E)|2 + kL |BL (E)|2 ,

(3.4)

which implies that the incoming wave amplitudes are transformed into the outgoing ones by a unitary matrix:      BL S11 S12 AL = . (3.5) AR S21 S22 BR The S-matrix is fully determined by the scattering potential and is independent of the boundary conditions. Therefore, to clarify the physical meaning of the matrix elements we can consider special cases with simple choices of the wave amplitudes A and B. If AL = 0 and BR = 0 (incoming wave from the left) then rL = BL /AL and tL = AR /AL are the left-hand reflection and transmission amplitudes, respectively. Similarly, if AL = 0 and BR = 0 then rR = AR /BR and tR = BL /BR are the righthand reflection and transmission amplitudes. Substituting these special cases into Eq. (3.5) we see that the S-matrix consists of these transmission and reflection amplitudes:   ) kR /kL tR (E) rL (E) . (3.6) S(E) = ) kL /kR tL (E) rR (E) If E is real and the interaction is time-reversal invariant then the left- and righthand transmission amplitudes are related through kR tL (E) = kL tR (E).

(3.7)

34

Solution of the Schrödinger equation for scattering states

As the total current is conserved one also has |rL (E)|2 +

kR kL |tL (E)|2 = |rR (E)|2 + |tR (E)|2 = 1. kL kR

(3.8)

A detailed discussion of these relations can be found in Section 3.7.

3.1

Green’s functions The Green’s function is a mathematical tool for solving inhomogeneous differential equations subject to given boundary conditions. In this section we define the time-dependent and time-independent Green’s functions. The time-dependent Schrödinger equation, i

d(x, t) = H(x, t), dt

H = T + V (x),

(3.9)

with initial condition 0 (x) ≡ (x, t0 )

(3.10)

is a linear differential equation for which there exist functions, called Green’s functions, satisfying 

d i + H G± (t) = δ(t). (3.11) dt The Green’s functions G± differ in their boundary conditions: $ G+ (t) = 0, t < 0, retarded; G− (t) = 0,

t > 0,

advanced.

The formal solutions of Eq. (3.11) for these cases are ⎧ ⎨− i e−iHt/, t > 0, +  G (t) = ⎩ 0, t < 0, and G− (t) =

⎧ ⎪ ⎨0, i ⎪ ⎩− e−iHt/, 

(3.12)

(3.13)

t > 0, t < 0.

(3.14)

If the Hamiltonian does not depend on time then G+ is proportional to the timeevolution operator and the formal solution of the Schrödinger equation is (x, t) = iG+ (t − t0 )(x, t0 ),

(3.15)

that is, the retarded Green’s function G+ propagates the wave function in time. Conversely, the advanced Green’s function G− propagates the wave function from

3.1 Green’s functions

35

the present to the past. From the hermiticity of the Hamiltonian the following relation is true:  + † G (t) = G− (−t). (3.16) Time-independent Green’s functions can be obtained by Fourier transforming the time-dependent Green’s function: ( ∞ ( ∞ 1 G+ (E) = eiEt/e−/G+ (t) = eiEt/e−/e−iHt/ = E + i −H −∞ 0 and

(



G (E) =

∞ −∞

e

(

iEt/ / −

G (t) =

e

0

−∞

eiEt/e/e−iHt/ =

1 , E − i − H

where the infinitesimal real number  guarantees that the integrals are convergent. The discrete eigenstates i , for which Hi = Ei i ,

(3.17)

and the continuum eigenstates, for which HE = EE , of the Hamiltonian-form a complete set of states: (  |i i | + dE |E E |. 1=

(3.18)

(3.19)

i

Using these states the spectral representation of the Green’s function can be written as  |i i | ( |E  E  | G(z) = + dE  , (3.20) z − Ei z − E i

where z = E ± i for

G± .

One can define the spectral function

i lim [G+ (E) − G− (E)] 2π →0   (  1   = lim |i  i | + dE  |E    E  | 2 + 2 2 + 2 π →0 (E − E ) (E − E ) i i i (    |i δ(E − Ei )i | + dE  E  δ(E  − Ei )E  , (3.21) =

A(E) =

i

where the relation δ(x) =

1  lim 2 . π →0 x +  2

has been used. The spectral function A is a projector: for each eigenenergy E, A(E) projects out the eigenvector  = A|v for any vector |v. The spectral projector

36

Solution of the Schrödinger equation for scattering states

counts the number of eigenstates at a given energy, and thus it gives the density of states. In particular, using the position representation, the spectral projector A(x, E) =

i lim [G+ (x, x, E) − G− (x, x, E)] 2π →0

(3.22)

1 Im G+ (x, x, E) π then counts the number of states at a given energy at a given point x in space and so is called the local density of states. The retarded (outgoing) Green’s function in coordinate space, G+ , is defined as   2 d 2 +  − V (x) G+ (x, x , E) (E + i − H) G (x, x , E) = E + i + 2m dx2 =−

= δ(x − x ).

(3.23)

The Green’s function G+ (x, x , E) has the following properties. 1. It is a continuous function of x. 2. It is a differentiable function of x and the first derivative is continuous except at x = x , where   lim (∂1 G)+ (x + δ, x, E) − (∂1 G)+ (x − δ, x, E) = −2m2 , (3.24) δ→0+

where ∂1 means differentiation with respect to the first argument. 3. For x = x , it satisfies HG+ (x, x , E) = EG+ (x, x , E).

(3.25)

We may include the point x = x by writing (E − H)G+ (x, x , E) = δ(x − x ).

(3.26)

4. It can be constructed from two linearly independent solutions of the Schrödinger equation: G+ (x, x , E) =

2m 1  L (x, E)R (x , E)θ (x − x) 2 W (E)

 + L (x , E)R (x, E)θ (x − x ) ,

(3.27)

where L and R are two linearly independent solutions (e.g., solutions satisfying boundary conditions in the left- and right-hand regions), W (E) is the Wronskian  (x, E), W (E) = L (x, E)R (x , E) − L (x , E)R

(3.28)

and θ (x) is the step function. Using these properties we can construct the Green’s function in some simple cases. These analytically known Green’s functions are useful for testing numerical calculations.

3.1 Green’s functions

37

If the potential is constant, e.g. V (x) = 0, then the independent solutions are + 2mE ikx −ikx , k= . (3.29) L (x, E) = e , R (x, E) = e 2 The Wronskian is W (E) = −2ik and the Green’s function becomes

+

G+ (x, x , E) = i For the simple step potential V (x) =

$

(3.30)

m ik|x−x | e . 2E2

VL ,

x < 0,

VR ,

x > 0,

(3.31)

(3.32)

the left- and right-hand solutions of Eq. (3.18) are analytically known [218]; they are given by ⎧ ⎪ ikL x + kL − kR e−ikL x , ⎪ x < 0, ⎪ ⎨e k L + kR (3.33) L (x, E) = ⎪ ⎪ 2kL ikR x ⎪ ⎩ e , x > 0, k L + kR and

⎧ 2kR ⎪ −ikL x ⎪ , ⎪ ⎨k + k e L R R (x, E) = ⎪ ⎪ −ikR x kL − kR ikR x ⎪ ⎩e − e , k L + kR

x < 0, (3.34) x > 0.

The Green’s function G+ (x, x , E) can be constructed; for x < 0 we have

 ⎧ m 1 kL − kR −ikL (x+x ) ⎪ −ikL (x−x ) ⎪ e , x < x , + e ⎪ ⎪ 2 ik ⎪ k + k  L L R ⎪ ⎪ ⎪

 ⎨ m 1 kL − kR −ikL (x+x ) ikL (x−x ) e , x < x < 0, + e 2 ik ⎪ k + k  L L R ⎪ ⎪ ⎪ ⎪ ⎪ m 2  ⎪ ⎪ ⎩ e−ikL x +ikR x , x>0 2 ikL + ikR and for x > 0 we have ⎧ m 2  ⎪ ⎪ e−ikL x+ikR x , ⎪ 2 ⎪  ikL + ikR ⎪ ⎪ ⎪ ⎪  ⎨m 1 kR − kL −ikR (x+x ) ikR (x−x ) e , + e ⎪ k L + kR 2 ikR ⎪ ⎪ ⎪

 ⎪ ⎪ kR − kL ikR (x+x ) ⎪ ikR (x−x ) ⎪m 1 ⎩ e , + e k L + kR 2 ikR

(3.35)

x < 0, 0 < x < x , x > x .

(3.36)

38

Solution of the Schrödinger equation for scattering states

3.2

The transfer matrix method The transfer matrix method is one of the simplest approaches to solving scattering problems. It is based on wave function matching. We will take the example of scattering at a rectangular barrier, for which $ V, |x| ≤ a, V (x) = (3.37) 0 otherwise. For E > V the solution has the form ⎧ ⎪ A eikx + BL e−ikx , ⎪ ⎨ L   (x) = Aeik x + Be−ik x , ⎪ ⎪ ⎩ AR eikx + BR e−ikx , where

+ k=

+

2mE , 2



k =

x < −a, −a ≤ x ≤ a,

(3.38)

a < x,

2m(E − V ) . 2

(3.39)

By requiring the continuity of the wave function and its derivative at x = −a and x = a, one has four equations for the six unknown coefficients. Two more equations can be derived from the initial conditions, and the six coefficients can then be determined. This approach is based on the fact that the potential is constant in three regions and the solutions are known in those regions. The transfer matrix method generalizes this approach for an arbitrary potential. Just as in finite differencing, the space is divided into a grid, xi = a + ih

(i = 0, . . ., n),

(3.40)

the grid spacing having been chosen so that the potential remains approximately constant in each interval, that is 

xi−1 + xi , xi−1 ≤ x < xi . V (x) = Vi = V (3.41) 2 Then, in the interval [xi−1 , xi ] the solution can be written in the form i (x) = Ai eiki x + Bi e−iki x , with

+ ki =

xi−1 ≤ x < xi ,

2m(E − Vi ) . 2

(3.42)

(3.43)

The requirement for the continuity of the wave function and its first derivative i (xi ) = i+1 (xi ),

(3.44)

 i (xi ) = i+1 (xi ),

(3.45)

3.3 The complex-absorbing-potential approach

39

leads to the following expressions for the coefficients:



 1 0 kj+1 ikj+1 xj kj+1 −ikj+1 xj −ikj xj 1 Aj = e e e Aj+1 1 + + Bj+1 1 − 2 kj kj Bj =



 1 0 kj+1 ikj+1 xj kj+1 −ikj+1 xj ikj xj 1 e e e Aj+1 1 − + Bj+1 1 + . 2 kj kj

This can be written in matrix form as     Aj+1 Aj =T , Bj Bj+1

(3.46)

where T is known as the transfer matrix. If the boundary conditions are given in the right-hand region, that is, An+1 and Bn+1 are known, the equations can be solved by simple recursion and the wave function is known everywhere. One can also write down this equation in the opposite direction, that is, when j + 1 is calculated from j, simply by inverting the T-matrix. For example, if there is a wave incoming from the left, the boundary conditions are An+1 = 1,

Bn+1 = 0.

(3.47)

In this case the reflection and transmission coefficients are given by B0 A0

(3.48)

An+1 1 = . A0 A0

(3.49)

R= and T=

Listing 3.1 shows an implementation of the transfer matrix approach (transfer_ matrix.f90). Table 3.1 shows the transmission and reflection probabilities obtained by the transfer matrix method for the Morse–Feshbach potential V (x) = −V0

sinh2 (x − x0 )/d cosh2 [(x − x0 )/d − μ]

,

(3.50)

where μ is a dimensionless parameter.

3.3

The complex-absorbing-potential approach In this section we demonstrate how complex absorbing potentials [236, 237, 240, 188, 162, 238, 239, 284, 63, 339, 340, 203, 283, 155, 29, 19, 185, 271, 272, 135, 314, 186, 225, 221, 41] can be used to calculate the scattered wave functions. The main idea is to add a complex potential to the Hamiltonian to absorb the outgoing waves. In this way, the scattered wave function can be described in a finite region, which can be represented by a finite-dimensional basis.

40

Solution of the Schrödinger equation for scattering states

Listing 3.1 Simple implementation of the transfer matrix approach 1 2 3 4 5 6 7 8

implicit none complex*16,parameter :: zi=(0.d0,1.d0) integer,parameter :: n=5000,n_e=200 double precision,parameter :: xl=-15.d0,xu=15.d0,h2m=0.5d0 double precision :: x,xp,e,h integer :: j,i complex*16 :: a,b,ap,bp,k,kp,k0,ec,v,vp complex*16,external :: potential

9 10

h=(xu-xl)/dfloat(n+1)

11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

3.3.1

do i=1,N_e V=potential(xu) e=0.d0+0.5d0*i ec=(E-V)/h2m; k0=sqrt(ec) ap=(1.d0,0.d0); bp=(0.d0,0.d0) do j=n,0,-1 x=xl+j*h; V=potential(x) xp=x+h; Vp=potential(xp) ec=(E-V)/h2m; k=sqrt(ec) ec=(E-Vp)/h2m; kp=sqrt(ec) a=0.5d0*(ap*(1.d0+kp/k)*exp(zi*kp*x)+ & bp*(1.d0-kp/k)*exp(-zi*kp*x))*exp(-zi*k*x) b=0.5d0*(ap*(1.d0-kp/k)*exp(zi*kp*x)+ & bp*(1.d0+kp/k)*exp(-zi*kp*x))*exp(zi*k*x) ap=a; bp=b end do write(1,*)e,abs(b/a)**2 end do end

Complex absorbing potentials Assuming that the potential is real, the conservation of the probability current density j follows from the Hermiticity of the Hamiltonian. For a probability density ||2 that is constant in time, ∇ · j = 0,

j=

 (ψ ∗ ∇ψ − ψ∇ψ ∗ ). 2mi

(3.51)

If instead the potential is complex, the Hamiltonian is not Hermitian. As a result, using the same steps as in standard textbooks to derive the current conservation now yields a different result, as we now show. The time-dependent Schrödinger equation is i

2 2 ∂ =− ∇  + V . ∂t 2m

(3.52)

3.3 The complex-absorbing-potential approach

41

Table 3.1 Transmission and reflection probabilities for the Morse–Feshbach potential. The parameters used in the calculations are V0 = 2.5 hartree, d = 1 bohr, x0 = 3 bohr and the dimensionless parameter μ = 0.2 E

|r|2

|t|2

−1.0 −0.8 −0.6 −0.4 −0.2 0.0 0.2 0.4 0.6 0.8 1.0

0.9522 0.9021 0.8198 0.7022 0.5594 0.4137 0.2872 0.1905 0.1230 0.0785 0.0498

0.0478 0.0978 0.1802 0.2978 0.4406 0.5863 0.7128 0.8095 0.8770 0.9215 0.9502

Multiplying Eq. (3.52) by  ∗ and its conjugate by  and then subtracting the two resulting equations, one obtains i

∂||2 = −∇ · j + 2||2 Im V . ∂t

(3.53)

If the probability density ||2 is not changing in time then ∂||2 =0 ∂t

(3.54)

∇ · j = 2||2 Im V .

(3.55)

and so

This equation shows that the current is not conserved if the imaginary part of the potential is nonzero. Thus, depending on the sign of the imaginary part, a complex potential can represent sinks or sources, which allow some probability density to be absorbed or injected into the system. The usefulness of this concept has been realized in different areas of physics. Examples include chemical reaction rate studies [222], time-dependent wave packet calculations of reactive scattering [145, 291], optical model calculations in nuclear physics [144], the theory of atomic multiphoton ionization [23], simulations of scanning tunneling microscopy [111], and electron transport in nanostructures [46]. In the case of scattering, the addition of a pure imaginary potential to the scattering potential allows the wave function to be absorbed in the asymptotic regions, avoiding the need for infinite-dimensional representations. Various forms of complex absorbing potentials have been developed that efficiently absorb the wave function in the asymptotic regions [120, 206].

42

Solution of the Schrödinger equation for scattering states

One popular form is a power function W (x) = −iη(x − xK )n ,

xK < x < xk + L,

(3.56)

where xK (K = L, R) is the starting point of the complex potential (usually chosen in the asymptotic region where the potential is constant) and L is the range of the complex potential (this should ideally be larger than the de Broglie wavelength of the electron at the energy of interest). Linear and quadratic powers are the most popular choice but higher powers are also used. The optimal value of the parameter η can be determined by minimizing the reflection and transmission probabilities; it depends on the scattering energy. Another form, which depends only on the range of the complex potential, is proposed in [206]. For a position x the left-hand (right-hand) complex potential’s value −iWL (−iWR ) is determined by the following expressions. Let αK =

cK − x c cK − xK

(K = L, R),

c = 2.6,

(3.57)

and βK =

4 4 8 + − 2. (c − αK )2 (c + αK )2 c

(3.58)

These values of αK and βK are then used to compute the left- and right-hand complex potential values for this position: 2

2π i2 βK . (3.59) −iWK (x) = − 2m cK − xK The potential is shown in Fig. 3.2. Its range is L = cK − xK . One can calculate the reflection and transmission probabilities of such complex absorbing potentials by using, for example, the transfer matrix approach.

Figure 3.2 Use of complex potentials with a scattering potential. The solid line represents a

scattering potential, while the broken lines indicate pure imaginary potentials. These imaginary potentials absorb the wave function in the asymptotic regions, allowing a finite representation of the scattering problem.

3.3 The complex-absorbing-potential approach

43

100 10–3 10–6 10–9 10–12 10–15 10–18 0

20

40

60

80

100

E Figure 3.3 Transmission probability (solid line) and reflection probability (broken line) of a

quadratic complex absorbing potential. The values η = 6.2 and L = 6.25 were used in Eq. (3.56).

100 10–2 10–4 10–6 10–8 10–10 10–12 10–14

0

20

40

60

80

100

E Figure 3.4 Transmission probability (solid line) and reflection probability (broken line) for

the complex absorbing potential defined in Eq. (3.59). The range of the potential L = 6.25.

Figures 3.3 and 3.4 show calculated probabilities for a quadratic absorbing potential and for the potential defined in Eq. (3.59). Ideally, both the reflection probabilities should be very small. For a given energy range these probabilities can be minimized by optimizing the potential parameters (see cap_opt.f90).

3.3.2

Scattering wave functions Now we show how one can use complex absorbing potentials to calculate scattering wave functions. We want to calculate the scattering wave function ψC (x) in the

44

Solution of the Schrödinger equation for scattering states

central interval [xL , xR ] for a particle in a potential defined as ⎧ ⎪ −∞ < x < xL , ⎪ ⎨VL , V (x) = v(x), xL ≤ x ≤ xR , ⎪ ⎪ ⎩V , x < x < +∞, R

(3.60)

R

where VL and VR are constant potentials in the asymptotic regions and v(x) is the central scattering potential. The scattering wave function is the solution of the Schrödinger equation Hψ = Eψ,

H = T + V (x).

(3.61)

Assuming that a particle is incident from the left, the scattering wave function is of the form ⎧ ⎪ −∞ < x < xL , ⎪AeikL x + re−ikL x , ⎨ (3.62) ψ(x) = ψC (x), xL ≤ x ≤ xR , ⎪ ⎪ ⎩teikR x , x < x < +∞, R

where

+ kL =

2m(E − VL ) , 2

+ kR =

2m(E − VR ) . 2

(3.63)

Once ψC is known, requiring the continuity of the wave function and its derivatives at xL and xR yields the reflected wave amplitude r and transmitted wave amplitude t. The scattering wave function ψ is then known for all x. Exact calculation of the scattering wave function, however, is only possible for a few simple analytically solvable examples (e.g. a step potential, a square well potential, etc.). For a general scattering potential v(x) the calculation of the wave function ψ(x) is difficult because the boundary conditions at xL and xR are not known. This is the major complication in scattering problems that is not present in bound state problems. In bound state problems (e.g. the harmonic oscillator or hydrogen atom) the boundary conditions are given by ψC (xL ) = 0 and ψC (xR ) = 0 and simple finite difference or basis function expansion can be used. As we are only interested in the wave function ψ(x) in the interval [xL , xR ], we can modify the potential outside this region provided that we still obtain the same solution within [xL , xR ]. In particular, we can add a pure imaginary potential to the Hamiltonian outside the region [xL , xR ] to dampen the wave function. In the region of the imaginary potential the wave function will decay to zero at cL and cR (see Fig. 3.2). The imaginary potentials −iWL in the region [cL , xL ] and −iWR in the region [xR , cR ] are those given by (3.59). The resulting Hamiltonian is H = H0 + V − iWL − iWR ,

(3.64)

where H0 = T is the free particle Hamiltonian. We are looking for a wave function ψ for a given energy E that is the solution of the time-independent Schrödinger equation

3.3 The complex-absorbing-potential approach

(H0 + V − iWL − iWR )|ψ = E|ψ. Now define a left-incident solution in the left-hand asymptotic region by $ eikL x , −∞ < x < xL , φ(x) = 0, xL ≤ x < +∞.

45

(3.65)

(3.66)

This wave function obeys the time-independent Schrödinger equation, with free particle Hamiltonian H0 = T for a given energy E: H0 |φ = E|φ.

(3.67)

Subtracting Eq. (3.67) from Eq. (3.65) gives (H0 + V − iWL − iWR − EI)(|ψ − |φ) = −(V − iWL − iWR )|φ

(3.68)

where I is the identity operator. Now we can extract the scattering wave function ψ with the help of the Green’s function G defined as G ≡ (H0 + V − iWL − iWR − EI)−1 .

(3.69)

Multiplying both sides of equation (3.68) by G leads to |ψ = |φ − G(V − iWL − iWR )|φ,

(3.70)

which gives the desired wave function for the particle. In the region of interest, [xL , xR ], we have |φ = 0,

V |φ = 0,

iWR |φ = 0

(x ∈ [xL , xR ])

(3.71)

and Eq. (3.70) simplifies to |ψ = G(iWL )|φ.

3.3.3

(3.72)

Finite difference implementation In this section we use a simple finite difference representation to illustrate and implement the complex-absorbing-potential approach. In this representation the matrix elements of the Hamiltonian in Eq. (3.65) are 2 + 2δ − δ δ ij i,j−1 ij+1 2m( x)2   + V (xj ) − iWL (xj ) − iWR (xj ) δij

Hij = −

(3.73)

where x is the grid spacing. Inverting this matrix gives the Green’s function G, which can then be used to calculate the wave function using Eq. (3.72). By normalizing the incoming wave function to unit flux by an appropriate choice of A in Eq. (3.62), the incident current jI = 1, and so the transmitted current jT is

46

Solution of the Schrödinger equation for scattering states

Listing 3.2 Setting up the Hamiltonian and the complex absorbing potentials for a step potential 1 2 3 4 5 6 7 8 9 10 11

a=-20.d0 b=+20.d0 c1=-c2 dr=b-c2 c=2.62 h=(b-a)/(N_Lattice+1) vpl=-1.d0 vpr=1.d0 t=h2m/h**2 do i=1,N_Lattice x=a+i*h

! computational domain

! lattice spacing

! h2m=hbar**2/2m

12

! complex potential xl=c*(x-c1)/dr xr=c*(x-c2)/dr yr=4.d0/(c-xr)**2+4.d0/(c+xr)**2-8.d0/c**2 yl=4.d0/(c-xl)**2+4.d0/(c+xl)**2-8.d0/c**2 if(x.lt.c1) w_l(i)=-zi*h2m*(2*pi/dr)**2*yl if(x.gt.c2) w_r(i)=-zi*h2m*(2*pi/dr)**2*yr

13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33

! Hamiltonian hm(i,i)=2.d0*t if(i.gt.1) then hm(i,i-1)=-1.d0*t hm(i-1,i)=-1.d0*t endif if(i.le.N_Lattice/2) then pot(i)=vpl else pot(i)=vpr endif hm(i,i)=hm(i,i)+pot(i)+w_l(i)+w_r(i) end do

equal to the transmission coefficient T = JT /jI . Using the three-point approximation of the first derivative,

we find

ψ(xj+1 ) − ψ(xj−1 ) ∂ψ(xj ) ≈ , ∂x 2 x

(3.74)

  ∗ ψ(xi+1 ) − ψ(xi−1 ) jT (xi ) = Im ψ(xi ) . m 2 x

(3.75)

Note that better accuracy could be obtained by using more points to compute the first and/or second derivatives. Alternatively, one could keep the three-point versions and simply use a smaller grid spacing (i.e., more grid points for a fixed coordinate range).

3.3 The complex-absorbing-potential approach

47

Listing 3.3 Calculation of the scattering wave function and transmission coefficient 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28

do k=1,5 ! loop over the energy e=1.d0+k*0.1 ! energy am=-hm do i=1,N_Lattice am(i,i)=e-hm(i,i) end do call inv(am,N_Lattice,gm) ! Green’s function matrix t=(0.d0,0.d0) ! transmission coefficient do i=1,N_Lattice x=a+(i-1)*h do j=1,N_Lattice t=t+abs(gm(i,j))**2*w_r(i)*w_l(j)*4.d0 end do end do do i=2,N_Lattice-1 x=a+i*h ! incoming wave function phi(i)=exp(zi*kl*x)/sqrt(kl) su=(0.d0,0.d0) do j=1,N_Lattice su=su-gm(i,j)*w_l(j)*phi(j) end do ! scattering wave function psi(i)=su d=1.d0/2.d0*(psi(i+1)-psi(i-1)) cur=Conjg(psi(i))*d/h ! current end do end do

One can also use the transmission coefficient expression from the Green’s function formalism (Eq. (3.157) below) using complex potentials. In this case the transmission probability is T(E) = Tr(G+ WL GWR ) =



|Gii |2 WL (xi )WR (xi ).

(3.76)

i

Listings 3.2 and 3.3 show the relevant parts of the code cap1d.f90 that implements the scattering calculations using complex potentials.

3.3.4

Example: Scattering from 1D potentials The method described above was applied to three scattering potentials: a step, a constant potential barrier, and the Morse–Feshbach potential barrier [228, 196]. These potentials have known analytical solutions for the transmission coefficients,

48

Solution of the Schrödinger equation for scattering states

Figure 3.5 The Morse–Feshbach potential. See the text for the specific parameter values used.

and so the accuracy of the calculated results can be assessed. The step potential Vstep (x) used is as follows: $ Vstep (x) =

−1,

x < 0,

+1

otherwise,

and the barrier potential Vbarrier (x) is given by ⎧ ⎪ x < 0, ⎪ ⎨−1, Vbarrier (x) = +1, 0 ≤ x ≤ a, ⎪ ⎪ ⎩−1, x > a.

(3.77)

(3.78)

The Morse–Feshbach potential (shown in Fig. 3.5) can be expressed as [196]: V (x) = −V0

sinh2 [(x − x0 )/d] cosh2 [(x − x0 )/d − μ]

(3.79)

where, following [196], the various parameters are as follows: V0 = 2.5 hartree, d = 1 bohr, x0 = 3 bohr, and the dimensionless parameter μ = 0.2. For each of the three potentials, the same absorbing complex potentials and numerical methods (described above) were used to perform the calculations. The results were obtained using 800 grid points and the values cL = −20, cR = 20, xL = −3, xR = 3. Three-point first derivatives and second derivatives were used. In each case the wave function was calculated for the entire region. Using the step potential as an example, Figs. 3.6 and 3.7 show the real and imaginary parts of the calculated wave function along with their known analytical forms. In the central region [xL , xR ] the calculated wave function is in excellent agreement with the exact wave function. Outside the central scattering region the complex potentials absorb the wave function.

Real part of wave function

3.3 The complex-absorbing-potential approach

49

1

0

–1 –20

–10

0 Position

10

20

Figure 3.6 Real part of the wave function vs. position, for the step potential and particle

Imaginary part of wave function

energy = 1.5. The broken line shows the calculated data and the solid line indicates exact data from an analytical solution.

1

0

–1 –20

–10

0 Position

10

20

Figure 3.7 Imaginary part of the wave function vs. position, for the step potential and

particle energy = 1.5. The broken line represents the calculated data, while the solid line shows exact data from an analytical solution.

For each potential, plots were made of transmission coefficient vs. energy for both the calculated and analytical data. Since the calculated wave function is accurate in the central scattering region, the value for jT was obtained from this region. Figures 3.8–3.10 give these plots for the step, barrier, and Morse–Feshbach potentials, respectively. In each case there is very close agreement between the calculated and analytical values. We have shown the usefulness of complex absorbing potentials for solving the one-dimensional Schrödinger equation. By adding complex absorbing potentials to the original potential an accurate scattering wave function and transmission coefficient can be obtained. Beyond cases for which analytical solutions exist, this general method offers a simple way to solve the one-dimensional Schrödinger equation for arbitrary potentials.

Solution of the Schrödinger equation for scattering states

Transmission

1.0 0.8 0.6 0.4 0.2 0

0

2

4

6

8

10

Energy Figure 3.8 Transmission coefficient vs. energy for the step potential. The solid line shows

exact data from an analytical solution, while the markers indicate the calculated data. 1.0

Transmission

0.8 0.6 0.4 0.2 0

0

2

4

6 Energy

8

10

Figure 3.9 Transmission coefficient vs. energy for the barrier potential. The solid line

represents exact data from an analytical solution. The markers show the calculated data. 1.0 0.8 Transmission

50

0.6 0.4 0.2 0 –2

0

2

4 Energy

6

8

10

Figure 3.10 Transmission coefficient vs. energy for the Morse–Feshbach potential. The solid

line shows exact data from an analytical solution. The markers indicate the calculated data.

3.4 R-matrix approach to scattering

3.4

51

R-matrix approach to scattering One of the most efficient approaches to solving scattering problems is the R-matrix method [348]. The basic idea of R-matrix theory is to divide the system into asymptotic and interacting regions. The wave function in the asymptotic region is assumed to be known and the scattering potential is restricted to the interacting region (called the “box”). By assuming fixed but arbitrary boundary conditions on the surface of the box one can solve the Schrödinger equation inside the box. The box eigenfunctions obtained in this way form a complete and discrete set of states and can be used to expand the scattering wave function at any arbitrary energy inside the box. In order to extract the scattering information (the transmission probability, phase shift, etc.), the external and internal parts of the wave function are matched at the surface of the box. There are several different versions of the R-matrix approach. Here we will discuss the two most popular variants: Wigner’s original form of the theory, and the variational form.

3.4.1

Wigner’s R-matrix theory To describe Wigner’s R-matrix theory [348] we solve the Schrödinger equation −

2 d 2 (x) + V (x)(x) = E(x), 2m dx2

(3.80)

for a given particle energy E, assuming the following form for the potential: ⎧ ⎪ −∞ < x < a, ⎪ ⎨VL , (3.81) V (x) = v(x), a ≤ x ≤ b, ⎪ ⎪ ⎩V , b < x < ∞, R

where VL and VR are constants and v(x) is the scattering potential. Now we generate an auxiliary function set φi (x) inside the box (a ≤ x ≤ b). We assume that these functions satisfy the Schrödinger equation inside the box with prescribed boundary conditions φi (a) = φa ,

φi (a) = φa ,

(3.82)

φi (b) = φb ,

φi (b) = φb .

(3.83)

With these boundary conditions the Schrödinger equation for the box becomes a discrete eigenvalue problem, −

2  φ (x) + v(x)φi (x) = i φi (x), 2m i

(3.84)

and the eigenfunctions form a complete set of states. Multiplying Eq. (3.80) by φi (x) from the left and integrating over the box region gives ( b ( b ( 2 b  φi (x) (x)dx + φi (x)v(x)(x)dx = E φi (x)(x)dx. (3.85) − 2m a a a

52

Solution of the Schrödinger equation for scattering states

Similarly, by multiplying Eq. (3.84) by (x) from the left and integrating over the box region we get ( b ( b ( 2 b (x)φi (x)dx + (x)v(x)φi (x)dx = i (x)φi (x)dx. (3.86) − 2m a a a Subtracting Eq. (3.86) from Eq. (3.85), one obtains ( b (  2 b  φi (x)  (x) − (x)φi (x) = (E − i ) (x)φi (x)dx, − 2m a a

(3.87)

which can be further simplified, by integration by parts on the left-hand side, to −

 2  φi (b)  (b) − φi (a)  (a) − φi (b)(b) + φi (a)(a) 2m ( b = (E − i ) (x)φi (x)dx.

(3.88)

a

By expanding (x) in terms of φi (x) in the box region, so that (x) =

∞ 

ci φi (x),

(3.89)

i=1

the linear combination coefficients ci =

(

b

a

(x)φi (x)dx

(3.90)

can be expressed using Eq. (3.88). The wave function in the box is given by (x) = R(b, x)  (b) − R(a, x)  (a) − R (b, x)(b) + R (a, x)(a),

(3.91)

where the R-matrix is defined by R(x, x ) = −



2  φi (x)φi (x ) 2m E − i

(3.92)

i=1

and ∞

2  φi (x)φi (x ) R (x, x ) = − . 2m E − i 



(3.93)

i=1

Notice that R(x, x ) is symmetric. Once Eq. (3.84) is solved in the box region the R-matrix is completely known and then, through Eq. (3.91), the wave function in the box region can be calculated. Equation (3.91) contains the boundary values and the first derivative of the wave function on the boundary, but these are assumed to be available from the known asymptotic wave functions. The boundary conditions defined by Eqs. (3.82) and (3.83) are arbitrary and can be chosen to simplify the working. Assuming that the derivatives are zero at the boundary, Eq. (3.91) simplifies to

3.4 R-matrix approach to scattering

(x) = R(a, x)  (a) − R(b, x)  (b).

53

(3.94)

The simplest way to satisfy the boundary conditions is to solve Eq. (3.84) by expanding it using basis functions that satisfy the prescribed boundary conditions. In the numerical implementation we will use a finite difference representation for φi (x). As an example we will calculate the transmission probability of a particle incident from the left. The asymptotic wave function is given by ⎧ 1

ikL x −ikL x ⎪ e , −∞ < x < a, + re √ ⎪ ⎨ k L (3.95) (x) = ⎪ 1 ⎪ ⎩ √ teikR x , b < x < ∞, kR ) ) with kL = 2m(E − VL )/2 and kR = 2m(E − VR )/2 . Notice that the incoming flux has been normalized to unity. Matching this asymptotic wave function with the box wave function defined in Eq. (3.91) at the boundary we get the following expressions for the transmission and reflection probabilities: 4ka kb R2ab , |t|2 =  2 2 1 + ka kb Rab − Raa Rbb + (ka Raa + kb Rbb )2 2 |r|2 = A ka2 kb2 R4ab − 2kL2 kR Raa Rbb R2ab + ka2 kb2 R2aa R2bb

+1 − 2ka kb R2ab + ka2 R2aa + kb2 R2bb ,

(3.96)

(3.97)

where 1 A=  2 2 1 + ka kb Rab − Raa Rbb + (ka Raa + kb Rbb )2

(3.98)

and Rxy = R(x, y). When implementing this method the infinite sums are truncated to become finite sums, and convergence should be checked with respect to the number of terms included in the sum. The code rm1d.f90, shown in Listing 3.4, implements the R-matrix calculation of the transmission and reflection coefficients using a three-point finite difference representation of the kinetic energy. Note that the assumed boundary conditions are φ  (a) = 0 and φ  (b) = 0. For a three-point finite difference this means that φ(1) = φ(0),

φ(N) = φ(N + 1),

(3.99)

and φ  (1) =

φ(0) + φ(2) − 2φ(1) φ(2) − φ(1) = , 2 h h2

(3.100)

with a similar equation for φ  (N), which modifies the first and last diagonal elements of the Hamiltonian matrix. Another code, rmatrix_1d.f90, extends the R-matrix approach for higher-order finite differences, providing more accurate results.

54

Solution of the Schrödinger equation for scattering states

Listing 3.4 R-matrix calculation of the transmission coefficients 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46

N=400 a=-5.d0 ! computational region b=5.d0 h2m=0.5d0 h=(b-a)/(1.d0*(N-1)) va=potential(a) ! V left vb=potential(b) ! V right do i=1,N x=a+(i-1)*h t(i,i)=h2m/h**2 if(i.eq.1) t(1,1)=h2m/h**2 ! Boundary condition if(i.eq.N) t(N,N)=h2m/h**2 ! Boundary condition if(i.ne.1) then t(i,i-1)=-h2m/h**2 t(i-1,i)=-h2m/h**2 endif ! user defined potential hm(i,i)=hm(i,i)+potential(x) end do call diag(hm,n,e,v) do i=1,5 e=min(va,vb)+0.1d0*i ka=sqrt((energy-va)/h2m) kb=sqrt((energy-vb)/h2m) Raa=0.d0 ! R-matrix Rab=0.d0 Rbb=0.d0 do i=1,N psia=v(1,i) psib=v(N,i) Raa=Raa-h2m*psia*psia/(Energy-e(i))/h Rab=Rab-h2m*psia*psib/(Energy-e(i))/h Rbb=Rbb-h2m*psib*psib/(Energy-e(i))/h end do a11=1.d0-zi*ka*Raa a12=-zi*kb*Rab a21=-zi*ka*Rab a22=1.d0-zi*kb*Rbb b1=-2.d0*zi*ka*Raa*exp(zi*ka*a) b2=-2.d0*zi*ka*Rab*exp(zi*ka*a) d=a11*a22-a12*a21 x1=(a22*b1-a12*b2)/d x2=(a11*b2-a21*b1)/d reflection=(x1-exp(zi*ka*a))/exp(-zi*ka*a) transmission=x2/exp(zi*kb*b) end do

3.4 R-matrix approach to scattering

3.4.2

55

Variational R-matrix method The variational R-matrix method [180, 197, 112] is a powerful variant of the R-matrix approach described in the previous section. In this method the wave function in the inner region [a, b] is calculated by the variational principle. The scattering wave function is calculated by matching the wave function of the inner region to the known asymptotic wave functions. The difference between the variational and the Wigner R-matrix approaches is that the variational R-matrix method uses the variational principle to obtain the wave function in the inner region, while the Wigner approach expands the wave function into box eigenfunctions. The Wigner approach is simpler, but the boundary conditions for the true scattering function and the box eigenfunctions are different and so the convergence may be slow. In the variational R-matrix approach, the variational basis functions can be chosen arbitrarily and this gives more flexibility in the expansion of the scattering wave function. In the following introduction to the variational R-matrix approach we follow the pedagogical paper [196]. By multiplying the time-independent Schrödinger equation Hψ = Eψ by ψ and integrating over the interval [a, b] one gets ( ( b 2m 2 b  [E − V (x)] ψ 2 (x)dx = 0, ψ(x)ψ (x)dx + (3.101)  a a which, integrating by parts, becomes ( b (   2 2m 2 b [E − V (x)] ψ 2 (x)dx − ψ (x) dx +  a a + λ(b)ψ 2 (b) − λ(a)ψ 2 (a) = 0,

(3.102)

where ψ  (a) , ψ(a)

λ(a) ≡

ψ  (b) . ψ(b)

λ(b) ≡

(3.103)

Next we expand ψ using some appropriate basis set χi (x): ψ(x) =

N 

ci χi (x).

(3.104)

i=1

Upon substitution of ψ into Eq. (3.102) one obtains Q=

N 

Aij ci cj + λ(b)

i,j=1

N 

bij ci cj − λ(a)

i,j=1

where

( Aij = −

b a

2m − 2 

χi (x)χj (x)dx + (

b a

N 

aij ci cj = 0,

(3.105)

i,j=1

2mE 2

χi (x)V (x)χj (x)dx

( a

b

χi (x)χj (x)dx (3.106)

56

Solution of the Schrödinger equation for scattering states

and ij is the product of the basis functions at the boundaries, xij = χi (x)χj (x)

(x = a, b).

(3.107)

The stationary property of Q with respect to variation of the linear combination coefficients leads to the linear matrix equation

(3.108) A + λ(b) b − λ(a) a C = 0. To proceed from this point, we next consider how λ(a) and λ(b) are related to the wave function in the asymptotic regions. Assume that the wave function in these regions is given by ⎧ ⎪ −∞ < x ≤ a, ⎨AL eikL x + BL e−ikL x , (x) = (3.109) ⎪ ⎩AR eikR x + BR e−ikR x , b ≤ x < ∞, where AL , BL , AR , and BR are constants. We are free to specify arbitrary asymptotic behavior: for example, we can set AL = 1 and BR = 0 to represent a particle incident from the left-hand side. Using Eq. (3.109) with the above definitions for λ(a) and λ(b), we obtain λ(a) ≡

ψ  (a) ikAL eikR a − ikBL e−ikR a = ψ(a) AL eikR a + BL e−ikR a

(3.110)

λ(b) ≡

ikAR eikR b − ikBR e−ikR b ψ  (b) = . ψ(b) AR eikR b + BR e−ikR b

(3.111)

and

From this we can see that choosing an arbitrary value for λ(b) is equivalent to choosing values for AR and BR , which, as discussed above, we are allowed to do. For a chosen basis and potential the values of χi (x) and of the matrices A and (a,b) (see Eqs. (3.106) and (3.107)) are fixed. We also assume that we have chosen a value for λ(b). We can then define A ≡ A + λ(b) b ,

(3.112)

leading to the generalized eigenvalue problem AC = λ(a) a C.

(3.113)

This eigenvalue problem will have only one unique solution. To see this, first notice from Eq. (3.107) that a is just an outer product of a vector of basis functions with itself: x = v ⊗ v T

(x = a, b)

(3.114)

3.4 R-matrix approach to scattering

57

where v = (χ1 χ2 · · · χN )T . The characteristic equation that an eigensolution must satisfy is 3 2   (3.115) det A − λ(a) a = det A − λ(a)vv T = 0. Using the Sherman–Morrison formula [286], this becomes 2 3 1 − λ(a)v T A−1 v det A = 0.

(3.116)

Assuming that det A = 0, this means that λ(a) =

1 , v T A−1 v

(3.117)

confirming the uniqueness of the solution. The solution of this eigenproblem gives C, the vector of the expansion coefficients ci in Eq. (3.104), defining the wave function in the scattering region. The solution also yields λ(a), which shows how our choice of λ(b) constrains the form of the function in the left-hand asymptotic region. To form the scattering wave functions we need two linearly independent solutions. These can be generated by two different prescribed values of λ(bj ) (j = 1, 2) on the boundary. The solutions take the form ⎧ j j ⎪ AL eikL x + BL e−ikL x , −∞ < x < a, ⎪ ⎪ ⎨ 'N j (3.118) j (x) = ψj (x) = i=1 ci χi (x), a ≤ x ≤ b, ⎪ ⎪ ⎪ ⎩Aj eikR x + B j e−ikR x , b < x < ∞, R R j

for j = 1, 2. By knowing ci , the wave function in the middle region is fully deterj j j j mined. The four unknown coefficients AL , AR , BL , and BR can be found by matching the asymptotic wave function and its derivatives at the two boundaries at a and b: AL eikL a + BL e−ikL a = ψj (a),

j j ikL a AL eikL a − BL e−ikL a = ψj (a),

(3.119)

AR eikR b + BR e−ikR b = ψj (b),

j j ikR b AR eikR b − BR e−ikR b = ψj (b),

(3.121)

j

j

j

j

(3.120)

(3.122)

for j = 1, 2. Now the two linearly independent solutions 1 and 2 are known, and we can form linear combinations of them with the desired asymptotic form. In particular, we can require that (x) = α1 1 (x) + α2 2 (x)

(3.123)

58

Solution of the Schrödinger equation for scattering states

Listing 3.5 Setting up the Hamiltonian for the variational R-matrix method by calculating the matrix elements using numerical integration 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27

do i=1,Ndim xx(i)=cos(kappa(i)*a) do j=1,Ndim sv=0.d0 st=0.d0 ss=0.d0 do k=0,Nstep x=a+k*h f1=cos(kappa(i)*x) f2=cos(kappa(j)*x) df1=-kappa(i)*sin(kappa(i)*x) df2=-kappa(j)*sin(kappa(j)*x) ss=ss+h*f1*f2 sv=sv+h*f1*f2*pot(k) st=st+h*df1*df2 end do T(i,j)=st S(i,j)=ss V(i,j)=sv f1=cos(kappa(i)*a) f2=cos(kappa(j)*a) da(i,j)=f1*f2 f1=cos(kappa(i)*b) f2=cos(kappa(j)*b) db(i,j)=f1*f2 end do end do

behaves asymptotically as $ (x) =

eikL x + re−ikL x ,

−∞ < x ≤ a,

teikR x ,

b ≤ x < ∞.

(3.124)

From this condition the reflection and transmission coefficients can be calculated (again eliminating the unknown coefficients by matching the wave functions at the boundaries). The coefficients take the form r=

1 − B2 B1 BL2 BR R L , d

t=

1 − B 2 A1 A2R BR R R , d

(3.125)

where 1 2 1 − BR AL . d = A2L BR

(3.126)

3.4 R-matrix approach to scattering

59

Listing 3.6 Variational R-matrix calculation of the transmission and reflection coefficients 1 2 3 4 5 6 7 8 9 10

11 12 13 14 15 16

energy=-0.3d0 k_L=sqrt((energy-v_L)/h2m) k_R=sqrt((energy-v_R)/h2m) ! Calculation of two independent solutions do isec=1,2 if(isec==1) lam_b=+2.d0 if(isec==2) lam_b=-2.d0 do i=1,Ndim ! Hamiltonian do j=1,Ndim hm(i,j)=-T(i,j)+energy*S(i,j)/h2m-V(i,j)/h2m+lam_b*db (i,j) om(i,j)=da(i,j) end do end do am=hm ! Find the single eigenvalue call inv_r(am,ndim,ai) xxa=dot_product(xx,matmul(ai,xx))

17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37

am=hm-1.d0/xxa*om call inv_r(am,ndim,ai) ! Calculate the inverse evec=0.d0 ! Calculate the eigenvector evec(1)=1.d0 evec=matmul(ai,evec) xxa=sqrt(sum(evec(:)**2)) evec=evec/xxa xxa=sqrt(sum(evec(:)**2)) ! Match the wave function if(isec==1) then call wave_function_match(a,b,Nstep,Ndim,kappa,k_L,k_R, & evec,A_L1,B_L1,A_R1,B_R1) elseif(isec==2) then call wave_function_match(a,b,Nstep,Ndim,kappa,k_L,k_R, & evec,A_L2,B_L2,A_R2,B_R2) endif end do det=A_L2*B_R1-B_R2*A_L1 reflection=(B_L2*B_R1-B_R2*B_L1)/det transmission=(A_R2*B_R1-B_R2*A_R1)/det

The transmission and reflection probabilities are then R = |r|2 ,

T=

kR 2 |r| . kL

(3.127)

Listings 3.5 and 3.6 show relevant parts of a variational R-matrix code (var_rm1d.f90). The basis functions in these calculations are chosen as χi (x) = cos κi x and the matrix elements are calculated by numerical integration.

(3.128)

60

Solution of the Schrödinger equation for scattering states

2

1

0

–1

–2 –5

0

5 x (atomic units)

10

Figure 3.11 Two linearly independent solutions at E = −0.3 atomic units.

1.0 T(E) 0.8 0.6 0.4 0.2 0 –2

R(E) –1

0

1 2 E (atomic units)

3

4

Figure 3.12 Calculated transmission and reflection coefficients for the Morse–Feshbach

potential.

3.4.3

Example: scattering from the 1D Morse–Feshbach potential To demonstrate the variational R-matrix method, we will apply it to scattering from the Morse–Feshbach potential. This potential, Eq. (3.79), was used above to illustrate complex absorbing potentials. It is shown in Fig. 3.5. The parameters are V0 = 2.5 hartree, d = 1 bohr, x0 = 3 bohr, and the dimensionless parameter μ = 0.2. Figure 3.11 shows two independent solutions obtained for this potential. The calculated transmission and reflection coefficients are given in Fig. 3.12.

3.5

Green’s functions We have already used Green’s functions to calculate the scattering wave function with the help of complex potentials. In that case the complex potentials effectively close the system into a finite region and the finite-dimensional Green’s function can

3.5 Green’s functions

61

be calculated. In this section we treat the scattering problem in the whole infinite system, and the dimension of the Green’s function matrix is infinite. By assuming constant or periodically repeated potentials in the left- and right-hand asymptotic regions, however, these infinite Green’s functions can be calculated. In this section we show how Green’s functions can be used to calculate the scattering properties in infinite systems.

3.5.1

Calculation of scattering properties using Green’s functions For a scattering problem we again assume that the space is divided into lefthand, center, and right-hand regions. This setup is typical in quantum transport calculations. From now on we will also refer to the asymptotic regions as “leads.” To partition the total Hamiltonian into Hamiltonians describing the left-hand, right-hand, and centre regions, we first define a localized basis set φi (x). The basis state φi (x) is centered at point xi and the basis states are localized in space: φi |φj  = 0

if xc < |xi − xj |

for a given cutoff distance xc . The interval [−∞, xL ] is the left-hand region and the basis functions with −∞ < xi < xL are labeled as φiL (x). Similarly, the basis in the center region [xL , xR ] and right-hand region [xR , ∞] are denoted by φiC (x) and φiR (x), respectively. The localized basis function can be a simple finite difference or a Gaussian basis. The matrix elements of the Hamiltonian corresponding to the basis functions in the left-hand, center, and right-hand regions are (HL )ij = φiL |H|φjL , (HC )ij = φiC |H|φjC , (HR )ij = φiR |H|φjR , and the matrix elements coupling the regions are (τL )ij = φiL |H|φjC , (τR )ij = φiR |H|φjC . It is assumed that the left- and right-hand regions are not coupled (this can always be achieved by choosing an appropriate cutoff xc ), that is, φiL |H|φjR  = 0. If the basis functions are not orthogonal then one can define the overlap matrix elements in an analogous way.

62

Solution of the Schrödinger equation for scattering states

Using these definitions the Hamiltonian and Green’s function become ⎞ ⎛ 0 HL τL ⎟ ⎜ H = ⎝ τL† HC τR† ⎠ 0 τR HR and



GL ⎜ G = ⎝ GLC

GCL GC

⎞ GRL ⎟ GRC ⎠ .

GLR

GCR

GR

(3.129)

(3.130)

Note that the dimensions of the bases in the left- and right-hand regions are infinite, so HL , HR , τL , and τR are each infinite-dimensional matrices. Using the definition (E − H)G = I, where I is the identity matrix, we can write ⎞⎛ ⎞ ⎛ ⎞ ⎛ −τL 0 E − HL GL GCL GRL 1 0 0 ⎟⎜ ⎟ ⎜ ⎟ ⎜ † E − HC −τR† ⎠ ⎝ GLC GC GRC ⎠ = ⎝ 0 1 0 ⎠ . ⎝ −τL 0

−τR

E − HR

GLR

GCR

GR

0 0 1

One can write this as a linear equation and solve it for the elements of G: ⎛ ⎞ GL GCL GRL ⎜ ⎟ G = ⎝ GLC GC GRC ⎠ GLR GCR GR ⎞ ⎛ gL τL GC τR† gR gL (1 + τL GC τL† gL ) gL τL GC ⎟ ⎜ =⎝ GC τL† gL GC GC τR† gR ⎠, gR τR GC τL† gL

gR τR GC

(3.131)

gR (1 + τR GC τR† gR )

where every element is expressed in terms of GC , the Green’s function of the central part. For GC we have 3 2 (3.132) −τL† gL τL + (E − HC ) − τR† gR τR GC = I. In these equations gL,R ≡ (E − HL,R )−1

(3.133)

denote the Green’s functions of the isolated left- and right-hand leads. Defining the left and right self-energies as † L,R ≡ τL,R gL,R τL,R

(3.134)

and using Eq. (3.132), the Green’s function of the central region can be calculated as GC = (E − HC − L − R )−1 .

(3.135)

This equation shows that to calculate the Green’s function of the central region one needs to know the Green’s functions of the isolated left- and right-hand leads. Once

3.5 Green’s functions

63

GC is known, the whole Green’s function Eq. (3.131) can be expressed. Knowing GC also allows us to calculate the density in the central region, ρ(E, r) = −

1 Im GC (r, r). π

(3.136)

Now we can calculate the wave function of the system by assuming that there is an incoming wave function in one of the leads, say in the left-hand lead. The Schrödinger equation of the isolated left-hand lead is HL ψkL0 = EkL ψkL0 ,

(3.137)

where EkL is an eigenvalue and ψkL0 is an eigenstate corresponding to the assumption that the left-hand lead and the rest of the system are not coupled, that is, τL = 0: ⎞ ⎛ L0 ⎞ ⎛ L0 ⎞ ⎛ HL 0 0 ψk ψk ⎜ ⎟ ⎟ † ⎟⎜ L⎜ (3.138) ⎝ 0 HC τR ⎠ ⎝ 0 ⎠ = Ek ⎝ 0 ⎠ . τR

0

HR

0

0

The wave function of the whole system corresponding to a left-hand eigenstate ψkL0 can be written in the form ⎛ L ⎞ ⎛ L0 ⎞ ψ ψk ⎜ C⎟ ⎜ ⎟ + (3.139) ψ ⎝ ⎠ ⎝ 0 ⎠, ψR

0

and this ansatz satisfies the Schrödinger equation of the whole system: ⎞⎛ L ⎞ ⎛ L ⎞ ⎛ 0 HL τL ψ + ψkL0 ψ + ψkL0 ⎟ ⎜ ⎟ ⎜ † † ⎟⎜ ψC ψC ⎠=E⎝ ⎠. ⎝ τL HC τR ⎠ ⎝ 0

τR

HR

ψR

(3.140)

ψR

By subtracting Eq. (3.138) from Eq. (3.140), one obtains ⎞⎛ L ⎞ ⎛ L⎞ ⎛ L ⎞ ⎛ 0 (Ek − E)ψkL0 HL τL ψ ψ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ † † ⎟⎜ τL† ψkL0 ⎠, ⎝ τL HC τR ⎠ ⎝ ψ C ⎠ = E ⎝ ψ C ⎠ − ⎝ R R ψ ψ 0 τR HR 0 which, using the Green’s function, Eq. (3.131), can be solved as follows: ⎛ L ⎞ ⎛ L⎞ Ek − E ψkL0 ψ ⎟ ⎜ C⎟ ⎜ τL† ψkL0 ⎠ ⎝ψ ⎠ = G⎝ ψR 0 ⎞ ⎞ ⎛ ⎛ gL (1 + τL GLC ) gL τL ⎜ ⎟ L0 ⎜ ⎟ † L0 = EkL − E ⎝ GLC ⎠ ψk + ⎝ 1 ⎠ GC τL ψk . gR τR GLC

gR τR

(3.141)

64

Solution of the Schrödinger equation for scattering states

The solution for the left-hand lead is now 2 3 ψL + ψkL0 = (EkL − E)gL (1 + τL GLC ) + gL τL GC τL† + 1 ψkL0 ,

(3.142)

which, using the identity (E − HL )ψkL0 = E − EkL ψkL0

ψkL0 = E − EkL gL ψkL0

(3.143)

for E = EkL , can be brought into the form

ψL + ψkL0 = EkL − E gL τL GLC − gL τL GC τL† gL ψkL0 = EkL − E (gL τL GLC − gL τL GLC ) ψkL0 = 0.

(3.144)



This equation shows that we have nontrivial solutions only if E = EkL . At this energy, the full wave function of the whole system takes the form ⎞ ⎛ ⎞ ⎛ L (gL τL GC τL† + 1) ψ + ψkL0 ⎟ ⎜ ⎟ L0 ⎜ (3.145) ψC GC τL† ⎠=⎝ ⎠ ψk ⎝ ψR

gR τR GC τL†

That is, for a given incoming wave function ψkL0 we have calculated the total solution and, knowing the wave functions in the leads, we can calculate the physical properties of the system. The charge density in the central region induced by incoming states in the lefthand lead can be calculated as  † f EkL , μL GC τL† |ψkL0 ψkL0 |τL GC ρCL = e k

( =e

e = 2π =

e 2π

dE ( (

 † f EkL , μL GC τL† |ψkL0 δ E − EkL ψkL0 |τL GC k

† dE f EkL , μL GC τL† AL (E)τL GC † , dE f EkL , μL GC L GC

where f (E, μ) is the occupation number of the state with energy E,  AL (E) = |ψkL0 δ E − EkL ψkL0 |

(3.146)

(3.147)

k

is the spectral function of the left lead, and we have used the identity τL† AL (E)τL = iτL† gL − gL† τL = i L − L† = L .

(3.148)

The full charge density in the central region is the sum of the contributions from the left- and right-hand leads: ( 3 2 e † † L R . (3.149) + f EkR , μR GC R GC ρC = ρC + ρC = dE f EkL , μL GC L GC 2π

3.5 Green’s functions

65

Another quantity of interest is the transmission probability. It can be derived by invoking the time-dependent Schrödinger equation. Assuming that the charge density in the central region is stationary, ∂ψ C |ψ C  = 0, ∂t

(3.150)

and using the time-dependent Schrödinger equation in the central region

i † L ∂|ψ C  = τL |ψ  + HC |ψ C  + τR† |ψ R  , ∂t  one obtains

i −ψ L |τL |ψ C  − ψ R |τR |ψ C  + ψ C |τL† |ψ L  + ψ C |τR† |ψ R  = 0. 

(3.151)

(3.152)

One can identify the term

i C † j ψ |τj |ψ  − ψ j |τj |ψ C  

(3.153)

as the probability current from lead j = L, R at energy E. Using Eq. (3.145) one can define the current out of the system in lead i due to the incoming state k in lead j as

i j0 † † i † † j0 ψk |τj GC τi |ψ  − ψ i |τi GC τj |ψk   i j0 † † † † j0 τi (gi − gi† )τi GC τj |ψk  = ψk |τj GC  i j0 † † † j0  i GC τj |ψk . (3.154) = ψk |τj GC  Noticing that 2π



j0 j0 |ψk ψk | = i gj − gj†

(3.155)

k

and hence 2π τj†



j0

j0

|ψk ψk |τj = j ,

(3.156)

k

we can calculate the transmission probability by summing the currents due to the incoming states at energy E: † (3.157)  L GC  R T(E) = Tr GC where † . L,R = i L,R − L,R

(3.158)

Listing 3.7 shows how the code green.f90 calculates scattering properties using Green’s functions.

66

Solution of the Schrödinger equation for scattering states

Listing 3.7 Calculation of the density of states and the transmission probability using Green’s functions 1 2 3 4

e=energy ! green’s function of the left and right leads call green_decimation(e,hl00,hl10,m,gL00) call green_decimation(e,hr00,hr10,m,gR00)

5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37

SigmaL=(0.d0,0.d0) SigmaR=(0.d0,0.d0) ! These do loops are extremely inefficient ! for illustration only do i=1,n do j=1,n do l=1,m do k=1,m SigmaL(i,j)=SigmaL(i,j)+hlc(k,i)*GL00(k,l)*hlc(l,j) SigmaR(i,j)=SigmaR(i,j)+hrc(k,i)*GR00(k,l)*hrc(l,j) end do end do end do end do a=energy*um-hcc-SigmaL-SigmaR ! um: unit matrix call complex_matrix_inverse(a,n,GCC_R) ! Green’s function of the C region Gamma matrices do i=1,n do j=1,n GammaL(i,j)=zi*(SigmaL(i,j)-Conjg(SigmaL(j,i))) GammaR(i,j)=zi*(SigmaR(i,j)-Conjg(SigmaR(j,i))) GCC_A(i,j)=Conjg(GCC_R(j,i)) end do end do ! transmission probability T=matmul(GCC_R,matmul(GammaL,matmul(GCC_A,GammaR))) transmission=0.d0 DOS=0.d0 do i=1,n transmission=transmission+T(i,i) DOS=DOS-imag(GCC_R(i,i)) end do

From Eq. (3.134) we see that in addition to GC we still need to calculate gL,R . These Green’s functions, for the left- and right-hand regions, are more difficult to obtain since the regions are infinite in extent and so their Green’s functions are also infinite. Typically the left- and right-hand regions are semi-infinite leads, made up of identical layers. A simple approximation to this infinite number of layers would be just to truncate the lead after a single layer. To improve this approximation in a systematic way one adds more and more layers to the lead until convergence

3.5 Green’s functions

67

Figure 3.13 Division of a semi-infinite lead into boxes.

is obtained. The recursion and decimation methods [338], described below, both follow this strategy but differ in how many layers are added in each step and also in their convergence properties.

3.5.2

Calculation of Green’s functions for semi-infinite leads Let us divide a semi-infinite lead into two regions, A and B. Let region A represent a single lead layer at the interface with the center region. Region B is the remainder of the lead and has some number n of lead layers (see Fig. 3.13). Defining these regions allows us to partition the lead’s Hamiltonian and Green’s function matrices as  Hlead =

HAA

HAB

HBA

HBB

GAA

GAB

GBA

GBB

 (3.159)

and  Glead =

 .

(3.160)

The idea is to choose a certain number n of layers for region B and calculate the GAA term. We then increase n and recalculate GAA . This is repeated until convergence, when the elements of GAA no longer change. Recall that we need gL,R because they appear in L,R , which in turn appear in GC . In Eq. (3.134) for L,R , gL appears pre- and post-multiplied by the sparse interaction matrices τ . Owing to this structure, we need only the lower-right part of gL , since it is only this part that makes a nonzero contribution to L . A similar statement is true for gL . This means that we do not need to calculate the full gL,R matrices but just one block, the “surface Green’s function.” The GAA term can be calculated by inverting a partitioned matrix: GAA = (E − HAA − HAB gBB HBA )−1 ,

(3.161)

gBB ≡ (E − HBB )−1 .

(3.162)

where

In the next subsection we will discuss two different approaches to calculating the Green’s function of the leads.

68

Solution of the Schrödinger equation for scattering states

3.5.3

Recursion We will begin by assuming that we only have a single layer in the lead. This means that region A has one layer and region B is empty (i.e., n = 0). In this case Hlead = HAA and G = GAA = (E − HAA )−1 .

(3.163)

Next we add another layer to the system, so that now both regions have one layer and so n = 1. Equation (3.161) gives −1  GAA = E − HAA − HAB (E − HBB )−1 HBA −1 = E − HAA − HAB gBB HBA .

(3.164)

Adding another layer to region B gives an even better approximation to an infinite number of layers and yields −1 −1  GAA = E − HAA − HAB E − HAA − HAB gBB HBA HBA .

(3.165)

(n)

To generalize this approach, let gBB be the approximation to gBB found by using n layers in region B. Then we have (with HAA = HBB = H00 ; H10 = HAB , H01 = HBA ) for n = 1, for n = 2,

(1) gBB = (E − H00 )−1 ; 3−1 2 (2) gBB = E − H00 − H01 (E − H00 )−1 H10

−1 (1) = E − H00 − H01 gBB H10 ; for n = 3,

−1 (3) (2) gBB = E − H00 − H01 gBB H10 ;

(3.166)

and, for arbitrary n,

−1 (n) (n−1) = E − H00 − H01 gBB H10 gBB

(3.167)

(0)

with gBB ≡ 0. This recursion is repeated until convergence. Listing 3.8 illustrates an implementation of the recursion method.

3.5.4

Decimation The recursion approach, described above, increases the size of the matrix one layer at a time. The convergence of this approach is slow, as we will show in the numerical examples. The decimation method [279, 338] doubles the number of layers in each step of the iteration, leading to much faster convergence.

3.5 Green’s functions

69

Listing 3.8 Implementation of the recursion method to calculate the Green's function of the leads

subroutine green_iteration(e,h00,h10,n,gBB)
  ! Iterates Eq. (3.167), g^(n) = [E - H00 - H01 g^(n-1) H10]^(-1),
  ! until the surface Green's function gBB is converged.
  implicit none
  integer :: i,k,n
  integer, parameter :: num_iterations=2000  ! declaration added; in practice
                                             ! iterate until gBB stops changing
  real*8 :: h00(n,n),h10(n,n),h10t(n,n)
  complex*16 :: e
  complex*16 :: am(n,n),gBB(n,n),vv(n,n),u0(n,n),um(n,n)

  um=(0.d0,0.d0)                 ! um = unit matrix
  do i=1,n
    um(i,i)=(1.d0,0.d0)
  end do
  h10t=transpose(h10)            ! h10t = H01 for a Hermitian lead

  am=e*um-h00
  vv=am
  do k=1,num_iterations
    call inv(vv,n,gBB)           ! gBB = vv^(-1)  (library matrix inverse)
    u0=matmul(gBB,h10)
    vv=am-matmul(h10t,u0)        ! vv = E - H00 - H01 gBB H10
  end do
  call inv(vv,n,gBB)
end subroutine green_iteration

Expressed in matrix form, the definition (E − H)G = I becomes, for a lead,

\[
\begin{pmatrix}
E - H_{00} & H_{01} & 0 & 0 & \cdots \\
H_{10} & E - H_{00} & H_{01} & 0 & \\
0 & H_{10} & E - H_{00} & H_{01} & \\
0 & 0 & H_{10} & E - H_{00} & \\
\vdots & & & & \ddots
\end{pmatrix}
\begin{pmatrix}
G_{00} & G_{01} & G_{02} & G_{03} & \cdots \\
G_{10} & G_{11} & G_{12} & G_{13} & \\
G_{20} & G_{21} & G_{22} & G_{23} & \\
G_{30} & G_{31} & G_{32} & G_{33} & \\
\vdots & & & & \ddots
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & \\
0 & 0 & 1 & 0 & \\
0 & 0 & 0 & 1 & \\
\vdots & & & & \ddots
\end{pmatrix}.
\]   (3.168)

If we multiply the left-hand side out, we can see that the Hamiltonian's tridiagonal structure causes the resulting equations to be sums of (at most) three terms. That is, we obtain


\[ H_{01} G_{m+1,n} + (E - H_{00}) G_{mn} + H_{10} G_{m-1,n} = \delta_{mn} \]   (3.169)

by multiplying the mth row of E − H by the nth column of G. Since the H and G matrices are themselves infinite, there is an infinite number of such equations, and it is useful to define a procedure for selecting subsets of them. Fix a parameter i and take Eq. (3.169) for n = 0 and m = 1, ..., 2^{i+1} − 1. These 2^{i+1} − 1 equations involve the 2^{i+1} + 1 unknowns G_00, ..., G_{2^{i+1} 0}; they can be used to eliminate all the unknowns except G_00, G_{2^i 0}, and G_{2^{i+1} 0}, giving

\[ G_{2^i 0} = u_i G_{00} + v_i G_{2^{i+1} 0}, \]   (3.170)

where

\[
u_i = \left( 1 - u_{i-1} v_{i-1} - v_{i-1} u_{i-1} \right)^{-1} u_{i-1}^2, \qquad
v_i = \left( 1 - u_{i-1} v_{i-1} - v_{i-1} u_{i-1} \right)^{-1} v_{i-1}^2,
\]

and u_0 = −(E − H_00)^{−1} H_10, v_0 = −(E − H_00)^{−1} H_01, so that Eq. (3.171) below reads G_10 = u_0 G_00 + v_0 G_20. As an example, for i = 1, Eq. (3.169) with n = 0 and m = 1, 2, 3 gives

\[ H_{10} G_{00} + (E - H_{00}) G_{10} + H_{01} G_{20} = 0, \]   (3.171)
\[ H_{10} G_{10} + (E - H_{00}) G_{20} + H_{01} G_{30} = 0, \]   (3.172)
\[ H_{10} G_{20} + (E - H_{00}) G_{30} + H_{01} G_{40} = 0. \]   (3.173)

Solving Eq. (3.172) for G_30 yields

\[ G_{30} = -H_{01}^{-1} H_{10} G_{10} - H_{01}^{-1} (E - H_{00}) G_{20}, \]

and solving Eq. (3.171) for G_10 yields

\[ G_{10} = -(E - H_{00})^{-1} H_{10} G_{00} - (E - H_{00})^{-1} H_{01} G_{20} = u_0 G_{00} + v_0 G_{20}. \]

Substituting these into Eq. (3.173) gives G_20 = u_1 G_00 + v_1 G_40, where

\[ u_1 = (1 - u_0 v_0 - v_0 u_0)^{-1} u_0^2, \qquad v_1 = (1 - u_0 v_0 - v_0 u_0)^{-1} v_0^2. \]


At a sufficiently large value of i, the v_i term becomes negligible. Assuming that this occurs at i = N, setting v_N = 0 in Eq. (3.170) gives

\[ G_{2^N 0} = u_N G_{00}. \]

For i = 0, . . . , N − 1, Eq. (3.170) gives another N equations:

\[
\begin{aligned}
\text{for } i = 0, \quad & G_{10} = u_0 G_{00} + v_0 G_{20}; \\
\text{for } i = 1, \quad & G_{20} = u_1 G_{00} + v_1 G_{40}; \\
& \;\;\vdots \\
\text{for } i = N-1, \quad & G_{2^{N-1} 0} = u_{N-1} G_{00} + v_{N-1} G_{2^N 0}.
\end{aligned}
\]   (3.174)

This set of N equations with N + 1 unknowns can then be solved, by back-substitution, to give

\[ G_{10} = T\, G_{00}, \]   (3.175)

where

\[
T \equiv \sum_{i=0}^{N} \left( \prod_{j=0}^{i-1} v_j \right) u_i
  = u_0 + v_0 u_1 + v_0 v_1 u_2 + \cdots + v_0 v_1 \cdots v_{N-1} u_N
\]

(the empty product for i = 0 is understood to be the unit matrix). If we multiply the first row of the matrix E − H by the first column of the matrix G we obtain another equation that involves G_00:

\[ (E - H_{00}) G_{00} + H_{01} G_{10} = 1. \]   (3.176)

Now using Eq. (3.175), Eq. (3.176) becomes

\[ G_{00} = (E - H_{00} + H_{01} T)^{-1}, \]

which is the desired surface Green's function of the lead. Listing 3.9 gives an implementation of the decimation method.
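Before looking at the matrix implementation in Listing 3.9 below, it is instructive to run the iteration in the simplest possible setting. The following self-contained sketch is our illustration (it is not part of the book's code library; all names and numerical choices are ours): for a one-dimensional lead the "layers" are 1 × 1 blocks, and the converged G_00 can be compared with the closed-form surface Green's function e^{−iφ_L}/t derived in Section 3.5.5. A small imaginary part added to the energy selects the retarded solution and damps the iteration inside the band.

program decimation_scalar
  implicit none
  real*8, parameter :: VL=-1.d0, dx=0.1d0
  real*8 :: t,Er,phi
  complex*16 :: E,u,v,u1,v1,d,Tsum,fu,fv,g00
  integer :: j
  t=1.d0/(2.d0*dx*dx)              ! hbar = m = 1
  Er=VL+0.5d0*t                    ! an energy inside the band, |E-VL-2t| < 2t
  E=dcmplx(Er,1.d-6)               ! small imaginary part damps the iteration
  u=t/(E-2.d0*t-VL)                ! scalar u0 = v0, since H01 = H10 = -t
  v=u
  Tsum=u; fu=u; fv=v
  do j=1,60                        ! after j steps, 2**j layers are included
     d=1.d0/(1.d0-2.d0*u*v)        ! scalar (1 - uv - vu)**(-1)
     u1=d*u*u
     v1=d*v*v
     Tsum=Tsum+fv*u1               ! T = u0 + v0 u1 + v0 v1 u2 + ...
     fu=fu*u1
     fv=fv*v1
     u=u1; v=v1
  end do
  g00=1.d0/(E-2.d0*t-VL-t*Tsum)    ! Eq. (3.176) with H01 = -t
  phi=acos((Er-VL-2.d0*t)/(2.d0*t))
  write(*,'(4es15.6)') g00, exp(dcmplx(0.d0,-phi))/t   ! the two should agree
end program decimation_scalar

Note that after j doubling steps the calculation accounts for 2^j layers, so thirty steps already correspond to more than 10^9 layers; the layer-by-layer recursion of Listing 3.8 would need that many iterations to include the same number.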

Listing 3.9 Implementation of the decimation method to calculate the Green's function of the leads

subroutine green_decimation(e,h00,h10,n,g00)
  ! (Subroutine wrapper and the declarations below added here; the printed
  ! fragment begins at the construction of the unit matrix um.)
  implicit none
  integer :: i,j,n
  integer, parameter :: Niter=40      ! after Niter steps, 2**Niter layers are included
  real*8 :: h00(n,n),h10(n,n),h10t(n,n)
  complex*16 :: e
  complex*16, dimension(n,n) :: am,ami,g00,um,u0,v0,u1,v1,uu,vv, &
                                tu,tv,fu,fv,fu1,fv1

  um=(0.d0,0.d0)                      ! um = unit matrix
  do i=1,n
    um(i,i)=(1.d0,0.d0)
  end do
  h10t=transpose(h10)
  am=e*um-h00
  call complex_matrix_inverse(am,n,g00)
  u0=matmul(g00,h10)                  ! u0 and v0 (opposite in sign to the
  v0=matmul(g00,h10t)                 ! text's u_0, v_0; the signs cancel below)
  tv=v0
  tu=u0
  fv=v0
  fu=u0
  do j=1,Niter
    am=um-matmul(u0,v0)-matmul(v0,u0) ! 1 - uv - vu
    call complex_matrix_inverse(am,n,ami)
    uu=matmul(u0,u0)
    vv=matmul(v0,v0)
    u1=matmul(ami,uu)                 ! the u_i and v_i of Eq. (3.170)
    v1=matmul(ami,vv)
    fu1=matmul(fv,u1)                 ! v0 v1 ... v_{i-1} u_i, a term of T
    fv1=matmul(fu,v1)
    fu=matmul(fu,u1)
    fv=matmul(fv,v1)
    tu=tu+fu1                         ! accumulate T in tu
    tv=tv+fv1
    u0=u1
    v0=v1
  end do
  am=e*um-h00-matmul(h10t,tu)         ! E - H00 + H01 T, see Eq. (3.176)
  call complex_matrix_inverse(am,n,g00)
  ! g00 contains the desired Green's function matrix
end subroutine green_decimation

3.5.5 Analytically solvable example using finite differences

As a simple example we will solve the step potential again, now using the Green's function approach. In a second-order finite-difference representation the Hamiltonian H of this system is given by

\[
H = \begin{pmatrix}
\ddots & & & & & & \\
-t & 2t+V_L & -t & 0 & 0 & 0 & \\
0 & -t & 2t+V_L & -t & 0 & 0 & \\
0 & 0 & -t & 2t+V_R & -t & 0 & \\
0 & 0 & 0 & -t & 2t+V_R & -t & \\
& & & & & & \ddots
\end{pmatrix},
\]   (3.177)


where

\[ t = \frac{\hbar^2}{2m(\Delta x)^2}, \]   (3.178)

V_L (V_R) is the constant potential in the left-hand (right-hand) lead, and Δx is the grid spacing. Note that this Hamiltonian is identical to a tight-binding model Hamiltonian for a one-dimensional chain (e.g., a monatomic wire). Using the notation of Eq. (3.129), we can define the Hamiltonian of the left-hand lead,

\[
H_L = \begin{pmatrix}
\ddots & \ddots & \ddots & \\
 & -t & 2t+V_L & -t & 0 \\
 & 0 & -t & 2t+V_L & -t \\
 & 0 & 0 & -t & 2t+V_L
\end{pmatrix},
\]   (3.179)

the Hamiltonian of the right-hand lead,

\[
H_R = \begin{pmatrix}
2t+V_R & -t & 0 & \\
-t & 2t+V_R & -t & \\
0 & -t & 2t+V_R & \ddots \\
 & & \ddots & \ddots
\end{pmatrix},
\]   (3.180)

and the Hamiltonian of the central region,

\[
H_C = \begin{pmatrix} 2t+V_L & -t \\ -t & 2t+V_R \end{pmatrix}.
\]   (3.181)

The matrices H_L and H_R are infinite dimensional, and H_C is 2 × 2. The left-hand coupling matrix is an ∞ × 2 matrix, defined as

\[
\tau_L = \begin{pmatrix} \vdots & \vdots \\ 0 & 0 \\ -t & 0 \end{pmatrix}.
\]   (3.182)

Similarly, the right-hand coupling matrix is an ∞ × 2 matrix,

\[
\tau_R = \begin{pmatrix} 0 & -t \\ 0 & 0 \\ \vdots & \vdots \end{pmatrix}.
\]   (3.183)

To calculate the transmission probability, Eq. (3.157), we need the Green's function of the central region, Eq. (3.135). To calculate this we first need to determine the surface Green's functions of the leads. Using Eq. (3.167), we can calculate the surface Green's function of the left-hand lead as

\[ g_L^{(n)} = \left( E - H_{00} - H_{10}\, g_L^{(n-1)} H_{10} \right)^{-1}, \]   (3.184)


where

\[ H_{00} = 2t + V_L, \qquad H_{10} = -t. \]   (3.185)

For an infinite lead we have

\[ g_L^{(n)} = g_L^{(n-1)} \equiv g_L, \]   (3.186)

since if one layer is removed one still has an infinite lead, and the surface Green's function will be the same with or without the extra layer. In this simple one-dimensional case we can solve Eq. (3.184) for g_L to get

\[
g_L = \frac{E - V_L - 2t}{2t^2} - \frac{i}{t}\sqrt{1 - \frac{(E - V_L - 2t)^2}{4t^2}}
    = \frac{1}{t}\left( \cos\varphi_L - i \sin\varphi_L \right)
    = \frac{e^{-i\varphi_L}}{t},
\]

where

\[ \varphi_L = \arccos\!\left( \frac{E - V_L - 2t}{2t} \right). \]   (3.187)

Similarly, we find that

\[ g_R = \frac{e^{-i\varphi_R}}{t}, \]   (3.188)

where

\[ \varphi_R = \arccos\!\left( \frac{E - V_R - 2t}{2t} \right). \]   (3.189)

Now we can calculate the Σ matrices. The matrix Σ_L is 2 × 2 and has only one nonzero element:

\[ \Sigma_L = \begin{pmatrix} t e^{-i\varphi_L} & 0 \\ 0 & 0 \end{pmatrix}. \]   (3.190)

Similarly,

\[ \Sigma_R = \begin{pmatrix} 0 & 0 \\ 0 & t e^{-i\varphi_R} \end{pmatrix}. \]   (3.191)

Using the Σ matrices, the equation defining the Green's function of the center, Eq. (3.135), becomes

\[
G_C = \begin{pmatrix}
E - 2t - V_L - t e^{-i\varphi_L} & t \\
t & E - 2t - V_R - t e^{-i\varphi_R}
\end{pmatrix}^{-1}.
\]   (3.192)

To calculate the transmission probability, Eq. (3.157), we also need the Γ matrices, which are

\[ \Gamma_L = \begin{pmatrix} 2t \sin\varphi_L & 0 \\ 0 & 0 \end{pmatrix} \]

and

\[ \Gamma_R = \begin{pmatrix} 0 & 0 \\ 0 & 2t \sin\varphi_R \end{pmatrix}. \]

Since only one element of each Γ matrix is nonzero, the transmission probability becomes

\[ T(E) = G_C^{\dagger}(2,1)\, \Gamma_L(1,1)\, G_C(1,2)\, \Gamma_R(2,2). \]   (3.193)

Evaluating this expression in the limit t → ∞ (i.e., Δx → 0) gives

\[ T(E) = \frac{4\sqrt{E - V_L}\,\sqrt{E - V_R}}{\left( \sqrt{E - V_L} + \sqrt{E - V_R} \right)^2}. \]

Using \( \sqrt{E - V_x} = \hbar k_x / \sqrt{2m} \), for x = L, R, gives the final result

\[ T(E) = \frac{4 k_L k_R}{(k_L + k_R)^2}, \]

which is the familiar analytical result for the transmission probability of a potential step; this illustrates the correctness of the Green's function approach.
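The closed-form results above are easy to verify numerically. The following short program is our sketch (it is not part of the book's code library, and all program and variable names are ours): it evaluates Eqs. (3.190)-(3.193) at a few energies above the step and compares the result with the analytic transmission 4 k_L k_R/(k_L + k_R)²; the two agree in the limit Δx → 0.

program step_transmission
  implicit none
  real*8, parameter :: VL=-1.d0, VR=1.d0, dx=1.d-2   ! atomic units, hbar = m = 1
  real*8 :: t,E,phiL,phiR,kL,kR,T_gf,T_an
  complex*16 :: det,g12
  integer :: i
  t=1.d0/(2.d0*dx*dx)                ! Eq. (3.178)
  do i=1,5
     E=VR+0.5d0*dble(i)              ! energies above the step
     phiL=acos((E-VL-2.d0*t)/(2.d0*t))
     phiR=acos((E-VR-2.d0*t)/(2.d0*t))
     ! off-diagonal element of the 2x2 Green's function, Eq. (3.192)
     det=(E-2.d0*t-VL-t*exp(dcmplx(0.d0,-phiL))) &
        *(E-2.d0*t-VR-t*exp(dcmplx(0.d0,-phiR)))-t*t
     g12=-t/det
     ! transmission, Eq. (3.193): |G_C(1,2)|^2 * (2t sin phiL) * (2t sin phiR)
     T_gf=abs(g12)**2*(2.d0*t*sin(phiL))*(2.d0*t*sin(phiR))
     kL=sqrt(2.d0*(E-VL))            ! analytic step result
     kR=sqrt(2.d0*(E-VR))
     T_an=4.d0*kL*kR/(kL+kR)**2
     write(*,'(3f14.8)') E,T_gf,T_an
  end do
end program step_transmission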

3.5.6 Tight-binding model of electron conduction in molecular wires

As mentioned in the previous subsection, the structure of a second-order finite-difference Hamiltonian is very similar to that of a simple nearest-neighbor tight-binding Hamiltonian of a one-dimensional chain. The one-dimensional tight-binding model has often been used to describe the conductance properties of molecular wires [170, 322, 230, 231, 232, 354]. We will consider a linear chain of 2N + 1 atoms (or molecules) connected to two leads, each formed from an infinite linear chain of atoms. We assume that each atom is described by a single wave function (basis function) and that these wave functions are orthogonal and overlap only with nearest neighbors. Then the Hamiltonian of the system is H = H_L + H_C + H_R + τ_L + τ_R, where

\[
H_L = \sum_{n=-\infty}^{-N-1} \alpha\, |n\rangle\langle n|
    + \sum_{n=-\infty}^{-N-2} \beta \left( |n\rangle\langle n+1| + |n+1\rangle\langle n| \right),
\]
\[
H_C = \sum_{n=-N}^{N} a_n |n\rangle\langle n|
    + \sum_{n=-N}^{N-1} b_n \left( |n\rangle\langle n+1| + |n+1\rangle\langle n| \right),
\]   (3.194)

and

\[
H_R = \sum_{n=N+1}^{\infty} \alpha\, |n\rangle\langle n|
    + \sum_{n=N+1}^{\infty} \beta \left( |n\rangle\langle n+1| + |n+1\rangle\langle n| \right)
\]

are the Hamiltonians of the leads and the center, and

\[ \tau_L = t_L\, |-N-1\rangle\langle -N|, \]   (3.195)
\[ \tau_R = t_R\, |N\rangle\langle N+1| \]   (3.196)

are the coupling terms. We have assumed that the left- and right-hand leads are identical and have allowed for different atoms in the central region. The coefficients α, β, t_L, t_R, a_i, and b_i are matrix elements (or parameters) of the basis functions. The Hamiltonian of the system can be written in matrix form as before:

\[
H = \begin{pmatrix} H_L & \tau_L & 0 \\ \tau_L^{\dagger} & H_C & \tau_R^{\dagger} \\ 0 & \tau_R & H_R \end{pmatrix},
\]

with tridiagonal matrices

\[
H_C = \begin{pmatrix}
a_{-N} & b_{-N} & & & \\
b_{-N} & a_{-N+1} & b_{-N+1} & & \\
 & b_{-N+1} & a_{-N+2} & \ddots & \\
 & & \ddots & \ddots & b_{N-1} \\
 & & & b_{N-1} & a_N
\end{pmatrix},
\]

\[
H_L = \begin{pmatrix}
\ddots & \ddots & & \\
\ddots & \alpha & \beta & 0 \\
 & \beta & \alpha & \beta \\
 & 0 & \beta & \alpha
\end{pmatrix},
\]

and

\[
H_R = \begin{pmatrix}
\alpha & \beta & 0 & \\
\beta & \alpha & \beta & \\
0 & \beta & \alpha & \ddots \\
 & & \ddots & \ddots
\end{pmatrix}.
\]   (3.197)

Here H_L and H_R are infinite dimensional, and H_C is (2N + 1) × (2N + 1) dimensional. The left-hand coupling matrix is an ∞ × (2N + 1) matrix with one nonzero element:

\[
\tau_L = \begin{pmatrix} \vdots & \vdots & \\ 0 & 0 & \cdots \\ t_L & 0 & \cdots \end{pmatrix}.
\]

Similarly, the right-hand coupling matrix is an ∞ × (2N + 1) matrix:

\[
\tau_R = \begin{pmatrix} \cdots & 0 & t_R \\ \cdots & 0 & 0 \\ & \vdots & \vdots \end{pmatrix}.
\]

The Green's function of the central region now becomes

\[ G_C = (E - H_C - \Sigma)^{-1}, \]   (3.198)

where

\[
\Sigma = \begin{pmatrix}
\Sigma_L & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \Sigma_R
\end{pmatrix}
\]

has only two nonzero elements. Using the leads' Green's functions from the previous subsection, we obtain

\[
\Sigma_L = \frac{t_L^2}{\beta \left( \gamma + i\sqrt{1 - \gamma^2} \right)}, \qquad
\Sigma_R = \frac{t_R^2}{\beta \left( \gamma + i\sqrt{1 - \gamma^2} \right)},
\]

with

\[ \gamma = \frac{\alpha - E}{2\beta}. \]   (3.199)

The transmission probability for this model can now be calculated (see Eq. (3.193)); it is

\[ T(E) = 4\, |G_C(-N, N)|^2\, \Gamma_L \Gamma_R, \]   (3.200)

where

\[ \Gamma_L = \frac{t_L^2}{\beta} \sqrt{1 - \gamma^2}, \]   (3.201)
\[ \Gamma_R = \frac{t_R^2}{\beta} \sqrt{1 - \gamma^2}. \]   (3.202)
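Equations (3.198)-(3.202) reduce the wire problem to the inversion of a (2N + 1) × (2N + 1) tridiagonal matrix whose first and last diagonal elements carry the self-energies. The following self-contained sketch is our illustration (it is not the book's green.f90; the names are ours, and the parameter values mirror Fig. 3.14): since only the corner element G_C(−N, N) is needed, it is obtained from the standard determinant (continuant) recurrence for tridiagonal matrices rather than from a full inversion.

program wire_transmission
  implicit none
  integer, parameter :: N=1              ! chain of 2N+1 atoms
  integer, parameter :: M=2*N+1
  real*8, parameter :: alpha=1.d0, beta=2.d0
  real*8 :: tL,tR,E,gam,GamL,GamR,T
  complex*16 :: sigL,sigR,d(M),Dp,Dc,Dn,Gcorner
  integer :: i,k

  tL=beta/2.d0                           ! weakened lead-center coupling (Fig. 3.14)
  tR=beta/2.d0
  do k=-200,200
     E=0.01d0*k
     gam=(alpha-E)/(2.d0*beta)
     if (abs(gam)>=1.d0) cycle           ! outside the band of the leads
     sigL=tL**2/(beta*dcmplx(gam,sqrt(1.d0-gam**2)))   ! Eq. (3.199)
     sigR=tR**2/(beta*dcmplx(gam,sqrt(1.d0-gam**2)))
     d=E-alpha                           ! diagonal of E - HC - Sigma (here a_i = alpha)
     d(1)=d(1)-sigL
     d(M)=d(M)-sigR
     ! determinant of the tridiagonal matrix (off-diagonal elements -beta)
     ! via the three-term recurrence D_k = d_k D_{k-1} - beta**2 D_{k-2}
     Dp=(1.d0,0.d0)
     Dc=d(1)
     do i=2,M
        Dn=d(i)*Dc-beta**2*Dp
        Dp=Dc
        Dc=Dn
     end do
     ! corner element of the inverse: (-1)**(M+1) * (-beta)**(M-1) / det
     Gcorner=(-1.d0)**(M+1)*(-beta)**(M-1)/Dc
     GamL=tL**2/beta*sqrt(1.d0-gam**2)   ! Eqs. (3.201) and (3.202)
     GamR=tR**2/beta*sqrt(1.d0-gam**2)
     T=4.d0*abs(Gcorner)**2*GamL*GamR    ! Eq. (3.200)
     write(*,'(2f12.6)') E,T
  end do
end program wire_transmission

For N = 1 (a three-atom chain in the 2N + 1 convention used here) the output shows a resonance with T = 1 at the on-site energy E = α, as in Fig. 3.14.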

Figure 3.14 Transmission as a function of energy for a linear chain with N = 1, 3, and 5 atoms. The parameter values are α = 1, β = 2, a_i = α, b_i = β, and t_L = t_R = β/2. Atomic units are used.

Figure 3.15 Transmission as a function of energy for a linear chain of N = 21 atoms. The parameter values are α = 1, β = 2, a_i = α, b_i = β, and t_L = t_R = β/2. Atomic units are used.

To illustrate this approach numerically, we calculated the transmission probabilities for linear chains of N atoms. The results for N = 1, 3, and 5 are presented in Fig. 3.14, and Figs. 3.15 and 3.16 show results for N = 21. If the atoms in the leads and in the center are identical, that is, if the parameters are chosen to be a_i = α, b_i = β, and t_L = t_R = β, then the transmission probability is equal to unity. Figures 3.14 and 3.15 show the result when the coupling between the leads and the middle region is weakened by taking t_L = t_R = β/2. If the coupling were weakened even more, the left, right, and center regions would become decoupled and the transmission would drop to zero.


Figure 3.16 Transmission as a function of energy for a linear chain of N = 21 atoms. The parameter values are α = 1, β = 2, a_i = 2α, b_i = β, and t_L = t_R = β. Atomic units are used.

We next changed the atomic on-site energies to a_i = 2α for the N = 21 system. The result is shown in Fig. 3.16; the transmission probability now behaves like that of a rectangular barrier.

3.5.7 Numerical examples

In this section we calculate the transmission probability and the density of states using Green's functions. The Green's functions of the leads are calculated both by the decimation approach and by the recursion approach. The Hamiltonian is represented by three-point finite differences, and the simple step potential that is used is given by

\[
V_{\rm step}(x) = \begin{cases} -1, & x < 0, \\ +1 & \text{otherwise}. \end{cases}
\]   (3.203)

The code green.f90 implements these calculations. One can easily change the potential and extend the calculations to more complicated cases as well. Figure 3.17 shows the real and imaginary parts of the surface Green's function of a lead as a function of energy, and Fig. 3.18 shows the same quantities for the Green's function of a lead. Figure 3.19 shows the transmission probability for the step potential. The calculated values can be checked against the analytical results presented in the previous subsections.

Figure 3.17 Real part (solid line) and imaginary part (dotted line) of the surface Green's function of a lead, as a function of energy. The energy has been rescaled by t = ℏ²/(2m(Δx)²), where Δx is the grid spacing.

Figure 3.18 Real part (solid line) and imaginary part (dotted line) of the Green's function of a lead, as a function of energy. The energy has been rescaled by t = ℏ²/(2m(Δx)²), where Δx is the grid spacing.

Figure 3.19 Transmission vs. energy for a step potential (calculated and analytical results).

3.6 Spectral projection

We have seen (Eq. (3.147)) that the spectral projection operator A(E) projects out the eigenstate of the Hamiltonian from an arbitrary state. First we will provide an alternative derivation of that statement, and then numerical examples will be given to illustrate how spectral projection works.

The Green's function G(E) for the Hamiltonian H (see also Section 3.5) is defined by

\[ (E - H)\, G(E) = I. \]   (3.204)

From this definition it follows that, for any function |ν⟩, the functions

\[ |\psi_R\rangle = -G |\nu\rangle, \]   (3.205)
\[ |\psi_A\rangle = -G^{\dagger} |\nu\rangle \]   (3.206)

are the retarded and advanced solutions of the nonhomogeneous Schrödinger equation:

\[ (E - H) |\psi_A\rangle = -|\nu\rangle \]   (3.207)

and

\[ (E - H) |\psi_R\rangle = -|\nu\rangle. \]   (3.208)

Since these are linear equations,

\[ |\psi\rangle = |\psi_R\rangle - |\psi_A\rangle \]   (3.209)

is a solution of the homogeneous Schrödinger equation. That is, for any function |ν⟩ the vector

\[ |\psi\rangle = A |\nu\rangle, \]   (3.210)

where

\[ A = -(G - G^{\dagger}) \]   (3.211)

is called the spectral operator, solves the homogeneous Schrödinger equation.

As an example we take the harmonic oscillator potential V(x) = ½mω²x² and project out the eigenfunctions from the Gaussian function

\[ |\nu\rangle = \left( \frac{\beta}{\pi} \right)^{1/4} e^{-\beta x^2/2}. \]   (3.212)

We will use a Green's function with a complex potential; this was considered in Section 3.3.2. The name of the computer code is spectral_ho.f90. The spectral projector acting on |ν⟩ gives zero except at the eigenenergies of the harmonic oscillator Hamiltonian,

\[ E = \tfrac{1}{2} + i \qquad (i = 0, 1, 2, \ldots); \]   (3.213)

here atomic units are used, and ω = 1. The first five eigenstates projected from |ν⟩ are shown in Fig. 3.20.
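The essence of spectral_ho.f90 can be conveyed in a few lines. The sketch below is our simplified stand-in for it (the absorbing-potential form, grid parameters, and all names are our assumptions): because the finite-difference matrix E − H is tridiagonal, |ψ_R⟩ = −G|ν⟩ and |ψ_A⟩ = −G†|ν⟩ are obtained by solving two tridiagonal linear systems with the Thomas algorithm, instead of constructing G explicitly; their difference is A|ν⟩ of Eq. (3.210).

program spectral_sketch
  implicit none
  integer, parameter :: n=2001
  real*8, parameter :: L=24.d0, beta=1.d0
  real*8 :: dx,t,x
  complex*16 :: d(n),nu(n),psiR(n),psiA(n),psi(n),E
  integer :: i
  dx=L/(n-1)
  t=1.d0/(2.d0*dx*dx)                  ! hbar = m = omega = 1
  E=(2.5d0,1.d-8)                      ! an oscillator eigenvalue, Eq. (3.213) with i = 2;
                                       ! the tiny imaginary part keeps (E-H) well conditioned
  do i=1,n
     x=-0.5d0*L+(i-1)*dx
     d(i)=E-(2.d0*t+0.5d0*x*x)         ! diagonal of E - H
     ! complex absorbing potential in the outer regions (assumed form)
     if (abs(x)>8.d0) d(i)=d(i)+(0.d0,1.d0)*(abs(x)-8.d0)**2
     nu(i)=(beta/acos(-1.d0))**0.25d0*exp(-0.5d0*beta*x*x)   ! Eq. (3.212)
  end do
  call thomas(n,d,t,-nu,psiR)          ! (E-H) psiR = -nu        =>  psiR = -G nu
  call thomas(n,conjg(d),t,-nu,psiA)   ! (E-H)^dagger psiA = -nu =>  psiA = -G^dagger nu
  psi=psiR-psiA                        ! psi = A nu, Eq. (3.210)
  psi=psi/sqrt(sum(abs(psi)**2)*dx)    ! normalize before printing
  do i=1,n,50
     write(*,'(3es15.6)') -0.5d0*L+(i-1)*dx,dble(psi(i)),aimag(psi(i))
  end do
contains
  subroutine thomas(n,diag,offd,rhs,sol)
    ! solves a tridiagonal system with constant off-diagonal element offd
    integer, intent(in) :: n
    real*8, intent(in) :: offd
    complex*16, intent(in) :: diag(n),rhs(n)
    complex*16, intent(out) :: sol(n)
    complex*16 :: c(n),r(n),m
    integer :: i
    c(1)=offd/diag(1)
    r(1)=rhs(1)/diag(1)
    do i=2,n
       m=diag(i)-offd*c(i-1)
       c(i)=offd/m
       r(i)=(rhs(i)-offd*r(i-1))/m
    end do
    sol(n)=r(n)
    do i=n-1,1,-1
       sol(i)=r(i)-c(i)*sol(i+1)
    end do
  end subroutine thomas
end program spectral_sketch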

Figure 3.20 The lowest five eigenstates of the harmonic oscillator projected from a Gaussian function by the spectral projection operator.

Figure 3.21 The real (solid line) and imaginary (broken line) parts of the E = 1.5 scattering wave function (see Eq. (3.214)) in a step potential (dotted line). A complex absorbing potential, which starts at x = −10 on the left and at x = 10 on the right, is used in the calculation; the wave function is therefore only correct in the interval [−10, 10].

Next we show an example with a continuous spectrum by calculating the eigenstates of the Hamiltonian for a step potential (see Fig. 3.21). In this case the wave function is complex, so a complex function is used for |ν⟩; the latter is constructed by multiplying the previously used Gaussian by a complex-phase exponential, i.e., a plane-wave factor:

\[ |\nu\rangle = \left( \frac{\beta}{\pi} \right)^{1/4} e^{-\beta x^2/2}\, e^{ikx}. \]   (3.214)

The predicted continuum eigenfunctions are shown in Fig. 3.21. If the calculation of the Green's function is not too complicated, then this spectral-projection approach is very useful, since it gives the continuum (scattering) wave functions at any desired energy.

3.7 Appendix: One-dimensional scattering states

In this appendix we list some important properties of the one-dimensional scattering states. It is assumed that outside the scattering region [a, b] the potential is constant:

\[
V(x) = \begin{cases} V_L, & -\infty < x < a, \\ v(x), & a \le x \le b, \\ V_R, & b < x < \infty, \end{cases}
\]   (3.215)

with V_L < V_R. For energies V_R < E we define the real wave numbers

\[ k_L = \sqrt{\frac{2m(E - V_L)}{\hbar^2}}, \qquad k_R = \sqrt{\frac{2m(E - V_R)}{\hbar^2}}. \]   (3.216)

For energies V_R < E, the scattering eigenfunctions are doubly degenerate. The eigenstate for an incoming wave from the left is

\[
\psi_L^{+}(k_L) = \begin{cases} e^{i k_L x} + r_L(E)\, e^{-i k_L x}, & -\infty < x < a, \\ t_L(E)\, e^{i k_R x}, & b < x < \infty. \end{cases}
\]   (3.217)

The degenerate partner, the eigenstate for an incoming wave from the right, is

\[
\psi_R^{+}(k_R) = \begin{cases} t_R(E)\, e^{-i k_L x}, & -\infty < x < a, \\ e^{-i k_R x} + r_R(E)\, e^{i k_R x}, & b < x < \infty. \end{cases}
\]   (3.218)

One can also define the complex conjugates of ψ_L^+ and ψ_R^+. These are denoted by ψ_L^- and ψ_R^- and can be expressed as linear combinations of ψ_L^+ and ψ_R^+:

\[
\psi_L^{-}(k_L) = \psi_L^{+}(k_L)^{*} = \begin{cases} e^{-i k_L x} + r_L(E)^{*}\, e^{i k_L x}, & -\infty < x < a, \\ t_L(E)^{*}\, e^{-i k_R x}, & b < x < \infty. \end{cases}
\]   (3.219)